Monday, July 28, 2008

Concept: Materialized View

The terms snapshot and materialized view are synonymous. Both refer to a database object that contains the results of a query of one or more tables.

Materialized views allow you to maintain copies of remote data on your local node. The copies can be updatable with the Advanced Replication feature and are read-only without this feature.

What is a master table?


The tables in the query are called master tables (a replication term) or detail tables (a data warehouse term). This reference uses "master tables" for consistency.

What is a master database?

The databases containing the master tables are called the master databases.

For data warehousing purposes, the materialized views commonly created are materialized aggregate views, single-table materialized aggregate views, and materialized join views. In a data warehousing environment, all master tables must be local.
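A minimal sketch of a materialized aggregate view follows (the sales table, its columns, and the view name are hypothetical, used only for illustration):

SQL> CREATE MATERIALIZED VIEW sales_sum_mv
       BUILD IMMEDIATE
       REFRESH COMPLETE ON DEMAND
       ENABLE QUERY REWRITE
       AS SELECT prod_id, SUM(amount_sold) AS total_sold
          FROM sales
          GROUP BY prod_id;

ENABLE QUERY REWRITE ties in with the QUERY REWRITE privileges discussed below.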


What are Privileges required to create Materialized view in your schema?

You need the CREATE MATERIALIZED VIEW or CREATE SNAPSHOT system privilege and either the CREATE TABLE or CREATE ANY TABLE system privilege.

You must also have access to any master tables of the materialized view that you do not own, either through a SELECT object privilege on each of the tables or through the SELECT ANY TABLE system privilege.


What are Privileges required to create Materialized view in another user schema?

You need CREATE ANY MATERIALIZED VIEW or CREATE ANY SNAPSHOT system privilege and access to any master tables of the materialized view that you do not own, either through a SELECT object privilege on each of the tables or through the SELECT ANY TABLE system privilege.

The owner of the materialized view must have the CREATE TABLE system privilege. The owner must also have access to any master tables of the materialized view that the owner does not own, and to any materialized view logs defined on those master tables, either through a SELECT object privilege on each of the tables or through the SELECT ANY TABLE system privilege.

To create the materialized view with query rewrite enabled, the owner of the master tables must have the QUERY REWRITE system privilege.

If you are not the owner of the master tables, you must have the GLOBAL QUERY REWRITE system privilege.

If the schema owner does not own the master tables, then the schema owner must have the GLOBAL QUERY REWRITE privilege.

The user in whose schema the materialized view resides must have sufficient quota in the target tablespace to store the materialized view's master table and index, or must have the UNLIMITED TABLESPACE system privilege.

When a materialized view is created, Oracle creates a few objects to maintain the materialized view's data: one internal table, at least one index, and possibly one view, all in the schema of the materialized view.
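As a hedged illustration of the privileges above, the grants for a hypothetical user dw_user building a materialized view on a master table sales owned by sh might look like this:

SQL> GRANT CREATE MATERIALIZED VIEW TO dw_user;
SQL> GRANT CREATE TABLE TO dw_user;
SQL> GRANT SELECT ON sh.sales TO dw_user;
SQL> GRANT GLOBAL QUERY REWRITE TO dw_user;

The last grant is only needed if query rewrite is enabled; it is GLOBAL QUERY REWRITE here because dw_user does not own the master table.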

Wednesday, July 23, 2008

Types of Recovery

1. What is instance recovery ?

Instance and crash recovery are the automatic application of redo log records to Oracle data blocks after a crash or system failure. During normal operation, if an instance is shut down cleanly (as when using a SHUTDOWN IMMEDIATE statement), rather than terminated abnormally, then the in-memory changes that have not already been written to the datafiles on disk are written to disk as part of the checkpoint performed during shutdown.

However, if a single instance database crashes or if all instances of an Oracle Real Application Cluster configuration crash, then Oracle performs crash recovery at the next startup. If one or more instances of an Oracle Real Application Cluster configuration crash, then a surviving instance performs instance recovery automatically. Instance and crash recovery occur in two steps: cache recovery followed by transaction recovery.

The database can be opened as soon as cache recovery completes, so improving the performance of cache recovery is important for increasing availability.

2. What is crash recovery?

Crash recovery is the same as instance recovery.

3. What is cache recovery?


Cache Recovery (Rolling Forward)
During the cache recovery step, Oracle applies all committed and uncommitted changes in the redo log files to the affected data blocks. The work required for cache recovery processing is proportional to the rate of change to the database (update transactions each second) and the time between checkpoints.


4. What is transaction recovery?

Transaction Recovery (Rolling Back)
To make the database consistent, the changes that were not committed at the time of the crash must be undone (in other words, rolled back). During the transaction recovery step, Oracle applies the rollback segments to undo the uncommitted changes.


5. What is media recovery?

Media Recovery

Datafile media recovery is sometimes also called media recovery.
Datafile media recovery is used to recover from a damaged (or lost) datafile, control file or spfile. The goal of datafile media recovery is to restore database integrity.

In order for a datafile to be able to be recovered, either:

• The database is not open, or
• The datafile is offline


A datafile that needs media recovery cannot be brought online.
A database cannot be opened if any of the online datafiles need media recovery.

Media recovery essentially replays the redo generated against a restored block to bring that block up to a certain point in time. Recovery of a datafile always begins with the lowest SCN recorded in the datafile header.

6. Differences from crash recovery

Datafile media recovery differs from crash recovery in the following points:
• Oracle will not initiate media recovery on its own; it must be explicitly requested by a DBA.
• Media recovery works on a datafile restored from a backup, whereas crash recovery recovers the datafile left in place by the crash.
• Media recovery uses both archived redo logs and the online redo logs to recover; crash recovery needs only the online redo logs.

Tuesday, July 22, 2008

db_file_multiblock_read_count

WHAT IS DB_FILE_MULTIBLOCK_READ_COUNT?


  • The db_file_multiblock_read_count initialization parameter determines the maximum number of database blocks read in one I/O operation during a full table scan. The setting of this parameter can reduce the number of I/O calls required for a full table scan, thus improving performance.

  • While it may be tempting to set a very high value for db_file_multiblock_read_count to improve overall performance, we must consider several factors before changing it.

  • The total number of I/Os actually required to perform a full table scan depends on other factors such as the size of the table and whether parallel query is being used. The cost-based optimizer uses all of these factors, including db_file_multiblock_read_count to determine the cost of full table scans.

  • The goal of setting the db_file_multiblock_read_count parameter is that table scans are performed in fewer, larger I/Os. This is done by evaluating the number of blocks required to complete each table scan over time, then adjusting the parameter so that on average, most scans can be performed in one I/O.

  • Starting with Oracle10g Release 2 the DB_FILE_MULTIBLOCK_READ_COUNT initialization parameter is now automatically tuned to use a default value when this parameter is not set explicitly. This default value corresponds to the maximum I/O size that can be performed efficiently.

  • This value is platform-dependent and is 1MB for most platforms. Because the parameter is expressed in blocks, it will be set to a value that is equal to the maximum I/O size that can be performed efficiently divided by the standard block size.

db file scattered read -- WAIT

  • This wait happens when a session is waiting for a multiblock IO to complete. This typically occurs during full table scans or index fast full scans.

Oracle reads up to DB_FILE_MULTIBLOCK_READ_COUNT consecutive blocks at a time and scatters them into buffers in the buffer cache.

  • That means it occurs when Oracle performs multiblock reads from disk into non-contiguous ('scattered') buffers in the buffer cache.

  • How this is done depends on the platform and the release of Oracle you are running.

When Will Index FFS be used in preference to FTS?

From the Oracle8 Server Concepts manual:
1. The index must contain all the columns referenced in the query.
2. Index FFS is only available with Cost Based Optimizer (CBO) (Index hint forces CBO).
3. Index FFS can be hinted with /*+ INDEX_FFS(table index) */ .
Example
SQL> select /*+ INDEX_FFS (emp emp_ix) */ empno, deptno, ename from emp;

An Index FFS will scan all blocks in the index. The returned data is not sorted.

Index FFS can use multiblock I/O and can be parallelized just like a Full Table Scan.

If this Wait Event is a significant portion of Wait Time then a number of approaches are possible:-

Starting with Oracle9i, the new V$SQL_PLAN view can help:
(ignore data dictionary SQL in the output of these queries)

For Full Table scans:
select sql_text from v$sqltext t, v$sql_plan p
where t.hash_value=p.hash_value and p.operation='TABLE ACCESS'
and p.options='FULL'
order by p.hash_value, t.piece;

For Fast Full Index scans:
select sql_text from v$sqltext t, v$sql_plan p
where t.hash_value=p.hash_value and p.operation='INDEX'
and p.options='FULL SCAN'
order by p.hash_value, t.piece;


In Oracle8i a possible approach is to find sessions performing multiblock reads by querying V$SESSION_EVENT for this Wait Event and then SQL Tracing them. Alternatively, the Top SQL statements for Physical Reads can be investigated to see if their execution plans contain Full Table or Fast Full Index scans.

In cases where such multiblock scans occur from optimal execution plans it is possible to tune the size of multiblock I/Os issued by Oracle by setting the instance parameter DB_FILE_MULTIBLOCK_READ_COUNT so that

DB_BLOCK_SIZE x DB_FILE_MULTIBLOCK_READ_COUNT = max_io_size of system
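For example (hypothetical figures), with an 8 KB DB_BLOCK_SIZE and a system whose maximum efficient I/O size is 1 MB, the parameter works out to 1048576 / 8192 = 128, which could be set as:

SQL> ALTER SYSTEM SET db_file_multiblock_read_count = 128;
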
How to resolve wait 'db file scattered read'?
  • The goal of setting the db_file_multiblock_read_count parameter is that table scans are performed in fewer, larger I/Os. This is done by evaluating the number of blocks required to complete each table scan over time, then adjusting the parameter so that on average, most scans can be performed in one I/O.

  • Starting with Oracle10g Release 2 the DB_FILE_MULTIBLOCK_READ_COUNT initialization parameter is now automatically tuned to use a default value when this parameter is not set explicitly. This default value corresponds to the maximum I/O size that can be performed efficiently.

  • This value is platform-dependent and is 1MB for most platforms. Because the parameter is expressed in blocks, it will be set to a value that is equal to the maximum I/O size that can be performed efficiently divided by the standard block size.

  • Partitioning can also be used to reduce the amount of data to be scanned as Partition Pruning can restrict the scan to a subset of the segment's partitions.

  • Finally, you can consider reducing the data held in the most frequently accessed segments (by moving older unneeded data out of the database) or moving these segments to new faster disks to reduce the response time on their I/Os.

Identifying buffer busy wait

How to Identify the Segment Associated with Buffer Busy Waits

It is important to know the segments which are affected by buffer busy waits in order to address the cause for the waits.

This identification can be done by using 10046 traces (with TKProfs) or AWR / statspack output.

The 10046 trace is considered better than the AWR / Statspack data because it allows you to directly associate the waits with the SQL statements causing them and to easily see how the waits affect the execution of each statement.

How to analyze the 10046 trace / TKProf output

The technique for analyzing the 10046 trace and TKProf output:

TKProf
• In the "Overall Totals" section, confirm that "buffer busy waits" events (overall totals, recursive and non-recursive) have the highest wait times.
• Determine which call type is associated with the highest elapsed time: execute or fetch
• Generate a new TKProf report sorted by the call type found for the highest elapsed times.

For example:

Execute calls:

tkprof trace_file_name output_file sort=exeela

Fetch calls:

tkprof trace_file_name output_file sort=fchela

• Choose a few of the top statements in this new TKProf report and find them in the original trace file.
• Examine parts of the raw trace file where the top statements are running and look at the lines with "WAIT #" for the buffer busy wait event corresponding to the cursors.

For example:

WAIT #2: nam='buffer busy waits' ela= 222 file#=4 block#=78391 class#=1 obj#=57303 tim=1151844401945055

• Find the value of the P1, P2, and P3 fields. These correspond to the file number, block number, and reason code ( 9.2) or block class (10g). You will likely find many "WAIT#" lines for different combinations of P1, P2, and sometimes P3 values. The goal is to determine which segments are related to these waits.

• In 10g, this is very easy because the WAIT line will include the object ID for the segment (in the example above, it's: obj#=57303). You can find the information about the object using this query:


SQL>SELECT owner, object_name, object_type FROM dba_objects WHERE object_id = 57303;

• If you need to find the segment using file# and block#, you can use this query:


SQL>SELECT owner, segment_name, file_id, block_id starting_block_id, block_id + blocks ending_block_id, blocks
FROM
dba_extents
WHERE file_id = &file_num AND ( block_id <= &block_id AND (&block_id < (block_id + blocks)) );

Now you know the segment names and SQL statements with highest buffer busy waits.


TKPROF

If you are working with TKProf, connect to a new session; Oracle creates a trace file per session.

SQL> Alter session set sql_trace = true;

SQL> Alter session set time_statistics = true;

After this, run the query whose performance you want to examine.

Suppose the query below is the one whose performance you need to see:
SQL> select * from emp where deptno = 10;

SQL> Alter session set sql_trace = false;

Go to the command prompt:
D:\>tkprof c:\ora01780.trc c:\report.txt explain=username/password sys=yes

Then see report.txt

Solution: Review report.txt and address any problem it reveals.


Monday, July 21, 2008

Tuning parallel queries



Parallel SQL statements in which a part runs in serial are candidates for tuning.

Oracle 9i introduced the view V$SQL_PLAN, where we can find the execution plans for all SQL statements in the library cache. We can use this view to find parallel-running SQL statements that are candidates for tuning.

We use the fact that the distribution method
PARALLEL_FROM_SERIAL (in the explain plan we see S->P) is an indicator that some part of the SQL statement runs in serial. This is often caused by different parallel degree settings or by forcing a query to run in parallel with only one parallel hint.


The query:

  • SQL>select p.hash_value ,t.sql_text from v$sqltext t, v$sql_plan p where p.other_tag In ('PARALLEL_FROM_SERIAL' ) and t.HASH_VALUE = p.HASH_VALUE and t.ADDRESS = p.ADDRESS order by t.HASH_VALUE , t.ADDRESS , t.PIECE;

Finds all 'bad' running parallel SQL statements in the library cache where a part runs in serial.

This query will give HASH_VALUE and SQL_TEXT which is candidate for tuning.

Then, with the help of the execution plan, we can check the degree setting.

What exactly do we see in the execution plan?

The table used in the query (the candidate for tuning) has a parallel degree of 1, so this table is scanned in serial. In the IN-OUT column of PLAN_TABLE we find the value S->P (PARALLEL_FROM_SERIAL); the script above looks for statements with this condition.

To investigate the degree setting on the tables and indexes, we run the two queries below:

  • SQL> select table_name , degree from user_tables where table_name in ('tablename', 'tablename');

  • SQL> select index_name , degree from user_indexes where table_name in ('tablename', 'tablename');

Here tablename is the name of a table used in the query that is the candidate for tuning.

How to fix this problem?

The only problem is the table's degree of 1. We change the degree of the table to 4:

  • SQL> alter table tablename parallel 4;
If the problem was only the degree, then after altering the table the issue is fixed; in the IN-OUT column of the new execution plan the steps now run in parallel, which in most cases gives a performance benefit. We can also see that the costs are lower.

Summary

[1] Find parallel-running queries where some parts run in serial.
[2] Check whether all tables and indexes have the correct parallel degree.
[3] If you use parallel hints, consider setting the parallel degree at the object (table/index) level, or use enough parallel hints that all tables/indexes can be accessed in parallel.

Explain Plan

Before creating an EXPLAIN PLAN for any query, you must identify the exact query that is consuming the most database resources.

First Identify the query (Done by DBA)

How to identify the SQL consuming the most database resources:
  • Go to V$SQLAREA

  • SQL>Select sql_TEXT, DISK_READS, BUFFER_GETS FROM V$SQLAREA ORDER BY DISK_READS DESC;

  • Get the first 10 SQL statements where DISK_READS are highest and the first 10 where BUFFER_GETS are highest.

Below are the steps to create Explain Plan

For EXPLAIN PLAN we must have the PLAN_TABLE created in our schema.

Create this table PLAN_TABLE

  • To create PLAN_TABLE there is a script at

$ORACLE_HOME/rdbms/admin/utlxplan.sql
Execute this script:

  • SQL> @$ORACLE_HOME/rdbms/admin/utlxplan.sql


How do we feed a query to PLAN_TABLE? For this, write the query shown below; it should be one of the first 10 queries that were consuming the most resources.

  • Suppose in our case we need the explain plan for the query

SQL>Select * from emp where empno =7902

  • SQL> EXPLAIN PLAN SET STATEMENT_ID = 'st1' FOR SELECT * FROM emp WHERE empno = 7902;

    In the above query, STATEMENT_ID = 'st1' is used; we can put anything in place of st1, for example st2 or xyz.

    This statement does not show output on screen; it records the execution plan in PLAN_TABLE, which we then query:

    SQL> col OPTIONS format a10
    col OPTIMIZER format a10
    col OBJECT_NAME format a10
    select id,options, operation,object_name, OPTIMIZER,CPU_COST, IO_COST,temp_space
    from plan_table
    where statement_id = '&Statement'
    order by id ;


    If the query does not give values for CPU_COST, IO_COST and TEMP_SPACE, then Oracle has used the RBO; in this case it will do a FTS, and if we want to avoid the FTS we have to create an index on the column used in the WHERE clause.

    Explain plan tells us how the query is executed.

  • Solution

If there is a full table scan, create an index on the table.
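
As a hedged illustration (the index name is made up), for the EXPLAIN PLAN example above, where the limiting condition is on EMPNO, an index could be created and the plan regenerated to confirm that the full table scan is gone:

SQL> CREATE INDEX emp_empno_idx ON emp(empno);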

Friday, July 18, 2008

Concept of Cache hit ratio

What is the cache hit ratio, what impact does it have on performance of an Oracle database and what is involved in tuning it?

A cache is a block of memory for temporary storage of data likely to be used again. The
CPU and hard drive frequently use a cache, as do web browsers and web servers.

A cache is made up of a pool of entries. Each entry has a datum (a nugget of data) which is a copy of the datum in some backing store. Each entry also has a tag, which specifies the identity of the datum in the backing store of which the entry is a copy.

When the cache client (a CPU, web browser, operating system) wishes to access a datum presumably in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired datum, the datum in the entry is used instead. This situation is known as a cache hit. So, for example, a web browser program might check its local cache on disk to see if it has a local copy of the contents of a web page at a particular URL. In this example, the URL is the tag, and the contents of the web page is the datum. The percentage of accesses that result in cache hits is known as the hit rate or hit ratio of the cache.

The alternative situation, when the cache is consulted and found not to contain a datum with the desired tag, is known as a cache miss. The previously uncached datum fetched from the backing store during miss handling is usually copied into the cache, ready for the next access.

During a cache miss, the CPU usually ejects some other entry in order to make room for the previously uncached datum. The
heuristic used to select the entry to eject is known as the replacement policy. One popular replacement policy, least recently used (LRU), replaces the least recently used entry.

Oracle DB is based on caching and reusability.


The hit ratios mentioned below affect Oracle database performance:

  • Library cache HIT ratio by v$librarycache
  • Dictionary cache HIT ratio by v$rowcache
  • Buffer cache HIT ratio should be more than 90% but at the same time Physical read and write should also be considered. (V$SYSSTAT).

    The cache miss ratio is the number of cache misses compared to the total number of cache read attempts.

This is calculated as follows:

  • Select sum(getmisses) / sum(gets) "Miss ratio" From v$rowcache;


The miss ratio should be less than 15%. If this is not the case, increase the initialization parameter SHARED_POOL_SIZE.
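
The other two ratios can be checked with commonly used queries along these lines (a sketch, not the only possible formulation):

  • Select sum(pins - reloads) / sum(pins) "Library cache hit ratio" From v$librarycache;

  • Select 1 - (phy.value / (cur.value + con.value)) "Buffer cache hit ratio"
    From v$sysstat cur, v$sysstat con, v$sysstat phy
    Where cur.name = 'db block gets'
    And con.name = 'consistent gets'
    And phy.name = 'physical reads';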

Index not used for tables

Under rule-based optimization, a full table scan is used in the cases below.

A full table scan is used if:

  • No Indexes exist on the table.

  • No limiting conditions are placed on the rows (that is, the query asks for all the rows). For example, if the query has no WHERE clause, all the rows will be returned.

  • No limiting conditions are placed on the rows corresponding to the leading column of any index on the table. For example, if a three-column concatenated index is created on the City-State-Zip columns, then a query that has a limiting condition on the State column only cannot use the index, since State is not the leading column of the index.

  • Limiting conditions are placed on the leading column of an index, but the conditions are used inside expressions. For example, if an index exists on the City column, then a limiting condition of:

    WHERE CITY = 'NEW YORK'

    could use the index, But , if the limiting condition was instead:

    WHERE UPPER(CITY) = 'NEW YORK'

    then the index on the City column would not be used because the City column is inside the UPPER function. If you had concatenated the City column with a text string, the index would not be used. For example, if the limiting condition was:

    WHERE CITY || 'X' LIKE 'ORLANDO%'

    then the index on the City column would not be used.

  • Limiting conditions placed on the rows correspond to the leading column of an index, but the conditions are either NULL checks or inequalities.

    For example, if an index exists on the City column, none of the following will be able to use the index:

    WHERE CITY IS NULL
    WHERE CITY IS NOT NULL
    WHERE CITY != 'ORLANDO'

  • Limiting conditions placed on the rows correspond to the leading columns of the index, but the conditions use the LIKE operator and the value starts with '%' or the value is a bind variable. For example, neither of the following will be able to use the index:

    WHERE CITY LIKE '%YORK%'
    WHERE CITY LIKE :CITY_BIND_VARIABLE

    NOTE:- The bind variable may contain trailing '%' or no '%' at all. Regardless, an index will not be used.

    If cost-based optimization is used, Oracle will use full table scans for all of the above cases shown for rule-based optimization. Additionally, the cost-based optimizer may decide to use full table scans if the table has not been analyzed, if the table is small, if the indexed columns are not selective, or if the optimization goal is set to ALL_ROWS.

    To avoid unplanned full table scans, make sure the query can use an index.
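
    For instance (a hypothetical sketch), if queries regularly filter on the City column with a plain equality such as WHERE CITY = 'NEW YORK', an index on that column gives the optimizer the option of avoiding the full table scan:

    SQL> CREATE INDEX city_idx ON customers(city);

    (customers and city_idx are made-up names; the limiting condition must still reference the leading column directly, as described above.)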

Row Chaining & Migration

USING ORACLE BLOCKS EFFICIENTLY

Row Chaining

  • It happens due to inserts.
  • While inserting a row into a block, if the row size is bigger than the block size, Oracle will do row chaining.
  • In row chaining, Oracle splits the row and inserts it across many blocks.
  • Due to row chaining, the application slows down.

Solution to avoid this draw back
o Normalization of the table (a task for the application team)

Row Migration

  1. Happens due to updates, because PCTFREE is not properly set.
  2. If the free space is not enough for the updated row, Oracle will allocate a new block and place the row in the new block, but a pointer to the new block remains in the old block.
  3. Reports will be slow if rows are scattered across many blocks.

Solution to avoid this draw back
o Set a proper PCTFREE
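
A minimal, hypothetical example of reserving more free space in each block for future updates (the value 20 is illustrative only):

SQL> ALTER TABLE emp PCTFREE 20;

Note that the new PCTFREE only affects blocks used from now on; already migrated rows still need to be reorganized.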

How To Identify Chained And Migrated Rows?
There are 2 ways

[1] First Method
o Run the UTLCHAIN.SQL script. This script is Oracle-supplied and stored in the rdbms/admin folder.

o To run this script, connect as SYS:

  • CONNECT Sys/Manager
  • SQL> @:\oracle\oracle90\rdbms\admin\UTLCHAIN;

o Once you run this script, the table gets created:

  • SQL> desc CHAINED_ROWS;

o Internally, Oracle does not differentiate between chained and migrated rows.

[2] Second Method
o From DBA_TABLES;

  • SQL> DESC DBA_TABLES;
  • SQL> SELECT owner, table_name, chain_cnt FROM DBA_TABLES WHERE OWNER='';

We can use either of these two methods to identify chained or migrated rows.

How To Analyze The Tables ?

  • SQL> ANALYZE TABLE dept COMPUTE STATISTICS ;

o The statistics from ANALYZE TABLE ... COMPUTE STATISTICS are used by the CBO.
o The CBO uses these statistics to generate the execution plan.
o This command generates statistics and updates them in DBA_TABLES.

Using the UTLCHAIN utility, the CHAINED_ROWS table is generated.

  • SQL> ANALYZE TABLE <owner>.<table_name> LIST CHAINED ROWS;

This populates the CHAINED_ROWS table with the chained and migrated rows.

  • SQL>SELECT owner_name, table_name, head_rowid FROM CHAINED_ROWS;

Best solution for all these drawbacks:
o Export the table and import it back with a modified storage clause.

High Water Mark

  • It is the last used block in the table.
  • The HWM moves forward only.
  • Oracle never resets the HWM when data is deleted; however, it resets it when the table is truncated.
  • When a FTS (full table scan) is done, Oracle scans only up to the HWM (not the whole allocated segment), which improves performance.

How to find HWM?

o For this do ANALYZE

  • SQL> ANALYZE TABLE <table_name> COMPUTE STATISTICS;

    The difference between COMPUTE STATISTICS and ESTIMATE STATISTICS is that ESTIMATE STATISTICS only samples the data; it does not compute exact statistics.

    o So always use compute statistics

    o Run this query

    • SQL>COL TABLE_NAME format a10
    • SQL>select table_name, NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, CHAIN_CNT, AVG_ROW_LEN from dba_tables where owner = '&OWNER';

    Columns For DBA_TABLES
    -------------------------------------------------------
    § -- NUM_ROWS – number of rows
    § -- BLOCKS – blocks below the HWM
    § -- EMPTY_BLOCKS – empty blocks above the HWM
    § -- AVG_SPACE – average free space in blocks below the HWM, in bytes
    § -- CHAIN_CNT – chained row count
    § -- AVG_ROW_LEN – average row length in bytes

    Parameter for using index

    Which spfile/init.ora file parameter exists to force the CBO to make the execution path of a given statement use an index, even if the index scan may appear to be calculated as more costly?

    Making the CBO behave like an "RBO on steroids" is achieved by manipulating two parameters, OPTIMIZER_INDEX_CACHING and OPTIMIZER_INDEX_COST_ADJ. Those two parameters regulate the following things:

    OPTIMIZER_INDEX_CACHING makes the CBO assume that for any index a certain percentage of blocks is cached. If the value of the parameter is 80, the CBO assumes that 80% of the index blocks are cached. According to Jonathan's book and private communication, this parameter has no effect on single table access paths; it only affects in-list iterators and nested loop joins.

    The OPTIMIZER_INDEX_COST_ADJ parameter defines the cost of a single-block I/O as a percentage of the cost of a multi-block I/O request. Single-block I/O is most frequently used when retrieving data from an index, while multi-block I/O is normally used when retrieving data from a table. The Reference manual states that this parameter determines the cost of an I/O against an index as a percentage of the cost of an I/O against tables.
    These two parameters can be manipulated both on the instance level, with ALTER SYSTEM or at the session level, with ALTER SESSION.
    These two parameters will make the CBO behave like the RBO: if an index is there, it will be used. Unfortunately, this approach suffers from the same shortcoming as the RBO: if an index is there and we don't want it used, we have to explicitly disable it, by using hints or otherwise. That is something we can live with in the OLTP world. Of course, this is the real world, so databases are almost never pure OLTP databases.
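
    A hedged sketch of setting the two parameters at the session level (the values 80 and 20 are illustrative, not recommendations):

    SQL> ALTER SESSION SET optimizer_index_caching = 80;
    SQL> ALTER SESSION SET optimizer_index_cost_adj = 20;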

    Concept of DUAL table.

    Dual is a table which is created by oracle along with the data dictionary. It consists of exactly one column whose name is dummy and one record. The value of that record is X.

    sql> desc dual
    Name                    Null?    Type
    ----------------------- -------- ----------------
    DUMMY                            VARCHAR2(1)

    sql> select * from dual;
    D
    -
    X

    The owner of dual is SYS, but dual can be accessed by every user. As dual contains exactly one row (unless someone fiddled with it), it is guaranteed to return exactly one row in select statements if a constant expression is selected against dual, such as in:

    select sysdate from dual ;

    Although it is possible to delete the one record or insert additional records, one really should not do that!

    Buffer Busy Wait

    You’re getting high “buffer busy waits” - how can you find what’s causing them?

    Buffer busy waits could indicate contention in redo, rollback or data blocks. You need to check the v$waitstat view to see what areas are causing the problem. The value of the "count" column tells where the problem is, the "class" column tells you with what. UNDO is rollback segments, DATA is database buffers.
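
    For example, a quick look at where the waits are accumulating (a sketch):

    SQL> select class, count, time from v$waitstat order by count desc;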

    · How do you get buffer busy waits? What are the reasons for buffer busy waits?

    The Buffer Busy Waits Oracle metric occurs when an Oracle session needs to access a block in the buffer cache, but cannot because the buffer copy of the data block is locked. This buffer busy wait condition can happen for either of the following reasons:
    1. The block is being read into the buffer by another session, so the waiting session must wait for the block read to complete.
    2. Another session has the buffer block locked in a mode that is incompatible with the waiting session's request.

    · What DBA can do to tune buffer busy wait?

    Buffer busy waits are due to contention for particular blocks; there is nothing the DBA can do until the DBA knows which blocks are in conflict and why the conflicts are occurring. Tuning therefore involves identifying and eliminating the cause of the block contention.

    One of the most confounding problems with Oracle is the resolution of buffer busy wait events. Buffer busy waits are common in an I/O-bound Oracle system, as evidenced by any system with read (sequential/scattered) waits in the top-five waits in the Oracle STATSPACK report

    · In how many ways can buffer busy waits be reduced?

    The main way to reduce buffer busy waits is to reduce the total I/O on the system. This can be done by tuning the SQL to access rows with fewer block reads (i.e., by adding indexes). Even if we have a huge db_cache_size, we may still see buffer busy waits, and increasing the buffer size won't help.

    Reducing buffer busy waits reduces the total I/O on the system. This can be accomplished by tuning the SQL to access rows with fewer block reads by adding indexes, adjusting the database writer or adding freelists to tables and indexes. Even if there is a huge db_cache_size , the DBA may still see buffer busy waits and, in this case, increasing the buffer size will not help.

    The most common remedies for high buffer busy waits include database writer (DBWR) contention tuning, adding freelists (or ASSM), and adding missing indexes.

    Sizing SGA


    Define the SGA
    An Oracle instance is made up of background processes and memory. Oracle uses shared memory for its work; this shared memory is composed of the SGA and the PGA.

    Simply stated, the system global area (SGA) is just shared memory structures that are created at instance startup, hold information about the instance and control its behavior.

    • How you would configure SGA for a mid-sized OLTP environment?

    Oracle Database 10g automates the management of shared memory used by an instance and liberates administrators from having to manually configure the sizes of shared memory components. Automatic Shared Memory Tuning significantly simplifies Oracle database administration by introducing a more dynamic, flexible and adaptive memory management scheme. It makes more effective use of available memory therefore reducing the cost incurred on acquiring additional hardware.

    With the automatic memory management feature we can set the value of SGA_TARGET in the parameter file; Oracle itself manages space in the SGA as per the requirement.


    Allowing Oracle to take Control

    There is really nothing to switching into automatic shared memory tuning. You only need to set the SGA_TARGET parameter.

    1. Take a look and see if you are already in automated sizing of SGA

    SQL> show parameter sga_target
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- -------
    sga_target                           big integer 0

    2. Alter system to begin automated sizing of SGA

    SQL> alter system set sga_target=216m;
    System altered.

    3. Done

    There is no rule of thumb for configuring the SGA.

    SGA is made up of 5 parts

    Suppose we consider an SGA of 1 GB; then we might allocate:
    1. Shared pool as 48% of SGA
    2. Database buffer as 48% of SGA.
    3. Log Buffer as 4 MB of SGA.
    4. Java pool 20 MB
    5. Large pool 20 MB.


    But if the SGA is larger than 1 GB, say 10 GB, the percentages above will not suit all parts; for example, 4 MB of log buffer was fine for 1 GB, but scaling it to 40 MB for 10 GB may be too much.


    Suppose only Oracle is running on the server and MTS is chosen for OLTP.

    Reserve 10% of RAM for UNIX/Linux or 20% of RAM for Windows; the rest of the RAM is allocated to the SGA.


    Log_buffer=3-4M

    Large_pool_size: For dedicated Oracle server, 20-30M is enough. For MTS, the UGA will be here. Estimate parallel connection and MTS server processes.

    Java_pool_size=10M

    Shared_pool_size: If all the SQL statements sent to Oracle use bind variables adequately, then 300M is enough in most cases; it should be greater than 100M, depending on total RAM.

    Data buffer: All the remaining RAM should be allocated to the data buffer.

    Below is some referenced materials related to this issue:

    If you only have Oracle on the server, start by reserving 10% of RAM for UNIX/Linux or 20% of RAM for Windows. Whatever RAM is left over is available for SGA sizing and PGA sizing.

    For dedicated Oracle servers, the maximum total RAM SGA size can be computed as follows:

    OS Reserved RAM — This is RAM required to run the OS kernel and system functions, 10% of total RAM for UNIX/Linux, and 20% of total RAM for Windows.

    Oracle Database Connections RAM — Each Oracle connection requires OS RAM regions for sorting and hash joins. (This does not apply when using the Oracle multithreaded server or pga_aggregate_target.) The maximum amount of RAM required for a session is as follows:

    2 MB RAM session overhead + sort_area_size + hash_area_size


    So automatic memory management is the best option for configuring the SGA.


    • What is involved in tuning the SGA?

    1. Check the statspack report.
    2. Check hit ratio of Data buffer. If it is less than 90%, then we need to increase the Data buffer.
    3. Check hit ratio of Shared pool. If it is less than 95%, then we need to increase the Shared pool.
    4. Check log buffer. If redo buffer allocation retries/redo entries is greater than 1%, then we need to increase log_buffer.
    5. Determine how to use keep pool and recycle pool efficiently.
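
    A hedged sketch of the check in step 4, using the V$SYSSTAT statistics named in it:

    SQL> select r.value / e.value "Retries per redo entry"
         from v$sysstat r, v$sysstat e
         where r.name = 'redo buffer allocation retries'
         and e.name = 'redo entries';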

    Monday, July 14, 2008

    AUDITING

    Oracle audit can help detect unauthorized access and internal abuse of the data held in the database


    • Audit is done to monitor database activity (i.e. gathering database activity for planning and tuning purposes).
    • Tracing or tracking the SQL activity of a user.
    • To collect statistics.
    • Parameter related to auditing:
      AUDIT_TRAIL = TRUE
    • The DBA can set this parameter to enable auditing:
      SET AUDIT_TRAIL = TRUE
    • Practically, we can set auditing for each and every SQL statement.
    • After setting AUDIT_TRAIL = TRUE, the Oracle database will be slowed down.
    • Whatever audit is started, its entries go into DBA_AUDIT_TRAIL.

    Audit record always contain the following information

    • Username .
    • Session identifier.
    • Terminal identifier.
    • Name of the schema object accessed.
    • Operation performed or attempted.
    • Completion code of the operation.
    • Date and timestamp.
    • System privileges used.

    TYPES OF AUDITING

    [1] Statement: Audits the SQL statements used.

    - AUDIT SELECT TABLE BY SCOTT;

    [2] Privilege: Audits the Privilege used.

    - AUDIT CREATE TRIGGER.

    [3] Object: Audits the use of a specific object.

    - AUDIT SELECT ON scott.emp;

    CLASS

    [1] BY user:

    • Audit particular users work.

    [2] BY SESSION:

    • Everything in that session is audited.
    • One Audit record per session.

    AUDIT SESSION BY SCOTT WHENEVER SUCCESSFUL;

    [3] BY ACCESS:

    • Audit record is generated for each statement.

    [4] WHENEVER SUCCESSFUL

    • AUDIT CREATE TABLE WHENEVER SUCCESSFUL;
    • AUDIT ALL BY SCOTT WHENEVER SUCCESSFUL;

    Closing the AUDIT

    • NOAUDIT CREATE TABLE WHENEVER SUCCESSFUL;

    [5] WHENEVER NOT SUCCESSFUL

    • AUDIT CREATE TABLE WHENEVER NOT SUCCESSFUL;
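
    Once auditing is enabled, the collected records can be reviewed with a query along these lines (a sketch; only a few DBA_AUDIT_TRAIL columns are shown):

    • SQL> SELECT username, action_name, obj_name, returncode, timestamp
           FROM dba_audit_trail
           ORDER BY timestamp;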


    What are the performance and complexity issues?


    Audit is generally perceived to be complex and slow. The reason for this is usually ignorance. If many or all options are turned on, then the resultant audit trail produced can be large and difficult to interpret and manage. Furthermore, if audit is used on all tables and views in the database, then this can have an effect on performance. Every time an auditable action is performed, a record is written to the database; clearly the more audit is used, the more records will be written to the system tablespace purely for audit. In some cases double the amount of access to the database can be performed: the original write and the audit record being written.


    The watchword here is simplicity and caution. Use only the audit that is needed to give an overall view of what is happening and for detailed monitoring of critical data and objects. The simpler the audit trail set-up, the more likely it is that the data will be analyzed and be of some use. It is important to define what actions or abuses are being checked for so that simple reports can be written to filter the audit trail for these actions. A default installation of Oracle has audit turned off by default and Oracle does not come with any standard default audit settings or reports to analyse any audit trail produced. These reasons, and the fact that there are many options available are, in my opinion, why audit is perceived to be complex.


    The standard audit commands do not allow audit to be performed at row level. It is also not possible to audit the actions of privileged users such as SYS and "as sysdba" until Oracle 9iR2.

    ORA-00470 LGWR terminated.

    LGWR Process got terminated with ORA-00604 And 00600

    It is a case of a 13-instance RAC; the name of the DB is ADSPRDP.

    The second instance, ADSPRDP2, crashed, and when I tried to start the instance I got the error below in the alert log. The LGWR process was getting terminated with ORA-00604 and ORA-00600.


    ORA-00470: LGWR process terminated with error

    Fri Jul 4 13:44:11 2008
    Master archival failure: 470
    Fri Jul 4 13:44:11 2008
    Errors in file /u01/app/oracle/admin/ADSPRDP/bdump/adsprdp2_j000_2115.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-00449: background process 'LGWR' unexpectedly terminated with error 600
    ORA-00600: internal error code, arguments: [], [], [], [], [], [], [], []
    Fri Jul 4 13:44:12 2008



    Currently the instance is started and all required services are running properly on it.
    The question arises: why did that happen in my environment?

    As per my findings, a patch was missing for this environment.


    It is an Oracle Apps database; the patch may have to be applied to fully resolve ORA-00604 (Patch 4232513 per MetaLink).

    For my environment I suggested the patch.

    For more understanding of ORA-00470, two other cases are discussed below, but they were probably different from our case.

    CASE 1

    Problem Description:
    Database performance degrades and eventually the instance crashes.


    You look in the PMON trace file and find the following errors.

    error 470 detected in background process
    00447, 00000, "fatal error in background process"
    // *Cause: One of the background processes died unexpectedly.
    // *Action: Warm start the system.
    ora-470
    00470, 00000, "LGWR process terminated with error"
    // *Cause: The log writer process died
    // *Action: Warm start instance

    DBWR and SMON indicate the same thing.

    If you are receiving the following errors in PMON, LGWR and SMON trace files: error 470 detected in background process OPIRIP: Uncaught error 447. Error stack: ORA-00447: fatal error in background process ORA-00470: LGWR process terminated with error

    No errors are being reported in the alert.log. You did receive the following alert.log entry: "ORACLE Instance - Can not allocate log, archival required"

    You are forced to do the following to get the database back up:

    SHUTDOWN ABORT
    STARTUP
    SHUTDOWN IMMEDIATE
    STARTUP


    Solution: INCREASE MAXFILES SETTING

    Solution Description:
    That means we need to verify and possibly increase MAXFILES setting.


    Solution Explanation: It goes like this: suppose the user had recently added new datafiles to his instance.

    On checking the MAXFILES kernel parameter, the user discovered that it was set to 60. Checking V$DATAFILE showed 47 datafiles. Computing the total of datafiles, control files, mirrored redo logs, init.ora and alert.log files, the total exceeded the 60-file limit.

    CASE 2

    Problem Description:
    You have problems with the database crashing, and when you look in the alert.log you find no errors or trace files referenced. When you check the bdump directory, you do find three trace files: a DBWR, a LGWR and a PMON trace file. The DBWR trace file contains the following errors:


    ora-00447 fatal error in background process ora-00470 LGWR process terminated with error

    In the LGWR trace file you find:

    ora-00447 fatal error in background process
    ora-00313 open failed for members of log group of thread
    ora-00312 online log thread :
    ora-07362 sfifi: open error, unable to open file.
    ora-00470 LGWR process terminated with error


    In the PMON trace file you find:

    ora-00447 fatal error in background process ora-00470 LGWR process terminated with error

    Problem Explanation:

    The OS kernel parameter 'NFILES' is set too low. When Oracle is trying to open an online log file, it fails due to operating system constraints. You may have recently upgraded or migrated and added more redo log groups or mirrors. Thus you are now having this problem where you did not previously.

    Solution: INCREASE OS PARAMETER 'NFILES'.

    Solution Description:

    Have your System Administrator increase the 'NFILES' parameter on the Operating System.

    Solution Explanation: The OS kernel parameter 'NFILES' is set too low. When Oracle is trying to open an online log file, it fails due to operating system constraints.


    Sunday, July 13, 2008

    ORA-02049 waiting for lock

    A user (i.e. a developer) was getting the following error for a few days:

    ORA-02049 Timeout: Distributed Transaction Waiting for Lock

    while executing an insert query on his application DB servers. The name of the database instance is PPID.

    As per the user ---> I am executing the below query.


    INSERT INTO PPID_HEADER SELECT A.* FROM PPID_HEADER@PPIDP.WORLD A WHERE A.DOCUMENT_ID IN (SELECT B.DOCUMENT_ID FROM DOCS_2B_MOVED B);

    COMMIT;

    ORA-02049 Timeout: Distributed Transaction Waiting for Lock

    Is there anything you can do?


    Solution:-

    About Error Messages: ORA-02049: timeout: distributed transaction waiting for lock


    1. You may get error ORA-02049 because a bitmap index was built on the table you are inserting into; dropping the bitmap index and creating a normal B-tree index, even though the column has only distinct values, may solve your problem.
    2. There can be a deadlock while accessing the table.
    3. Possibly there is a bug when inserting through a database link.
    4. ORA-02049: timeout: distributed transaction waiting for lock
    • Cause: exceeded INIT.ORA distributed_lock_timeout seconds waiting for lock.
    • Action: Treat as a deadlock.

    The possible solution as per oracle is to increase the value of the parameter 'DISTRIBUTED_LOCK_TIMEOUT' (default is 60 sec).

    But NOTE: -This is an internal Oracle parameter. Do NOT use it unless instructed to do so by Oracle Support. Playing with this parameter may be harmful.


    Deleting Row From Large Table


    The database programmer / DBA has to test all the cases in a TEST environment: test the existing methods below and find the one that best suits the deletion.

    • Test Case 1 – Basic Delete

    • Test Case 2– Delete with PLSQL and forall loop, committing only once

    • Test Case 3 – Delete using an anonymous PL/SQL block, looping through a cursor and deleting one row at a time. The loop has a counter, programmed to trigger a commit after every 1000 records.

    • Test Case 4 – a.) Deleting using PL/SQL stored procedure, commit each 1000 records & b.) Deleting using PL/SQL stored procedure, commit only once.

    • Test Case 5 – Delete using standard anonymous PL/SQL block, using execute immediate function and committing only once.

    We can get the best result with EXECUTE IMMEDIATE calls inside an anonymous PL/SQL block (i.e. test case 5). This is not surprising, since Oracle suggests this method for mass data deletion.
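
    As a hedged sketch only (the table name and predicate are made up), the EXECUTE IMMEDIATE approach of test case 5 can be combined with the commit-in-chunks recommendation listed below:

    DECLARE
      v_sql VARCHAR2(200) :=
        'DELETE FROM big_table WHERE created_date < SYSDATE - 365 AND ROWNUM <= 10000';
    BEGIN
      LOOP
        EXECUTE IMMEDIATE v_sql;        -- delete one chunk
        EXIT WHEN SQL%ROWCOUNT = 0;     -- stop when nothing is left to delete
        COMMIT;                         -- commit between chunks
      END LOOP;
      COMMIT;
    END;
    /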
    Important recommendations to speed a delete operation:


    • Do not delete all the data in one shot; break the data into chunks and commit in between.

    • Drop indexes before the delete and recreate them afterwards.

    • For tables with many deletes, try to avoid bitmap indexes, which cause locking problems and performance degradation.

    • Use parallel DML when possible.

    • Check for indexes on the FK columns, which have to be locked when a row is deleted from the parent (otherwise each row deleted in the master causes a full table scan in one or both of the cascaded tables).

    • Consider deleting by ROWID (fetching the entire ROWID result set into a cursor).


    Potential problems due to a mass delete action are rising CPU usage and disk I/O.