IGNORE_ROW_ON_DUPKEY_INDEX Hint

This is something a colleague asked about the other day and it is a very interesting Oracle hint.

The IGNORE_ROW_ON_DUPKEY_INDEX hint applies to single-table INSERT statements and, when used, causes the statement to silently skip rows that would violate a unique key on the specified index.

Create a test table

[sourcecode language="sql"]
-- Create a table of even ids using mod 2.
-- Goal: no values ending in 5 (5, 15, 25, ...) exist yet.

create table t1 as
select id, name
from (
select rownum id, rpad( 'x', 10, 'x' ) name
from dual
connect by rownum <= 100
)
where mod( id, 2 ) = 0;

-- The examples below assume a unique index named T1_PK on the id column.
alter table t1 add constraint t1_pk primary key ( id );
[/sourcecode]

Attempt to insert the multiples of 5, which include the missing values ending in 5 (5, 15, 25, …)

[sourcecode language="sql"]
insert into t1
select id, name
from (
select rownum id, rpad( 'x', 10, 'x' ) name
from dual
connect by rownum <= 100
)
where mod( id, 5 ) = 0;
[/sourcecode]

The above statement will fail with ORA-00001 (unique constraint violated on the T1_PK index) because the multiples of 10 already exist in the table.

Syntax for IGNORE_ROW_ON_DUPKEY_INDEX

The unique key violation can be ignored by referencing either the index by name or the columns that make up the unique index. Either of the two versions below works the same way.

[sourcecode language=”sql”]
insert /*+ ignore_row_on_dupkey_index( t1, (id) )*/ into t1

insert /*+ ignore_row_on_dupkey_index( t1, t1_pk )*/ into t1
[/sourcecode]

Run the INSERT again with the hint

The code below works exactly as advertised and inserts 10 rows. The SELECT returns 20 rows, but 10 of them would violate the unique constraint and are silently skipped, allowing the remaining 10 to be inserted into the table.

[sourcecode language="sql"]
insert /*+ ignore_row_on_dupkey_index( t1, t1_pk )*/ into t1
select id, name
from (
select rownum id, rpad( 'x', 10, 'x' ) name
from dual
connect by rownum <= 100
)
where mod( id, 5 ) = 0;
commit;
[/sourcecode]
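If you want to confirm that exactly 10 new rows made it in, a quick count against the table built above does the trick; this is just a sanity check on the example data.

[sourcecode language="sql"]
-- 50 even ids from the CTAS plus the 10 new ids ending in 5 = 60 rows.
select count(*) total_rows,
       count( case when mod( id, 10 ) = 5 then 1 end ) rows_added_by_hinted_insert
from t1;
[/sourcecode]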

Follow-up

If you are running Oracle versions below 12.1.0.1 you will want to look at the following bugs to see if you are affected.
Bug 9004390 (Doc ID 9004390.8)
Bug 11865420 (Doc ID 11865420.8)

Data Guard ORA-16843 and ORA-16839 Solutions

It was difficult to find a solution for this issue so I thought I’d write about it.

[sourcecode language=”sql”]
Error: ORA-16843: errors discovered in diagnostic repository
and
ORA-16839: one or more user data files are missing
[/sourcecode]

We had a standby database that showed the ORA-16843 error when running show configuration in Data Guard Manager (DGMGRL). Running show database verbose then returned the ORA-16839 error, which states "one or more user data files are missing". I spent a long time trying to find the missing data file, convinced I must be doing something wrong, because there were no missing data files to be found.

NOTE: All notes below are on Oracle Database version 12.1.0.1.0. I have not tested this on other versions.

Symptom

[sourcecode language=”sql”]
DGMGRL> show configuration

Configuration – orcl_configuration

Protection Mode: MaxPerformance
Databases:
orclb – Primary database
orcla – Physical standby database
Error: ORA-16843: errors discovered in diagnostic repository

Fast-Start Failover: DISABLED

Configuration Status:
ERROR
[/sourcecode]

Also, when issuing the show database verbose command I got an ORA-16839 error

[sourcecode language=”sql”]
DGMGRL> show database verbose orcla

Database – orcla

Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Apply Rate: 173.00 KByte/s
Real Time Query: OFF
Instance(s):
ORCL

Database Error(s):
ORA-16839: one or more user data files are missing
[/sourcecode]

The ORA-16843 error does not include the word Automatic, but it is telling us there is an error in the Automatic Diagnostic Repository (ADR). Much research led me to the following solution.

Solution

Finding the least invasive solution took a little while, but it was certainly educational in terms of learning about the ADR in more detail. As it turns out, the least invasive solution was to rename (or remove) the HM_FINDING.ams file in the ADR metadata directory.
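Before renaming anything, it can be worth looking at what the Health Monitor has actually recorded in the ADR. A minimal sketch, assuming the V$HM_FINDING view is available on your version (column list from memory, so adjust as needed):

[sourcecode language="sql"]
-- Review Health Monitor findings recorded for this instance.
column name format a30
select finding_id, name, time_detected, status
from v$hm_finding
order by time_detected;
[/sourcecode]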

Find the ADR Home directory

[sourcecode language="sql"]
set linesize 80
column value format a80

select value from v$diag_info where name = 'ADR Home';

VALUE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcla/ORCL
[/sourcecode]

The following can also be used to generate the Linux command that renames the file.

[sourcecode language="sql"]
set linesize 160
column mv_cmd format a160

select 'mv ' || value || '/metadata/HM_FINDING.ams '
    || value || '/metadata/HM_FINDING.ams.bad ' mv_cmd
from v$diag_info
where name = 'ADR Home';
[/sourcecode]

After renaming HM_FINDING.ams, a new HM_FINDING.ams file is created the next time you run the show configuration command, and the ORA-16843 error should be gone.

[sourcecode language=”sql”]
DGMGRL> show configuration

Configuration – orcl_configuration

Protection Mode: MaxPerformance
Databases:
orclb – Primary database
orcla – Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
[/sourcecode]

Checker Run Found 1 New Persistent Data Failures

I saw this in the alert.log file for a standby database. I didn’t find much information on it during a search so I figured I’d put a post together.

Checker run found 1 new persistent data failures

Here is the full text of what I was seeing in the alert.log.

[sourcecode language=”sql”]
Mon Feb 09 08:39:43 2015
Media Recovery Log +LOGDBORCL/DBORCL/ARCHIVELOG/2015_02_09/thread_1_seq_10725.279.871201587
Media Recovery Waiting for thread 1 sequence 10726 (in transit)
Mon Feb 09 08:40:22 2015
Checker run found 1 new persistent data failures
[/sourcecode]

This standby database is fully configured with Data Guard broker and standby redo logs, yet the standby redo logs were not being used, as evidenced by the archived log files in the output above.

The standby redo logs were not being used because we had taken a snapshot of the disks used by ASM, mounted them on another server, and recovered the database into managed standby mode. However, we had missed the step of rebuilding the standby redo logs, so the standby redo logs on the standby database were still copies from the snapshot source and could not be used. Even though this was an oversight, I wanted to document the alert.log errors and the solution.

[sourcecode language="sql"]
-- Stop managed standby recovery.
alter database recover managed standby database cancel;

-- Rebuild all of the standby redo logs.
alter database drop logfile group 21;
alter database add standby logfile group 21 ( '+LOGDBORCL/DBORCL/stby_log21.ora' ) size 1073741824 reuse;

-- ... repeat for all standby redo logs (11 in my case)

-- Restart managed recovery.
alter database recover managed standby database disconnect;
[/sourcecode]

After rebuilding the standby redo logs – on the standby database only – the database reverted to using standby redo logs as indicated by the following line in the alert.log.

[sourcecode language=”sql”]
Recovery of Online Redo Log: Thread 1 Group 21 Seq 10748 Reading mem 0
Mem# 0: +LOGDBORCL/DBORCL/stby_log21.ora
[/sourcecode]
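To double-check that the standby redo logs really are in use, a quick look at V$STANDBY_LOG on the standby works; one group should show ACTIVE while redo is being received. This is just a verification step, not part of the original fix.

[sourcecode language="sql"]
-- Run on the standby: an ACTIVE group means the standby redo logs are receiving redo.
select group#, thread#, sequence#, status
from v$standby_log
order by group#;
[/sourcecode]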

Write to Trace files and Alert Log with DBMS_LOG

From time to time it can be handy to write your own custom messages to the alert log. You can also write messages to trace files.

I've used this approach while performing a massive partitioned table redesign just so I could keep track of what happened during the 5-day process. For example, I might write a message that says "Rebuilding Partition #1 - Begin".

Here is a little documentation on the DBMS_LOG package – new in Oracle 12c.

If you are writing to a trace file you may want to know what process id you are using and/or what trace file your session will write to.

[sourcecode language="sql"]
-- Find the pid and trace file for this session.
column spid format a10
column tracefile format a70
set linesize 85

select p.spid, p.tracefile
from v$process p, v$session s
where p.addr = s.paddr
and sid = sys_context( 'USERENV', 'SID' );

-- sample output
SPID       TRACEFILE
---------- ----------------------------------------------------------------------
5472       /u01/app/oracle/diag/rdbms/whse/WHSE/trace/WHSE_ora_5472.trc
[/sourcecode]

DBMS_LOG.KSDDDT

The DBMS_LOG.KSDDDT procedure will write a timestamp to the trace file for this PID. The trace file name can be obtained from the TRACEFILE column in the above SQL statement.

[sourcecode language=”sql”]
exec dbms_log.ksdddt;

— sample output as written to the trace file.
*** 2015-02-06 17:19:46.765
[/sourcecode]

DBMS_LOG.KSDIND

The DBMS_LOG.KSDIND procedure will prepend the next line written to the trace file with a specified number of colons (:). These colons are only written to trace files, not to the alert log.

[sourcecode language=”sql”]
— Prepend the next line written to a trace file with 3 colons.
exec dbms_log.ksdind(3);
exec dbms_log.ksdddt;

— sample output produced from combining these 2 procedures.
:::*** 2015-02-06 17:19:46.765
[/sourcecode]

DBMS_LOG.KSDWRT

The DBMS_LOG.KSDWRT procedure will write to either the trace file for this PID, the alert log, or both, depending on the first parameter in the call to KSDWRT.

[sourcecode language="sql"]
exec dbms_log.ksdwrt( 2, '-- Rebuilding Partition #1 - Begin --' );
[/sourcecode]

The first parameter to ksdwrt is a destination parameter and indicates where the text will be written.

destination = 1 – writes to the trace file for the connected process id – <ORACLE_SID>_ora_<PID>.trc
Output is shown below

[sourcecode language="sql"]
*** 2015-02-06 17:04:11.758
-- Rebuilding Partition #1 - Begin --
[/sourcecode]

destination = 2 – writes to alert log
(the line below with the date stamp may or may not be written depending on the last time the alert log has been written to)

[sourcecode language="sql"]
Fri Feb 06 17:05:01 2015
-- Rebuilding Partition #1 - Begin --
[/sourcecode]

destination = 3 – writes to both trace file and alert log
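For completeness, the combined destination looks like this (same message as the earlier examples):

[sourcecode language="sql"]
-- destination = 3 sends the message to both the trace file and the alert log.
exec dbms_log.ksdwrt( 3, '-- Rebuilding Partition #1 - Begin --' );
[/sourcecode]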

Using constant values

The DBMS_LOG package provides some constants that can be used as the first parameter to the dbms_log.ksdwrt procedure.

[sourcecode language="sql"]
exec dbms_log.ksdwrt( dbms_log.trace_file + dbms_log.alert_file,
                      '-- Rebuilding Partition #1 - Begin --' );
[/sourcecode]
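If you are going to sprinkle many of these messages through a long job, a thin wrapper keeps the calls tidy. This is only a sketch (the procedure name log_msg is my own) and it relies on the constants shown above.

[sourcecode language="sql"]
create or replace procedure log_msg( p_text in varchar2 )
is
begin
   -- Write the message to both the session trace file and the alert log.
   dbms_log.ksdwrt( dbms_log.trace_file + dbms_log.alert_file, p_text );
end;
/

exec log_msg( '-- Rebuilding Partition #1 - Begin --' );
[/sourcecode]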

Rename an Oracle User

If you have ever wanted to rename a user in Oracle but didn't think you could, this is for you.

Before I continue I should mention this is not an Oracle-supported process, so continue at your own risk.  I will say this, though: I have NEVER experienced any issues with the following process.  In my case it performed perfectly, but I did not need any references to the objects in the schema being renamed to remain valid; in other words, I did not need any synonyms to stay valid.  If you do, you will need to recreate the synonyms as well as any other objects that fully reference the schema, such as <username>.<table_name>.
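If you do need to track down the synonyms that point at the schema being renamed, something like this builds the list (OLDUSERNAME is a placeholder for your schema name):

[sourcecode language="sql"]
-- Synonyms that reference objects owned by the schema being renamed.
select owner, synonym_name, table_owner, table_name
from dba_synonyms
where table_owner = 'OLDUSERNAME';
[/sourcecode]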

Well, no more stalling … let’s get to it.

First of all, make sure the username you are about to rename is not connected and then you can do the following.

[sourcecode language="sql"]
SQL> connect / as sysdba
SQL> -- CAUTION: Make sure to use CAPITAL letters for the new username.
SQL> update user$ set name = 'NEWUSERNAME' where name = 'OLDUSERNAME';
SQL> commit;
SQL> shutdown immediate
SQL> startup
SQL> -- You will need to reset the password.
SQL> alter user newusername identified by newuserpwd;
SQL> exit
[/sourcecode]
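A quick way to confirm the rename took effect (and that the account kept its original user_id) is to check DBA_USERS afterwards; just a sanity check, using the same placeholder name:

[sourcecode language="sql"]
-- The renamed account keeps its user_id and created date; only the name changes.
select username, user_id, created
from dba_users
where username = 'NEWUSERNAME';
[/sourcecode]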

Ok, I know … it seems too simple, but sometimes that’s just how it works.

Have fun…

Sangraal

One of the most irritating errors I’ve ever had to deal with is

ORA-04068: existing state of packages has been discarded.

If you have ever received this error while using connection pooling, here is what you can do about it.

The error looks like this in SQL*Plus.

[sourcecode language=”sql”]
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package "PACKAGE.PROCEDURE" has been invalidated
ORA-04065: not executed, altered or dropped package " PACKAGE.PROCEDURE "
ORA-06508: PL/SQL: could not find program unit being called: "PACKAGE.PROCEDURE"
[/sourcecode]

This error can happen when using connection pooling after packages are rebuilt or modified (such as when promoting application changes during a maintenance window). In our case it was not common to restart the web services after such a promotion, which means the web services still maintained open connections through the connection pooling feature of IIS. Leaving those connections open while promoting PL/SQL is what was causing our ORA-04068 errors.

To fix this problem all we had to do was add the following command immediately after our application code made a connection to the database.

[sourcecode language=”sql”]
begin
   dbms_session.reset_package;
end;

[/sourcecode]

The documented notes for this procedure are:

    -- Deinstantiate all packages in this session. In other words, free
    -- all package state. This is the situation at the beginning of
    -- a session.

Since our application believes it is creating a new database connection, it is safe for us to perform a package reset. However, because we were using connection pooling, a "brand new" database connection was not actually being established; although our application requested a new connection, it received a pointer to an existing connection from the IIS connection pool. So what we did was issue dbms_session.reset_package, which made the "used" database connection look like new again with respect to package state.
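If you want to see the package-state problem in isolation, here is a rough sketch you can run in two SQL*Plus sessions. The package name pkg_state is made up for the demo; it is not part of our application.

[sourcecode language="sql"]
-- A package with a global variable; calling it instantiates state in the session.
create or replace package pkg_state as
   g_counter number := 0;
   procedure bump;
end pkg_state;
/

create or replace package body pkg_state as
   procedure bump is
   begin
      g_counter := g_counter + 1;
   end bump;
end pkg_state;
/

exec pkg_state.bump;

-- Now recompile the body from a second session:
--    alter package pkg_state compile body;
-- The next call in this session raises ORA-04068 because its state was discarded.
-- Clearing the state first, as we did on each pooled connection, avoids the error:
exec dbms_session.reset_package;
exec pkg_state.bump;
[/sourcecode]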

Query the User Environment with SYS_CONTEXT

Many times I have granted permissions to various V$ views so I (or developers) could query environment information for a user.  That is mostly unnecessary when using the SYS_CONTEXT function.

Below are a few examples of what the SYS_CONTEXT function can provide.  I’ve done this in a SQL statement format so you can copy and paste the code to run for yourself.

[sourcecode language="sql" wraplines="false"]
column userenv_name  format a30
column userenv_value format a50
set linesize 85
select *
from (
       select 'ACTION' userenv_name, sys_context( 'USERENV', 'ACTION' ) userenv_value from dual
       union all
       select 'CLIENT_IDENTIFIER' userenv_name, sys_context( 'USERENV', 'CLIENT_IDENTIFIER' ) userenv_value from dual
       union all
       select 'CLIENT_INFO' userenv_name, sys_context( 'USERENV', 'CLIENT_INFO' ) userenv_value from dual
     );

exec dbms_application_info.set_action( 'Context TEST Action' );
exec dbms_session.set_identifier( USER || ' ' || SYSTIMESTAMP );
exec dbms_application_info.set_client_info( 'Context TEST Info' );

column userenv_name  format a30
column userenv_value format a50
set linesize 85
select *
from (
       select 'ACTION' userenv_name, sys_context( 'USERENV', 'ACTION' ) userenv_value from dual
       union all
       select 'CLIENT_IDENTIFIER' userenv_name, sys_context( 'USERENV', 'CLIENT_IDENTIFIER' ) userenv_value from dual
       union all
       select 'CLIENT_INFO' userenv_name, sys_context( 'USERENV', 'CLIENT_INFO' ) userenv_value from dual
       union all
       select 'CURRENT_SCHEMA' userenv_name, sys_context( 'USERENV', 'CURRENT_SCHEMA' ) userenv_value from dual
       union all
       select 'CURRENT_SCHEMAID' userenv_name, sys_context( 'USERENV', 'CURRENT_SCHEMAID' ) userenv_value from dual
       union all
       select 'DB_DOMAIN' userenv_name, sys_context( 'USERENV', 'DB_DOMAIN' ) userenv_value from dual
       union all
       select 'DB_NAME' userenv_name, sys_context( 'USERENV', 'DB_NAME' ) userenv_value from dual
       union all
       select 'DB_UNIQUE_NAME' userenv_name, sys_context( 'USERENV', 'DB_UNIQUE_NAME' ) userenv_value from dual
       union all
       select 'GLOBAL_CONTEXT_MEMORY' userenv_name, sys_context( 'USERENV', 'GLOBAL_CONTEXT_MEMORY' ) userenv_value from dual
       union all
       select 'HOST' userenv_name, sys_context( 'USERENV', 'HOST' ) userenv_value from dual
       union all
       select 'IDENTIFICATION_TYPE' userenv_name, sys_context( 'USERENV', 'IDENTIFICATION_TYPE' ) userenv_value from dual
       union all
       select 'INSTANCE' userenv_name, sys_context( 'USERENV', 'INSTANCE' ) userenv_value from dual
       union all
       select 'INSTANCE_NAME' userenv_name, sys_context( 'USERENV', 'INSTANCE_NAME' ) userenv_value from dual
       union all
       select 'ISDBA' userenv_name, sys_context( 'USERENV', 'ISDBA' ) userenv_value from dual
       union all
       select 'LANG' userenv_name, sys_context( 'USERENV', 'LANG' ) userenv_value from dual
       union all
       select 'LANGUAGE' userenv_name, sys_context( 'USERENV', 'LANGUAGE' ) userenv_value from dual
       union all
       select 'MODULE' userenv_name, sys_context( 'USERENV', 'MODULE' ) userenv_value from dual
       union all
       select 'NLS_CALENDAR' userenv_name, sys_context( 'USERENV', 'NLS_CALENDAR' ) userenv_value from dual
       union all
       select 'NLS_CURRENCY' userenv_name, sys_context( 'USERENV', 'NLS_CURRENCY' ) userenv_value from dual
       union all
       select 'NLS_DATE_FORMAT' userenv_name, sys_context( 'USERENV', 'NLS_DATE_FORMAT' ) userenv_value from dual
       union all
       select 'NLS_DATE_LANGUAGE' userenv_name, sys_context( 'USERENV', 'NLS_DATE_LANGUAGE' ) userenv_value from dual
       union all
       select 'NLS_SORT' userenv_name, sys_context( 'USERENV', 'NLS_SORT' ) userenv_value from dual
       union all
       select 'NLS_TERRITORY' userenv_name, sys_context( 'USERENV', 'NLS_TERRITORY' ) userenv_value from dual
       union all
       select 'OS_USER' userenv_name, sys_context( 'USERENV', 'OS_USER' ) userenv_value from dual
       union all
       select 'SERVER_HOST' userenv_name, sys_context( 'USERENV', 'SERVER_HOST' ) userenv_value from dual
       union all
       select 'SERVICE_NAME' userenv_name, sys_context( 'USERENV', 'SERVICE_NAME' ) userenv_value from dual
       union all
       select 'SESSION_USER' userenv_name, sys_context( 'USERENV', 'SESSION_USER' ) userenv_value from dual
       union all
       select 'SESSION_USERID' userenv_name, sys_context( 'USERENV', 'SESSION_USERID' ) userenv_value from dual
       union all
       select 'SESSIONID' userenv_name, sys_context( 'USERENV', 'SESSIONID' ) userenv_value from dual
       union all
       select 'SID' userenv_name, sys_context( 'USERENV', 'SID' ) userenv_value from dual
       union all
       select 'STATEMENTID' userenv_name, sys_context( 'USERENV', 'STATEMENTID' ) userenv_value from dual
       union all
       select 'TERMINAL' userenv_name, sys_context( 'USERENV', 'TERMINAL' ) userenv_value from dual
     );
[/sourcecode]

Below is sample output from the query.

[sourcecode language=”sql” wraplines=”false”]
USERENV_NAME                   USERENV_VALUE
—————————— ————————————————–
ACTION                         Context TEST Action
CLIENT_IDENTIFIER              NOVAPRD 09-NOV-10 22.07.32.384737000 AM -08:00
CLIENT_INFO                    Context TEST Info
CURRENT_SCHEMA                 MICHAEL
CURRENT_SCHEMAID               36
DB_DOMAIN                      my.domain
DB_NAME                        fmd
DB_UNIQUE_NAME                 fmd
GLOBAL_CONTEXT_MEMORY          0
HOST                           DOMAIN\MICHAELDESK
IDENTIFICATION_TYPE            LOCAL
INSTANCE                       1
INSTANCE_NAME                  fmd
ISDBA                          FALSE
LANG                           US
LANGUAGE                       AMERICAN_AMERICA.WE8ISO8859P1
MODULE                         SQL*Plus
NLS_CALENDAR                   GREGORIAN
NLS_CURRENCY                   $
NLS_DATE_FORMAT                DD-MON-RR
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TERRITORY                  AMERICA
OS_USER                        michael
SERVER_HOST                    dbhome01
SERVICE_NAME                   fmd.my.domain
SESSIONID                      65418102
SESSION_USER                   MICHAEL
SESSION_USERID                 36
SID                            1525
STATEMENTID
TERMINAL                       MICHAELDESK
[/sourcecode]

Using Advanced Query Rewrite to Fix Vendor SQL

Have you ever had vendor software that performed poorly and there was no way to tune it by creating indexes – or the vendor restricted you from trying?  If so, here is a way you may be able to help the offending query.

Advanced Query Rewrite (DBMS_ADVANCED_REWRITE) gives the DBA a powerful way to influence query execution.  Using it, you tell Oracle that when it sees a particular query it should run something different.  It is similar to query rewrite with materialized views (which are sadly underused in Oracle), except that here you specify the exact query to substitute for another query – in our case, a poorly performing one.

The Situation

I was recently faced with a poorly performing query in some vendor software (StarTeam Datamart Extractor).  The query was executing for 2 hours, and we recognized that if it were written a little differently we would get the same result set with a much better execution plan.  So we could either get the vendor to change their software – which is not easy – or use Advanced Query Rewrite to substitute the better query whenever Oracle sees the poorly performing one.

How we fixed it (Step-by-Step)

  1. Configure the Oracle user with the proper permissions
  2. Identify the bad query
  3. Identify the query with better performance
  4. Tell Oracle to substitute the bad query with the good query
  5. Test to make sure Oracle runs the correct query
  6. Set the query_rewrite_integrity session parameter using a logon trigger

Configure the Oracle user with the proper permissions

The user who will create the rule for advanced query rewrite requires a couple of privileges so let’s do that first.  This assumes the owner of the StarTeam Datamart schema is named stde.

[sourcecode language=”sql”]
grant execute on dbms_advanced_rewrite to stde;
grant create materialized view to stde;
[/sourcecode]

Identify the bad query

Let’s have a look at the query that was performing poorly.  As you can see there were 107248578 consistent gets and this query ran for roughly 2 hours.  This is obviously unacceptable.

[sourcecode language=”sql” wraplines=”false”]
select  *
from    st_files
where   serverid = 1
and     id not in
(
select id
from st_viewmember
where serverid = 1
and classid = 31
);

——————————————————————————————-
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
——————————————————————————————-
|   0 | SELECT STATEMENT            |             |   193K|    40M|   115M  (1)|383:55:25 |
|   1 |  TABLE ACCESS BY INDEX ROWID| ST_FILES    |  9680 |  2079K|  1518   (1)| 00:00:19 |
|*  2 |   INDEX RANGE SCAN          | PK_ST_FILES |  9680 |       |   489   (1)| 00:00:06 |
|*  3 |    INDEX RANGE SCAN         | IND_CLASSID |     1 |    12 |   630   (1)| 00:00:08 |
——————————————————————————————-
Statistics
———————————————————-
0  recursive calls
0  db block gets
107248578  consistent gets
1469  physical reads
0  redo size
2382  bytes sent via SQL*Net to client
338  bytes received via SQL*Net from client
2  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
1  rows processed
[/sourcecode]

Identify the query with better performance

Below we have changed the NOT IN to a NOT EXISTS.  This improved the query performance dramatically: the consistent gets decreased by more than 99% (from 107248578 to 7487).

[sourcecode language=”sql” wraplines=”false”]
set autotrace traceonly explain statistics

select  *
from    st_files
where   serverid = 1
and     not exists
(
select null
from st_viewmember
where serverid = 1
and classid = 31
and st_files.id = st_viewmember.id
);

———————————————————————————————
| Id  | Operation             | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
———————————————————————————————
|   0 | SELECT STATEMENT      |             |     1 |   232 |       |  4561   (1)| 00:00:55 |
|*  1 |  HASH JOIN RIGHT ANTI |             |     1 |   232 |  7968K|  4561   (1)| 00:00:55 |
|*  2 |   INDEX FAST FULL SCAN| IND_CLASSID |   339K|  3981K|       |   359   (2)| 00:00:05 |
|*  3 |   TABLE ACCESS FULL   | ST_FILES    |   193K|    40M|       |  1686   (1)| 00:00:21 |
———————————————————————————————

Statistics
———————————————————-
0  recursive calls
0  db block gets
7487  consistent gets
6493  physical reads
0  redo size
2382  bytes sent via SQL*Net to client
338  bytes received via SQL*Net from client
2  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
1  rows processed
[/sourcecode]

Tell Oracle to substitute the bad query with the good query

Using the DBMS_ADVANCED_REWRITE package we can now tell Oracle that whenever it sees the bad query we want it to run our new and improved query in its place.  Oracle does this behind the scenes and the user has no knowledge of the switch.

[sourcecode language="sql"]
begin
sys.dbms_advanced_rewrite.declare_rewrite_equivalence( 'fix_stde_01',
'select *
from    st_files
where   serverid = 1
and     id not in
(
select id
from   st_viewmember
where  serverid = 1
and    classid = 31
)',
'select *
from st_files
where serverid = 1
and not exists
(
select null
from   st_viewmember
where  serverid = 1
and    classid = 31
and    st_files.id = st_viewmember.id
)',
false );
end;
/
[/sourcecode]
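It is also handy to know how to see and remove an equivalence later on. A hedged sketch using the DBA_REWRITE_EQUIVALENCES view and the drop procedure from the same package:

[sourcecode language="sql"]
-- List the rewrite equivalences that have been declared.
select owner, name, rewrite_mode
from dba_rewrite_equivalences;

-- Remove the equivalence when it is no longer needed.
begin
   sys.dbms_advanced_rewrite.drop_rewrite_equivalence( 'fix_stde_01' );
end;
/
[/sourcecode]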

Test to make sure Oracle runs the correct query

Now we should log in and run the old query to make sure Oracle is "rewriting" the query.  This does require setting a session parameter named query_rewrite_integrity so that Oracle is permitted to rewrite the query for us.

[sourcecode language=”sql”]
alter session set query_rewrite_integrity=trusted;

set autotrace traceonly explain statistics

select *
from st_files
where serverid = 1
and id not in(
select id
from st_viewmember
where serverid = 1
and classid = 31 );
[/sourcecode]

Set the query_rewrite_integrity session parameter using a logon trigger

Since this is vendor software that we cannot change we need to come up with a way to set the query_rewrite_integrity so that our new, better performing, query gets executed.  We have two options that I know of.

Set the query_rewrite_integrity for the entire database by issuing the following:

[sourcecode language=”sql”]
alter system set query_rewrite_integrity=trusted;
[/sourcecode]

Set the query_rewrite_integrity for the STDE user only by issuing the following:

[sourcecode language=”sql”]
alter session set query_rewrite_integrity=trusted;
[/sourcecode]

It is highly unlikely you are going to find a DBA who is willing to set query_rewrite_integrity at the database-wide level, so we will probably need to set it on an individual basis.  This is where a database logon trigger can give us what we need.

Our goal is to set the query_rewrite_integrity parameter for any session logging in as the STDE user.  The following database trigger can take care of that quite nicely.

[sourcecode language="sql" wraplines="false"]
create or replace trigger alter_session_parameters
after logon on database
declare
   s_username varchar2(30);
begin
   s_username := sys_context( 'userenv', 'session_user' );
   if s_username = 'STDE' then
      execute immediate 'alter session set query_rewrite_integrity=trusted';
   end if;
exception
   when others then
      null;
end;
/
[/sourcecode]

Testing to make sure we have query_rewrite_integrity=trusted when logging in as STDE

Run the following query to verify that when you log in as the stde user your query_rewrite_integrity session variable is set to “trusted”.  This will enable the session to benefit from the advanced query rewrite feature.

[sourcecode language="sql"]
connect stde/stde@stde

select *
from v$ses_optimizer_env
where sid = ( select sid from v$mystat where rownum = 1 )
and name = 'query_rewrite_integrity';

 SID   ID NAME                           ISD VALUE
---- ---- ------------------------------ --- ----------
 151   71 query_rewrite_integrity        NO  trusted


-- NOTE: If you get a 'table or view does not exist' error you will
-- need to grant permissions as such.

grant select_catalog_role to stde;
[/sourcecode]

Fixed Object Stats (GATHER_FIXED_OBJECT_STATS)

The other day my boss sent me an email telling me that we must have fixed object stats (GATHER_FIXED_OBJECT_STATS) in the database. He also included a link to an article explaining why we have to have fixed object stats. I had never run fixed object stats and thought I had missed something big. At first I was embarrassed I had missed it, but realizing that not everyone knows everything I quickly got over that feeling. I figured here was one more thing I could learn, so I did some testing.

Part of the reason I did some testing was to see if I had missed something terrible and whether my databases had been suffering in performance because of my own negligence. Below is my research – as Tom Kyte says on his Ask Tom web site and in his books, "put up or shut up" – so let's get to it.  The article mentioned above also points to further reading on the topic.

So… what are fixed object stats? Fixed object stats are statistics gathered on the lowest-level Oracle objects there are, commonly referred to as the "X$ tables". The statistics on the X$ tables can be affected by changing init.ora parameters, and the few articles I read (along with the Oracle documentation) basically explain that it is a good idea to gather fixed object stats after a change to the init.ora parameters.
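Before gathering anything, it is easy to check whether (and when) fixed object stats have ever been collected in your database; a simple dictionary query along these lines works:

[sourcecode language="sql"]
-- Fixed tables that currently have statistics, and when they were last analyzed.
select count(*) analyzed_fixed_tables,
       max( last_analyzed ) last_gathered
from dba_tab_statistics
where object_type = 'FIXED TABLE'
and last_analyzed is not null;
[/sourcecode]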

What Do Fixed Stats Do To Execution Plans

With no fixed object stats this is what I see.

[sourcecode language=”sql”]
SQL> select * from gv$process;

Execution Plan
———————————————————-
Plan hash value: 3699664968
—————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
—————————————————————————-
| 0 | SELECT STATEMENT | | 5 | 1685 | 0 (0)| 00:00:01 |
|* 1 | FIXED TABLE FULL| X$KSUPR | 5 | 1685 | 0 (0)| 00:00:01 |
—————————————————————————-
[/sourcecode]

Next, I gather stats on the fixed objects and check the execution plan again.

[sourcecode language="sql"]
SQL> exec dbms_stats.gather_fixed_objects_stats;
SQL> select * from gv$process;

Execution Plan
———————————————————-
Plan hash value: 3699664968
—————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
—————————————————————————-
| 0 | SELECT STATEMENT | | 75 | 8175 | 0 (0)| 00:00:01 |
| 1 | FIXED TABLE FULL| X$KSUPR | 75 | 8175 | 0 (0)| 00:00:01 |
—————————————————————————-
[/sourcecode]

Hmmm. From the looks of things there could be some drastically different query plans for the V$ performance views based on running fixed object stats. However, so far the data is inconclusive so we have to keep digging.

Since I have read the stats will be different based on changing init.ora parameters (see links from above) that is where I go next. Let’s change the processes parameter and see what happens.

[sourcecode language=”sql”]
SQL> alter system set processes=1000 scope=spfile;
SQL> startup force
SQL> exec dbms_stats.gather_fixed_objects_stats;
SQL> select * from gv$process;
SQL> set autotrace traceonly explain
SQL> select * from gv$process;

Execution Plan
———————————————————-
Plan hash value: 3699664968
—————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
—————————————————————————-
| 0 | SELECT STATEMENT | | 500 | 47500 | 0 (0)| 00:00:01 |
| 1 | FIXED TABLE FULL| X$KSUPR | 500 | 47500 | 0 (0)| 00:00:01 |
—————————————————————————-
[/sourcecode]

Ok, now the row estimate has jumped to 500, and if more than just the X$KSUPR table were involved the plan could have changed dramatically. Next I want to look at the difference in statistics between having no fixed object stats and having them present. I ran the following query two times: 1) with no fixed object stats; 2) with fixed object stats gathered and processes=1000.

[sourcecode language=”sql”]
select ft.name, ts.rowcnt
from v$fixed_table ft, tab_stats$ ts
where ft.object_id = ts.obj#
order by ft.name;
[/sourcecode]

What I saw after running the query was that several X$ tables had row-count differences large enough that I thought they could change query plans. The tables with the largest differences are listed here.

[sourcecode language=”sql”]
                 NO STATS   WITH STATS
NAME               ROWCNT       ROWCNT
------------ ------------ ------------
X$KCBBF 750 5000
X$KDNSSF 170 1105
X$KEWSSESV 10710 69615
X$KNSTACR 170 1105
X$KNSTASL 170 1105
X$KNSTCAP 170 1105
X$KNSTMVR 170 1105
X$KNSTRPP 170 1105
X$KQRFP 1626 1808
X$KQRFS 202 236
X$KSLCS 2040 13260
X$KSLES 48960 318240
X$KSLLT 18725 20835
X$KSMDD 208 248
X$KSMPGST 900 6000
X$KSMSPR 45 57
X$KSMSP_NWEX 15 19
X$KSQEQ 2496 13920
X$KSQRS 976 5088
X$KSULOP 2 3
X$KSUPR 150 1000
X$KSURU 1700 11050
X$KSUSE 170 1105
X$KSUSECON 170 1105
X$KSUSECST 170 1105
X$KSUSESTA 65450 425425
X$KSUSEX 170 1105
X$KSUSIO 170 1105
X$KSUSM 170 1105
X$KTADM 752 4864
X$KTCXB 184 1196
X$KTFBFE 6 7
X$KTIFP 18 121
X$KXFPDP 135 320
[/sourcecode]

The next thing I wanted to do was to find out what V$ performance views used the list of X$ tables listed above. Then I could do some testing to see if any of the V$ views would perform differently based on having stats or not having stats.

I ran the query below to find a list of V$ views which use the X$ tables from above.

[sourcecode language="sql"]
select distinct view_name
from v$fixed_view_definition
where upper( view_definition ) like '%X$KCBBF%'
or upper( view_definition ) like '%X$KDNSSF%'
or upper( view_definition ) like '%X$KEWSSESV%'
or upper( view_definition ) like '%X$KNSTACR%'
or upper( view_definition ) like '%X$KNSTASL%'
or upper( view_definition ) like '%X$KNSTCAP%'
or upper( view_definition ) like '%X$KNSTMVR%'
or upper( view_definition ) like '%X$KNSTRPP%'
or upper( view_definition ) like '%X$KQRFP%'
or upper( view_definition ) like '%X$KQRFS%'
or upper( view_definition ) like '%X$KSLCS%'
or upper( view_definition ) like '%X$KSLES%'
or upper( view_definition ) like '%X$KSLLT%'
or upper( view_definition ) like '%X$KSMDD%'
or upper( view_definition ) like '%X$KSMPGST%'
or upper( view_definition ) like '%X$KSMSPR%'
or upper( view_definition ) like '%X$KSMSP_NWEX%'
or upper( view_definition ) like '%X$KSQEQ%'
or upper( view_definition ) like '%X$KSQRS%'
or upper( view_definition ) like '%X$KSULOP%'
or upper( view_definition ) like '%X$KSUPR%'
or upper( view_definition ) like '%X$KSURU%'
or upper( view_definition ) like '%X$KSUSE%'
or upper( view_definition ) like '%X$KSUSECON%'
or upper( view_definition ) like '%X$KSUSECST%'
or upper( view_definition ) like '%X$KSUSESTA%'
or upper( view_definition ) like '%X$KSUSEX%'
or upper( view_definition ) like '%X$KSUSIO%'
or upper( view_definition ) like '%X$KSUSM%'
or upper( view_definition ) like '%X$KTADM%'
or upper( view_definition ) like '%X$KTCXB%'
or upper( view_definition ) like '%X$KTFBFE%'
or upper( view_definition ) like '%X$KTIFP%'
or upper( view_definition ) like '%X$KXFPDP%';
[/sourcecode]

I won't go into all of the differences I noticed; I'm just going to concentrate on where I saw the biggest differences.

What I did next was to use autotrace to show the explain plan for all of the V$ views and spool the output to files; a sketch of how that can be scripted follows. I had output similar to the execution plans above for several V$ views and then compared the differences. Finally, I took the V$ views with the biggest differences and ran them again, but this time I included statistics in the output.
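The spooling itself is nothing fancy; something along the lines of the sketch below generates a script that selects from each GV$ view so the plans can be captured in one pass (the file names are made up):

[sourcecode language="sql"]
-- Generate a script with one SELECT per GV$ view.
set pagesize 0 feedback off verify off
spool explain_gv_views.sql
select 'select * from ' || view_name || ';'
from v$fixed_view_definition
where view_name like 'GV$%';
spool off

-- Run the generated script with autotrace on and capture the plans.
set autotrace traceonly explain statistics
spool gv_view_plans.txt
@explain_gv_views.sql
spool off
[/sourcecode]

With that in hand, here is what I saw for GV$LOCK, first without and then with fixed object stats.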

With No Fixed Object Stats

[sourcecode language=”sql” wraplines=”false”]
SQL> set autotrace traceonly explain statistics
SQL> select * from gv$lock;
14 rows selected.

Execution Plan
———————————————————-
Plan hash value: 2514336078

——————————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————–
| 0 | SELECT STATEMENT | | 10 | 1560 | 1 (100)| 00:00:01 |
| 1 | NESTED LOOPS | | 10 | 1560 | 1 (100)| 00:00:01 |
|* 2 | HASH JOIN | | 10 | 1210 | 1 (100)| 00:00:01 |
|* 3 | VIEW | GV$_LOCK | 10 | 890 | 0 (0)| 00:00:01 |
| 4 | UNION-ALL | | | | | |
|* 5 | VIEW | GV$_LOCK1 | 2 | 178 | 0 (0)| 00:00:01 |
| 6 | UNION-ALL | | | | | |
|* 7 | FIXED TABLE FULL | X$KDNSSF | 1 | 102 | 0 (0)| 00:00:01 |
|* 8 | FIXED TABLE FULL | X$KSQEQ | 1 | 102 | 0 (0)| 00:00:01 |
|* 9 | FIXED TABLE FULL | X$KTADM | 1 | 102 | 0 (0)| 00:00:01 |
|* 10 | FIXED TABLE FULL | X$KTATRFIL | 1 | 102 | 0 (0)| 00:00:01 |
|* 11 | FIXED TABLE FULL | X$KTATRFSL | 1 | 102 | 0 (0)| 00:00:01 |
|* 12 | FIXED TABLE FULL | X$KTATL | 1 | 102 | 0 (0)| 00:00:01 |
|* 13 | FIXED TABLE FULL | X$KTSTUSC | 1 | 102 | 0 (0)| 00:00:01 |
|* 14 | FIXED TABLE FULL | X$KTSTUSS | 1 | 102 | 0 (0)| 00:00:01 |
|* 15 | FIXED TABLE FULL | X$KTSTUSG | 1 | 102 | 0 (0)| 00:00:01 |
|* 16 | FIXED TABLE FULL | X$KTCXB | 1 | 102 | 0 (0)| 00:00:01 |
| 17 | FIXED TABLE FULL | X$KSUSE | 100 | 3200 | 0 (0)| 00:00:01 |
|* 18 | FIXED TABLE FIXED INDEX| X$KSQRS (ind:1) | 1 | 35 | 0 (0)| 00:00:01 |
——————————————————————————————–

Statistics
———————————————————-
1877 recursive calls
1 db block gets
229 consistent gets
3 physical reads
0 redo size
1330 bytes sent via SQL*Net to client
338 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed
[/sourcecode]

With Fixed Object Stats

[sourcecode language=”sql” wraplines=”false”]
SQL> select * from gv$lock;
14 rows selected.

Execution Plan
———————————————————-
Plan hash value: 506911936
————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
————————————————————————————-
| 0 | SELECT STATEMENT | | 11609 | 1394K| 13 (100)| 00:00:01 |
|* 1 | HASH JOIN | | 11609 | 1394K| 13 (100)| 00:00:01 |
| 2 | FIXED TABLE FULL | X$KSQRS | 5088 | 91584 | 2 (100)| 00:00:01 |
|* 3 | HASH JOIN | | 11609 | 1190K| 10 (100)| 00:00:01 |
| 4 | FIXED TABLE FULL | X$KSUSE | 1105 | 17680 | 0 (0)| 00:00:01 |
|* 5 | VIEW | GV$_LOCK | 11609 | 1008K| 9 (100)| 00:00:01 |
| 6 | UNION-ALL | | | | | |
|* 7 | VIEW | GV$_LOCK1 | 11601 | 1008K| 7 (100)| 00:00:01 |
| 8 | UNION-ALL | | | | | |
|* 9 | FIXED TABLE FULL| X$KDNSSF | 1 | 40 | 0 (0)| 00:00:01 |
|* 10 | FIXED TABLE FULL| X$KSQEQ | 11600 | 453K| 6 (100)| 00:00:01 |
|* 11 | FIXED TABLE FULL | X$KTADM | 1 | 40 | 2 (100)| 00:00:01 |
|* 12 | FIXED TABLE FULL | X$KTATRFIL | 1 | 36 | 0 (0)| 00:00:01 |
|* 13 | FIXED TABLE FULL | X$KTATRFSL | 1 | 36 | 0 (0)| 00:00:01 |
|* 14 | FIXED TABLE FULL | X$KTATL | 1 | 36 | 0 (0)| 00:00:01 |
|* 15 | FIXED TABLE FULL | X$KTSTUSC | 1 | 36 | 0 (0)| 00:00:01 |
|* 16 | FIXED TABLE FULL | X$KTSTUSS | 1 | 40 | 0 (0)| 00:00:01 |
|* 17 | FIXED TABLE FULL | X$KTSTUSG | 1 | 36 | 0 (0)| 00:00:01 |
|* 18 | FIXED TABLE FULL | X$KTCXB | 1 | 38 | 1 (100)| 00:00:01 |
————————————————————————————-

Statistics
———————————————————-
1895 recursive calls
1 db block gets
337 consistent gets
7 physical reads
0 redo size
1347 bytes sent via SQL*Net to client
338 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed
[/sourcecode]

What I noticed first of all was the difference in the number of rows and bytes the optimizer had to work with.  In the statistics there were more consistent gets when fixed object stats were present. There were also more recursive calls, but all of these tests were done right after restarting the database.  After the query has run once and the data is in the cache, the recursive calls decrease to zero for each subsequent execution – as do the consistent gets and physical reads. In subsequent executions all of the statistics numbers were the same with or without stats. Interesting…

I guess the moral of this story is that I shouldn't be worried that I wasn't gathering fixed object stats, because the performance difference was not large enough to be overly concerned about. I should also note that gathering stats didn't help or hurt in this situation. However, there may be init.ora parameters that cause a bigger change to the data in the X$ tables – such as parameters dealing with Oracle Streams – but I'm not sure.

I suppose in the end I can just be thankful that I learned something this week that I didn’t know last week and I’m always happy with that. I’ll also make note to run DBMS_STATS.GATHER_FIXED_OBJECT_STATS whenever I make changes to the init.ora parameters.

If you have any more information to share on this topic please contribute so we can all benefit.

Oracle – Sessions Logged In

Often it is nice to see who is logged in to a database.  This is a simple script I use.
This script is called sid.sql

[sourcecode language="sql"]
set linesize 132
set pagesize 100

ttitle on
ttitle center '*****  Connected Users  *****' skip 2

clear breaks

column sid                format 9999         heading 'SID'
column serial#            format 99999        heading 'Ser#'
column username           format a18          heading 'User'
column osuser             format a15          heading 'OS User'
column status             format a8           heading 'Status'
column program            format a40          heading 'Program'
column machine            format a25          heading 'Machine'
column last_call_et       format 999999       heading 'Last'

SELECT s.sid, s.serial#, s.username, s.osuser, s.status,
       SUBSTR( NVL( s.module, s.program ), 1, 40 ) program,
       s.machine, s.last_call_et
FROM   v$session s
WHERE  s.username IS NOT NULL
AND    s.type <> 'BACKGROUND';

ttitle off
clear breaks
[/sourcecode]

This script is designed to run in a window that is at least 132 characters wide.
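If you only care about one account, a small variation with a substitution variable works; a sketch (it prompts for the username):

[sourcecode language="sql"]
SELECT s.sid, s.serial#, s.username, s.osuser, s.status,
       SUBSTR( NVL( s.module, s.program ), 1, 40 ) program,
       s.machine, s.last_call_et
FROM   v$session s
WHERE  s.username = UPPER( '&username' )
AND    s.type <> 'BACKGROUND';
[/sourcecode]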

Just a note: I am on a Windows machine and my databases are on Linux, so I use X terminal software named X-Win32.  In case you are looking for X terminal software for Windows, I like this one and it doesn't cost much.