Oracle: Show installed components/options and feature-usage

To check which components are installed in a database:

SELECT * FROM DBA_REGISTRY;

This view also shows the schema corresponding to each component, as well as its related schemas.

To check which options are enabled:

SELECT * FROM V$OPTION;
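To list only the enabled options, we can filter on the VALUE column (this needs to run against a live instance):

```sql
-- options with VALUE = 'TRUE' are enabled in this installation
SELECT parameter
  FROM v$option
 WHERE value = 'TRUE'
 ORDER BY parameter;
```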

To check which features are used:

SELECT * FROM DBA_FEATURE_USAGE_STATISTICS;
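Since DBA_FEATURE_USAGE_STATISTICS can contain many rows, restricting it to features currently in use gives a quicker overview; a sketch:

```sql
-- features currently flagged as in use, with their usage counters
SELECT name, version, detected_usages, last_usage_date
  FROM dba_feature_usage_statistics
 WHERE currently_used = 'TRUE'
 ORDER BY name;
```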

Oracle: “LOGOFF BY CLEANUP” Action Name in DBA_AUDIT_TRAIL

A developer reported problems with a Tomcat server: established connection-pool sessions suddenly ceased to work, but apparently had not disconnected. He blamed the database for the mess.

So I checked DBA_AUDIT_TRAIL for that server's connections. There were tons of normal “LOGON”/“LOGOFF” actions recorded, but occasionally also a bunch of “LOGOFF BY CLEANUP” actions. “CLEANUP” smelled to me like PMON cleaning up abandoned connections whose client sides had died. To verify my assumption I checked the Oracle support docs at MOS (Doc ID 274697.1), which state:

(...)
"LOGOFF BY CLEANUP" Action Name in SYS.AUD$ Table
-------------------------------------------------
This audit entry is written by PMON when it clears a dead process/session once
the DBA enabled auditing on :

CREATE SESSION or
CONNECT or
SESSION

When a server process dies abruptly, it would not have got the opportunity to
put the audit record.

The PMON finds this dead process and starts cleaning up this dead process.
At this time, PMON writes this audit record indicating the process termination
was abnormal.

The known situations when we have a dead process is when the server process is
either killed at the OS level or within the database using the ALTER SYSTEM KILL
SESSION command.
When the server process cannot contact the client, the server process exits
gracefully.
This does not constitute a LOGOFF BY CLEANUP record, but a LOGOFF record.
(...)

This seemed to prove me wrong, as the note says the problem lies on the DB-server side. So I started examining my otherwise smoothly running DB server for anomalies that could be the root of that evil. But in vain: “LOGOFF BY CLEANUP” actions kept happening, leaving me without a clue.

Remembering my first assumption, I set up a test case to check which action is really recorded in the audit trail when I simply kill my connected sqlplus process on the client side. According to the note above, this should end up as a “gracefully exited server process” with a plain “LOGOFF” record. But guess what…

After some testing I found the above MOS-Note to be wrong and misleading. According to my tests, the opposite of the note’s assertion is true.

As I presumed, “LOGOFF BY CLEANUP” is recorded when the client side vanishes without a clean disconnect. It is the client process that “dies abruptly”, not the server process.
Killing the session via “ALTER SYSTEM KILL SESSION” or killing the server process at the OS level leaves no trace in the audit log, contradicting the note above.
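The relevant audit records can be inspected with a plain select on DBA_AUDIT_TRAIL; the host name in the filter is a hypothetical example:

```sql
-- recent logon/logoff events for a given client machine
SELECT username, action_name, timestamp, logoff_time
  FROM dba_audit_trail
 WHERE action_name IN ('LOGON', 'LOGOFF', 'LOGOFF BY CLEANUP')
   AND userhost = 'tomcat-host'   -- hypothetical host name
 ORDER BY timestamp DESC;
```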

Tested with Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production.

Oracle: Getting the DDL for scheduler-jobs

Getting the DDL for tables, views etc. is easy using the package DBMS_METADATA:

DBMS_METADATA.GET_DDL (
object_type     IN VARCHAR2,
name            IN VARCHAR2,
schema          IN VARCHAR2 DEFAULT NULL,
version         IN VARCHAR2 DEFAULT 'COMPATIBLE',
model           IN VARCHAR2 DEFAULT 'ORACLE',
transform       IN VARCHAR2 DEFAULT 'DDL')
RETURN CLOB;

For example, to extract the DDL for table HELP in the schema SYSTEM we use this:

SELECT DBMS_METADATA.GET_DDL('TABLE', 'HELP', 'SYSTEM') FROM DUAL;

and we get this:

"
  CREATE TABLE "SYSTEM"."HELP" 
   (	"TOPIC" VARCHAR2(50) NOT NULL ENABLE, 
	"SEQ" NUMBER NOT NULL ENABLE, 
	"INFO" VARCHAR2(80), 
	 CONSTRAINT "HELP_TOPIC_SEQ" PRIMARY KEY ("TOPIC", "SEQ")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS 
  STORAGE(INITIAL 16384 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM"  ENABLE
   ) PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255 
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 49152 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM" "
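As a side note, the output can be made more readable. sqlplus needs to be told to show the whole CLOB, and the storage clauses above can be suppressed via DBMS_METADATA's session transform parameters; a sketch:

```sql
-- show the full CLOB in sqlplus
SET LONG 100000
SET PAGESIZE 0

-- omit storage attributes and append a SQL terminator
BEGIN
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', FALSE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
END;
/
```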

But if we try to adapt this to get the DDL of a scheduler job, we get stuck:

select dbms_metadata.get_ddl('JOB','JOB01') from dual;

ORA-31604: invalid NAME parameter "NAME" for object type JOB in function SET_FILTER
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 116
ORA-06512: at "SYS.DBMS_METADATA_INT", line 4517
ORA-06512: at "SYS.DBMS_METADATA_INT", line 8252
ORA-06512: at "SYS.DBMS_METADATA", line 2752
ORA-06512: at "SYS.DBMS_METADATA", line 2639
ORA-06512: at "SYS.DBMS_METADATA", line 4220
ORA-06512: at line 1
31604. 00000 -  "invalid %s parameter \"%s\" for object type %s in function %s"
*Cause:    The specified parameter value is not valid for this object type.
*Action:   Correct the parameter and try the call again.

The same happens for scheduler programs and schedules:

select dbms_metadata.get_ddl('SCHEDULE','EVERY_MONDAY_800PM') from dual;

ORA-31600: invalid input value SCHEDULE for parameter OBJECT_TYPE in function GET_DDL
ORA-06512: at "SYS.DBMS_METADATA", line 2576
ORA-06512: at "SYS.DBMS_METADATA", line 2627
ORA-06512: at "SYS.DBMS_METADATA", line 4220
ORA-06512: at line 1
31600. 00000 -  "invalid input value %s for parameter %s in function %s"
*Cause:    A NULL or invalid value was supplied for the parameter.
*Action:   Correct the input value and try the call again.

The trick is to use “PROCOBJ” (short for ‘procedural object’) as the object type for all scheduler objects, i.e. jobs, schedules and programs:

SELECT DBMS_METADATA.GET_DDL('PROCOBJ', '<job, schedule or program name>') FROM DUAL;

select dbms_metadata.get_ddl('PROCOBJ','REBUILD_INDEXES') from dual;

"  
BEGIN 
dbms_scheduler.create_job('"REBUILD_INDEXES"',
job_type=>'STORED_PROCEDURE', 
job_action=>'REBUILDINDEXES', 
number_of_arguments=>0,
start_date=>TO_TIMESTAMP_TZ('07-MAR-2014 12.00.00,000000000 AM CET','DD-MON-RRRR HH.MI.SSXFF AM TZR','NLS_DATE_LANGUAGE=english'), 
repeat_interval=> 'FREQ=DAILY; BYHOUR=20', 
end_date=>NULL,
job_class=>'"DEFAULT_JOB_CLASS"', 
enabled=>FALSE, 
auto_drop=>FALSE,
comments=> 'Rebuild all indexes dependent on the Index-to-Table-Size-Ratio'
);
dbms_scheduler.enable('"REBUILD_INDEXES"');
COMMIT; 
END; 
"
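Building on this, the DDL for all scheduler jobs of a whole schema can be pulled in one go (SCOTT is a placeholder owner):

```sql
SELECT dbms_metadata.get_ddl('PROCOBJ', job_name, owner) AS job_ddl
  FROM dba_scheduler_jobs
 WHERE owner = 'SCOTT';   -- placeholder schema
```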

Edit 2014-08-05:
Please note that it is not possible to get the DDL for scheduler-jobs owned by user SYS.

[oracle@testsrv ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Aug 5 13:27:34 2014
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select owner, job_name, job_type from dba_scheduler_jobs where job_name='PURGE_AUDIT_TRAIL';

OWNER                          JOB_NAME                       JOB_TYPE
------------------------------ ------------------------------ ----------------
SYS                            PURGE_AUDIT_TRAIL              PLSQL_BLOCK

SQL>

SQL> SELECT DBMS_METADATA.GET_DDL('PROCOBJ','PURGE_AUDIT_TRAIL','SYS') FROM DUAL;
ERROR:
ORA-31603: object "PURGE_AUDIT_TRAIL" of type PROCOBJ not found in schema "SYS"
ORA-06512: at "SYS.DBMS_METADATA", line 5088
ORA-06512: at "SYS.DBMS_METADATA", line 7589
ORA-06512: at line 1

no rows selected

SQL>

As per MOS-Notes this is “because by using PROCOBJ the package [DBMS_METADATA] is using similar functionality as which datapump export would use and as known SYS objects are marked as non-exportable” (Doc ID 567504.1).
This note also states that the inability to extract scheduler DDL using “JOB” as the object type is filed as a bug with Oracle. So there is hope that one day we will have a more intuitive way of extracting scheduler DDL.

Oracle: Short introduction to the Oracle Scheduler

The Oracle Scheduler is a nice alternative to cron jobs and the like, and it is not as complicated as it may seem at first glance.

With the package DBMS_SCHEDULER it is possible to define very granular scheduler objects. There are “programs” that define what is to be executed, and “schedules” that define when it runs. These can be combined into “jobs”. Even job chains are possible, where the next job in the chain is started depending on certain events.

But if we just want a certain procedure to be executed at a regular interval, it is quite easy. We just need DBMS_SCHEDULER.CREATE_JOB:

DBMS_SCHEDULER.CREATE_JOB (
   job_name             IN VARCHAR2,
   job_type             IN VARCHAR2,
   job_action           IN VARCHAR2,
   number_of_arguments  IN PLS_INTEGER              DEFAULT 0,
   start_date           IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
   repeat_interval      IN VARCHAR2                 DEFAULT NULL,
   end_date             IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
   job_class            IN VARCHAR2                 DEFAULT 'DEFAULT_JOB_CLASS',
   enabled              IN BOOLEAN                  DEFAULT FALSE,
   auto_drop            IN BOOLEAN                  DEFAULT TRUE,
   comments             IN VARCHAR2                 DEFAULT NULL,
   credential_name      IN VARCHAR2                 DEFAULT NULL,
   destination_name     IN VARCHAR2                 DEFAULT NULL);

For a simple job that executes my already-created procedure “REBUILDINDEXES” every day at 20:00, the following suffices:

begin
dbms_scheduler.create_job
 (job_name => 'REBUILD_INDEXES',
  job_type => 'STORED_PROCEDURE',
  job_action=> 'REBUILDINDEXES',
  start_date=> trunc(sysdate),
  repeat_interval=> 'FREQ=DAILY; BYHOUR=20;',
  enabled=>true,
  comments=>'Rebuild all indexes dependent on the Index-to-Table-Size-Ratio');
end;

The “job_type” can be one of these:

PLSQL_BLOCK – an anonymous block of PL/SQL-Code
STORED_PROCEDURE – a defined procedure
EXECUTABLE – a shell-script or alike
CHAIN – a job chain
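For illustration, a job of type EXECUTABLE might look like the following sketch; the job name and script path are made up, and the script must be executable by the OS user running the job slaves:

```sql
BEGIN
  dbms_scheduler.create_job(
    job_name        => 'NIGHTLY_CLEANUP',              -- hypothetical name
    job_type        => 'EXECUTABLE',
    job_action      => '/u01/app/scripts/cleanup.sh',  -- hypothetical script
    start_date      => trunc(sysdate),
    repeat_interval => 'FREQ=DAILY; BYHOUR=1;',
    enabled         => TRUE,
    comments        => 'Example of an EXECUTABLE job');
END;
/
```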

To alter a created job, here to execute the above job every 30 minutes, we use DBMS_SCHEDULER.SET_ATTRIBUTE:

DBMS_SCHEDULER.SET_ATTRIBUTE (
   name           IN VARCHAR2,
   attribute      IN VARCHAR2,
   value          IN {BOOLEAN|DATE|TIMESTAMP|
                        TIMESTAMP WITH TIME ZONE|TIMESTAMP WITH LOCAL TIME ZONE|
                        INTERVAL DAY TO SECOND});
begin
dbms_scheduler.set_attribute
('REBUILD_INDEXES',
'repeat_interval',
'FREQ=MINUTELY; INTERVAL=30');
end;

To see which jobs have been created, we use the views USER_SCHEDULER_JOBS or DBA_SCHEDULER_JOBS. There we can see which jobs are enabled, what they do, and when the next run is scheduled.

Information about completed job runs can be obtained from the views USER_SCHEDULER_JOB_LOG and USER_SCHEDULER_JOB_RUN_DETAILS.
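For instance, recent runs and their outcome can be listed like this:

```sql
SELECT job_name, status, actual_start_date, run_duration
  FROM user_scheduler_job_run_details
 ORDER BY actual_start_date DESC;
```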

The details of logging and the retention time of the log data are controlled by job classes. By default our job is assigned to the “DEFAULT_JOB_CLASS”. Information about all job classes can be found in DBA_SCHEDULER_JOB_CLASSES.

To create a job class of our own we use DBMS_SCHEDULER.CREATE_JOB_CLASS:

DBMS_SCHEDULER.CREATE_JOB_CLASS (
   job_class_name            IN VARCHAR2,
   resource_consumer_group   IN VARCHAR2 DEFAULT NULL,
   service                   IN VARCHAR2 DEFAULT NULL,
   logging_level             IN PLS_INTEGER
                                DEFAULT DBMS_SCHEDULER.LOGGING_RUNS,
   log_history               IN PLS_INTEGER DEFAULT NULL,
   comments                  IN VARCHAR2 DEFAULT NULL);
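A minimal example, assuming we want full logging kept for 90 days (the class name is made up):

```sql
BEGIN
  dbms_scheduler.create_job_class(
    job_class_name => 'MAINT_JOB_CLASS',   -- hypothetical name
    logging_level  => DBMS_SCHEDULER.LOGGING_FULL,
    log_history    => 90,
    comments       => 'Job class with full logging, 90 days retention');
END;
/
```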

To alter a job class we use DBMS_SCHEDULER.SET_ATTRIBUTE as shown above.

The default log_history is set to 30 days, as we can see in DBA_SCHEDULER_GLOBAL_ATTRIBUTE.

To alter global scheduler attributes we use DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE.

By default an Oracle maintenance job runs every night at 3 a.m. and purges obsolete log data according to the job-class definition. Notice that there are pre-defined job classes with a log_history of 1 million days; so eventually a regular (manual) check and purge with DBMS_SCHEDULER.PURGE_LOG is needed.
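Such a manual purge could look like this sketch:

```sql
-- remove job-log and window-log entries older than 30 days
BEGIN
  dbms_scheduler.purge_log(log_history => 30,
                           which_log   => 'JOB_AND_WINDOW_LOG');
END;
/
```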

Oracle: 11g automatically enables auditing

In one of my dbca-created databases the SYSTEM-Tablespace filled up until the DB halted with this error:

ORA-00604: error occurred at recursive SQL level 1
ORA-01653: unable to extend table SYS.AUD$ by 8192 in tablespace SYSTEM
ORA-02002: error while writing to audit trail

The named table SYS.AUD$ is the target for all auditing data. But I hadn't intentionally enabled any auditing.

It turned out that since Oracle Database 11g the Database Configuration Assistant (dbca) and Database Upgrade Assistant (dbua) automatically set the parameter audit_trail='DB', enabling base auditing by default, which writes to the SYSTEM tablespace. Since 11g R2 there is not a single hint of this fact in dbca's UI. And now it bit me.

SQL> select name, value from v$parameter where name like 'audit_trail';

NAME                      VALUE
------------------------- ----------------------------------------
audit_trail               DB

SQL>

This setting exists in all of my dbca-created databases. But until now I was neither aware of it nor was there ever a problem. So how did the 2.5 GB SYSTEM tablespace fill up to the brim within 12 months on a system with moderate traffic?!
Examining the audit data in dba_audit_trail showed permanent LOGONs/LOGOFFs with connection durations ranging from about 30 seconds to a few (less than five) minutes.

The reason seems to be a “misconfigured” connection pool on the application server that disconnects too soon instead of holding sessions for 30 minutes or so to reuse them. So I ended up with more than 10 million rows of audit data, more than 2 GB in size, and a 100% full SYSTEM tablespace.
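The volume is easy to verify; the segment size of AUD$ and the row distribution per day can be checked with a sketch like this:

```sql
-- current size of the audit table's segment
SELECT bytes / 1024 / 1024 AS mb
  FROM dba_segments
 WHERE owner = 'SYS' AND segment_name = 'AUD$';

-- audit records per day
SELECT trunc(ntimestamp#) AS day, count(*) AS records
  FROM sys.aud$
 GROUP BY trunc(ntimestamp#)
 ORDER BY 1;
```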

For a quick fix I simply truncated the table SYS.AUD$.

truncate table AUD$;

On the next occasion I might stop the auditing – or set up a maintenance job for AUD$.

I decided not to use DBMS_AUDIT_MGMT for cleaning up the audit trail. As my auditing is for informational purposes only and not for compliance, I do not archive the log data. Using DBMS_AUDIT_MGMT requires setting an archiving timestamp before deleting the records from SYS.AUD$. This would impose unnecessary load on my database, as it would mean first updating all audit records with the archiving timestamp and then deleting those records, each step generating redo. Also, I wanted a rolling window of 180 days of audit trail in my database. So the easiest way was to simply set up a scheduler job:

begin
dbms_scheduler.create_job
 (job_name => 'PURGE_AUDIT_TRAIL',
  job_type => 'PLSQL_BLOCK',
  job_action=> 'delete from sys.aud$ where ntimestamp# < sysdate-180;commit;',
  start_date=> trunc(sysdate),
  repeat_interval=> 'FREQ=DAILY; BYHOUR=20;',
  enabled=>true,
  comments=>'Delete auditlog-records older than 180 days. (MT)');
end;

Some more info on auditing:
http://juliandontcheff.wordpress.com/2011/05/12/auditing-vs-performance-in-the-oracle-database/

Oracle: Recreate invalid synonyms

During export/import operations it often happens that related synonyms in other schemas become invalid because of the temporary absence of the underlying objects.

Unlike other objects such as procedures or functions, invalid synonyms are not automatically “recompiled” on next use, so we have to recreate them one by one. To ease this task and recreate all of them in one batch, we can use this piece of code, which dynamically recreates all invalid synonyms in the database:

begin
  -- recreate exactly those synonyms that are currently invalid;
  -- synonyms pointing over a db-link are skipped, as recreating them
  -- without the USING clause would break them
  for a in (
    select 'create or replace synonym ' || s.owner || '.' || s.synonym_name ||
           ' for ' || s.table_owner || '.' || s.table_name as cmd
      from dba_synonyms s
     where s.db_link is null
       and (s.owner, s.synonym_name) in
           (select o.owner, o.object_name
              from dba_objects o
             where o.object_type = 'SYNONYM'
               and o.status = 'INVALID'))
  loop
    execute immediate a.cmd;
  end loop;
end;

If you don’t have access to the named “dba_*” views, you can simply replace “dba_” with “all_”.
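To review the statements before running them, the same loop can print instead of execute, using DBMS_OUTPUT:

```sql
-- dry run: print the CREATE OR REPLACE statements instead of executing them
SET SERVEROUTPUT ON
begin
  for a in (
    select 'create or replace synonym ' || s.owner || '.' || s.synonym_name ||
           ' for ' || s.table_owner || '.' || s.table_name as cmd
      from dba_synonyms s
     where (s.owner, s.synonym_name) in
           (select o.owner, o.object_name
              from dba_objects o
             where o.object_type = 'SYNONYM' and o.status = 'INVALID'))
  loop
    dbms_output.put_line(a.cmd || ';');
  end loop;
end;
/
```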

Create database-link in another user’s schema

Ever wondered why you can't create database links in the schema of another user, not even as DBA?

When creating objects like tables in another schema, we prefix the object name with the name of the target schema (e.g. “create table appuser.first_table …”). But database-link names may themselves contain dots, and hence cannot be prefixed in the usual way.

Here are three solutions to this problem:

Marko Sutic uses the “password hijacking approach”,

Neil Johnson creates a db-link-create-procedure in the target-schema

and on “Oracle DBA Tips” we find a sophisticated solution using DBMS_SYS_SQL without any need to create foreign objects or hijack passwords.

I haven't tested the latter yet, but it looks very interesting.

My favorite is the db-link-create procedure. Here is a generic procedure for that purpose:

CREATE PROCEDURE targetschema.create_db_link (
  linkname    in varchar2,
  remote_user in varchar2,
  remote_pwd  in varchar2,
  conn_string in varchar2) is
  v_sql varchar2(1000);
BEGIN
  v_sql := 'CREATE DATABASE LINK ' || linkname ||
           ' CONNECT TO ' || remote_user ||
           ' IDENTIFIED BY ' || remote_pwd ||
           ' USING ''' || conn_string || '''';
  EXECUTE IMMEDIATE v_sql;
END create_db_link;

grant create database link to targetschema;

exec targetschema.create_db_link('linkname','user','pwd','connectstring');
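After creating the link, its existence can be confirmed from the data dictionary; note that a private link can only be used (e.g. tested with a remote query) from within the target schema itself:

```sql
-- confirm the link exists in the target schema
SELECT owner, db_link, username, host
  FROM dba_db_links
 WHERE owner = 'TARGETSCHEMA';

-- test the link, connected as the target schema
-- (requires the remote database to be reachable)
SELECT * FROM dual@linkname;
```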

And maybe a cleanup afterwards:

revoke create database link from targetschema;

drop procedure targetschema.create_db_link;

Dropping database links in another schema works analogously:

CREATE PROCEDURE targetschema.drop_db_link (linkname in varchar2) is
  V_SQL varchar2(100);
BEGIN
    V_SQL := 'DROP DATABASE LINK ' || linkname;
    EXECUTE IMMEDIATE (V_SQL);
END drop_db_link;
exec targetschema.drop_db_link('linkname');

To verify, we can check the remaining links in dba_db_links:

select * from dba_db_links;