Oracle: Short introduction to the Oracle Scheduler

The Oracle Scheduler is a nice alternative to cron jobs and the like – and it is not as complicated as it may seem at first glance.

With the package DBMS_SCHEDULER it is possible to define very granular scheduler objects. There are “programs” that hold the definition of what is to be executed, and “schedules” that hold the definition of certain schedule types. These can be combined into “jobs” consisting of “programs” and “schedules”. Even job chains are possible, which react to certain events and start the next job in the chain depending on the event.

But if we just want a certain procedure to be executed at a regular interval, it is quite easy. We just need DBMS_SCHEDULER.CREATE_JOB:

DBMS_SCHEDULER.CREATE_JOB (
   job_name             IN VARCHAR2,
   job_type             IN VARCHAR2,
   job_action           IN VARCHAR2,
   number_of_arguments  IN PLS_INTEGER              DEFAULT 0,
   start_date           IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
   repeat_interval      IN VARCHAR2                 DEFAULT NULL,
   end_date             IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
   job_class            IN VARCHAR2                 DEFAULT 'DEFAULT_JOB_CLASS',
   enabled              IN BOOLEAN                  DEFAULT FALSE,
   auto_drop            IN BOOLEAN                  DEFAULT TRUE,
   comments             IN VARCHAR2                 DEFAULT NULL,
   credential_name      IN VARCHAR2                 DEFAULT NULL,
   destination_name     IN VARCHAR2                 DEFAULT NULL);

For a simple job that executes my already created procedure “REBUILDINDEXES” every day at 20:00 the following would suffice:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'REBUILD_INDEXES',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'REBUILDINDEXES',
    start_date      => trunc(sysdate),
    repeat_interval => 'FREQ=DAILY; BYHOUR=20;',
    enabled         => TRUE,  -- jobs are created disabled by default
    comments        => 'Rebuild all indexes dependent on the Index-to-Table-Size-Ratio');
END;
/
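If the job was created without `enabled => TRUE`, it can be enabled afterwards – and, for testing, run once immediately:

```sql
BEGIN
  DBMS_SCHEDULER.ENABLE('REBUILD_INDEXES');
  -- run once right now, independent of the schedule (for testing)
  DBMS_SCHEDULER.RUN_JOB('REBUILD_INDEXES');
END;
/
```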

The “job_type” can be one of these:

PLSQL_BLOCK – an anonymous block of PL/SQL code
STORED_PROCEDURE – a stored procedure
EXECUTABLE – a shell script or the like
CHAIN – a job chain

To alter a created job – here, to make the above job execute every 30 minutes – we use DBMS_SCHEDULER.SET_ATTRIBUTE:

DBMS_SCHEDULER.SET_ATTRIBUTE (
   name           IN VARCHAR2,
   attribute      IN VARCHAR2,
   value          IN {BOOLEAN|DATE|TIMESTAMP|
                        INTERVAL DAY TO SECOND});
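For the 30-minute interval mentioned above, the call could look like this (the interval string uses the usual scheduler calendaring syntax):

```sql
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE (
    name      => 'REBUILD_INDEXES',
    attribute => 'repeat_interval',
    value     => 'FREQ=MINUTELY; INTERVAL=30;');
END;
/
```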

To see which jobs exist, we use the views USER_SCHEDULER_JOBS or DBA_SCHEDULER_JOBS. There we can see which jobs are enabled, what they are supposed to do and when the next run is scheduled.

Information about run jobs can be obtained from the views USER_SCHEDULER_JOB_LOG and USER_SCHEDULER_JOB_RUN_DETAILS.
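A quick look at the job definitions and the results of past runs could be taken like this:

```sql
-- defined jobs with their next scheduled run
SELECT job_name, enabled, state, next_run_date
  FROM user_scheduler_jobs;

-- outcome of past runs
SELECT job_name, status, actual_start_date, run_duration
  FROM user_scheduler_job_run_details
 ORDER BY actual_start_date DESC;
```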

The details of logging and the retention time of the log data are controlled by job classes. By default our job is assigned to the “DEFAULT_JOB_CLASS”. Information about all job classes can be found in DBA_SCHEDULER_JOB_CLASSES.

New job classes are created with DBMS_SCHEDULER.CREATE_JOB_CLASS:

DBMS_SCHEDULER.CREATE_JOB_CLASS (
   job_class_name            IN VARCHAR2,
   resource_consumer_group   IN VARCHAR2 DEFAULT NULL,
   service                   IN VARCHAR2 DEFAULT NULL,
   logging_level             IN PLS_INTEGER
                                DEFAULT DBMS_SCHEDULER.LOGGING_RUNS,
   log_history               IN PLS_INTEGER DEFAULT NULL,
   comments                  IN VARCHAR2 DEFAULT NULL);
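A sketch of such a call – the class name “MAINT_JOBS” is just an example:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB_CLASS (
    job_class_name => 'MAINT_JOBS',  -- hypothetical name
    logging_level  => DBMS_SCHEDULER.LOGGING_FULL,
    log_history    => 60,            -- keep 60 days of log data
    comments       => 'Maintenance jobs with full logging');
END;
/
```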

To alter a JOB_CLASS we use the DBMS_SCHEDULER.SET_ATTRIBUTE like above.

The default log_history is set to 30 days, as we can see in DBA_SCHEDULER_GLOBAL_ATTRIBUTE.

To alter global scheduler attributes we use DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE.
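For example, the global default for log_history could be changed like this:

```sql
BEGIN
  -- keep scheduler log data for 60 days by default
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('log_history', '60');
END;
/
```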

By default an Oracle maintenance job runs every night at 3 a.m. and purges obsolete log data according to the JOB_CLASS definition. Notice that there are pre-defined job classes with a log_history of 1 million days; so a regular (manual) check and purge with DBMS_SCHEDULER.PURGE_LOG may be needed.
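Such a manual purge could look like this:

```sql
BEGIN
  -- purge job and window log entries older than 7 days
  DBMS_SCHEDULER.PURGE_LOG (
    log_history => 7,
    which_log   => 'JOB_AND_WINDOW_LOG');
END;
/
```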


Oracle: “ORA-28221: REPLACE not specified” on password-change

If we apply a PASSWORD_VERIFY_FUNCTION to a user’s profile, we need a little more syntax than usual to change the user’s password. A simple “ALTER USER username IDENTIFIED BY password;” would not work for most users:

SQL> conn test/test123@testdb

SQL> alter user test identified by QwertzQwertz2014;
alter user test identified by QwertzQwertz2014
ERROR at line 1:
ORA-28221: REPLACE not specified

We have to add a “REPLACE old_password” to our command:

SQL> alter user test identified by QwertzQwertz2014 replace test123;

User altered.


The “REPLACE” clause can only be omitted if the executing user has the “ALTER USER” system privilege or if no PASSWORD_VERIFY_FUNCTION is in use. In the presence of a PASSWORD_VERIFY_FUNCTION, REPLACE can also be omitted if the user changes his/her password for the first time.

This is important to keep in mind for cases when we want the password to be set via an application. Here the application’s logic must be able to handle the above.


Edit 2014-08-27:
It took me some time to grasp the logic behind this syntax – but now I think I’ve got it:

As users’ passwords are always stored encrypted/hashed in the database, a PASSWORD_VERIFY_FUNCTION would otherwise have no chance to compare the new password with the old one, e.g. to check for a minimum number of changed characters.

Even if the new password differed in just a single character, its hash value would be completely different from the hash of the old password – so the stored hash is of no use for the aforementioned check; at best we could verify “NEW_PWD != OLD_PWD”. Since there is no profile limit for a minimum number of changed password characters, the only way to check this is in a PASSWORD_VERIFY_FUNCTION – and hence it makes sense to request the old password (at all) only when a PASSWORD_VERIFY_FUNCTION is in use.
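A minimal sketch of such a function – the name “my_verify_fn” and the three-character minimum are illustrative assumptions only (Oracle ships ready-made verify functions in utlpwdmg.sql):

```sql
-- hypothetical verify function: require at least 3 changed characters
CREATE OR REPLACE FUNCTION my_verify_fn (
  username     IN VARCHAR2,
  password     IN VARCHAR2,
  old_password IN VARCHAR2) RETURN BOOLEAN IS
  differ PLS_INTEGER := 0;
BEGIN
  -- only possible because ALTER USER ... REPLACE supplies the old
  -- password in clear text
  IF old_password IS NOT NULL THEN
    FOR i IN 1 .. LEAST(LENGTH(password), LENGTH(old_password)) LOOP
      IF SUBSTR(password, i, 1) != SUBSTR(old_password, i, 1) THEN
        differ := differ + 1;
      END IF;
    END LOOP;
    differ := differ + ABS(LENGTH(password) - LENGTH(old_password));
    IF differ < 3 THEN
      RAISE_APPLICATION_ERROR(-20001,
        'Password must differ from the old one in at least 3 characters');
    END IF;
  END IF;
  RETURN TRUE;
END;
/
-- attach it to a profile:
-- ALTER PROFILE default LIMIT PASSWORD_VERIFY_FUNCTION my_verify_fn;
```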

Just verified this on My Oracle Support (MOS) and found it confirmed under
“Password Verify Function Not Enforcing Difference Between Old and New Passwords” (Doc ID 816932.1)

Oracle: 11g automatically enables auditing

In one of my dbca-created databases the SYSTEM-Tablespace filled up until the DB halted with this error:

ORA-00604: error occurred at recursive SQL level 1
ORA-01653: unable to extend table SYS.AUD$ by 8192 in tablespace SYSTEM
ORA-02002: error while writing to audit trail

The named table SYS.AUD$ is the target for all auditing data. But actually I hadn’t intentionally enabled any auditing.

It turned out that since Oracle Database 11g the Database Configuration Assistant (dbca) and Database Upgrade Assistant (dbua) automatically set the parameter audit_trail=’DB’, enabling a base auditing by default that goes to the SYSTEM tablespace. Since 11g R2 there is not a single hint of that fact in dbca’s UI. And now it bit me.

SQL> select name, value from v$parameter where name like 'audit_trail';

NAME                      VALUE
------------------------- ----------------------------------------
audit_trail               DB


This setting exists in all of my dbca databases. But until now I was neither aware of it nor was there ever a problem. So how did the 2.5 GB SYSTEM tablespace fill up to the brim within 12 months on a system with moderate traffic?!
Examining the audit data in dba_audit_trail showed permanent LOGONs/LOGOFFs with connection durations of just 30 seconds to a few (less than five) minutes.
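Such an analysis could start with queries like these:

```sql
-- which audited actions dominate?
SELECT action_name, COUNT(*)
  FROM dba_audit_trail
 GROUP BY action_name
 ORDER BY 2 DESC;

-- current size of the audit table
SELECT segment_name, bytes / 1024 / 1024 AS mb
  FROM dba_segments
 WHERE segment_name = 'AUD$';
```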

The reason seems to be a “misconfigured” connection pool on the application server that disconnects too soon instead of holding the sessions for 30 minutes or so to reuse them. And so I ended up with more than 10 million rows of audit data, sized at more than 2 GB, and a 100% used SYSTEM tablespace.

For a quick fix I simply truncated the table SYS.AUD$.

truncate table sys.aud$;

At the next opportunity I may stop the auditing – or set up a maintenance job for AUD$.

I decided not to use DBMS_AUDIT_MGMT for cleaning up the audit trail. As my auditing is for information purposes only and not for compliance, I do not archive the log data. Using DBMS_AUDIT_MGMT requires setting an archive timestamp before deleting the records from SYS.AUD$. This would impose unnecessary load on my database, as it would mean updating all audit records with the archiving date and then deleting all those records – each time generating redo information. Also, I wanted a rolling window of 180 days of audit trail in my database. So the easiest way was to simply set up a scheduler job:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'PURGE_AUDIT_TRAIL',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN delete from sys.aud$ where ntimestamp# < sysdate-180; commit; END;',
    start_date      => trunc(sysdate),
    repeat_interval => 'FREQ=DAILY; BYHOUR=20;',
    enabled         => TRUE,
    comments        => 'Delete auditlog-records older than 180 days. (MT)');
END;
/


Oracle: Recreate invalid synonyms

During export/import operations it often happens that related synonyms in other schemas become invalid because of the temporary absence of the underlying objects.

Unlike other objects such as procedures or functions, invalid synonyms are not automatically “recompiled” on next use. So we have to recreate them one by one. To ease this task and recreate all of them in one batch, we can use this piece of code, which dynamically recreates all invalid synonyms in the database:

begin
  for a in (select 'create or replace synonym ' || s.owner || '.' || s.synonym_name ||
                   ' for ' || s.table_owner || '.' || s.table_name as cmd
              from dba_synonyms s
              join dba_objects o
                on o.owner = s.owner and o.object_name = s.synonym_name
             where o.object_type = 'SYNONYM' and o.status = 'INVALID')
  loop
    execute immediate a.cmd;
  end loop;
end;
/

If you don’t have access to the named “dba_*” views, you can simply replace “dba_” with “all_”.

Create database-link in another user’s schema

Ever wondered why you can’t create db-links in the schema of another user – not even as DBA?

When creating objects like tables in another schema, we prefix the table name with the name of the target schema (e.g. “create table appuser.first_table …”). But db-link names are allowed to contain dots as part of the name – and hence we can’t use the usual schema prefix.

Here are three solutions to this problem:

Marko Sutic uses the “password hijacking approach”,

Neil Johnson creates a db-link-create-procedure in the target-schema

and on “Oracle DBA Tips” we find a sophisticated solution using DBMS_SYS_SQL without any need to create foreign objects or hijack passwords.

I haven’t tested the latter yet, but it looks very interesting.

My favorite is the db-link-create-procedure. Here is a generic procedure for that matter:

CREATE PROCEDURE targetschema.create_db_link (linkname in varchar2, remote_user in varchar2, remote_pwd in varchar2, conn_string in varchar2) is
  v_sql varchar2(1000);
BEGIN
  v_sql := 'CREATE DATABASE LINK ' || linkname || ' CONNECT TO ' || remote_user ||
           ' IDENTIFIED BY ' || remote_pwd || ' USING ''' || conn_string || '''';
  execute immediate v_sql;
END create_db_link;
/

grant create database link to targetschema;

exec targetschema.create_db_link('linkname','user','pwd','connectstring');

And maybe a cleanup afterwards:

revoke create database link from targetschema;

drop procedure targetschema.create_db_link;

Dropping database links in another schema works analogously:

CREATE PROCEDURE targetschema.drop_db_link (linkname in varchar2) is
  v_sql varchar2(100);
BEGIN
  v_sql := 'DROP DATABASE LINK ' || linkname;
  execute immediate v_sql;
END drop_db_link;
/

To verify the result we can query:

select * from dba_db_links;

exec targetschema.drop_db_link('linkname');

Oracle: “ORA-12519, TNS:no appropriate service handler found” on JDBC-Connection

In the context of a Java/Liferay/JackRabbit setting, a developer wants to export all stored files from the database to the filesystem for migration purposes. After some time this repeatedly runs into errors of this kind:

12:19:31,488 DEBUG [MaintenanceUtil:64] Executing com.liferay.portal.convert.ConvertDocumentLibrary
12:19:31,497 INFO  [ConvertProcess:41] Starting conversion for com.liferay.portal.convert.ConvertDocumentLibrary
12:19:31,537 DEBUG [MaintenanceUtil:64] Migrating 276 document library files
12:21:07,739 ERROR [DatabaseFileSystem:225] failed to initialize file system
java.sql.SQLException: Listener refused the connection with the following error:
ORA-12519, TNS:no appropriate service handler found
(stacktrace follows...)

After this error the application continues until the next occurrence of the same error.

Monitoring the listener.log while executing the migration-application shows a massive increase of connection-requests.

[oracle@serv1 ~]$ tail -f ./diag/tnslsnr/serv1/listener/trace/listener.log | egrep -i "hostdb|ORA-|TNS-"

28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * hostdb * 12519
TNS-12519: TNS:no appropriate service handler found
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * hostdb * 12519
TNS-12519: TNS:no appropriate service handler found
28-JUN-2013 16:52:32 * service_update * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * hostdb * 0

Apparently the app opens a new connection for every file to be exported – roughly 3-8 times per second. Monitoring the “v$session” view shows no significant increase in the overall number of sessions – still around 110. So the lifetime of each such connection must be very short. Actually the app uses a connection pool on the application server, but this seems not to be properly implemented.
Digging the web for that error message brought up the advice to raise the value of the parameter “PROCESSES” on the DB.
But as we can see, this limit was never hit until now:

SQL> select * from v$resource_limit where resource_name in ( 'processes','sessions');

RESOURCE_NAME CURRENT_UTILIZATION MAX_UTILIZATION INITIAL_ALLOCATION   LIMIT_VALUE
------------- ------------------- --------------- -------------------- -----------
processes                     109             138        150                  150
sessions                      122             161        248                  248


Apart from that, I wouldn’t want to raise that limit – with the corresponding restart of my otherwise well-running DB – just because of a sloppily implemented use of connection pooling. And I guess it would just defer the next occurrence of the same error.

I finally found a posting that shed some light on the problem. As also stated on MOS (ID 240710.1), the listener counts all incoming connections and raises “TNS-12519 TNS:no appropriate service handler found” if the sum of connections hits the PROCESSES limit of that particular database. Unfortunately the listener’s count is not always accurate, as it is not aware of disconnects until the next “service_update” from DB to listener happens. So if, as in my case, there are very many short connections, the listener is likely to mistakenly block access to the DB for new connections.
In another post I found the advice to use a shared server configuration to cope with the problem. This works like a connection pool on the DB side: we would have just a handful of server processes, fed with user requests by some dispatchers, instead of one server process for every dedicated connection.

If you have the XML DB option installed, shared server should be enabled by default. To find the name of our XDB dispatcher service we can query:

SQL> select name, value from v$parameter where name='dispatchers';

NAME        VALUE
----------- ----------------------------------------
dispatchers (PROTOCOL=TCP) (SERVICE=hostdbXDB)


If shared_server is not yet configured, don’t worry. To enable it we just have to set the parameter “shared_servers” to a value greater than zero. This can be done dynamically without restarting the database:

alter system set shared_servers=1 scope=both;

This starts one shared server process, which should be enough for testing. For information on suggested numbers of shared server processes in relation to the expected number of concurrent sessions, please dig the web.

If shared_server is configured, a dispatcher is implicitly started. To configure our dispatcher to respond on a named service we do (the service name “hostdbshared” is just an example):

alter system set dispatchers='(PROTOCOL=TCP)(SERVICE=hostdbshared)' scope=both;
Now we have to configure the JDBC-Connect-String to use this service-name. To use Oracle Service-Names in JDBC we must use this syntax (from Oracle-Docu):

Thin-style service names are supported only by the JDBC Thin driver. The syntax is:

jdbc:oracle:thin:@//host_name:port_number/service_name

For example:

jdbc:oracle:thin:@//serv1:1521/hostdb
If this doesn’t work, you should try the fully qualified service name in the format “servicename.domainname”.

If we want every (TCP) connection that doesn’t explicitly request a dedicated server to use the shared server, we simply omit the service name in our dispatcher’s config and set the dispatcher to respond to all TCP connections:

alter system set dispatchers='(PROTOCOL=TCP)' scope=both;
If we now restart our application to make it use the new setting, we should see it using the dispatcher-process when querying “lsnrctl service”:

[oracle@serv1 ~]$ lsnrctl service

LSNRCTL for Linux: Version - Production on 02-JUL-2013 10:07:23

Copyright (c) 1991, 2010, Oracle.  All rights reserved.

Services Summary...
Service "" has 1 instance(s).
  Instance "hostdb", status READY, has 1 handler(s) for this service...
      "DEDICATED" established:14179 refused:0 state:ready

Service "" has 1 instance(s).
  Instance "hostdb", status READY, has 1 handler(s) for this service...
      "D000" established:5649 refused:0 current:74 max:1022 state:ready

In the line “D000” established:5649 refused:0 current:74 max:1022 state:ready we see the dispatcher “D000”, which has served 5649 connections so far and currently holds 74 connected sessions.

More info about our shared-server-(usage) can be found by querying the following views:

select * from V$SHARED_SERVER;
select * from V$CIRCUIT;
select * from V$DISPATCHER;
select * from V$DISPATCHER_CONFIG;

E.g. the view “V$CIRCUIT” shows all current connections to the dispatcher.

Querying “select count(*), server from v$session group by server;” would show us the number of connections per access-path:

DEDICATED: dedicated user connections to the DB
SERVER: shared server connections currently doing work
NONE: idle shared server connections

If a session is inactive, it is just connected to its dispatcher process and no shared server process is active on behalf of that session – hence the “NONE”.

While solving this problem I came across a listener parameter new to me, with which it is possible to limit the connection rate per second (e.g. as a defense against DoS attacks): “CONNECTION_RATE_listener_name”.

Oracle: Free space in Fast-Recovery-Area (FRA)

Sooner or later I guess every DBA will be faced with this error-message

ORA-00257: archiver error. Connect internal only, until freed.

and angry phone-calls from users complaining about a hanging database.

The reason for this error is a lack of space for archiving redologs.
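To see how full the FRA actually is, and what occupies the space, we can query:

```sql
-- overall FRA size, used and reclaimable space
SELECT name, space_limit, space_used, space_reclaimable
  FROM v$recovery_file_dest;

-- breakdown by file type (archive logs, backup pieces, ...)
SELECT file_type, percent_space_used, percent_space_reclaimable
  FROM v$recovery_area_usage;
```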

When using a FRA and a properly set up backup that deletes obsolete files automatically, you probably have only two options:

  • Increase the FRA size by changing the parameter “db_recovery_file_dest_size” and/or
  • Relocate the FRA directory to another disk by changing the parameter “db_recovery_file_dest”

If space is available, both options are done quickly and easily and get you out of your mess instantly. So don’t panic – and beware of deleting e.g. some archive logs on OS level. As laid out in my post “Oracle: How to tell DB about manually freed space in FRA“, the latter won’t get you any further and may even worsen your situation.

To increase the FRA size do this:

alter system set db_recovery_file_dest_size=100g scope=both;

To change the FRA location do this:

alter system set db_recovery_file_dest='/usr4/oracle/fast_recovery_area' scope=both;

Changing the FRA location is not complicated, as shown here: “Oracle: Relocate Fast-Recovery-Area (FRA)”