Oracle: “ORA-28221: REPLACE not specified” on password-change

If a PASSWORD_VERIFY_FUNCTION is applied to a user’s profile, we need a little more syntax than usual to change that user’s password. For most users a simple “ALTER USER username IDENTIFIED BY password;” will not work:

SQL> conn test/test123@testdb
Connected.

SQL> alter user test identified by QwertzQwertz2014;
alter user test identified by QwertzQwertz2014
*
ERROR at line 1:
ORA-28221: REPLACE not specified

We have to add a “REPLACE old_password” to our command:

SQL> alter user test identified by QwertzQwertz2014 replace test123;

User altered.

SQL>

This “REPLACE” clause can only be omitted if the executing user has the “ALTER USER” system privilege or if no PASSWORD_VERIFY_FUNCTION is in use. In the presence of a PASSWORD_VERIFY_FUNCTION, users without that privilege may omit REPLACE only when they set their password for the first time.

This is important to keep in mind for cases where the password is supposed to be set via an application. The application’s logic must be able to handle this extra clause.
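
When changing passwords interactively, the SQL*Plus “password” command is a convenient alternative, because it prompts for the old and the new password instead of putting them into the statement text. The transcript below is only a sketch; the exact prompt wording may vary by version:

SQL> password
Changing password for TEST
Old password:
New password:
Retype new password:
Password changed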

Reference: http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_4003.htm#SQLRF53632

Edit 2014-08-27:
It took some time for me to grasp the logic behind this syntax – but now I think I got it:

Because user passwords are always stored encrypted / hashed in the database, a PASSWORD_VERIFY_FUNCTION would otherwise have no chance to compare the new password with the old one, e.g. to check for a minimum number of changed characters.

Even if the new password differed in only a single character, its hash would be completely different from the hash of the old password, so the hashes are of no use for such a check; all we could derive from them is “NEW_PWD != OLD_PWD”. Since there is no profile limit for a minimum number of changed password characters, the only place to check this is a PASSWORD_VERIFY_FUNCTION – and hence it makes sense that the old password is requested (at all) only when a PASSWORD_VERIFY_FUNCTION is in use.
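
To illustrate the point, here is a minimal sketch of such a check inside a verify function (the function name and the limit of 3 characters are made up; Oracle’s shipped utlpwdmg.sql contains a full-blown example). It only works because Oracle passes the old password in cleartext as the third parameter:

create or replace function my_verify_function (
  username     in varchar2,
  password     in varchar2,
  old_password in varchar2)
return boolean is
  differ integer;
begin
  -- old_password can be NULL, e.g. when a privileged user sets the password without REPLACE
  if old_password is not null then
    -- count positions (plus the length difference) in which the passwords differ
    differ := abs(length(password) - length(old_password));
    for i in 1 .. least(length(password), length(old_password)) loop
      if substr(password, i, 1) <> substr(old_password, i, 1) then
        differ := differ + 1;
      end if;
    end loop;
    if differ < 3 then
      raise_application_error(-20001, 'Password must differ from the old password by at least 3 characters');
    end if;
  end if;
  return true;
end;
/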

Just verified this on My Oracle Support (MOS) and found it confirmed under
“Password Verify Function Not Enforcing Difference Between Old and New Passwords” (Doc ID 816932.1)


Oracle: 11g automatically enables auditing

In one of my dbca-created databases the SYSTEM-Tablespace filled up until the DB halted with this error:

ORA-00604: error occurred at recursive SQL level 1
ORA-01653: unable to extend table SYS.AUD$ by 8192 in tablespace SYSTEM
ORA-02002: error while writing to audit trail

The named table SYS.AUD$ is the target for all auditing data. But I had not intentionally enabled any auditing.

It turned out that since Oracle Database 11g the Database Configuration Assistant (dbca) and the Database Upgrade Assistant (dbua) automatically set the parameter audit_trail=’DB’ to enable basic auditing by default, which writes to the SYSTEM tablespace. Since 11g R2 the dbca UI gives no hint at all of that fact. And now it bit me.

SQL> select name, value from v$parameter where name like 'audit_trail';

NAME                      VALUE
------------------------- ----------------------------------------
audit_trail               DB

SQL>

This setting exists in all of my dbca-created databases, but until now I was neither aware of it nor was it ever a problem. So how come the 2.5 GB SYSTEM tablespace filled up to the brim within 12 months on a system with moderate traffic?!
Examining the audit data in dba_audit_trail showed constant LOGONs/LOGOFFs with connection durations of only some 30 seconds to a few (less than five) minutes.
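
A quick way to see this is to aggregate the audit trail by action and average session duration (a sketch against 11g’s dba_audit_trail; logoff_time is only populated for LOGOFF records):

select a.action_name,
       count(*) as records,
       round(avg((a.logoff_time - a.timestamp) * 24 * 60), 1) as avg_session_minutes
  from dba_audit_trail a
 group by a.action_name
 order by records desc;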

The reason seems to be a “misconfigured” connection pool on the application server that disconnects too soon instead of holding the sessions for 30 minutes or so to reuse them. And so I ended up with more than 10 million rows of audit data, more than 2 GB in size, and a 100% full SYSTEM tablespace.

For a quick fix I simply truncated the table SYS.AUD$.

truncate table AUD$;

On the next occasion I will probably stop the auditing, or set up a maintenance job for AUD$.

I decided against using DBMS_AUDIT_MGMT for cleaning up the audit trail. As my auditing is for informational purposes only and not for compliance, I do not archive the log data. Using DBMS_AUDIT_MGMT requires setting an archive timestamp before deleting the records from SYS.AUD$. This would impose unnecessary load on my database, as it would mean updating all audit records with the archive timestamp and then deleting all those records, each time generating redo. Also I wanted a rolling window of 180 days of audit trail in my database. So the easiest way was to simply set up a scheduler job:

begin
  dbms_scheduler.create_job(
    job_name        => 'PURGE_AUDIT_TRAIL',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'delete from sys.aud$ where ntimestamp# < sysdate-180; commit;',
    start_date      => trunc(sysdate),
    repeat_interval => 'FREQ=DAILY; BYHOUR=20;',
    enabled         => true,
    comments        => 'Delete auditlog-records older than 180 days. (MT)');
end;
/
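
For comparison, the DBMS_AUDIT_MGMT approach mentioned above would look roughly like this. This is only a sketch of the archive-timestamp-then-delete pattern described above; it assumes the cleanup infrastructure has already been initialized once with DBMS_AUDIT_MGMT.INIT_CLEANUP:

begin
  -- mark everything older than 180 days as "archived" ...
  dbms_audit_mgmt.set_last_archive_timestamp(
    audit_trail_type  => dbms_audit_mgmt.audit_trail_aud_std,
    last_archive_time => systimestamp - interval '180' day);
  -- ... and then delete all records up to that timestamp
  dbms_audit_mgmt.clean_audit_trail(
    audit_trail_type        => dbms_audit_mgmt.audit_trail_aud_std,
    use_last_arch_timestamp => true);
end;
/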

Some more info on auditing:
http://juliandontcheff.wordpress.com/2011/05/12/auditing-vs-performance-in-the-oracle-database/

Oracle: Recreate invalid synonyms

During export/import operations it often happens that related synonyms in other schemas become invalid because of the temporary absence of the underlying objects.

Unlike other objects such as procedures or functions, invalid synonyms are not automatically “recompiled” on next use. So we would have to recreate them one by one. To ease this task and recreate all of them in one batch we can use this piece of code, which dynamically recreates every invalid synonym in the database:

begin
  for a in (
    select 'create or replace synonym ' || s.owner || '.' || s.synonym_name ||
           ' for ' || s.table_owner || '.' || s.table_name as cmd
      from dba_synonyms s
      join dba_objects o
        on o.owner = s.owner
       and o.object_name = s.synonym_name
     where o.object_type = 'SYNONYM'
       and o.status = 'INVALID')
  loop
    -- note: PUBLIC synonyms and synonyms pointing over a db-link would need extra handling
    execute immediate a.cmd;
  end loop;
end;
/

If you don’t have access to the named “dba_*” views, you can simply replace “dba_” with “all_”.
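
Afterwards a quick check against dba_objects should come back empty:

select owner, object_name
  from dba_objects
 where object_type = 'SYNONYM'
   and status = 'INVALID';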

Create database-link in another user’s schema

Ever wondered why you can’t create db-links in the schema of another user, not even as DBA?

When creating objects such as tables in another schema, we prefix the table name with the name of the target schema (e.g. “create table appuser.first_table …”). But db-link names may themselves contain dots as part of the name, so we can’t prefix them the usual way.
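
A short sketch of what happens instead (the object names are just examples): the dot simply becomes part of the link name in our own schema, and no link is created in APPUSER’s schema.

-- executed as a DBA user: this does NOT create the link in schema APPUSER,
-- it creates a link called "APPUSER.REMOTE_DB" in the DBA's own schema
create database link appuser.remote_db
  connect to remote_user identified by remote_pwd
  using 'remote_tns';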

Here are three solutions to this problem:

Marko Sutic uses the “password hijacking approach”,

Neil Johnson creates a db-link-create-procedure in the target-schema

and on “Oracle DBA Tips” we find a sophisticated solution using DBMS_SYS_SQL without any need to create foreign objects or hijack passwords.

The latter I haven’t tested yet, but it looks very interesting.

My favorite is the db-link-create-procedure. Here is a generic procedure for that matter:

CREATE PROCEDURE targetschema.create_db_link (linkname in varchar2, remote_user in varchar2, remote_pwd in varchar2, conn_string in varchar2) is
  V_SQL varchar2(1000);
BEGIN
    V_SQL := 'CREATE DATABASE LINK ' || linkname || ' CONNECT TO ' || remote_user || ' IDENTIFIED BY ' || remote_pwd || ' USING ''' || conn_string || '''';
    EXECUTE IMMEDIATE (V_SQL);
END create_db_link;

grant create database link to targetschema;

exec targetschema.create_db_link('linkname','user','pwd','connectstring');

And maybe a cleanup afterwards:

revoke create database link from targetschema;

drop procedure targetschema.create_db_link;

Dropping database-links in another schema works analogously:

CREATE PROCEDURE targetschema.drop_db_link (linkname in varchar2) is
  V_SQL varchar2(100);
BEGIN
    V_SQL := 'DROP DATABASE LINK ' || linkname;
    EXECUTE IMMEDIATE (V_SQL);
END drop_db_link;

To see which db-links currently exist, we can query dba_db_links:

select * from dba_db_links;

exec targetschema.drop_db_link('linkname');

Oracle: “ORA-12519, TNS:no appropriate service handler found” on JDBC-Connection

In a Java/Liferay/JackRabbit setting a developer wanted to export all files stored in the database to the filesystem for migration purposes. After some varying amount of time this repeatedly ran into errors of this kind:

12:19:31,488 DEBUG [MaintenanceUtil:64] Executing com.liferay.portal.convert.ConvertDocumentLibrary
12:19:31,497 INFO  [ConvertProcess:41] Starting conversion for com.liferay.portal.convert.ConvertDocumentLibrary
12:19:31,537 DEBUG [MaintenanceUtil:64] Migrating 276 document library files
12:21:07,739 ERROR [DatabaseFileSystem:225] failed to initialize file system
java.sql.SQLException: Listener refused the connection with the following error:
ORA-12519, TNS:no appropriate service handler found
(stacktrace follows...)

After this error the application carries on until the next occurrence of the same error.

Monitoring the listener.log while the migration application is running shows a massive increase in connection requests.

[oracle@serv1 ~]$ tail -f ./diag/tnslsnr/serv1/listener/trace/listener.log | egrep -i "hostdb|ORA-|TNS-"


28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.107.108.140)(PORT=57601)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.107.108.140)(PORT=57602)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.107.108.140)(PORT=57603)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.107.108.140)(PORT=57604)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.107.108.140)(PORT=57605)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.107.108.140)(PORT=57606)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.107.108.140)(PORT=57607)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.107.108.140)(PORT=57608)) * establish * hostdb * 12519
TNS-12519: TNS:no appropriate service handler found
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.107.108.140)(PORT=57609)) * establish * hostdb * 12519
TNS-12519: TNS:no appropriate service handler found
28-JUN-2013 16:52:32 * service_update * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.107.108.140)(PORT=57610)) * establish * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.107.108.140)(PORT=57611)) * establish * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.107.108.140)(PORT=57612)) * establish * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.107.108.140)(PORT=57613)) * establish * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.107.108.140)(PORT=57614)) * establish * hostdb * 0

Apparently the app opens a new connection for every file to be exported, roughly 3-8 times per second. Monitoring the “v$session” view shows no significant increase in the overall number of sessions, still around 110. So the lifetime of each of these connections must be very short. The app actually uses a connection pool on the application server, but it does not seem to be properly implemented.
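
To keep an eye on this while the export is running, something like the following can be run repeatedly (APPUSER being the application’s DB user from the listener.log above):

select count(*)        as sessions,
       min(logon_time) as oldest_logon,
       max(logon_time) as newest_logon
  from v$session
 where username = 'APPUSER';
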
Digging the web for that error message brought up the advice to raise the value of the parameter “PROCESSES” on the DB.
But as we can see, this limit has never been hit so far:

SQL> select * from v$resource_limit where resource_name in ( 'processes','sessions');

RESOURCE_NAME CURRENT_UTILIZATION MAX_UTILIZATION INITIAL_ALLOCATION   LIMIT_VALUE
------------- ------------------- --------------- -------------------- -----------
processes                     109             138        150                  150
sessions                      122             161        248                  248

SQL>

Apart from that, I would not accept raising that limit, with the corresponding restart of my otherwise well-running DB, just because of a sloppily implemented use of connection pooling. And I guess it would merely defer the next appearance of the same error anyway.

At http://www.orafaq.com I finally found a posting that shed some light on the problem. As also stated on MOS (ID 240710.1), the listener counts all incoming connections and raises “TNS-12519 TNS:no appropriate service handler found” once the sum of connections hits the PROCESSES limit of that particular database. Unfortunately the listener’s count is not always accurate, because it only learns about disconnects with the next “service_update” from the DB to the listener. So if, as in my case, there are very many short-lived connections, the listener is likely to mistakenly block new connections to the DB.
In another post I found the advice to use a shared server configuration to cope with the problem. This works like a connection pool on the DB side: instead of one server process for every dedicated connection, a handful of server processes are fed user requests by a few dispatchers.

If you have the XML DB option installed, shared server should be enabled by default. To find the name of our XDB dispatcher service we can query:

SQL> select name, value from v$parameter where name='dispatchers';

NAME        VALUE
----------- ----------------------------------------
dispatchers (PROTOCOL=TCP) (SERVICE=hostdbXDB)

SQL>

If shared server is not yet configured, don’t worry. To enable it we just have to set the parameter SHARED_SERVERS to a value greater than zero. This can be done dynamically without restarting the database.

ALTER SYSTEM SET SHARED_SERVERS=1 SCOPE=BOTH;

This starts one shared server process, which should be enough for testing. For information on suggested numbers of shared server processes in relation to the expected number of concurrent sessions, please dig the web.

If shared server is configured, a dispatcher is implicitly started. To configure our dispatcher to respond to a named service we do:

ALTER SYSTEM SET DISPATCHERS='(PROTOCOL=TCP) (SERVICE=hostdbSS)' SCOPE=BOTH;

Now we have to configure the JDBC connect string to use this service name. To use Oracle service names in JDBC we must use this syntax (from the Oracle documentation):

Thin-style service names are supported only by the JDBC Thin driver. The syntax is:

@//host_name:port_number/service_name

For example:

jdbc:oracle:thin:scott/tiger@//myhost:1521/myservicename

If this doesn’t work you should try to use the fully qualified service-name in the format “servicename.domainname” (e.g. “hostdbXDB.private.domain.com”).

If we want every (TCP) connection that doesn’t explicitly request a dedicated server to use the shared server, we simply omit the service name in our dispatcher’s config and set the dispatcher to respond to all TCP connections:

ALTER SYSTEM SET DISPATCHERS='(PROTOCOL=TCP)' SCOPE=BOTH;

If we now restart our application so that it picks up the new setting, we should see it using the dispatcher process when querying “lsnrctl service”:

[oracle@serv1 ~]$ lsnrctl service

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 02-JUL-2013 10:07:23

Copyright (c) 1991, 2010, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=serv1.private.domain.com)(PORT=1522)))
Services Summary...
Service "hostdb.private.domain.com" has 1 instance(s).
  Instance "hostdb", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:14179 refused:0 state:ready
         LOCAL SERVER

Service "hostdbXDB.private.domain.com" has 1 instance(s).
  Instance "hostdb", status READY, has 1 handler(s) for this service...
    Handler(s):
      "D000" established:5649 refused:0 current:74 max:1022 state:ready
         DISPATCHER 
         (ADDRESS=(PROTOCOL=tcp)(HOST=serv1.private.domain.com)(PORT=32829))

In the line “"D000" established:5649 refused:0 current:74 max:1022 state:ready” we see the dispatcher “D000”, which has served 5649 connections so far and currently has 74 sessions connected.

More info about our shared server usage can be found by querying the following views:

select * from V$SHARED_SERVER;
select * from V$CIRCUIT;
select * from V$DISPATCHER;
select * from V$SHARED_SERVER_MONITOR;
select * from V$DISPATCHER_CONFIG;

E.g. the view “V$CIRCUIT” shows all current connections to the dispatcher.
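
If we want to see which session currently sits on which dispatcher, we can join V$CIRCUIT to V$SESSION and V$DISPATCHER (a sketch; joins as in 11g, verify the column names against your version):

select s.username, s.status, d.name as dispatcher
  from v$circuit    c
  join v$session    s on s.saddr = c.saddr
  join v$dispatcher d on d.paddr = c.dispatcher;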

Querying “select count(*), server from v$session group by server;” shows us the number of connections per access path:

DEDICATED: dedicated user connections to the DB
SERVER: shared server connections currently being served by a shared server process
NONE: idle shared server connections

If a session is inactive, it is just connected to its dispatcher process and no shared server process is active on behalf of that session, hence the “NONE”.

While solving this problem I came across a listener parameter that was new to me, which makes it possible to limit the number of new connections per second (e.g. as a defense against DoS attacks): CONNECTION_RATE_<listener_name>
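
A sketch of how this could look in the listener.ora (the values are illustrative; RATE_LIMIT is set on the endpoint and CONNECTION_RATE_<listener_name> gives the allowed new connections per second):

CONNECTION_RATE_LISTENER = 10

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = serv1.private.domain.com)(PORT = 1521)(RATE_LIMIT = yes))
    )
  )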

Oracle: Free space in Fast-Recovery-Area (FRA)

Sooner or later I guess every DBA will be faced with this error-message

ORA-00257: archiver error. Connect internal only, until freed.

and angry phone-calls from users complaining about a hanging database.

The reason for this error is a lack of space for archiving redologs.

When using a FRA and a properly set up backup that deletes obsolete files automatically, you probably only have two options:

  • Increase the FRA size by changing the parameter “db_recovery_file_dest_size” and/or
  • Relocate the FRA directory to another disk by changing the parameter “db_recovery_file_dest”

If space is available, both options are quick and easy and get you out of the mess instantly. So don’t panic, and beware of deleting e.g. some archive-logs on OS level: as laid out in my post “Oracle: How to tell DB about manually freed space in FRA“, that won’t get you any further and may even worsen your situation.
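
Before deciding, it is worth checking how full the FRA really is and what is occupying it:

select name,
       round(space_limit / 1024 / 1024)       as limit_mb,
       round(space_used / 1024 / 1024)        as used_mb,
       round(space_reclaimable / 1024 / 1024) as reclaimable_mb
  from v$recovery_file_dest;

select * from v$flash_recovery_area_usage;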

To increase the FRA size do this:

alter system set db_recovery_file_dest_size=100g scope=both;

To change the FRA location do this:

alter system set db_recovery_file_dest='/usr4/oracle/fast_recovery_area' scope=both;

Changing the FRA location is not complicated, as described here: “Oracle: Relocate Fast-Recovery-Area (FRA)”

Oracle: Relocate Fast-Recovery-Area (FRA)

As our database grows and grows, especially with respect to backup size and daily generated archive-logs, we may at some point have to relocate our FRA to another device with more free space. The same applies when we have run out of FRA space because of a sudden, massive increase in archived redologs. I just had the latter case when some developers ran an upgrade of their application.

Luckily it is no big fuss to change the FRA’s destination directory. It is just a matter of changing the location with

alter system set db_recovery_file_dest='/usr4/oracle/fast_recovery_area' scope=both;
    

and increasing the FRA’s size with

alter system set db_recovery_file_dest_size=100g scope=both;
    

To test the new settings we can initiate a redolog archiving:

alter system switch logfile;
alter system checkpoint;
    

After this we should see a new subdirectory in the new FRA location, named after our DB’s SID, with some subfolders containing the archived redologs.
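
We can also confirm in the DB where the latest archived logs were written to, e.g. with a quick check against v$archived_log:

select sequence#, name, completion_time
  from v$archived_log
 where completion_time > sysdate - 1/24
 order by sequence#;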

Beware not to move the existing/old FRA files to the new location using OS commands, as the DB would not notice this and could not keep track of those files. We can simply let the files stay in the old FRA location until they age out and get deleted automatically by the DB. The absolute path to each file is recorded in the database, so changing db_recovery_file_dest is no problem for existing files. But keep in mind that the overall size of the files in the old db_recovery_file_dest still counts toward the total size of our FRA. The FRA is a mere virtual concept that has no necessary 1:1 relationship to a physical directory on disk. So if we create a backup outside of our db_recovery_file_dest, it counts toward our FRA size as well.