Oracle: “ORA-12519, TNS:no appropriate service handler found” on JDBC connections

In a Java/Liferay/JackRabbit setup, a developer wants to export all files stored in the database to the filesystem for migration purposes. After some varying amount of time, the export repeatedly fails with errors of this kind:

12:19:31,488 DEBUG [MaintenanceUtil:64] Executing com.liferay.portal.convert.ConvertDocumentLibrary
12:19:31,497 INFO  [ConvertProcess:41] Starting conversion for com.liferay.portal.convert.ConvertDocumentLibrary
12:19:31,537 DEBUG [MaintenanceUtil:64] Migrating 276 document library files
12:21:07,739 ERROR [DatabaseFileSystem:225] failed to initialize file system
java.sql.SQLException: Listener refused the connection with the following error:
ORA-12519, TNS:no appropriate service handler found
(stacktrace follows...)

After each error the application carries on until the next occurrence.

Monitoring the listener.log while the migration runs shows a massive increase in connection requests.

[oracle@serv1 ~]$ tail -f ./diag/tnslsnr/serv1/listener/trace/listener.log | egrep -i "hostdb|ORA-|TNS-"


28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PR
OTOCOL=tcp)(HOST=10.107.108.140)(PORT=57601)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PR
OTOCOL=tcp)(HOST=10.107.108.140)(PORT=57602)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PR
OTOCOL=tcp)(HOST=10.107.108.140)(PORT=57603)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PR
OTOCOL=tcp)(HOST=10.107.108.140)(PORT=57604)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PR
OTOCOL=tcp)(HOST=10.107.108.140)(PORT=57605)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PR
OTOCOL=tcp)(HOST=10.107.108.140)(PORT=57606)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PR
OTOCOL=tcp)(HOST=10.107.108.140)(PORT=57607)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PR
OTOCOL=tcp)(HOST=10.107.108.140)(PORT=57608)) * establish * hostdb * 12519
TNS-12519: TNS:no appropriate service handler found
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PR
OTOCOL=tcp)(HOST=10.107.108.140)(PORT=57609)) * establish * hostdb * 12519
TNS-12519: TNS:no appropriate service handler found
28-JUN-2013 16:52:32 * service_update * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PR
OTOCOL=tcp)(HOST=10.107.108.140)(PORT=57610)) * establish * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PR
OTOCOL=tcp)(HOST=10.107.108.140)(PORT=57611)) * establish * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PR
OTOCOL=tcp)(HOST=10.107.108.140)(PORT=57612)) * establish * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PR
OTOCOL=tcp)(HOST=10.107.108.140)(PORT=57613)) * establish * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser))) * (ADDRESS=(PR
OTOCOL=tcp)(HOST=10.107.108.140)(PORT=57614)) * establish * hostdb * 0

Apparently the app opens a new connection for every file to be exported, roughly 3-8 times per second. Monitoring the “v$session” view shows no significant increase in the overall number of sessions (still around 110), so the lifetime of each of these connections must be very short. The app actually uses a connection pool on the application server, but the pooling seems not to be properly implemented.
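To get a feel for the connection rate, the “establish” events in the listener.log can be counted per second. A small sketch with a fabricated sample log (the real file lives under diag/tnslsnr/&lt;host&gt;/&lt;listener&gt;/trace/; the sample lines are shortened):

```shell
# Fabricated sample that mimics the listener.log format shown above.
cat > /tmp/listener_sample.log <<'EOF'
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * (CONNECT_DATA=(SID=hostdb)) * establish * hostdb * 0
28-JUN-2013 16:52:31 * service_update * hostdb * 0
28-JUN-2013 16:52:32 * (CONNECT_DATA=(SID=hostdb)) * establish * hostdb * 12519
EOF

# Keep only "establish" events, cut out the timestamp (first two fields)
# and count occurrences per second: two attempts in :31, one in :32.
grep ' establish ' /tmp/listener_sample.log | awk '{print $1, $2}' | sort | uniq -c
```

Pointed at the real listener.log, the same pipeline shows the connects per second that the application produces.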
Digging the web for that error message brought up the advice to raise the value of the “PROCESSES” parameter on the database.
But as we can see, this limit has never been hit so far:

SQL> select * from v$resource_limit where resource_name in ( 'processes','sessions');

RESOURCE_NAME CURRENT_UTILIZATION MAX_UTILIZATION INITIAL_ALLOCATION   LIMIT_VALUE
------------- ------------------- --------------- -------------------- -----------
processes                     109             138        150                  150
sessions                      122             161        248                  248

SQL>

Apart from that, I wouldn’t want to raise that limit, with the corresponding restart of my otherwise well-running DB, just because of a sloppily implemented use of connection pooling. And I guess it would only defer the next occurrence of the same error.

At http://www.orafaq.com I finally found a posting that shed some light on the problem. As also stated on MOS (ID 240710.1), the listener counts all incoming connections and raises “TNS-12519 TNS:no appropriate service handler found” when that count hits the PROCESSES limit of the particular database. Unfortunately the listener’s count is not always accurate, because it does not learn about disconnects until the next “service_update” from the database arrives. So if, as in my case, there are very many short-lived connections, the listener is likely to mistakenly block new connections to the DB.
In another post I found the advice to use a shared server configuration to cope with the problem. This works like a connection pool on the DB side: instead of one server process per dedicated connection, a handful of shared server processes are fed with user requests by a few dispatchers.

If you have the XML DB option installed, shared server should be enabled by default. To find the name of the XDB dispatcher service we can query:

SQL> select name, value from v$parameter where name='dispatchers';

NAME        VALUE
----------- ----------------------------------------
dispatchers (PROTOCOL=TCP) (SERVICE=hostdbXDB)

SQL>

If shared server is not yet configured, don’t worry. To enable it we just have to set the parameter “shared_servers” to a value greater than zero. This can be done dynamically, without restarting the database.

ALTER SYSTEM SET SHARED_SERVERS=1 SCOPE=BOTH;

This starts one shared server process, which should be enough for testing. For guidance on the number of shared server processes relative to the expected number of concurrent sessions, consult the documentation.

Once shared server is configured, a dispatcher is started implicitly. To make our dispatcher respond on a named service we do:

ALTER SYSTEM SET DISPATCHERS='(PROTOCOL=TCP) (SERVICE=hostdbSS)' SCOPE=BOTH;

Now we have to configure the JDBC connect string to use this service name. To use Oracle service names in JDBC we must use the following syntax (from the Oracle documentation):

Thin-style service names are supported only by the JDBC Thin driver. The syntax is:

@//host_name:port_number/service_name

For example:

jdbc:oracle:thin:scott/tiger@//myhost:1521/myservicename
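As a sketch of that syntax with the values from this article (host and port are examples, not fixed values), the URL for the shared-server service configured above would be assembled like this:

```shell
# Example values from this article; adjust to your environment.
DB_HOST=serv1.private.domain.com
DB_PORT=1521
DB_SERVICE=hostdbSS

# Thin-style syntax: jdbc:oracle:thin:@//host_name:port_number/service_name
JDBC_URL="jdbc:oracle:thin:@//${DB_HOST}:${DB_PORT}/${DB_SERVICE}"
echo "$JDBC_URL"
```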

If this doesn’t work, try the fully qualified service name in the format “servicename.domainname” (e.g. “hostdbXDB.private.domain.com”).

If we want every (TCP) connection that doesn’t explicitly request a dedicated server to use the shared server, we simply omit the service name from the dispatcher’s configuration and let the dispatcher respond to all TCP connections:

ALTER SYSTEM SET DISPATCHERS='(PROTOCOL=TCP)' SCOPE=BOTH;

If we now restart the application so it picks up the new settings, we should see it using the dispatcher process in the output of “lsnrctl service”:

[oracle@serv1 ~]$ lsnrctl service

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 02-JUL-2013 10:07:23

Copyright (c) 1991, 2010, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=serv1.private.domain.com)(PORT=1522)))
Services Summary...
Service "hostdb.private.domain.com" has 1 instance(s).
  Instance "hostdb", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:14179 refused:0 state:ready
         LOCAL SERVER

Service "hostdbXDB.private.domain.com" has 1 instance(s).
  Instance "hostdb", status READY, has 1 handler(s) for this service...
    Handler(s):
      "D000" established:5649 refused:0 current:74 max:1022 state:ready
         DISPATCHER 
         (ADDRESS=(PROTOCOL=tcp)(HOST=serv1.private.domain.com)(PORT=32829))

In the line “D000” established:5649 refused:0 current:74 max:1022 state:ready we see the dispatcher “D000”, which has served 5649 connections so far and currently has 74 sessions connected.

More information about our shared server usage can be found in the following views:

select * from V$SHARED_SERVER;
select * from V$CIRCUIT;
select * from V$DISPATCHER;
select * from V$SHARED_SERVER_MONITOR;
select * from V$DISPATCHER_CONFIG;

E.g. the view “V$CIRCUIT” shows all current connections to the dispatcher.

Querying “select count(*), server from v$session group by server;” shows the number of connections per access path:

DEDICATED: dedicated user connections to the DB
SERVER: shared server connections currently doing work
NONE: idle shared server connections

If a session is inactive, it is merely connected to its dispatcher process and no shared server process is active on its behalf – hence the “NONE”.

While solving this problem I came across a listener parameter that was new to me, which makes it possible to limit the number of new connections per second (e.g. as a defense against DoS attacks): “CONNECTION_RATE_&lt;listener_name&gt;”.
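I have not tried this parameter myself; according to the Net Services Reference, a listener.ora fragment for a listener named LISTENER might look roughly like this (treat the exact syntax as an assumption to verify against the documentation):

```
# Hypothetical sketch: allow at most 10 new connections per second.
CONNECTION_RATE_LISTENER=10

LISTENER=
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=tcp)(HOST=serv1.private.domain.com)(PORT=1521)(RATE_LIMIT=yes)))
```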

Oracle: Free space in Fast-Recovery-Area (FRA)

Sooner or later, I guess, every DBA will be faced with this error message

ORA-00257: archiver error. Connect internal only, until freed.

and angry phone-calls from users complaining about a hanging database.

The reason for this error is a lack of space for archiving redo logs.

When using a FRA and a properly set up backup that deletes obsolete files automatically, you probably have only two options:

  • Increase the FRA size by changing the parameter “db_recovery_file_dest_size” and/or
  • Relocate the FRA directory to another disk by changing the parameter “db_recovery_file_dest”

If space is available, both options are quick and easy and get you out of your mess instantly. So don’t panic – and beware of deleting e.g. some archive logs at OS level. As laid out in my post “Oracle: How to tell DB about manually freed space in FRA”, the latter won’t get you any further and may even worsen your situation.

To increase the FRA size, do this:

    alter system set db_recovery_file_dest_size=100g scope=both;
    

To change the FRA location, do this:

    alter system set db_recovery_file_dest='/usr4/oracle/fast_recovery_area' scope=both;
    

Changing the FRA location is not complicated, as described here: “Oracle: Relocate Fast-Recovery-Area (FRA)”

Oracle: Relocate Fast-Recovery-Area (FRA)

As our database grows and grows, especially with respect to backup size and daily generated archive logs, we may at some point have to relocate the FRA to another device with more free space. This can also happen when we have run out of FRA space because of an unexpected, sudden, massive increase in archived redo logs. I just experienced the latter when some developers ran an upgrade of their application.

Luckily it is no big fuss to relocate the FRA. It’s just a matter of changing the location with

    alter system set db_recovery_file_dest='/usr4/oracle/fast_recovery_area' scope=both;
    

and increasing the FRA’s size with

    alter system set db_recovery_file_dest_size=100g scope=both;
    

To test the new settings we can trigger a redo log archiving:

    alter system switch logfile;
    alter system checkpoint;
    

After this we should see a new subdirectory in the new FRA location, named after our DB’s SID, with some subfolders containing the archived redo log.

Beware of moving the existing/old FRA files to the new location with OS commands, as the DB would not notice this and could no longer keep track of those files. We can simply let the files stay in the old FRA location until they age out and get deleted automatically by the DB. The absolute path of each file is recorded in the DB, so changing db_recovery_file_dest is no problem for existing files. But keep in mind that the overall size of the files in the old db_recovery_file_dest still counts toward the total size of our FRA. The FRA is a merely virtual concept with no necessary 1:1 relationship to a physical directory on disk. So if we create a backup outside of our db_recovery_file_dest, it counts toward our FRA size too.

Oracle: How to tell DB about manually freed space in FRA

If a database is configured to use a Fast Recovery Area (FRA), all archive logs and RMAN backups go there. The FRA’s disk space is limited to a user-defined value.
Usually the DBA’s RMAN backup job cleans up all obsolete files in the FRA. This can easily be done by issuing a “DELETE OBSOLETE” in RMAN. Based on the configured RETENTION POLICY, this command deletes all files that are no longer needed. As long as no files in the FRA are deleted manually at OS level, the database correctly keeps track of the remaining space in there.

    SQL> SELECT name,
      ROUND(space_limit/1024/1024/1024,0) limit_gb,
      ROUND(space_used /1024/1024/1024,0) used_gb,
      ROUND(space_used /space_limit,4)*100 AS pct_used
    FROM V$RECOVERY_FILE_DEST;
    
    NAME                                    LIMIT_GB    USED_GB   PCT_USED
    ------------------------------------- ---------- ---------- ----------
    /usr/people/oracle/fast_recovery_area         50         43      85.09
    
    SQL>
    

This query gives a more detailed overview of the storage distribution:

    SQL> select * from v$flash_recovery_area_usage;
    
    FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
    -------------------- ------------------ ------------------------- ---------------
    CONTROL FILE                          0                         0               0
    REDO LOG                              0                         0               0
    ARCHIVED LOG                      63.52                         0             379
    BACKUP PIECE                      21.56                         0               4
    IMAGE COPY                            0                         0               0
    FLASHBACK LOG                         0                         0               0
    FOREIGN ARCHIVED LOG                  0                         0               0
    
    7 rows selected.
    
    SQL>
    

If a database is configured in archive log mode and with a FRA, but without an RMAN-based backup with cleanup, the FRA continually fills with archive logs until no space is left. This brings the database to a halt: all active sessions are put on hold, showing the wait events “log file switch (archiving needed)” or “free buffer waits”.

Now if we, in “panic”, delete a bunch of old archive logs in the FRA using OS commands (like “rm”) to free space, the database will not notice. Even a “CROSSCHECK ARCHIVELOG ALL” would not release the DB from its hold state; it would just mark the deleted files as “EXPIRED” in the RMAN catalog. To make the database recognize the deletion, we must “delete” the files once more using RMAN:

    RMAN> CROSSCHECK ARCHIVELOG ALL;
    RMAN> DELETE EXPIRED ARCHIVELOG ALL;
    

In general there is no need to delete files in the FRA manually at OS level, as we can achieve our goal entirely from within RMAN. This command, for example, deletes all archive logs older than seven days – in the RMAN catalog and on disk:

    RMAN> DELETE ARCHIVELOG UNTIL TIME 'SYSDATE-7';
    

The above procedure can of course also be used to handle (manual) deletes of backup sets.

More info about RMAN’s DELETE command can be found here:

http://download.oracle.com/docs/cd/B28359_01/backup.111/b28273/rcmsynta016.htm#i81315

But there is one thing left: if we delete our archive logs in the FRA with RMAN, only the files are deleted. And as Oracle creates a new directory for each day’s archive logs, named “YYYY_MM_DD”, we would still be left with a huge set of empty directories. To clean those up too, we can execute this (on Linux) after the RMAN backup/cleanup has run:

    find /apps/oracle/fast_recovery_area/DEMODB/ -mindepth 1 -empty -type d -exec rm -r {} \;
    

This command passes every empty directory found under the given path to “rm” and thereby deletes it.
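A harmless way to convince yourself of what this does is to rehearse it on a throwaway directory tree instead of the real FRA (a sketch; note the added “-depth”, which keeps find from complaining about directories it has just removed):

```shell
# Build a throwaway tree that mimics the FRA layout.
tmp=$(mktemp -d)
mkdir -p "$tmp/DEMODB/archivelog/2012_05_01"   # empty: should be removed
mkdir -p "$tmp/DEMODB/archivelog/2012_05_02"   # non-empty: should survive
touch "$tmp/DEMODB/archivelog/2012_05_02/log1.arc"

# Same invocation as above, pointed at the test tree ("-depth" processes
# a directory's contents before the directory itself).
find "$tmp/DEMODB/" -depth -mindepth 1 -empty -type d -exec rm -r {} \;

# Only 2012_05_02 is left, because it still contains a file.
ls "$tmp/DEMODB/archivelog"
```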

Edit 2012-05-02:
I got a note that, according to Oracle Support, all empty directories in the FRA older than seven days are deleted automatically after a controlfile backup. I verified this successfully. But mind that “older than seven days” here means “unchanged for seven days”. In plain terms: if you delete all files from the archive log folder named “2012_05_02”, that folder gets deleted on 2012-05-09 (seven days later).

See MOS document: “Flash Recovery Area Housekeeping of empty directories” [ID 305970.1]

How to check if an Oracle DB edition is “Standard” or “Enterprise”

When logging in via SQL*Plus you see something like this:

Standard Edition (SE):

    SQL*Plus: Release 10.2.0.4.0 - Production on Mon Aug 2 14:35:46 2010
    
    Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
    
    Connected to:
    Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
    

Enterprise Edition (EE):

    SQL*Plus: Release 10.2.0.4.0 - Production on Mon Aug 2 14:34:52 2010
    
    Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
    
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, Data Mining and Real Application Testing options
    

You could also query v$version:

SE:

    SQL> select * from v$version;
    
    BANNER
    ----------------------------------------------------------------
    Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    

EE:

    SQL>  select * from v$version;
    
    BANNER
    ----------------------------------------------------------------
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    

Oracle: Create PFILE from SPFILE

I just ran into an error while trying to bounce a long-running Oracle DB:

    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup
    ORA-16032: parameter LOG_ARCHIVE_DEST destination string cannot be translated
    ORA-07286: sksagdi: cannot obtain device information.
    SVR4 Error: 2: No such file or directory

So there must be something wrong with that LOG_ARCHIVE_DEST. But where the heck is it pointing to?

To get a parameter file (PFILE), which is human-readable plain ASCII, one can use “CREATE PFILE FROM SPFILE” in SQL*Plus. That command creates a pfile in $ORACLE_HOME/dbs, where the spfile also resides. Until now I thought I had to bring the instance into NOMOUNT state with “STARTUP NOMOUNT” to be able to execute that command, because that is when the SPFILE is read. But…

    SQL> startup nomount
    ORA-16032: parameter LOG_ARCHIVE_DEST destination string cannot be translated
    ORA-07286: sksagdi: cannot obtain device information.
    SVR4 Error: 2: No such file or directory

So, is there no way to get the value of the offending parameter?

When the environment variable “ORACLE_SID” is set to the desired DB, the instance seems to read the spfile from the default location ($ORACLE_HOME/dbs) in the default filename format (spfileSID.ora) – no matter whether the DB is up or not. So one can connect to an idle instance and create a PFILE:

    sqlplus '/ as sysdba';
    SQL*Plus: Release 9.2.0.6.0 - Production on Mo Apr 12 17:05:39 2010
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Connected to an idle instance.
    SQL> create pfile from spfile;
    File created.

As creating a PFILE is just a translation of the SPFILE, one can even connect to no instance at all (using only SQL*Plus) and create a PFILE from any given SPFILE by stating the absolute path of each file:

    sqlplus /nolog
    SQL*Plus: Release 9.2.0.6.0 - Production on Mo Apr 12 17:41:29 2010
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    SQL> create pfile='/usr/people/oracle/initORGDB.ora' from spfile='/opt/oracle/orahome1/dbs/spfileORGDB.ora';
    File created.
    


After getting the PFILE one can examine it, fix the error, and start the instance with the corrected PFILE – or create an SPFILE from the corrected PFILE right away and start up with that.

Edit 2014-08-04:
At least with version 11.2.0.3.0 it is no longer possible to generate a pfile from an spfile with “sqlplus /nolog”. To get the pfile we must connect to the instance:

    [oracle@testsrv ~]$ sqlplus /nolog
    
    SQL*Plus: Release 11.2.0.3.0 Production on Mo Aug 4 14:12:42 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    
    SQL> create pfile='/app/oracle/temp/inittestdb.ora' from spfile='/app/oracle/product/11.2.0/dbhome_1/dbs/spfiletestdb.ora';
    SP2-0640: Not connected
    SQL> exit
    Disconnected
    
    [oracle@testsrv ~]$ export ORACLE_SID=testdb
    [oracle@testsrv ~]$ sqlplus / as sysdba
    
    SQL*Plus: Release 11.2.0.3.0 Production on Mon Aug 4 14:22:48 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    
    Connected to an idle instance.
    
    SQL> create pfile='/app/oracle/temp/testdb.ora' from spfile;
    File created.
    SQL>
    

Another way to get at the parameters is to run the “strings” command on the spfile at the shell (credits go to Frits Hoogland):

    [oracle@testsrv ~]$ strings /app/oracle/product/11.2.0/dbhome_1/dbs/spfiletestdb.ora
    testdb.__db_cache_size=2969567232
    testdb.__java_pool_size=16777216
    testdb.__large_pool_size=16777216
    testdb.__oracle_base='/app/oracle'#ORACLE_BASE set from environment
    testdb.__pga_aggregate_target=2583691264
    testdb.__sga_target=4798283776
    testdb.__shared_io_pool_size=0
    testdb.__shared_pool_size=1711276032
    testdb.__streams_pool_size=33554432
    *.audit_file_dest='/app/oracle/admin/testdb/adump'
    *.audit_trail='db'
    *.compatible='11.2.0.0.0'
    *.control_files='/usr2/oracle/oradata/testdb
    /control01.ctl','/app/oracle/fast_recovery_area/testdb/control02.ctl'
    *.db_block_size=8192
    *.db_domain='private.domain.com'
    *.db_name='testdb'
    *.db_recovery_file_dest='/app/oracle/fast_recovery_area'
    *.db_recovery_file_dest_size=161061273600
    *.diagnostic_dest='/app/oracle'
    *.log_archive_format='testdb_%t_%s_%r.arc'
    *.memory_target=7368343552
    *.nls_language='GERMAN'
    *.nls_territory='GERMANY'
    *.open_cursors=300
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.undo_ta
    blespace='UNDOTBS1'
    [oracle@testsrv ~]$
    

This command extracts all printable strings from the SPFILE. The result is almost what a PFILE would contain; we just have to take care of the occasional newlines that break certain parameters (here: control_files, undo_tablespace) across two lines. Once we fix these line breaks, we end up with the exact PFILE content.
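This fix-up can be scripted: every genuine pfile line starts with “*.” or a “&lt;sid&gt;.” prefix (here: “testdb.”), so any line that starts with neither can be glued onto the previous one. A small awk sketch over a fabricated sample containing the two breaks shown above (a heuristic that can misfire on exotic parameter values, not a definitive tool):

```shell
# Fabricated sample of "strings spfile" output with two broken parameters.
cat > /tmp/spfile_strings.txt <<'EOF'
*.control_files='/usr2/oracle/oradata/testdb
/control01.ctl','/app/oracle/fast_recovery_area/testdb/control02.ctl'
*.db_block_size=8192
*.undo_ta
blespace='UNDOTBS1'
EOF

# Lines starting with "*." or "<word>." (e.g. "testdb.") begin a parameter;
# everything else is a continuation and gets appended to the buffered line.
awk '/^\*\./ || /^[A-Za-z0-9_$]+\./ { if (buf != "") print buf; buf = $0; next }
     { buf = buf $0 }
     END { if (buf != "") print buf }' /tmp/spfile_strings.txt
```

The output is the three parameters re-joined onto one line each.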