Oracle: “SQL dictionary health check: ecol$.tabobj#,colnum fk 145 on object ECOL$ failed”

Just out of curiosity, I started a manual “Dictionary Integrity Check” at “ADVISOR CENTRAL > CHECKERS” in OEM, which surprisingly detected 14 errors of this kind:

SQL dictionary health check: ecol$.tabobj#,colnum fk 145 on object ECOL$ failed - 
Critical - Damaged rowid is AAAAB7AABAAAAPRAA1 - 
description: Object PAS.SERI_CARGO_DETAIL is referenced
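
(By the way, such a check can also be started manually from SQL*Plus via the DBMS_HM package; a minimal sketch, with an arbitrary run name:)

BEGIN
  DBMS_HM.RUN_CHECK(check_name => 'Dictionary Integrity Check',
                    run_name   => 'MY_DICT_CHECK');
END;
/

SET LONG 100000 PAGESIZE 0
SELECT DBMS_HM.GET_RUN_REPORT('MY_DICT_CHECK') FROM DUAL;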

These were the 14 incriminated rows:

select distinct(f.DAMAGE_DESCRIPTION) from v$hm_finding f, 
v$hm_run r where f.run_id = r.run_id and r.name='MT20180809_04';

Damaged rowid is AAAAB7AABAAAAPRABC - description: Object PAS.SERI_FLIGHTS is referenced
Damaged rowid is AAAAB7AABAAAAPRABD - description: Object PAS.SERI_FLIGHTS is referenced
Damaged rowid is AAAAB7AABAAAAPSAAH - description: Object PAS.SERI_VELOXYS is referenced
Damaged rowid is AAAAB7AABAAAAPRAAx - description: Object PAS.SERI_CARGO is referenced
Damaged rowid is AAAAB7AABAAAAPRAA7 - description: Object PAS.SERI_CHAINS is referenced
Damaged rowid is AAAAB7AABAAAAPRAA1 - description: Object PAS.SERI_CARGO_DETAIL is referenced
Damaged rowid is AAAAB7AABAAAAPSAAG - description: Object PAS.SERI_VELOXYS is referenced
Damaged rowid is AAAAB7AABAAAAPRAA6 - description: Object PAS.SERI_CHAINS is referenced
Damaged rowid is AAAAB7AABAAAAPRABZ - description: Object PAS.SERI_FUEL is referenced
Damaged rowid is AAAAB7AABAAAAPRAAF - description: Object PAS.SERS_COSTS is referenced
Damaged rowid is AAAAB7AABAAAAPRAB0 - description: Object PAS.SERI_ORDERING is referenced
Damaged rowid is AAAAB7AABAAAAPRAB1 - description: Object PAS.SERI_ORDERING is referenced
Damaged rowid is AAAAB7AABAAAAPRAAB - description: Object PAS.SERS_FLT_LOG_TYPE is referenced
Damaged rowid is AAAAB7AABAAAAPRABh - description: Object PAS.SERI_IC_INVOICE_HEAD is referenced

It seems as if these are all “false positives”, as stated here at MOS:

Bug 26038061 – DBMS_HM: SQL dictionary health check: ecol$.tabobj#,colnum fk reported against tables using “add column optimization” feature [26038061.8]

“Dictionary health check related to ECOL$ reports incorrect failure.
This issue will be reported against table that had columns added using “add column optimization” feature, controlled by the parameter _ADD_COL_OPTIM_ENABLED.
You can find the tables and entries causing these issues using the SQL below”

-- Identify all tables with columns added using the “add column optimization” feature
select owner, object_name, name
from dba_objects, col$
where bitand(col$.PROPERTY,1073741824)=1073741824
and object_id=obj#;

-- Find all tables with missing entries in ECOL$
-- These would match the objects listed by DBMS_HM.GET_RUN_REPORT
select e.rowid,e.tabobj#,e.colnum, o.owner, o.object_name
from ecol$ e, dba_objects o
where (tabobj#,colnum) not in (select obj#,col# from col$)
and object_id = tabobj#;

I edited the second query in order to deliver useful information, as the original query at MOS was erroneous.
The outcome of this query exactly matched the tables referred to above.

Information on the named parameter “_ADD_COL_OPTIM_ENABLED” can be found at:

Init.ora Parameter “_ADD_COL_OPTIM_ENABLED” [Hidden] Reference Note (Doc ID 1492674.1)

Although the other related Bug 16811780 “SQL DICTIONARY HEALTH CHECK: ECOL$.TABOBJ#,COLNUM FK 146 ON OBJECT ECOL$ FAILED” at MOS doesn’t refer to the aforementioned root cause of the error, it nevertheless advises rebuilding the table via DBMS_REDEFINITION as a workaround.
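
For reference, such an online rebuild could look like the following sketch. The table name is taken from the findings above, the interim table name is made up, and dependent objects (indexes, constraints, triggers) would additionally have to be copied via DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS:

-- Sketch only: online rebuild of one affected table via DBMS_REDEFINITION
-- (the interim table SERI_CARGO_DETAIL_INT is a made-up name)
CREATE TABLE PAS.SERI_CARGO_DETAIL_INT AS
  SELECT * FROM PAS.SERI_CARGO_DETAIL WHERE 1 = 0;

BEGIN
  -- verify the table can be redefined using its primary key
  DBMS_REDEFINITION.CAN_REDEF_TABLE('PAS', 'SERI_CARGO_DETAIL',
                                    DBMS_REDEFINITION.CONS_USE_PK);
  -- start, then finish the online redefinition
  DBMS_REDEFINITION.START_REDEF_TABLE('PAS', 'SERI_CARGO_DETAIL',
                                      'SERI_CARGO_DETAIL_INT');
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('PAS', 'SERI_CARGO_DETAIL',
                                       'SERI_CARGO_DETAIL_INT');
END;
/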

Oracle: “Checker run found 21 new persistent data failures”

This morning I found a mail from one of our Oracle Enterprise Managers (OEM) in my inbox, reporting a CRITICAL alert: “Checker run found 21 new persistent data failures”.

I immediately checked V$DATABASE_BLOCK_CORRUPTION but found nothing. Looking at the instance’s status in OEM showed an obviously well-running database, apart from the red critical alert on its front page. I went to “ADVISOR CENTRAL > CHECKERS” in OEM and looked at the findings of the “DB Structure Integrity Check” that had triggered the alert:

Run Findings And Recommendations
Finding
Finding Name : System datafile is old
Finding ID : 1026508
Type : FAILURE
Status : CLOSED
Priority : CRITICAL
Message : System datafile 1:
'D:\ORADATA\ASP\TS_ASP_SYSTEM_01.DBF' needs media
recovery
Message : Database cannot be opened

Finding
Finding Name : Datafile is old
Finding ID : 1026514
Type : FAILURE
Status : CLOSED
Priority : HIGH
Message : Datafile 2: 'D:\ORADATA\ASP\TS_ASP_SYSAUX_01.DBF'
needs media recovery
Message : Some objects in tablespace SYSAUX might be unavailable

(...)

So I headed for the alert.log, where I saw some suspicious “ALTER TABLESPACE… BEGIN BACKUP” commands. Scrolling through the past few days showed this command and a corresponding “ALTER TABLESPACE… END BACKUP” for all of the tablespaces shortly after every midnight – until today: this last night the “END BACKUP” was completely missing.

Looking at V$BACKUP confirmed this: All tablespaces were still in ACTIVE backup mode with a checkpoint timestamp of 00:09.
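
A quick check like this shows which datafiles are concerned (just the obvious query against V$BACKUP):

SQL> select file#, status, time from v$backup where status = 'ACTIVE';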

Hm, this looked familiar to me as we had the same root cause some weeks ago (see: Oracle: BEGIN BACKUP… with no END).

To shortly recap here: Our cloud provider apparently uses Veeam for backing up the VMs. On some of our machines they also seem to do a dedicated backup of the Oracle Database using Veeam’s functionality. To do so, Veeam has to bring the database into backup mode for a short time using “ALTER DATABASE BEGIN BACKUP” (or actually they seem to use “ALTER TABLESPACE… BEGIN BACKUP”) and “ALTER DATABASE END BACKUP” when finished. Due to unknown circumstances this “END BACKUP” is sometimes missing, which leaves the tablespaces in backup mode.

An “ALTER DATABASE END BACKUP” resolved the situation and a fresh subsequent “DB Structure Integrity Check” no longer showed any errors.

For my peace of mind I should have stopped here. But I couldn’t keep my fingers still and started an additional “Dictionary Integrity Check” which detected 14 more errors of this kind:

SQL dictionary health check: ecol$.tabobj#,colnum fk 145 on object ECOL$ failed - 
Critical - Damaged rowid is AAAAB7AABAAAAPRAA1 - 
description: Object PAS.SERI_CARGO_DETAIL is referenced

You can read here how the story goes on…

Some helpful Firefox tweaks

If you’re like me, collecting a host of pages on “other interesting stuff” while doing web research in your daily work, these Firefox tweaks may come in handy for you.

What I want, so as not to lose my precious collected webpages (opened in a multitude of tabs), is to get a new tab in any of these situations:

  • doing a web search via the browser’s search box
  • opening bookmarks
  • opening a new page via the URL bar

All of this can be accomplished if you type “about:config” in Firefox’s URL bar, acknowledge the warning and set all of these parameters to “TRUE”:

browser.urlbar.openintab
browser.search.openintab
browser.tabs.loadBookmarksInTabs

To get to a parameter, just type a representative portion of its name in the parameter search bar. The parameter’s value can be toggled via double click on its line.
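
Alternatively, such settings can be persisted in a user.js file in the Firefox profile folder, which gets applied at every browser start (a sketch; the profile path varies per installation):

// user.js in the Firefox profile folder
user_pref("browser.urlbar.openintab", true);
user_pref("browser.search.openintab", true);
user_pref("browser.tabs.loadBookmarksInTabs", true);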

And, as I just stumbled upon it in my notes, here is another parameter that may come in handy:

network.http.max-persistent-connections-per-server

With this buddy we can limit the number of concurrent downloads from a certain server – if we so wish.

Create database connection via LDAP-Server

To connect to an Oracle Database we need the hostname, port and SID or SERVICE_NAME of the target database. This information can all be pushed in one chunk into a new connection (this is called “Easy Connect”). But as this may be somewhat inconvenient, Oracle provides the possibility to store all connection data in a file called tnsnames.ora. This file is usually located at “$ORACLE_HOME/network/admin”. The disadvantage of this file in larger environments is having to keep all instances of that file on all desktops and servers consistent.
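
To illustrate both variants with made-up connection data (host, port and service name are placeholders):

sqlplus scott@//dbhost.example.com:1521/MYSERVICE

And the same connection data stored as a tnsnames.ora entry:

MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = MYSERVICE))
  )
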
To mitigate this problem, it is possible to set up an Oracle Internet Directory (“LDAP server”) to host all connection information for internal and external databases. This way there has to be only one single LDAP server which all clients can connect to in order to query for the needed connection strings.

If an Oracle Client or database is already installed on the system, all Oracle tools will automatically look for LDAP data under the above path in their ORACLE_HOME, in a file called ldap.ora that contains the LDAP server’s connection data. If an ldap.ora exists but is not used by our client, we should check whether LDAP is configured as a so-called “naming method” in “$ORACLE_HOME/network/admin/sqlnet.ora”, and whether it is set there as the first method to use.
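
A minimal sketch of the two files, with placeholder values for server, ports and admin context:

# ldap.ora
DIRECTORY_SERVERS = (ldapserver.example.com:389:636)
DEFAULT_ADMIN_CONTEXT = "dc=example,dc=com"
DIRECTORY_SERVER_TYPE = OID

# sqlnet.ora: LDAP listed as the first naming method to try
NAMES.DIRECTORY_PATH = (LDAP, TNSNAMES, EZCONNECT)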

In order to use such an LDAP server in SQL Developer when creating a new connection, we choose “Connection type: LDAP” and fill in the connection string for our LDAP server. As this string is kind of unhandy to key in for every new connection, we can configure our local environment to provide the needed LDAP connection string to SQL Developer. To do this we just have to create a persistent environment variable named TNS_ADMIN which points to the directory containing the ldap.ora. If this file already exists in “$ORACLE_HOME/network/admin”, we just have to set TNS_ADMIN = “$ORACLE_HOME/network/admin”. Otherwise we can also use any arbitrary directory as the location of our ldap.ora.
On Windows we set TNS_ADMIN quickly via cmd.exe:

setx TNS_ADMIN "C:\Oracle\product\Oracle11gClient\network\admin"

After that, if we start SQL Developer and create a new connection, we can select our LDAP server from the drop-down list. We have to click “Load” to get the list of databases from the LDAP server to choose from.

Oracle: BEGIN BACKUP… with no END

RMAN backup failed due to “not enough space on the disk”:

RMAN-03009: failure of backup command on ORA_DISK_1 channel at 07/23/2018 15:32:17
ORA-19502: write error on file "E:\BACKUPS\TCT8NTGV_1_1", block number 559105 (block size=512)
ORA-27070: async read/write failed
OSD-04016: Error queuing an asynchronous I/O request.
O/S-Error: (OS 112) There is not enough space on the disk.

Really, that disk was 100% taken up by RMAN backups. But why the heck hadn’t our deletion policy prevented this situation?!

Analyzing the existing RMAN backups showed way too many (and thus too old) archivelog backups that normally shouldn’t have been there anymore. Our last full backup was from 14.07.2018, but we still had archivelog backups from 12.07.2018 onwards – although our RETENTION POLICY looked like this:

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 2 DAYS;

Mind the timestamp of the RMAN error above: 23.07.2018. So we in fact held backups of the past eleven days! How come…?!

I found that the datafiles of all existing full backups had the same checkpoint timestamp of 12.07.2018. So that’s why there were archivelog backups older than the oldest full backup: The oldest full backup of 14.07. actually contains datafiles with checkpoint time 12.07., and hence all archivelogs as of 12.07. are still needed for recovery to comply with our retention policy.

RMAN> list backup of database summary;

List of Backups
===============
Key     TY LV S Device Type Completion Time     #Pieces #Copies Compressed Tag
------- -- -- - ----------- ------------------- ------- ------- ---------- ---
49416   B  F  A DISK        14.07.2018 00:14:43 1       1       YES        FULL_BACKUP_PAS_071318100018
49913   B  F  A DISK        21.07.2018 00:07:21 1       1       YES        FULL_BACKUP_PAS_072018100007
49914   B  F  A DISK        21.07.2018 00:10:53 1       1       YES        FULL_BACKUP_PAS_072018100007
49981   B  F  A DISK        22.07.2018 00:03:14 1       1       YES        FULL_BACKUP_PAS_072118100004
49988   B  F  A DISK        22.07.2018 02:19:43 1       1       YES        FULL_BACKUP_PAS_072118100004
RMAN> list backupset 49913,49914,49981,49988;

List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
49913   Full    17.23G     DISK        02:06:46     21.07.2018 00:07:21
        BP Key: 49913   Status: AVAILABLE  Compressed: YES  Tag: FULL_BACKUP_PAS_072018100007
        Piece Name: E:\BACKUPS\O3T8GN83_1_1
  List of Datafiles in backup set 49913
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  4       Full 14152463508157 12.07.2018 00:15:00 D:\ORADATA\PASP\TS_PASP_AIR_ARCH_01.DBF
  5       Full 14152463508199 12.07.2018 00:15:00 D:\ORADATA\PASP\TS_PASP_B00_D_01.DBF
  6       Full 14152463508578 12.07.2018 00:15:03 D:\ORADATA\PASP\TS_PASP_B00_I_01.DBF
  7       Full 14152463508616 12.07.2018 00:15:03 D:\ORADATA\PASP\TS_PASP_B01_D_01.DBF
  9       Full 14152463508720 12.07.2018 00:15:03 D:\ORADATA\PASP\TS_PASP_B04_D_01.DBF
  11      Full 14152463508804 12.07.2018 00:15:04 D:\ORADATA\PASP\TS_PASP_B16_D_01.DBF
  13      Full 14152463508987 12.07.2018 00:15:05 D:\ORADATA\PASP\TS_PASP_B28_I_01.DBF
  14      Full 14152463509069 12.07.2018 00:15:05 D:\ORADATA\PASP\TS_PASP_CUBE_D_01.DBF
  19      Full 14152463508804 12.07.2018 00:15:04 D:\ORADATA\PASP\TS_PASP_B16_D_02.DBF
  21      Full 14152503333567 20.07.2018 22:00:35 D:\ORADATA\PASP\TS_PASP_B04_I_03.DBF


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
49914   Full    18.85G     DISK        02:10:18     21.07.2018 00:10:53
        BP Key: 49914   Status: AVAILABLE  Compressed: YES  Tag: FULL_BACKUP_PAS_072018100007
        Piece Name: E:\BACKUPS\O4T8GN83_1_1
  List of Datafiles in backup set 49914
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  1       Full 14152463508069 12.07.2018 00:14:59 D:\ORADATA\PASP\TS_PASP_SYSTEM_01.DBF
  2       Full 14152463508088 12.07.2018 00:14:59 D:\ORADATA\PASP\TS_PASP_SYSAUX_01.DBF
  3       Full 14152463508109 12.07.2018 00:14:59 D:\ORADATA\PASP\TS_PASP_UNDOTBS_01.DBF
  8       Full 14152463508657 12.07.2018 00:15:03 D:\ORADATA\PASP\TS_PASP_B01_I_01.DBF
  10      Full 14152463508757 12.07.2018 00:15:04 D:\ORADATA\PASP\TS_PASP_B04_I_01.DBF
  12      Full 14152463508859 12.07.2018 00:15:04 D:\ORADATA\PASP\TS_PASP_B28_D_01.DBF
  15      Full 14152463509103 12.07.2018 00:15:05 D:\ORADATA\PASP\TS_PASP_CUBE_I_01.DBF
  16      Full 14152463509135 12.07.2018 00:15:06 D:\ORADATA\PASP\PROD_BIPLATFORM.DBF
  17      Full 14152463509165 12.07.2018 00:15:06 D:\ORADATA\PASP\PROD_MDS.DBF
  18      Full 14152463508757 12.07.2018 00:15:04 D:\ORADATA\PASP\TS_PASP_B04_I_02.DBF
  20      Full 14152503333581 20.07.2018 22:00:36 D:\ORADATA\PASP\TS_PASP_B04_D_02.DBF


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
49981   Full    17.70G     DISK        02:02:41     22.07.2018 00:03:14
        BP Key: 49981   Status: AVAILABLE  Compressed: YES  Tag: FULL_BACKUP_PAS_072118100004
        Piece Name: E:\BACKUPS\Q8T8JBK1_1_1
  List of Datafiles in backup set 49981
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  1       Full 14152463508069 12.07.2018 00:14:59 D:\ORADATA\PASP\TS_PASP_SYSTEM_01.DBF
  2       Full 14152463508088 12.07.2018 00:14:59 D:\ORADATA\PASP\TS_PASP_SYSAUX_01.DBF
  3       Full 14152463508109 12.07.2018 00:14:59 D:\ORADATA\PASP\TS_PASP_UNDOTBS_01.DBF
  4       Full 14152463508157 12.07.2018 00:15:00 D:\ORADATA\PASP\TS_PASP_AIR_ARCH_01.DBF
  5       Full 14152463508199 12.07.2018 00:15:00 D:\ORADATA\PASP\TS_PASP_B00_D_01.DBF
  8       Full 14152463508657 12.07.2018 00:15:03 D:\ORADATA\PASP\TS_PASP_B01_I_01.DBF
  10      Full 14152463508757 12.07.2018 00:15:04 D:\ORADATA\PASP\TS_PASP_B04_I_01.DBF
  12      Full 14152463508859 12.07.2018 00:15:04 D:\ORADATA\PASP\TS_PASP_B28_D_01.DBF
  15      Full 14152463509103 12.07.2018 00:15:05 D:\ORADATA\PASP\TS_PASP_CUBE_I_01.DBF
  18      Full 14152463508757 12.07.2018 00:15:04 D:\ORADATA\PASP\TS_PASP_B04_I_02.DBF
  20      Full 14152506440447 21.07.2018 22:00:33 D:\ORADATA\PASP\TS_PASP_B04_D_02.DBF

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
49988   Full    17.40G     DISK        02:14:16     22.07.2018 02:19:43
        BP Key: 49988   Status: AVAILABLE  Compressed: YES  Tag: FULL_BACKUP_PAS_072118100004
        Piece Name: E:\BACKUPS\QFT8JIU7_1_1
  List of Datafiles in backup set 49988
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  6       Full 14152463508578 12.07.2018 00:15:03 D:\ORADATA\PASP\TS_PASP_B00_I_01.DBF
  7       Full 14152463508616 12.07.2018 00:15:03 D:\ORADATA\PASP\TS_PASP_B01_D_01.DBF
  9       Full 14152463508720 12.07.2018 00:15:03 D:\ORADATA\PASP\TS_PASP_B04_D_01.DBF
  11      Full 14152463508804 12.07.2018 00:15:04 D:\ORADATA\PASP\TS_PASP_B16_D_01.DBF
  13      Full 14152463508987 12.07.2018 00:15:05 D:\ORADATA\PASP\TS_PASP_B28_I_01.DBF
  14      Full 14152463509069 12.07.2018 00:15:05 D:\ORADATA\PASP\TS_PASP_CUBE_D_01.DBF
  16      Full 14152463509135 12.07.2018 00:15:06 D:\ORADATA\PASP\PROD_BIPLATFORM.DBF
  17      Full 14152463509165 12.07.2018 00:15:06 D:\ORADATA\PASP\PROD_MDS.DBF
  19      Full 14152463508804 12.07.2018 00:15:04 D:\ORADATA\PASP\TS_PASP_B16_D_02.DBF
  21      Full 14152507027738 22.07.2018 00:05:27 D:\ORADATA\PASP\TS_PASP_B04_I_03.DBF

RMAN>

I dug through the alert.log to find a hint on what had frozen our datafiles at that checkpoint time. For the days prior to 12.07. I noticed a bunch of “ALTER TABLESPACE … BEGIN BACKUP” commands, shortly followed by the corresponding “ALTER TABLESPACE … END BACKUP”, every day right after midnight. Up to 12.07.2018, that is: here the “ALTER TABLESPACE … END BACKUP” was missing. As expected, querying V$BACKUP showed nearly all datafiles still in ACTIVE backup mode:

SQL> select * from v$backup;

     FILE# STATUS                CHANGE# TIME
---------- ------------------ ---------- -------------------
         1 ACTIVE             1.4152E+13 12.07.2018 00:14:59
         2 ACTIVE             1.4152E+13 12.07.2018 00:14:59
         3 ACTIVE             1.4152E+13 12.07.2018 00:14:59
         4 ACTIVE             1.4152E+13 12.07.2018 00:15:00
         5 ACTIVE             1.4152E+13 12.07.2018 00:15:00
         6 ACTIVE             1.4152E+13 12.07.2018 00:15:03
         7 ACTIVE             1.4152E+13 12.07.2018 00:15:03
         8 ACTIVE             1.4152E+13 12.07.2018 00:15:03
         9 ACTIVE             1.4152E+13 12.07.2018 00:15:03
        10 ACTIVE             1.4152E+13 12.07.2018 00:15:04
        11 ACTIVE             1.4152E+13 12.07.2018 00:15:04
        12 ACTIVE             1.4152E+13 12.07.2018 00:15:04
        13 ACTIVE             1.4152E+13 12.07.2018 00:15:05
        14 ACTIVE             1.4152E+13 12.07.2018 00:15:05
        15 ACTIVE             1.4152E+13 12.07.2018 00:15:05
        16 ACTIVE             1.4152E+13 12.07.2018 00:15:06
        17 ACTIVE             1.4152E+13 12.07.2018 00:15:06
        18 ACTIVE             1.4152E+13 12.07.2018 00:15:04
        19 ACTIVE             1.4152E+13 12.07.2018 00:15:04
        20 NOT ACTIVE                  0
        21 NOT ACTIVE                  0

21 rows selected.

I didn’t have a clue where these commands came from, as RMAN doesn’t need them to do its backup and we use no other custom backup scripts.

So I ended this backup mode quickly:

SQL> alter database end backup;
alter database end backup
*
ERROR at line 1:
ORA-01260: warning: END BACKUP succeeded but some files found not to be in
backup mode


SQL> select * from v$backup;

     FILE# STATUS                CHANGE# TIME
---------- ------------------ ---------- -------------------
         1 NOT ACTIVE         1.4152E+13 12.07.2018 00:14:59
         2 NOT ACTIVE         1.4152E+13 12.07.2018 00:14:59
         3 NOT ACTIVE         1.4152E+13 12.07.2018 00:14:59
         4 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:00
         5 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:00
         6 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:03
         7 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:03
         8 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:03
         9 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:03
        10 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:04
        11 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:04
        12 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:04
        13 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:05
        14 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:05
        15 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:05
        16 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:06
        17 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:06
        18 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:04
        19 NOT ACTIVE         1.4152E+13 12.07.2018 00:15:04
        20 NOT ACTIVE                  0
        21 NOT ACTIVE                  0

21 rows selected.

To finally get rid of the “obsolete” old backups, I had to free up some space to do a fresh full backup with a current checkpoint time. As all existing full backups held virtually the same data (from the checkpoint’s point of view), I decided to delete the full backups of the last two days and to rely on the archivelog backups until everything was back to normal.
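
In RMAN terms, that cleanup was something along these lines (the backup set keys are taken from the listing above):

RMAN> DELETE BACKUPSET 49913, 49914, 49981, 49988;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;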

Now for the culprit of this mess: It turned out that our cloud service provider uses Veeam to manage and back up our virtual machines. Although they just do a “simple” backup of the VMs, and there is usually no dedicated Oracle backup via Veeam configured, on this very VM a Veeam Oracle backup somehow WAS enabled. And Veeam does this backup by putting all tablespaces into backup mode for just a short time to get a consistent snapshot. Hence the incriminated BEGIN/END BACKUP commands in our alert.log, which caused the trouble. I’ll have a word with them about disabling this Veeam feature for our databases. But I wonder why Veeam doesn’t use an “ALTER DATABASE [BEGIN|END] BACKUP” instead of an “ALTER TABLESPACE … [BEGIN|END] BACKUP”. I think this way they won’t get “cross tablespace consistency”.