DBA Blogs

ORA_ROWSCN -- 2 : Multi Table, Multiple Rows

Hemant K Chitale - 11 hours 54 min ago

 In contrast to the single-table, single-row demonstration in my previous blog post, this post demonstrates how ORA_ROWSCN presents the same SCN for all the rows in all the tables touched by a single transaction (i.e. one COMMIT at the end of multiple DML statements).



oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Aug 14 22:39:25 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Fri Aug 14 2020 22:39:20 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

22:39:25 SQL> create table t_1 (txn_scn number) rowdependencies;

Table created.

22:39:48 SQL> create table t_2(txn_scn number) rowdependencies;

Table created.

22:40:00 SQL> create table t_3 (txn_scn number) rowdependencies;

Table created.

22:40:07 SQL> create table t_4 (txn_scn number) rowdependencies;

Table created.

22:40:12 SQL>
22:40:12 SQL> insert into t_1
22:40:27 2 select current_scn
22:40:31 3 from v$database, dual
22:40:36 4 connect by level < 11
22:40:40 5 /

10 rows created.

22:40:40 SQL> insert into t_2
22:40:43 2 select current_scn
22:40:50 3 from v$database
22:40:53 4 /

1 row created.

22:40:53 SQL> !sleep 10

22:41:06 SQL> insert into t_3
22:41:14 2 select current_scn
22:41:18 3 from v$database, dual
22:41:22 4 connect by level < 6
22:41:28 5 /

5 rows created.

22:41:29 SQL> !sleep 30

22:42:01 SQL> select distinct (current_scn)
22:42:12 2
22:42:21 SQL>
22:42:21 SQL> select distinct (txn_scn)
22:42:25 2 from t_1
22:42:28 3 /

TXN_SCN
----------
6664390

22:42:28 SQL> select distinct (txn_scn)
22:42:36 2 from t_2
22:42:38 3 /

TXN_SCN
----------
6664419

22:42:40 SQL> select distinct (txn_scn)
22:42:44 2 from t_3
22:42:46 3 /

TXN_SCN
----------
6665530

22:42:46 SQL> insert into t_4
22:42:51 2 select * from t_1
22:42:53 3 /

10 rows created.

22:42:54 SQL> select distinct (txn_scn)
22:42:58 2 from t_4
22:43:00 3 /

TXN_SCN
----------
6664390

22:43:01 SQL> !sleep 60

22:44:05 SQL> update t_2
22:44:21 2 set txn_scn = (select current_scn from v$database)
22:44:36 3 /

1 row updated.

22:44:37 SQL> select txn_scn
22:44:42 2 from t_2
22:44:45 3 /

TXN_SCN
----------
6672139

22:44:46 SQL>
22:44:46 SQL>
22:45:07 SQL> select distinct(txn_scn) from t_1;

TXN_SCN
----------
6664390

22:45:27 SQL> select distinct(txn_scn) from t_2;

TXN_SCN
----------
6672139

22:45:33 SQL> select distinct(txn_scn) from t_3;

TXN_SCN
----------
6665530

22:45:41 SQL> select distinct(txn_scn) from t_4;

TXN_SCN
----------
6664390

22:45:45 SQL> insert into t_1
22:45:54 2 select current_scn from v$database;

1 row created.

22:46:04 SQL> update t_1
22:46:07 2 where txn_Scn != 6664390
22:46:16 3
22:46:20 SQL>
22:46:27 SQL> update t_1
22:46:30 2 st txn_scn = (select current_scn from v$database)
22:46:40 3 where txn_scn != 6664390
22:46:51 4 /
st txn_scn = (select current_scn from v$database)
*
ERROR at line 2:
ORA-00971: missing SET keyword


22:46:52 SQL> update t_1
22:46:58 2 set txn_scn = (select current_scn from v$database)
22:47:03 3 where txn_scn != 6664390
22:47:09 4 /

1 row updated.

22:47:10 SQL> select txn_scn, count(*)
22:47:16 2 from t_1
22:47:18 3 group by txn_scn
22:47:22 4 /

TXN_SCN COUNT(*)
---------- ----------
6683784 1
6664390 10

22:47:23 SQL> !sleep 15
22:47:42 SQL>
22:47:54 SQL> commit;

Commit complete.

22:47:59 SQL>


The CURRENT_SCN that is inserted into each of the 4 tables is different (I had multiple other transactions running from other sessions to force the SCN to keep incrementing).

What ORA_ROWSCN values do we see after the COMMIT?


22:48:57 SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Aug 14 22:49:05 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Fri Aug 14 2020 22:48:57 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

22:49:05 SQL> select txn_scn, ora_rowscn
22:49:27 2 from t_1
22:49:29 3 /

TXN_SCN ORA_ROWSCN
---------- ----------
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6683784 6686983

11 rows selected.

22:49:30 SQL> select txn_scn, ora_rowscn
22:49:40 2 from t_1
22:49:44 3 order by 1,2
22:49:48 4 /

TXN_SCN ORA_ROWSCN
---------- ----------
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6683784 6686983

11 rows selected.

22:49:48 SQL>
22:49:48 SQL> select txn_scn, ora_rowscn
22:50:09 2 from t_2
22:50:11 3 order by 1,2
22:50:13 4 /

TXN_SCN ORA_ROWSCN
---------- ----------
6672139 6686983

22:50:14 SQL> select txn_scn, ora_rowscn
22:50:19 2 from t_3
22:50:21 3 order by 1,2
22:50:22 4 /

TXN_SCN ORA_ROWSCN
---------- ----------
6665530 6686983
6665530 6686983
6665530 6686983
6665530 6686983
6665530 6686983

22:50:23 SQL> select txn_scn, ora_rowscn
22:50:29 2 from t_4
22:50:32 3 order by 1,
22:50:34 4 2
22:50:36 5
22:50:37 SQL> /

TXN_SCN ORA_ROWSCN
---------- ----------
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983
6664390 6686983

10 rows selected.

22:50:37 SQL>


Every single row in all 4 tables has the same ORA_ROWSCN, which was set when the COMMIT was issued.  So, for any transaction which spans more than 1 row and/or more than 1 table, *all* the rows affected will have the same ORA_ROWSCN, irrespective of the actual CURRENT_SCN at the beginning of the first DML statement or at the end of the last DML statement.


However, remember that DDL statements (CREATE, ALTER, DROP, etc.) cause an implicit COMMIT to be issued immediately.  So, if I had issued a DDL statement in the middle of the session, the ORA_ROWSCN for rows impacted after the DDL would be different from that for rows impacted before the DDL.
Note, however, that if the client program had set AUTOCOMMIT ON, every statement would result in a COMMIT and a fresh ORA_ROWSCN.  So, the ORA_ROWSCN values for these same SQL operations could have varied if I were using a client program (e.g. a JDBC connection) with AUTOCOMMIT ON.  Also, a session exit/disconnect may issue a COMMIT, depending on whether the client program issues a COMMIT on closing the session.
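
As a sketch of the DDL effect (the table and index names here are hypothetical, and the SCN values would of course differ on any database), a DDL statement issued between two inserts would split the ORA_ROWSCN values even though only one explicit COMMIT is issued:

```sql
-- t_demo is a hypothetical table created with ROWDEPENDENCIES
insert into t_demo select current_scn from v$database;   -- row 1

create index t_demo_ndx on t_demo(txn_scn);              -- DDL: implicit COMMIT fixes row 1's ORA_ROWSCN

insert into t_demo select current_scn from v$database;   -- row 2
commit;                                                  -- row 2 gets a later ORA_ROWSCN

select txn_scn, ora_rowscn from t_demo;
-- two different ORA_ROWSCN values: one from the implicit COMMIT, one from the explicit COMMIT
```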

What other operations can you think of that would cause implicit COMMITs, resulting in varying ORA_ROWSCNs?
Categories: DBA Blogs

Materialized view

Tom Kyte - 20 hours 6 min ago
hi tom, This is with reference to question asked earlier about MV. i don't know how to write the revert against the questions asked. https://asktom.oracle.com/pls/apex/asktom.search?tag=materialized-view-complete-refresh-getting-slower Actually in below pr_fact_x is a view and day_pr_mst_x is again a MV which is rebuild means complete refresh every day so fast refresh is not possible. <code>create materialized view pr_month nologging tablespace idm_msta_ts using index tablespace idm_msta_is as select -- + index_ffs(d) use_hash(d f) f.client_id, f.account_dim_id, f.org_dim_dc_id, f.org_dim_unit_id, f.instill_product_dim_id, f.instill_product_id, f.client_product_dim_id, d.month_period_num, sum(f.fact_delivered_cost) as TOTAL_COST, sum(f.fact_delivered_cases) as TOTAL_CASES , sum(f.fact_delivered_weight) as TOTAL_WEIGHT, sum(f.fact_delivered_volume) as TOTAL_VOLUME, sum(f.fact_delivered_count) as TOTAL_COUNT, f.currency_dim_id as CURRENCY_DIM_ID from pr_fact_x f, day_pr_mst_x d where f.client_id = d.client_id and f.invoice_trx_date = d.calendar_date group by f.client_id, f.account_dim_id, f.org_dim_dc_id, f.org_dim_unit_id, f.instill_product_dim_id, f.instill_product_id, f.client_product_dim_id, d.month_period_num , f.currency_dim_id ;</code> please suggest.
Categories: DBA Blogs

remove utl_http from database

Tom Kyte - Thu, 2020-08-13 07:26
Hi Tom, We are using utl_http in our PLSQL code to communicate with external system. We are using POST method to send message to external system and with certain transfer timeout we are waiting for response and reading the response. Now we are planning to remove or take away utl_http from code. As this is synchronous communication, can you advise the possible ways which substitute utl_http. Thank You.
Categories: DBA Blogs

get someone's age in an APEX field

Tom Kyte - Thu, 2020-08-13 07:26
cordial greetings this is to check if there is a way to automatically obtain an age in a field when the user registered the date of birth
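
A hedged sketch of the usual SQL for this (the page item name is hypothetical; in APEX the expression would typically go into a computation or dynamic action fired when the date-of-birth item changes):

```sql
-- age in whole years from a date of birth
select trunc(months_between(sysdate, date '1990-08-14') / 12) as age
  from dual;

-- in an APEX computation, :P1_DOB would be the (hypothetical) date-of-birth item:
--   trunc(months_between(sysdate, :P1_DOB) / 12)
```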
Categories: DBA Blogs

OEM report long running backup

Tom Kyte - Thu, 2020-08-13 07:26
Hello Team, CAN ANYONE PLEASE SUGGEST HOW TO CREATE EMAIL NOTIFICATION OF LONG RUNNING BACKUP JOB REPORT FROM EOM.
Categories: DBA Blogs

Compressing new partitions of existing very large tables

Tom Kyte - Thu, 2020-08-13 07:26
We have quite a few very large tables that are interval partitioned ( 1 partition per day ). At any given point we store 120 days worth of data, i.e. we do daily partition maintenance and move the > 120 days data over to either Archival systems or purge them. My question is, is it possible to compress new partitions as they are added to the source table or do we need to create a new table and move data ? Here is an example of what the source table looks like <code> create table my_very_large_table ( id number(20) not null, processedxml clob, updatedon date default trunc( current_date ) ) lob ( processedxml ) store as securefile ( tablespace my_tablespace enable storage in row chunk 8192 ) partition by range( updatedon ) interval( numtodsinterval( 1, 'day' )) ( partition p_default values less than ( to_date( '01/01/2020', 'mm/dd/yyyy' )) ) tablespace my_tablespace; </code> We have 120 days worth of data in this table and several hundred thousand rows are added everyday. Can I alter the partition attributes so that new partitions are created as compressed ? Thanks BC, MT MI
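
One possible approach (a sketch, not tested against this exact table): changing the table's default partition attributes affects only partitions created afterwards, so new interval partitions are created compressed while existing ones are untouched:

```sql
-- new interval partitions inherit this default; existing partitions are unaffected
-- note: OLTP/advanced row compression requires the Advanced Compression option
alter table my_very_large_table modify default attributes compress for oltp;

-- existing partitions would still need an explicit move (offline before 12.2), e.g.:
-- alter table my_very_large_table move partition SYS_P123 compress for oltp;
```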
Categories: DBA Blogs

select records with certain "distance" to others

Tom Kyte - Thu, 2020-08-13 07:26
Hello Tom, we have a table with appointments. There is a appointment every 5 minutes like this an there may be holes because some appointments are already taken <code> create table AM_HILFSTABELLE ( amht_num1 NUMBER, amht_dat1 DATE ); insert into am_hilfstabelle(amht_num1,amht_dat1) values ( 1, to_date('08.06.2020 08:00:00','DD.MM.YYYY HH24:MI:SS')); insert into am_hilfstabelle(amht_num1,amht_dat1) values ( 2, to_date('08.06.2020 08:05:00','DD.MM.YYYY HH24:MI:SS')); insert into am_hilfstabelle(amht_num1,amht_dat1) values ( 3, to_date('08.06.2020 08:10:00','DD.MM.YYYY HH24:MI:SS')); insert into am_hilfstabelle(amht_num1,amht_dat1) values ( 4, to_date('08.06.2020 08:15:00','DD.MM.YYYY HH24:MI:SS')); insert into am_hilfstabelle(amht_num1,amht_dat1) values ( 5, to_date('08.06.2020 08:20:00','DD.MM.YYYY HH24:MI:SS')); insert into am_hilfstabelle(amht_num1,amht_dat1) values ( 6, to_date('08.06.2020 08:25:00','DD.MM.YYYY HH24:MI:SS')); insert into am_hilfstabelle(amht_num1,amht_dat1) values ( 7, to_date('08.06.2020 08:35:00','DD.MM.YYYY HH24:MI:SS')); insert into am_hilfstabelle(amht_num1,amht_dat1) values ( 8, to_date('08.06.2020 08:40:00','DD.MM.YYYY HH24:MI:SS')); insert into am_hilfstabelle(amht_num1,amht_dat1) values ( 9, to_date('08.06.2020 08:45:00','DD.MM.YYYY HH24:MI:SS')); insert into am_hilfstabelle(amht_num1,amht_dat1) values ( 110, to_date('08.06.2020 08:50:00','DD.MM.YYYY HH24:MI:SS')); </code> We want to show the user only appointments that are at least 15 minutes one from the other In this case 08:00 08:15: 08:35 ( 08:30 is already taken, so 20 min from the last) 08:50 ( 15 minutes from the last one) We write this appointments in a temporary table that we show the user. Ww solved it with a cursor and a loop. Is it possible to do this with a select without the loop so we could avoid the pl/sql overhead and show the result directly to the user Regards Andreas
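
One set-based possibility (a sketch against the posted table, assuming 12c+ for MATCH_RECOGNIZE): each match starts at an accepted appointment and swallows the following rows that are less than 15 minutes after it, so no PL/SQL loop is needed:

```sql
select amht_dat1
from   am_hilfstabelle
match_recognize (
  order by amht_dat1
  measures a.amht_dat1 as amht_dat1
  one row per match
  pattern (a b*)
  define b as amht_dat1 < a.amht_dat1 + interval '15' minute
);
-- with the sample data: 08:00, 08:15, 08:35, 08:50
```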
Categories: DBA Blogs

Select statement goes with Table Storage full in Dev environment but uses index range scan in SIT environment

Tom Kyte - Thu, 2020-08-13 07:26
Hi Team, I have a same table in 2 environments Development and SIT. Both have the same index pptcode_1_3. Environment - DEV Table name - pptcode number of rows - 103819 execution time - 9 secs when i run execution plan on development it does TABLE ACCESS STORAGE FULL for the select statement with the where condition. Environment - SIT TABLE name - pptcode number of rows 57109 execution time - 1 sec When I run execution plan on SIT it does go with TABLE ACCESS BY INDEX ROWID and INDEX RANGE SCAN How can i make the query to use index in development and run in 1 second.
Categories: DBA Blogs

Installing Docker on Amazon Linux 2

Pakistan's First Oracle Blog - Thu, 2020-08-13 00:50
Installing Docker on Amazon Linux 2 is full of surprises that are not easy to deal with. I just wanted to test something in a container environment, so I spun up a new EC2 instance from the following AMI:

Amazon Linux 2 AMI (HVM), SSD Volume Type - ami-0ded330691a314693 (64-bit x86) / ami-0c3a4ad3dbe082a72 (64-bit Arm)

After this Linux instance came up, I just did yum update to get all the latest stuff:

 sudo yum update

All good so far.
Then I installed/checked yum-utils and grabbed the docker repo, and all good there:

[ec2-user@testf ~]$ sudo yum install -y yum-utils
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
Package yum-utils-1.1.31-46.amzn2.0.1.noarch already installed and latest version
Nothing to do

[ec2-user@testf ~]$ sudo yum-config-manager \
>     --add-repo \
>     https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo


Now, it's time to install docker:

[ec2-user@testf ~]$ sudo yum install docker-ce docker-ce-cli containerd.io
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
amzn2-core                                                                                                               | 3.7 kB  00:00:00
docker-ce-stable                                                                                                         | 3.5 kB  00:00:00
(1/2): docker-ce-stable/x86_64/primary_db                                                                                |  45 kB  00:00:00
(2/2): docker-ce-stable/x86_64/updateinfo                                                                                |   55 B  00:00:00
Resolving Dependencies
--> Running transaction check
---> Package containerd.io.x86_64 0:1.2.13-3.2.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: containerd.io-1.2.13-3.2.el7.x86_64
---> Package docker-ce.x86_64 3:19.03.12-3.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: 3:docker-ce-19.03.12-3.el7.x86_64
--> Processing Dependency: libcgroup for package: 3:docker-ce-19.03.12-3.el7.x86_64
---> Package docker-ce-cli.x86_64 1:19.03.12-3.el7 will be installed
--> Running transaction check
---> Package containerd.io.x86_64 0:1.2.13-3.2.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: containerd.io-1.2.13-3.2.el7.x86_64
---> Package docker-ce.x86_64 3:19.03.12-3.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: 3:docker-ce-19.03.12-3.el7.x86_64
---> Package libcgroup.x86_64 0:0.41-21.amzn2 will be installed
--> Finished Dependency Resolution
Error: Package: containerd.io-1.2.13-3.2.el7.x86_64 (docker-ce-stable)
           Requires: container-selinux >= 2:2.74
Error: Package: 3:docker-ce-19.03.12-3.el7.x86_64 (docker-ce-stable)
           Requires: container-selinux >= 2:2.74
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest


and it failed. 

So I googled the error "Requires: container-selinux", and every Stack Overflow post and blog says to download a newer rpm from some CentOS or similar mirror, but that simply doesn't work, no matter how hard you try. 

Here is the solution that finally enabled me to get Docker installed on Amazon Linux 2 on this EC2 server:

sudo rm /etc/yum.repos.d/docker-ce.repo

sudo amazon-linux-extras install docker

sudo service docker start

[ec2-user@~]$ docker --version

Docker version 19.03.6-ce, build 369ce74


That's it. I hope that helps.
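
For completeness, a hedged sketch of the usual post-install steps on the same instance (so the daemon survives reboots and ec2-user can run docker without sudo; the group change only takes effect on the next login):

```shell
# start the docker daemon at boot, let ec2-user use it, and verify with a test container
sudo systemctl enable docker
sudo usermod -aG docker ec2-user   # re-login for the group change to apply
docker run --rm hello-world
```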
Categories: DBA Blogs

PRAGMA SERIALLY_REUSABLE implications in a callback service

Tom Kyte - Wed, 2020-08-12 13:06
<b></b>Scenario: Oracle Recipe Tool / Microservices JNDI : jdbc/SOAXAOPS PLSQL : schema.pkg1.procedure This is the entry point to an on-premise DB package. It can be called from cloud and non-cloud services. Now whenever I would compile schema.pkg1 The callback will give below error : ORA-04065: not executed, altered or dropped package body "schema.pkg1" ORA-06508: PL/SQL: could not find program unit being called: "schema.pkg1" This error can be bypassed if a DDL was issued in this schema or existing stale connections are explicitly killed. So I added PRAGMA SERIALLY_REUSABLE; to schema.pkg1 based on below url ======================================================================== https://stackoverflow.com/questions/1761595/frequent-error-in-oracle-ora-04068-existing-state-of-packages-has-been-discarde https://docs.oracle.com/en/cloud/paas/integration-cloud/database-adapter/resolve-error-ora-04068-existing-state-packages-has-been-discarded.html https://docs.oracle.com/cd/E11882_01/appdev.112/e25519/packages.htm#LNPLS99977 In addition integrations team did this modification: Under Connection Properties: Test Connections on Reserve: Yes Test Table Name: SQL begin dbms_session.modify_package_state(dbms_session.reinitialize); end; Seconds to trust Idle Pool Connection: 0 This solved initial error of ORA-04065 , ORA-06508 However it gave 1-off error : ORA-06508: PL/SQL: could not find program unit being called This was at the line in schema.pkg1 which was giving call to schema.pkg2 Now schema.pkg2 is not having PRAGMA SERIALLY_REUSABLE; In a request set of 2 requests... first one faced this err and subsequent requests have been fine so far... (This is UAT environment) ============== Question is : ============== When I recompile pkg (HOT patch) in UAT can it re happen. Do I need to add this pragma to all nested packages. What are the pros and cons of using this Pragma apart from trigger and SQl prompt usage as mentioned on Oracle documentation. 
Is there an alternate way to deal with these errors for callback services.
Categories: DBA Blogs

Change local_temp_tablespace to shared TEMP

Tom Kyte - Wed, 2020-08-12 13:06
Hi, I am trying to find the downside of setting local_temp_tablespace to TEMP tablespace which is a shared temp. The reason is because of a bug, if local_temp_tablespace is NULL and dba_users.spare9 is NULL, then Oracle assigns SYSTEM tablespace as local_temp_tablespace when I issue alter user command. For example, if a user AGUPTA has spare9 as NULL in DBA_USERS and local_temp_tablespace is currently NULL and I issue the command to change password: <code>alter user AGUPTA identified by xpS2Z^4%g%0h;</code> Then, the local_temp_tablespace for AGUTPA changes to SYSTEM. This is not good. Mike Dietrich has a blog post about it. So, we did a small test and found that if we switch all users who have NULL for local_temp_tablespace to use TEMP tablespace, then the issue does not appear. The local_temp_tablespace stays at TEMP when changing password. So, my question is: Is there a downside to changing every user's local_temp_tablespace to shared TEMP? Thanks
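
A sketch of the change being discussed (username taken from the question; the dedicated LOCAL TEMPORARY TABLESPACE clause was introduced in 12.2):

```sql
-- point the user's local temp at the shared TEMP tablespace
alter user agupta local temporary tablespace temp;

-- verify
select username, local_temp_tablespace
  from dba_users
 where username = 'AGUPTA';
```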
Categories: DBA Blogs

ORA_ROWSCN

Hemant K Chitale - Wed, 2020-08-12 05:28

 As a follow-up to my previous post on SCN_TO_TIMESTAMP, here is a demo of the ORA_ROWSCN function.

I have two different sessions and two different tables where I insert one row each.  I then delay the COMMIT in each session.


This is the first session :

oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Aug 12 18:02:47 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Wed Aug 12 2020 18:02:31 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

18:02:47 SQL> create table table_a
18:02:51 2 (table_name varchar2(32), insert_scn number, insert_timestamp timestamp);

Table created.

18:03:10 SQL> insert into table_a
18:03:14 2 select 'TABLE_A', current_scn, systimestamp from v$database
18:03:32 3 /

1 row created.

18:03:33 SQL> select * from table_a
18:03:37 2 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP
-------------------------------- ---------- ---------------------------------------------------------------------------
TABLE_A 6580147 12-AUG-20 06.03.33.263180 PM

18:03:38 SQL>
18:05:16 SQL> !sleep 120

18:07:21 SQL>
18:07:26 SQL> commit;

Commit complete.

18:07:28 SQL>


And this is the second session :

oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Aug 12 18:04:27 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Wed Aug 12 2020 18:03:32 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

18:04:27 SQL> create table table_b
18:04:36 2 (table_name varchar2(32), insert_scn number, insert_timestamp timestamp);

Table created.

18:04:46 SQL> insert into table_b
18:04:51 2 select 'TABLE_B',current_scn, systimestamp from v$database;

1 row created.

18:05:03 SQL> select * from table_b
18:05:09 2 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP
-------------------------------- ---------- ---------------------------------------------------------------------------
TABLE_B 6581390 12-AUG-20 06.05.03.813011 PM

18:05:10 SQL>
18:05:24 SQL> !sleep 30

18:06:00 SQL>
18:06:07 SQL>
18:06:13 SQL> commit;

Commit complete.

18:06:16 SQL>


So, the second session, against TABLE_B, did the INSERT after the first session but issued its COMMIT before the first session did.  (TABLE_B has a higher INSERT_SCN and INSERT_TIMESTAMP than TABLE_A.)

Let's see what ORA_ROWSCN shows:
SQL> select table_name, insert_scn, insert_timestamp, scn_to_timestamp(ora_rowscn)
2 from table_a
3 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP
-------------------------------- ---------- ---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ORA_ROWSCN)
---------------------------------------------------------------------------
TABLE_A 6580147 12-AUG-20 06.03.33.263180 PM
12-AUG-20 06.07.26.000000000 PM


SQL> select table_name, insert_scn, insert_timestamp, scn_to_timestamp(ora_rowscn)
2 from table_b
3 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP
-------------------------------- ---------- ---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ORA_ROWSCN)
---------------------------------------------------------------------------
TABLE_B 6581390 12-AUG-20 06.05.03.813011 PM
12-AUG-20 06.06.14.000000000 PM


SQL>


The actual INSERT into TABLE_B was after that into TABLE_A (higher INSERT_SCN and INSERT_TIMESTAMP), but SCN_TO_TIMESTAMP of the ORA_ROWSCN implies that the row in TABLE_B is earlier than that in TABLE_A !

SQL> select table_name, insert_scn, insert_timestamp, ora_rowscn
2 from table_a
3 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP ORA_ROWSCN
-------------------------------- ---------- --------------------------------------------------------------------------- ----------
TABLE_A 6580147 12-AUG-20 06.03.33.263180 PM 6586905

SQL> select table_name, insert_scn, insert_timestamp, ora_rowscn
2 from table_b
3 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP ORA_ROWSCN
-------------------------------- ---------- --------------------------------------------------------------------------- ----------
TABLE_B 6581390 12-AUG-20 06.05.03.813011 PM 6584680

SQL>


The actual SCN recorded is that of the COMMIT time, *not* the INSERT time.

A database session's transaction is assigned its SCN when it COMMITs.
So, even though the INSERT into TABLE_A was earlier, its row shows a higher SCN simply because the COMMIT was issued later.


Does it matter if I use the ROWDEPENDENCIES extended attribute for the table?  Without ROWDEPENDENCIES, ORA_ROWSCN actually returns the SCN in the block header -- irrespective of when each row in the block was inserted / updated.
In my scenario, I had a new table with only 1 row, so there would be no difference.

Nevertheless, I repeat the experiment with ROWDEPENDENCIES.


oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Aug 12 18:17:57 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Wed Aug 12 2020 18:17:47 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

18:17:57 SQL> create table table_a
18:18:10 2
18:18:10 SQL> create table table_a
18:18:14 2 (table_name varchar2(32), insert_scn number, insert_timestamp timestamp) rowdependencies;

Table created.

18:18:31 SQL> insert into table_a
18:18:40 2 select 'TABLE_A', current_scn, systimestamp from v$database
18:18:50 3 /

1 row created.

18:18:51 SQL>
18:20:11 SQL> !sleep 60

18:21:13 SQL>
18:21:15 SQL> commit;

Commit complete.

18:21:16 SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>



and


oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Aug 12 18:19:30 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Wed Aug 12 2020 18:19:04 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

18:19:30 SQL> create table table_b
18:19:33 2 (table_name varchar2(32), insert_scn number, insert_timestamp timestamp) rowdependencies;

Table created.

18:19:40 SQL> insert into table_b
18:19:52 2 select 'TABLE_B',current_scn, systimestamp from v$database;

1 row created.

18:20:00 SQL>
18:20:16 SQL> !sleep 30

18:20:49 SQL>
18:20:51 SQL> commit;

Commit complete.

18:20:52 SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>



resulting in:


SQL> select table_name, insert_scn, insert_timestamp, ora_rowscn
2 from table_a
3 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP ORA_ROWSCN
-------------------------------- ---------- --------------------------------------------------------------------------- ----------
TABLE_A 6612380 12-AUG-20 06.18.51.562927 PM 6618886

SQL> select table_name, insert_scn, insert_timestamp, ora_rowscn
2 from table_b
3 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP ORA_ROWSCN
-------------------------------- ---------- --------------------------------------------------------------------------- ----------
TABLE_B 6614592 12-AUG-20 06.20.00.141122 PM 6617807

SQL>


Bottom line: A row that is inserted (or updated) earlier can still have a higher SCN (and, therefore, show a higher SCN_TO_TIMESTAMP) simply because the user or the application program issued the COMMIT later.  An application or batch job may run many queries or DMLs before finally issuing a COMMIT.
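
This COMMIT-time behaviour is exactly what makes ORA_ROWSCN (on a ROWDEPENDENCIES table) usable for optimistic locking -- a sketch, with hypothetical table and bind names:

```sql
-- read the row and remember its ORA_ROWSCN
select ename, sal, ora_rowscn from emp where empno = :empno;

-- later, update only if no other transaction has committed a change in between
update emp
   set sal = :new_sal
 where empno = :empno
   and ora_rowscn = :remembered_rowscn;
-- 0 rows updated => another transaction committed a change; re-read and retry
```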


Categories: DBA Blogs

how to import sequence

Tom Kyte - Tue, 2020-08-11 18:46
export is done on table level using 9.2.0.1 exp user/password tables=emp,foo file=test.dmp during import sequences never got imported. This is the default behavior of oracle. I would appreciate if you please advise on the followings: 1. How to import sequences in table level export? 2. How to get the same sequence value at the time of export? Thanks
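
Since a table-mode exp never includes sequences, one common workaround (a sketch; run it on the source schema at export time) is to generate the CREATE SEQUENCE DDL yourself, using LAST_NUMBER as the starting value:

```sql
-- spool this on the source at export time, then run the spooled output on the target
-- note: with CACHE, last_number can be ahead of the last value actually used, which is safe
select 'create sequence ' || sequence_name ||
       ' start with '     || last_number  ||
       ' increment by '   || increment_by || ';' as ddl
  from user_sequences;
```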
Categories: DBA Blogs

Update in Oracle DB

Tom Kyte - Tue, 2020-08-11 18:46
Hi dear AskTOM team. Have a great day to everyone. I have some confusion about UPDATE TABLE statement in Oracle DB 12cr2. Let's assume we have 3 users: U1; U2; U3; U1 has a table called TEST_1, and U2 and U3 both have UPDATE privilege on that table. My question is that: <b>If U2 and U3 try to update same rows in that particular table at the same time what will happen? How Oracle will control such kind of processes?</b> Thanks beforehand!
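
For context, a minimal sketch of the row-locking behaviour (table and column names hypothetical): the second session's UPDATE simply waits on the first session's row lock, and the last committed write wins:

```sql
-- session U2:
update u1.test_1 set val = 'U2' where id = 1;  -- takes the row lock

-- session U3, at the same time:
update u1.test_1 set val = 'U3' where id = 1;  -- blocks until U2 commits or rolls back

-- session U2:
commit;  -- U3's update now proceeds against the committed row
```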
Categories: DBA Blogs

Ranking based on time break

Tom Kyte - Tue, 2020-08-11 18:46
i want to make rankning of trucks exit based on break more than 1 hour like eg below TRUCK EXIT T1 10:00 PM T2 10:05 PM T3 12:00 PM T4 12:05 PM T5 12:10 PM T6 12:20 PM Result should be like...... leaving break more than 1 hours gaps 10:00 10:05 1 12:00 12:20 2
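
A possible set-based approach (a sketch; the truck_exits table and column names are hypothetical, modelled on the sample data): flag each row whose gap from the previous exit is at least 1 hour, then a running sum of the flags gives the group rank:

```sql
select truck, exit_time,
       1 + sum(case when gap_hours >= 1 then 1 else 0 end)
             over (order by exit_time rows unbounded preceding) as grp
from  (select truck, exit_time,
              (exit_time - lag(exit_time) over (order by exit_time)) * 24 as gap_hours
       from   truck_exits);
```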
Categories: DBA Blogs

Converting XML to JSON using Apex

Tom Kyte - Tue, 2020-08-11 18:46
Hello Everyone, There is a table(xxln_vs_publish_stg) has xmltype column(xml_data) which stores XML data. I have a requirement to convert XML data to json data. For that, I am using apex_json.write to convert. While executing below logic for changing it, I am getting Error as: ORA-20987: APEX - JSON.WRITER.NOT_OPEN - Contact your application administrator. Can you please help what is it I am doing which is wrong. <code> DECLARE l_xml sys.xmltype; l_amount BINARY_INTEGER := 32000; l_buffer RAW(32000); l_pos INTEGER := 1; l_stage NUMBER; content CLOB; content_blob BLOB; content_length NUMBER; BEGIN SELECT xml_data INTO l_xml FROM xxln_vs_publish_stg WHERE xml_data IS NOT NULL AND ROWNUM < 2; content := xmltype.getclobval(l_xml); xxln.convert_clob_to_blob(content, content_blob); content_length := dbms_lob.getlength(content_blob); dbms_output.put_line(content_length); apex_json.initialize_clob_output; IF dbms_lob.getlength(content_blob) < 32000 THEN apex_json.write(content); ELSE WHILE l_pos < content_length--DBMS_LOB.GETLENGTH(v_output_file_blob) LOOP dbms_lob.read(content_blob, l_amount, l_pos, l_buffer); apex_json.write(content); l_pos := l_pos + l_amount; END LOOP; END IF; dbms_output.put_line(apex_json.get_clob_output); apex_json.free_output; END; </code>
Categories: DBA Blogs

Options to quickly access large portions of rows

Tom Kyte - Tue, 2020-08-11 18:46
Hello, Tom. We have a fact table, that is partitioned by day and stores the last 90 days of data. Sometimes users of the application can change the status of record from 'ACTIVE' to 'CANCELED'. There are a lot of heavy analytical queries against that table that include full scans but only consider the 'ACTIVE' records. The number of 'CANCELED' record can wary greatly over time, from 5% to 60%. Right now it has 37 million active ones, and 67 million canceled, so my full scan could be 3 times faster. My question is: what is the best option to quickly access all the active records? B-tree index won't help, because there are too many rows to retrieve. Bitmap index seems to be a bad choice, since there are a lot of DML operations. I wanted to try subpartitioning by list and move the rows to the 'CANCELED' subpartition, but I immediately have concerns: There are 7 indexes on the table now. Moving a lot of rows between sections would require a lot of time and could potentially fill up the undo if someone decides to change the status of tens of millions of rows at a time(users can do and will do that). Since the table is partitioned by day, any blank space left after row movent in sections older than today won't be reused or reclaimed and a full scan will take just a much time. That makes the whole idea almost useless. I am afraid that shrinking the entire table could fill up the undo segment. I don't have an environment close in specs to our PROD environment, so I can't even really test my concerns with undo. Unfortunately we can't upgrade to 12.2 for a few more months, so move online is not availbable. Is there another option that I am missing or should I just run shrink space partition by partition on a daily basis?
Categories: DBA Blogs

Dynamically passing sequence name to get currval

Tom Kyte - Tue, 2020-08-11 18:46
I am trying to get the CURRVAL of all the user sequences in the schema. When I run the SQL below, it gives me an "invalid SQL statement" error. I am not sure if this is the right way to achieve it. Please advise. Assumption: the current values of the sequences are already set in the session. <code>
set serveroutput on;

declare
    sq    number;
    sqnm  varchar2(50);
    stmt  varchar2(1000);
    cursor sqnc is (select sequence_name from user_sequences);
begin
    for row in sqnc loop
        sqnm := row.sequence_name;
        stmt := 'SELECT' || sqnm ||'.currval into' || sq || 'from dual';
        execute immediate stmt;
        dbms_output_put_line(sqnm || ' ' ||sq);
    end loop;
end;
</code>
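The usual fixes for this pattern are: put spaces around the concatenated keywords, supply the INTO clause to EXECUTE IMMEDIATE itself rather than embedding "into" in the SQL string, and call DBMS_OUTPUT.PUT_LINE with a dot, not an underscore. A corrected sketch along those lines (note that CURRVAL raises ORA-08002 for any sequence whose NEXTVAL has not yet been referenced in the session, so that case is caught):

```sql
set serveroutput on

declare
    sq number;
begin
    for r in (select sequence_name from user_sequences) loop
        begin
            execute immediate
                'select ' || r.sequence_name || '.currval from dual'
                into sq;   -- INTO belongs to EXECUTE IMMEDIATE, not the string
            dbms_output.put_line(r.sequence_name || ' ' || sq);
        exception
            when others then
                -- typically ORA-08002: CURRVAL not yet defined in this session
                dbms_output.put_line(r.sequence_name || ' : not yet used in this session');
        end;
    end loop;
end;
/
```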
Categories: DBA Blogs

Oracle has any feature similar to "Always Encrypted" that is offered by SQL server?

Tom Kyte - Tue, 2020-08-11 18:46
Hello, it would be great if you could help me here. Can you please share whether Oracle has any feature similar to the "Always Encrypted" feature offered by SQL Server? The link pasted at the end has information on "Always Encrypted". I understand that Oracle offers Data Redaction to mask data; however, my understanding is that highly-authorized users can bypass it. Oracle also offers Database Vault to control data access; however, there will still be Oracle users who can see the data in the clear. It would be really helpful if you could share some pointers. Thanks, AB ------------------------------------------------------------------------------------------------------------------------------- Link: https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15 Text from this link: Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server). As a result, Always Encrypted provides a separation between those who own the data and can view it, and those who manage the data but should have no access. By ensuring on-premises database administrators, cloud database operators, or other high-privileged unauthorized users, can't access the encrypted data, Always Encrypted enables customers to confidently store sensitive data outside of their direct control. This allows organizations to store their data in Azure, and enable delegation of on-premises database administration to third parties, or to reduce security clearance requirements for their own DBA staff.
Categories: DBA Blogs

SCN_TO_TIMESTAMP

Hemant K Chitale - Tue, 2020-08-11 09:23
A quick demo of SCN_TO_TIMESTAMP in 19c

 
oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Aug 11 21:59:56 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Mon Aug 10 2020 16:08:38 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select scn_to_timestamp(5389994) from dual;

SCN_TO_TIMESTAMP(5389994)
---------------------------------------------------------------------------
11-AUG-20 09.53.44.000000000 PM

SQL>
SQL> select scn_to_timestamp(5389994-100000) from dual;

SCN_TO_TIMESTAMP(5389994-100000)
---------------------------------------------------------------------------
12-JUL-20 11.19.13.000000000 PM

SQL>
SQL> select scn_to_timestamp(32720) from dual;
select scn_to_timestamp(32720) from dual
*
ERROR at line 1:
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1


SQL>


If you query for an SCN that is too old, you get an ORA-08181 error. So what counts as an "older SCN"?

Internally, Oracle periodically inserts new rows into SYS.SMON_SCN_TIME and deletes older rows. This is the table that the SCN_TO_TIMESTAMP function queries, so if you ask for an SCN that is no longer present in the table, you get an ORA-08181 error.

Does Oracle insert every SCN into this table? Of course not! Otherwise there would be more than 5 million rows in the table in my database. It inserts rows only periodically. So when you call the SCN_TO_TIMESTAMP function, you get an approximate timestamp -- an estimate that Oracle derives from reading "nearby" rows.
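You can get a feel for how much mapping history is retained in your own database with a quick query against the mapping table (a hedged sketch: it assumes you have SELECT privilege on SYS.SMON_SCN_TIME, and the column names here are as in recent releases):

```sql
-- How far back does the SCN-to-time mapping go, and how many
-- mapping rows exist?  TIME_DP is the recorded time, SCN the
-- corresponding system change number.
select min(time_dp) oldest_mapping,
       max(time_dp) newest_mapping,
       count(*)     mapping_rows
  from sys.smon_scn_time;
```

Any SCN older than the oldest mapping row is exactly the case that raises ORA-08181.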

Never assume that SCN_TO_TIMESTAMP returns an exact timestamp for that SCN.

For a range of potential SCNs, you can instead query V$ARCHIVED_LOG for FIRST_TIME (which is a DATE, not a TIMESTAMP) and FIRST_CHANGE# (the first SCN recorded in that ArchiveLog).
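As a sketch of that approach (assuming the database is in ARCHIVELOG mode and the logs covering the SCN of interest have not aged out of the controlfile), each archived log brackets an SCN range between FIRST_CHANGE# and NEXT_CHANGE#:

```sql
-- Find the archived log(s) whose SCN range covers a target SCN,
-- giving a DATE-granularity window for when that SCN occurred.
select sequence#,
       first_change#,
       to_char(first_time, 'DD-MON-YYYY HH24:MI:SS') first_time,
       next_change#
  from v$archived_log
 where &target_scn >= first_change#
   and &target_scn <  next_change#
 order by first_change#;
```

This works even for SCNs far older than the SMON_SCN_TIME retention, as long as the log history is still available.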


Categories: DBA Blogs
