Feed aggregator

Between function not working on varchar2 column

Tom Kyte - 6 hours 51 min ago
Hi Team, We are facing issue while using Between function in our SQL query. We are trying to use Between function on a column which is of datatype VARCHAR2. Data insert in this column can be alphanumeric as well as numeric, But it seems this funct...
Categories: DBA Blogs

redo log in nologging mode

Tom Kyte - 6 hours 51 min ago
I was trying to understand how redo works in nologging mode. Here is an interesting situation I encountered in XE. SQL> @redo_question.sql Table dropped. Table created. LOG_MODE ------------ NOARCHIVELOG Statistics ------...
Categories: DBA Blogs

18c Global Temporary Tablespace

Tom Kyte - Tue, 2020-02-25 20:17
Our DB version is 18.3.0.0. We want to set a Global Temporary tablespace at CDB level and drop the Temporary tablespace at the PDB level. 1. The DEFAULT_TEMP_TABLESPACE is set in root container. 2. While in the Pluggable, when trying to drop the...
Categories: DBA Blogs

Request a new feature: bulk collect in batches

Tom Kyte - Tue, 2020-02-25 20:17
Hello, I know we can use BULK COLLECT to load all the records from a cursor into a set of collection variables, plus we can also use LIMIT clause to set the max number of records per load. It would be very convenient if Oracle can support the feat...
Categories: DBA Blogs

maximum row update at a single go in Oracle DB

Tom Kyte - Tue, 2020-02-25 20:17
Hi, We've near to 6 million records in our database and we want to update one specific column value for all the records. query will be like this: update table_xx set column_xx = '' where column_xxx = ''; Is there any limitation in Oracle DB? If ...
Categories: DBA Blogs

Compare execution performance stats between two PL/SQL procedures

Tom Kyte - Tue, 2020-02-25 20:17
Hi From the past I am only aware of Tom Kyte's Runstats package, which allows comparing some performance metrics across two procedures. I also found that the DBMS_SQLPA package also provides a feature that could help in this kind of scenario. ...
Categories: DBA Blogs

NVARCHAR Vs. VARCHAR

Tom Kyte - Tue, 2020-02-25 16:13
In a database with character sets defined as: NLS_CHARACTERSET = AL32UTF8 NLS_NCHAR_CHARACTERSET = UTF8 Would there be any difference in the (language) character sets that could be stored by VARCHAR2 Vs. NVARCHAR2? As in NAME VARCHAR2(...
Categories: DBA Blogs

Case sensitive search in case insensitive database

Tom Kyte - Tue, 2020-02-25 16:13
Hi, I'm using a database that is mostly case insensitive, using this login trigger: create trigger milenio_central_logon after logon on milenio_central.schema begin execute immediate 'alter session set NLS_COMP=LINGUISTIC'; execute imm...
Categories: DBA Blogs

Tuning where column name is not null

Tom Kyte - Tue, 2020-02-25 16:13
Hi Team , I am trying to create test scenario where the table name TEST is created with 1 column ID. Name Null? Type -----...
Categories: DBA Blogs

Tuning with index of date datatype

Tom Kyte - Tue, 2020-02-25 16:13
Hi Team , I have table named "CALL_LOGG" with ~1M record's , table is partitioned with range-hash we have index in one of column named as logdate which is having date datatype . most of time i see query using major filter as logdate is u...
Categories: DBA Blogs

How to find users with DBA role

Tom Kyte - Tue, 2020-02-25 16:13
using DBA_ROLE_PRIVS and DBA_USERS, how can i determine users with DBA role (also hidden role)
Categories: DBA Blogs

Best way to check for existence based on last event date

Tom Kyte - Tue, 2020-02-25 16:13
Hello, I am trying to find out the best way to include a criteria in a sql to check is the latest event for a user has occurred before a specific date. Below is my table setup and brief description USER_NP => Parent table listing a...
Categories: DBA Blogs

New Parallel Distribution Method For Direct Path Loads

Randolf Geist - Tue, 2020-02-25 16:12
Starting with version 12c Oracle obviously has introduced another parallel distribution method for direct path loads (applicable to INSERT APPEND and CTAS operations) when dealing with partitioned objects.

As you might already know, starting with version 11.2 Oracle supported a new variation of the PQ_DISTRIBUTE hint allowing more control over how data gets distributed for the actual DML load step. In addition to the already documented methods (NONE, RANDOM / RANDOM_LOCAL, PARTITION) there is a new one, EQUIPART, which obviously only applies to scenarios where both source and target table are equi-partitioned.

In principle it looks like a "full partition-wise load", where the PX partition granule gets used as the chunking method and each PX slave reads its partition to process from the source and writes into the corresponding partition of the target. Therefore it doesn't require a redistribution of data and uses only a single PX slave set. Depending on the skew (partitions of different data volume) this might not be the best choice, but for massive data loads with evenly sized partitions it might give some advantage over the other distribution methods - the NONE distribution method being the closest, because it doesn't require additional redistribution either - but there all PX slaves read from and write to any partition, so potentially there could be more contention.

Of course this new distribution method works only for the special case of equi-partitioned source and target tables - and according to my tests only for the simple case of loading from the source table with no further operations like joins etc. involved.

A simple demonstration:

drop table t_part1 purge;
drop table t_part2 purge;

create table t_part1 (id, filler) partition by range (id) (
partition n10000 values less than (10001),
partition n20000 values less than (20001),
partition n30000 values less than (30001),
partition n40000 values less than (40001),
partition n50000 values less than (50001),
partition n60000 values less than (60001),
partition n70000 values less than (70001),
partition n80000 values less than (80001),
partition n90000 values less than (90001),
partition n100000 values less than (100001)
)
as
select rownum as id, rpad('x', 200) as filler
from dual
connect by level <= 100000
;

create table t_part2 (id, filler) partition by range (id) (
partition n10000 values less than (10001),
partition n20000 values less than (20001),
partition n30000 values less than (30001),
partition n40000 values less than (40001),
partition n50000 values less than (50001),
partition n60000 values less than (60001),
partition n70000 values less than (70001),
partition n80000 values less than (80001),
partition n90000 values less than (90001),
partition n100000 values less than (100001)
)
as
select rownum as id, rpad('x', 200) as filler
from dual
where 1 = 2
;

alter session enable parallel dml;

-- alter session set tracefile_identifier = equipart;

-- alter session set events 'trace [RDBMS.SQL_Optimizer.*] disk=highest';

--explain plan for
insert /*+ append parallel(2) pq_distribute(t_part2 equipart) */ into t_part2 select * from t_part1;

From 12.1.0.2 on the execution plan for the INSERT APPEND operation looks like this:

--------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 100K| 19M| 446 (1)| 00:00:01 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 100K| 19M| 446 (1)| 00:00:01 | | | Q1,00 | P->S | QC (RAND) |
| 3 | LOAD AS SELECT (EQUI-PARTITION) | T_PART2 | | | | | | | Q1,00 | PCWP | |
| 4 | OPTIMIZER STATISTICS GATHERING | | 100K| 19M| 446 (1)| 00:00:01 | | | Q1,00 | PCWP | |
| 5 | PX PARTITION RANGE ALL | | 100K| 19M| 446 (1)| 00:00:01 | 1 | 10 | Q1,00 | PCWC | |
| 6 | TABLE ACCESS FULL | T_PART1 | 100K| 19M| 446 (1)| 00:00:01 | 1 | 10 | Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

1 - SEL$1
6 - SEL$1 / T_PART1@SEL$1

Outline Data
-------------

/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$1" "T_PART1"@"SEL$1")
FULL(@"INS$1" "T_PART2"@"INS$1")
PQ_DISTRIBUTE(@"INS$1" "T_PART2"@"INS$1" EQUIPART)
OUTLINE_LEAF(@"INS$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
DB_VERSION('18.1.0')
OPTIMIZER_FEATURES_ENABLE('18.1.0')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
*/

Oracle doesn't always automatically choose this distribution method. If you want to enforce it (and it is legal) you can use the PQ_DISTRIBUTE(EQUIPART) hint as outlined.
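
If you want to double-check at run time which distribution method was actually used, one option is to pull the plan of the insert from the cursor cache - a minimal sketch, run in the same session right after the statement (the PARALLEL format modifier adds the TQ / IN-OUT / PQ Distrib columns):

select * from table(dbms_xplan.display_cursor(null, null, 'basic +parallel'));

The LOAD AS SELECT (EQUI-PARTITION) operation together with a single table queue confirms the new method was used.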

Bounce Listener and Database After Adding oracle User to Group

Bobby Durrett's DBA Blog - Tue, 2020-02-25 11:37

I keep forgetting that I need to bounce both the listener and the database after changing the oracle user’s group membership, so I am writing a quick note to myself here. This is the second time I have been tripped up by something like this and I spent way too much time figuring it out again.

We have some PL/SQL code that writes to files on a database server. The most recent issue was using DBMS_XSLPROCESSOR.CLOB2FILE to write a clob to a file in a certain directory. This got the following error:

ORA-22288: file or LOB operation FILEOPEN failed
Permission denied

I tried writing to the same directory using UTL_FILE.FOPEN with write permission to create a new file. I got this error:

ORA-29283: invalid file operation: cannot open file [29435]
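
The test was essentially something like the sketch below - the directory object name and file name here are just placeholders, not the real ones:

declare
  f utl_file.file_type;
begin
  -- 'w' opens (or creates) the file for writing; this is where the ORA-29283 showed up
  f := utl_file.fopen('MY_DIR', 'bounce_test.txt', 'w');
  utl_file.put_line(f, 'test line');
  utl_file.fclose(f);
end;
/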

I got a Linux admin to add the oracle Linux user to the correct group and I bounced the database and the error did not go away. Then I bounced the listener and the database, and the errors were resolved.

NOTE TO SELF: if you add the oracle user to a new group to allow PL/SQL to write to a directory using group permissions, bounce the database and the listener!

Bobby

P.S. This is on Oracle 19c on Redhat Linux 7 but should apply to a wide range of Oracle versions and Unix like platforms.

P.P.S. Bounce the listener first.

Categories: DBA Blogs

count(*) – again

Jonathan Lewis - Tue, 2020-02-25 07:24

I’ve just received an email asking (yet again) a question about counting the number of rows in a table.

We have a large table with a CLOB column, where the CLOB occupies 85% storage space.
when we use select count(*) from <table>, the DBA says that you should not use count(*) as it uses all columns and as this table contains CLOB it results in high CPU usage, whereas if we use count(rowid), this brings us faster and same result.

Well I’ve pointed out in the past, in fact more than once, that count(*), count(1), count(declared-non-null-column) will all do the same thing … execute as count(*); I’ve also listed a few quirky anomalies, also more than once. However, count(rowid) is a little different, it doesn’t get transformed to count(*) as we can see from two pieces of evidence:

Exhibit A: fragments from a 10053 (CBO) trace file

----- Current SQL Statement for this session (sql_id=d381q70418ugs) -----
select count(rowid) from emp1

... 

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT("EMP1".ROWID) "COUNT(ROWID)" FROM "TEST_USER"."EMP1" "EMP1"

Unlike the various count() calls that are converted to count(*), counting rowids doesn’t seem to go through the CNT transformation and the final state of the query still shows count(rowid) as the critical mechanism.
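
If you want to generate the same sort of trace file yourself, a minimal sketch is the following (the statement has to be optimized, i.e. hard parsed, for the trace to appear):

alter session set events '10053 trace name context forever';
select count(rowid) from emp1;
alter session set events '10053 trace name context off';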

Exhibit B: variations in Column Projection Information

SQL> explain plan for select count(1) from emp1;

SQL> select * from table(dbms_xplan.display(null,null,'projection'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
Plan hash value: 2110459082

-----------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |     7  (15)| 00:00:01 |
|   1 |  SORT AGGREGATE       |       |     1 |            |          |
|   2 |   INDEX FAST FULL SCAN| E1_PK | 20000 |     7  (15)| 00:00:01 |
-----------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=0) COUNT(*)[22]

SQL> explain plan for select count(rowid) from emp1;

SQL> select * from table(dbms_xplan.display(null,null,'projection'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
Plan hash value: 2110459082

-------------------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |    12 |     7  (15)| 00:00:01 |
|   1 |  SORT AGGREGATE       |       |     1 |    12 |            |          |
|   2 |   INDEX FAST FULL SCAN| E1_PK | 20000 |   234K|     7  (15)| 00:00:01 |
-------------------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=0) COUNT(ROWID)[22]
   2 - ROWID[ROWID,10]

The count(1) query transforms to count(*) and we don’t see any column information being generated at operation 2 and passing up to the parent operation. However with count(rowid) we see Oracle at operation 2 constructing actual rowids from block addresses and row directory entries and passing them up to the parent operation.

The CPU it takes to construct and pass the rowids is probably insignificant compared to the CPU usage of accessing data blocks in the first place so I doubt if there would be much visible difference in clock-time between count(1) (or count(*)) and count(rowid), but technically it looks as if count(rowid) would actually be slower and more resource-intensive than anything that went through the count(*) transformation.

In passing – the execution plan that appears in our count(1) example also tells us that count(*) doesn’t “use all columns” – after all, there aren’t many tables where every column is in the primary key and emp1 is no exception to the general rule, and the plan is doing an index fast full scan of the primary key index.

 

Luhn's algorithm – Three ways with SQL and PL/SQL

Mathias Magnusson - Tue, 2020-02-25 07:00

Have you encountered Luhn's algorithm? I can almost guarantee you have, even if you’ve never heard the name before. It is part of all of our lives every single day.

It is used to check that various numbers are correctly entered. From ID numbers for persons in Sweden, Greece, and Israel to credit card numbers, IMEI numbers and miscellaneous other things.

It is a very simple checksum function, not intended to be a cryptographically secure hash function. Due to the calculation used it is also referred to as modulus 10.

The algorithm was invented by Hans Peter Luhn in 1954. It was meant to protect against most accidental errors, not against malicious attacks.

The algorithm

Each digit in the sequence is alternately multiplied by 1 and 2 and the results are added together. After that modulus 10 is applied to the sum, and from that the check digit the function should return is derived.

Let’s take an example: 374 is the input and we’re looking for the check digit that should be appended. The calculation should be based on an even number of digits in the input, so we prepend a zero and thus use 0374 as our input. Now we’ll calculate 0*1, 3*2, 7*1, and 4*2 – multiplying each digit in the input by 1 and 2 alternately.

0*1 = 0, 3*2 = 6, 7*1 = 7 and 4*2 = 8. Add them together and we have 0 + 6 + 7 + 8 = 21. Here is where we use the modulo: 21 modulo 10 is 1. That however is not the check digit; the check digit is the complement – what you need to add to this to make the result 10. In our case we arrived at 1 after the modulo operation, so our check digit is 9.

There is a special case however. If a digit that is multiplied by two is between five and nine, then the result will be two digits (5*2 = 10 and so on). In that case the two digits of the result are added together, so every digit ends up contributing a single-digit value to the total. Thus if we have 7 we get 7*2 = 14, and 14 is then reduced digit by digit, so the value we get for the 7 is 5 (1+4). Instead of adding the digits like that, one can also subtract 9 from the result, as it arrives at the same end result (14-9 = 5).

If you find my explanation above too rushed or hard to follow, take a look at Wikipedia.

With that explanation out of the way, let's look at three ways to implement this, from slowest to fastest. If you only need this occasionally then it does not matter, they are all fast. I however faced a situation where I had to generate valid numbers with correct check digits for a few hundred million rows.

PL/SQL

Let’s first look at a basic implementation in PL/SQL. I say basic because it is probably the most straightforward implementation of the pseudo-code version of the algorithm. It is in my experience the most common way to implement it. When this feature is needed it tends to end up in a traditional function/procedure in PL/SQL.

This code is “borrowed” from Daniel Ekberg; I have just changed some variable names to make it easier for the rest of us to understand.

create or replace function get_value_w_chk (inv in varchar2)
  return varchar
as
  in_val     varchar2 (30);
  pos        int;
  total      int := 0;
  multiplier int := 0;
  result_pos int := 0;
  in_len     int;
  chk_digit int;
begin
  in_len := length (inv);
  in_val := inv;

  for i in 0 .. (in_len - 1)
  loop
    pos := in_len - i;
    multiplier := mod (multiplier + 1, 2);
    result_pos := to_number(substr(in_val, pos, 1), '0') 
                * (multiplier + 1);

    if result_pos > 9 then
     total := total + result_pos - 9;
    else
     total := total + result_pos;
    end if;
  end loop;

  chk_digit := mod (1000 - total, 10);

  return in_val || chk_digit;
end get_value_w_chk;
/

The code essentially loops through a string of digits, makes the calculation for each digit and adds it to a total, then finishes up with a modulo operation to arrive at the check digit. It returns the parameter concatenated with the check digit.
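
A quick test against the worked example from above – with 374 as input the function should return the value with the check digit 9 appended:

select get_value_w_chk('374') from dual;

-- expected: 3749 (digit sum 21, check digit 9)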

PL/SQL recursive function

The cool kids are writing recursive functions all day long, or so they would have you believe. This particular algorithm lent itself to it so I wanted to test doing that. Also, next time one of the cool kids claims it cannot be done in legacy languages like PL/SQL I can just point them to this blog post (hello future reader).

create or replace function get_chk(p_i_val in varchar2) 
       return varchar2 as
  v_inval varchar2(50) := 
                case
                  when mod(length(p_i_val), 2) = 0 then p_i_val
                  else                                  '0' || p_i_val
                end;

  v_num number;
  v_result varchar2(51) := p_i_val;

  function recur_value( p_i_mult_with in number
                      , p_i_val       in varchar2) return number as
    v_num_char number(2);
    v_num_tot  number(2);
    v_sum      number(2);
  begin
    v_num_char := to_number(substr(p_i_val,1,1)) * p_i_mult_with;

    if v_num_char > 9 then
      v_num_tot := v_num_char - 9;
    else
      v_num_tot := v_num_char;
    end if;

    if length(p_i_val) > 1 then
      v_sum := v_num_tot + recur_value( case 
                                          when p_i_mult_with = 1 then 2 
                                          else                        1 
                                        end
                                      , substr(p_i_val, 2));
    else
      v_sum := v_num_tot;
    end if;

    return v_sum;
  end recur_value; 

begin
  return v_result 
      || (10 - mod( recur_value( p_i_mult_with => 1 
                               , p_i_val       => v_inval)
                  , 10)); 
end get_chk;
/

Side note: I have now spent an inordinate amount of time on what looks like a broken numbering of the code above. No idea what happens, but I need to find a better code formatter.

The code takes the in-parameter and calls the recursive function, which works by peeling off one digit from the parameter and calling itself with the rest. This continues until the innermost invocation only has a single digit as its parameter; it then returns its calculation, and on the way back up each level adds the returned value to the calculation for its own peeled-off digit. The total of all calculations eventually reaches the outer function, which performs the same kind of modulus operation as we saw in the first code block.
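
The recursive variant should agree with the first one for the worked example:

select get_chk('374') from dual;

-- expected: 3749 (input padded to 0374, digit sum 21, check digit 9)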

Pure SQL

The previous versions are what you probably want to start with if your value is already in PL/SQL, but what if it starts in SQL or you generate data with SQL? Then calling a PL/SQL function is not optimal.

In my case I had a need to get a correct check digit for an invented value. When you are to do it with a few hundred million rows you do not want to call a function for every row.

This is what I came up with to calculate the check digit in SQL.

with dl as
(
  select 1 rad from dual
  union all
  select 2 rad from dual
)
, a as
(
  select case
           when mod(length(:1), 2) = 0 then :1
           else                             '0' || :1
         end arg
    from dual
)
--select * from a;
, b as
(
select level rn, to_number(substr(arg, level, 1)) dgt
  from a
  connect by level <= length(a.arg)
)
--select * from b;
, c as
(
select b.rn
     , b.dgt
     , case
         when mod(b.rn, 2) = 0 then dgt * 2
         else                       dgt
       end calc
  from b
)
--select * from c;
select 10 - mod(sum(case
                      when dl.rad = 1 and c.calc <  10 then c.calc
                      when dl.rad = 1 and c.calc >= 10 then trunc(c.calc / 10)
                      when dl.rad = 2 and c.calc <  10 then null
                      when dl.rad = 2 and c.calc >= 10 then c.calc - 10
                    end), 10) chk
  from c, dl
where dl.rad = 1
    or(dl.rad = 2
   and c.calc > 9);

What is going on in this SQL? It looks long but it really is broken up into small chunks. We have four WITH-selects before we get down to the real one.

The first factored subquery – dl – just gets us a “table” with two rows. It is used later to handle the situation where the multiplication of a digit results in a value > 9.

Next is “a”; all we do here is fix the input in cases where it is of an odd length. In such cases we prepend the input with the digit zero. An alternative would have been to start multiplying the digits by 2 instead of 1; it achieves the same result.

After that comes “b” where we split the string into one row per digit in the input. The digits are also converted to numbers as we’re going to use them in multiplication in the next step.

In the next step, “c”, we calculate the value for each digit. The first is multiplied by 1, the second by 2, the third by 1 again and so on.

After that the “real” SQL is executed. It takes the rows with the per-digit values and creates two copies of them using “dl”. For rows where dl.rad is 1 we take the first digit (if calc is 15, we take 1); for situations where c.calc is over 9 we use the dl.rad rows where it is 2 to get the second digit (if calc is 15, we take 5). All those numbers are then added together and the usual modulus operation is performed, returning the check digit.

To simplify following along and troubleshooting I have left the commented-out “select * from” rows in the SQL. Uncomment one and run the SQL to see the result up to that point. It is by far the best way to understand what happens in the SQL, even better if you follow my explanation here and uncomment each one to see the result I try to describe.
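
As a quick sanity check: substituting the literal '374' from the worked example for the bind variable :1 should make the query return 9, the same check digit we calculated by hand at the start.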

If this helps you, please leave a comment – even more so if you have a variant of this that is simpler or faster.

Fake Baselines – 2

Jonathan Lewis - Mon, 2020-02-24 09:38

Many years ago (2011) I wrote a note describing how you could attach the Outline Information from one query to the SQL_ID of another query using the official Oracle mechanism of calling dbms_spm.load_plans_from_cursor_cache(). Shortly after publishing that note I drafted a follow-up note with an example demonstrating that even when the alternative outline was technically relevant the optimizer might still fail to use the SQL Plan Baseline. Unfortunately I didn’t quite finish the draft – until today.

The example I started with nearly 10 years ago behaved correctly against 11.1.0.7, but failed to reproduce the plan when I tested it against 11.2.0.3, and it still fails against 19.3.0.0. Here’s the test data and the query we’re going to attempt to manipulate:


rem
rem     Script:         fake_sql_baseline_4.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2010
rem

create table emp1 (
        dept_no         number /* not null */,
        sal             number,
        emp_no          number,
        padding         varchar2(200),
        constraint e1_pk primary key(emp_no)
)
;

create table emp2 (
        dept_no         number /* not null */,
        sal             number,
        emp_no          number,
        padding         varchar2(200),
        constraint e2_pk primary key(emp_no)
)
;

insert into emp1
select
        mod(rownum,6),
        rownum,
        rownum,
        rpad('x',200)
from
        all_objects
where
        rownum <= 20000 -- > comment to avoid wordpress format issue
;

insert into emp2
select
        mod(rownum,6),
        rownum,
        rownum,
        rpad('x',200)
from
        all_objects
where
        rownum <= 20000 -- > comment to avoid wordpress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname                 => user,
                tabname                 => 'EMP1',
                cascade                 => true,
                method_opt              =>'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname                 => user,
                tabname                 => 'EMP2',
                cascade                 => true,
                method_opt              =>'for all columns size 1'
        );
end;
/

select
        /*+ target_query */
        count(*)
from
        emp1
where
        emp1.dept_no not in (
                select  dept_no
                from    emp2
        )
;

select * from table(dbms_xplan.display_cursor(null, null, 'outline'));

I haven’t included the code I run on my testbed to delete all existing SQL Plan Baselines before running this test, I’ll post that at the end of the article.

The query is very simple and will, of course, return no rows since emp1 and emp2 are identical and we’re looking for departments in emp1 that don’t appear in emp2. The “obvious” plan for the optimizer is to unnest the subquery into a distinct (i.e. aggregate) inline view then apply an anti-join. It’s possible that the optimizer will also decide to do complex view merging and postpone the aggregation. Here’s the execution plan from 19.3:


----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |   168 (100)|          |
|   1 |  SORT AGGREGATE     |      |     1 |     6 |            |          |
|*  2 |   HASH JOIN ANTI NA |      |  3333 | 19998 |   168   (5)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| EMP1 | 20000 | 60000 |    83   (4)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| EMP2 | 20000 | 60000 |    83   (4)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EMP1"."DEPT_NO"="DEPT_NO")

Hint Report (identified by operation id / Query Block Name / Object Alias):

Total hints for statement: 1 (E - Syntax error (1))
---------------------------------------------------------------------------
   0 -  SEL$1
         E -  target_query

As expected the subquery unnested, we have the anti-join (in this case, since dept_no can be null, it’s a “Null-Aware” antijoin); and the optimizer has, indeed, decided to do the join before the aggregation.

Assume, now, that for reasons known only to me a merge (anti-)join would be more effective than a hash join. To get the optimizer to do this I’m going to capture the query and connect it to a plan that uses a merge join. There are several minor variations on how we could do this, but I’m going to follow the steps I took in 2011 – but cut out a couple of the steps where I loaded redundant baselines into the SMB (SQL Plan Management Base). As a starting point I’ll just record the sql_id and plan_hash_value for the query (and the child_number just in case I want to use dbms_xplan.display_cursor() to report the in-memory execution plan):

column  sql_id                  new_value       m_sql_id_1
column  plan_hash_value         new_value       m_plan_hash_value_1
column  child_number            new_value       m_child_number_1

select
        sql_id, plan_hash_value, child_number
from
        v$sql
where
        sql_text like '%target_query%'
and     sql_text not like '%v$sql%'
and     rownum = 1
;

Now I’ll hack the query to produce a plan that does the merge join. An easy first step is to look at the current outline and take advantage of the hints there. You’ll notice I included the ‘outline’ format in my call to dbms_xplan.display_cursor() above, even though I didn’t show you that part of the output – here it is now:

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$5DA710D3")
      UNNEST(@"SEL$2")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      FULL(@"SEL$5DA710D3" "EMP1"@"SEL$1")
      FULL(@"SEL$5DA710D3" "EMP2"@"SEL$2")
      LEADING(@"SEL$5DA710D3" "EMP1"@"SEL$1" "EMP2"@"SEL$2")
      USE_HASH(@"SEL$5DA710D3" "EMP2"@"SEL$2")
      END_OUTLINE_DATA
  */

So I’m going to take the useful-looking hints, get rid of the use_hash() hint and, for good measure, turn it into a no_use_hash() hint. Here’s the resulting query, with its execution plan:

select
        /*+
                unnest(@sel$2)
                leading(@sel$5da710d3 emp1@sel$1 emp2@sel$2)
                no_use_hash(@sel$5da710d3 emp2@sel$2)
                full(@sel$5da710d3 emp2@sel$2)
                full(@sel$5da710d3 emp1@sel$1)
                alternate_query
        */
        count(*)
from
        emp1
where
        emp1.dept_no not in (
                select  dept_no
                from    emp2
        )
;

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |       |       |   178 (100)|          |
|   1 |  SORT AGGREGATE      |      |     1 |     6 |            |          |
|   2 |   MERGE JOIN ANTI NA |      |  3333 | 19998 |   178  (11)| 00:00:01 |
|   3 |    SORT JOIN         |      | 20000 | 60000 |    89  (11)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| EMP1 | 20000 | 60000 |    83   (4)| 00:00:01 |
|*  5 |    SORT UNIQUE       |      | 20000 | 60000 |    89  (11)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| EMP2 | 20000 | 60000 |    83   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("EMP1"."DEPT_NO"="DEPT_NO")
       filter("EMP1"."DEPT_NO"="DEPT_NO")

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$5DA710D3")
      UNNEST(@"SEL$2")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      FULL(@"SEL$5DA710D3" "EMP1"@"SEL$1")
      FULL(@"SEL$5DA710D3" "EMP2"@"SEL$2")
      LEADING(@"SEL$5DA710D3" "EMP1"@"SEL$1" "EMP2"@"SEL$2")
      USE_MERGE(@"SEL$5DA710D3" "EMP2"@"SEL$2")
      END_OUTLINE_DATA
  */

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (E - Syntax error (1))
---------------------------------------------------------------------------
   0 -  SEL$1
         E -  alternate_query

Note that I’ve included the text “alternate_query” at the end of the hint list as something to use when I’m searching v$sql. Note also that the “no_use_hash()” hint has disappeared and been replaced by a “use_merge()” hint.

The plan tells us that the optimizer is happy to use a “merge join anti NA”, so we can load this plan’s outline into the SMB by combining the sql_id and plan_hash_value for this query with the text of the previous query, so that we store the old text with the new plan (that’s the method needed for older versions of Oracle; in recent versions you can supply the sql_id of the original query instead).


column  sql_id                  new_value       m_sql_id_2
column  plan_hash_value         new_value       m_plan_hash_value_2
column  child_number            new_value       m_child_number_2

select
        sql_id, plan_hash_value, child_number
from
        v$sql
where
        sql_text like '%alternate_query%'
and     sql_text not like '%v$sql%'
and     rownum = 1
;

declare
        m_clob  clob;
begin
        select
                sql_fulltext
        into
                m_clob
        from
                v$sql
        where
                sql_id = '&m_sql_id_1'
        and     child_number = &m_child_number_1
        ;

        dbms_output.put_line(m_clob);

        dbms_output.put_line(
                'Number of plans loaded: ' ||
                dbms_spm.load_plans_from_cursor_cache(
                        sql_id                  => '&m_sql_id_2',
                        plan_hash_value         => &m_plan_hash_value_2,
                        sql_text                => m_clob,
                        fixed                   => 'YES',
                        enabled                 => 'YES'
                )
        );

end;
/

At this point we have one SQL Plan Baseline in the SMB, and it says the old query should execute using the new plan. So let’s give it a go:

set serveroutput off
alter system flush shared_pool;

alter session set events '10053 trace name context forever';

select
        /*+ target_query */
        count(*)
from
        emp1
where
        emp1.dept_no not in (
                select  dept_no
                from    emp2
        )
/

alter session set events '10053 trace name context off';

select * from table(dbms_xplan.display_cursor(null, null, 'alias outline'));

I’ve enabled the 10053 (optimizer) trace so that I can report a critical few lines from it later on. Here’s the execution plan, omitting the outline but including the alias information.

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |   168 (100)|          |
|   1 |  SORT AGGREGATE     |      |     1 |     6 |            |          |
|*  2 |   HASH JOIN ANTI NA |      |  3333 | 19998 |   168   (5)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| EMP1 | 20000 | 60000 |    83   (4)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| EMP2 | 20000 | 60000 |    83   (4)| 00:00:01 |
----------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$5DA710D3
   3 - SEL$5DA710D3 / EMP1@SEL$1
   4 - SEL$5DA710D3 / EMP2@SEL$2

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EMP1"."DEPT_NO"="DEPT_NO")

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (E - Syntax error (1))
---------------------------------------------------------------------------
   0 -  SEL$1
         E -  target_query

Note
-----
   - Failed to use SQL plan baseline for this statement

We haven’t used the SQL Plan Baseline – and in 19.3 we even have a note that the optimizer knew there was at least one baseline available that it failed to use! So what went wrong?

I have two diagnostics – first is the content of the baseline itself (warning – the SQL below will report ALL currently saved SQL Plan Baselines); I’ve just made sure that I have only one to report:

set linesize 90

select
        pln.*
from
        (select sql_handle, plan_name
         from   dba_sql_plan_baselines spb
         order by
                sql_handle, plan_name
        ) spb,
        table(dbms_xplan.display_sql_plan_baseline(spb.sql_handle, spb.plan_name)) pln
;


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL handle: SQL_ce3099e9e3bdaf2f
SQL text: select         /*+ target_query */         count(*) from         emp1
          where         emp1.dept_no not in (                 select  dept_no
                        from    emp2         )
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_cwc4tx7jvvbtg02bb0c12         Plan id: 45812754
Enabled: YES     Fixed: YES     Accepted: YES     Origin: MANUAL-LOAD-FROM-CURSOR-CACHE
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 1517539632

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |       |       |   178 (100)|          |
|   1 |  SORT AGGREGATE      |      |     1 |     6 |            |          |
|   2 |   MERGE JOIN ANTI NA |      |  3333 | 19998 |   178  (11)| 00:00:01 |
|   3 |    SORT JOIN         |      | 20000 | 60000 |    89  (11)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| EMP1 | 20000 | 60000 |    83   (4)| 00:00:01 |
|*  5 |    SORT UNIQUE       |      | 20000 | 60000 |    89  (11)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| EMP2 | 20000 | 60000 |    83   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("EMP1"."DEPT_NO"="DEPT_NO")
       filter("EMP1"."DEPT_NO"="DEPT_NO")

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (E - Syntax error (1))
---------------------------------------------------------------------------
   0 -  SEL$1
         E -  alternate_query

We have an SQL Plan baseline that is accepted, enabled, and fixed; and it’s supposed to produce a “merge join anti NA”, and it clearly “belongs” to our query. So it should have been used.

Then we have the 10053 trace file, in which we find the following:


SPM: planId in plan baseline = 45812754, planId of reproduced plan = 1410137244
------- START SPM Plan Dump -------
SPM: failed to reproduce the plan using the following info:
  parse_schema name        : TEST_USER
  plan_baseline signature  : 14857544400522555183
  plan_baseline plan_id    : 45812754
  plan_baseline hintset    :
    hint num  1 len 27 text: IGNORE_OPTIM_EMBEDDED_HINTS
    hint num  2 len 35 text: OPTIMIZER_FEATURES_ENABLE('19.1.0')
    hint num  3 len 20 text: DB_VERSION('19.1.0')
    hint num  4 len  8 text: ALL_ROWS
    hint num  5 len 29 text: OUTLINE_LEAF(@"SEL$5DA710D3")
    hint num  6 len 16 text: UNNEST(@"SEL$2")
    hint num  7 len 17 text: OUTLINE(@"SEL$1")
    hint num  8 len 17 text: OUTLINE(@"SEL$2")
    hint num  9 len 36 text: FULL(@"SEL$5DA710D3" "EMP1"@"SEL$1")
    hint num 10 len 36 text: FULL(@"SEL$5DA710D3" "EMP2"@"SEL$2")
    hint num 11 len 54 text: LEADING(@"SEL$5DA710D3" "EMP1"@"SEL$1" "EMP2"@"SEL$2")
    hint num 12 len 41 text: USE_MERGE(@"SEL$5DA710D3" "EMP2"@"SEL$2")

During optimization the optimizer has found that SQL Plan Baseline. We can see that the hints in the baseline are exactly the hints from the plan that we wanted – but the optimizer says it can’t reproduce the plan we wanted. In fact if you try adding exactly these hints to the query itself you’ll still find that the merge join won’t appear and Oracle will use a hash join.

Conclusion

This is just a simple example of how the optimizer may be able to produce a plan if hinted in one way, but the outline consists of a different set of hints that won’t reproduce the plan they describe. My no_use_hash() has turned into a use_merge() but that hint fails to reproduce the merge join in circumstances that make me think there’s a bug in the optimizer.

If you happen to be unlucky you may find that the plan you really need to see can’t be forced through a SQL Plan Baseline. In this example it may be necessary to use the SQL Patch mechanism to include the no_use_hash() hint in a set of hints that I associate with the query.
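
The clean-up script I mentioned at the start isn’t reproduced here, but the idea boils down to something like the following sketch (assuming you have the privileges to see and drop every baseline) – loop through dba_sql_plan_baselines and drop each handle:

declare
        n       pls_integer;
begin
        for r in (
                select  distinct sql_handle
                from    dba_sql_plan_baselines
        ) loop
                -- dropping by sql_handle alone removes all plans for that handle
                n := dbms_spm.drop_sql_plan_baseline(sql_handle => r.sql_handle);
        end loop;
end;
/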

 

Quickly creating a Standby Database in 19c

Hemant K Chitale - Sun, 2020-02-23 09:45
A quick overview of creating a Standby from an active database, copying over the network.
(words in italics above are added after this post was published)

1.  Create the parameter file initSTDBYDB.ora with additional parameters
  change or add DB_UNIQUE_NAME to be STDBYDB
  change the location of control files
  add fal_server to be the lookup name for the Primary (e.g. ORCLCDB)
  add log_archive_dest_2 to specify the Primary Service and DB_UNIQUE_NAME (note : "log_archive_dest" and "log_archive_dest_2" cannot co-exist, so use one or the other.  A default DB_RECOVERY_FILE_DEST location is preferable)
  add db_file_name_convert and log_file_name_convert to map file names to new directories (if they are to be different or, for example, if creating the Standby on the same server !!)  --- ensure that you have the new directories (or ASM DiskGroups) available on the Standby with the right permissions (including directories for PDBs and the PDBSEED) !
  change any other hardcoded directory names (e.g. for adump)

2.  Create a listener.ora and/or a new listener with a static SID_NAME entry for the Standby DB

3.  Add an entry for the Standby  in the Primary tnsnames.ora and for the Primary in the Standby tnsnames.ora

4.  Add at least one Standby Redo Log file to the Primary Database

5.  Ensure that you have the password for the SYS account (or will you be using SYSDG ?) on the Primary and copy the Password file to the Standby

6.  Start the Standby listener

7.  STARTUP NOMOUNT the Standby Instance (remember to have the ORACLE_SID set !!)

8.  Start rman on the Primary with :
rman target sys/manager auxiliary sys/manager@STDBYDB
and then issue the command
duplicate target database for standby from active database dorecover;


and thus the execution will be as follows:

oracle19c>rman target sys/manager auxiliary sys/manager@STDBYDB

Recovery Manager: Release 19.0.0.0.0 - Production on Sun Feb 23 23:38:59 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCLCDB (DBID=2778483057)
connected to auxiliary database: ORCLCDB (not mounted)

RMAN> duplicate target database for standby from active database dorecover;

Starting Duplicate Db at 23-FEB-20
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=21 device type=DISK
current log archived

contents of Memory Script:
{
backup as copy reuse
passwordfile auxiliary format '/opt/oracle/product/19c/dbhome_1/dbs/orapwSTDBYDB' ;
}
executing Memory Script

Starting backup at 23-FEB-20
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=274 device type=DISK
Finished backup at 23-FEB-20

contents of Memory Script:
{
backup as copy current controlfile for standby auxiliary format '/opt/oracle/oradata/STDBYDB/control01.ctl';
restore clone primary controlfile to '/opt/oracle/oradata/STDBYDB/control02.ctl' from
'/opt/oracle/oradata/STDBYDB/control01.ctl';
}
executing Memory Script

Starting backup at 23-FEB-20
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
copying standby control file
output file name=/opt/oracle/product/19c/dbhome_1/dbs/snapcf_ORCLCDB.f tag=TAG20200223T233924
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 23-FEB-20

Starting restore at 23-FEB-20
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: copied control file copy
Finished restore at 23-FEB-20

contents of Memory Script:
{
sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database

contents of Memory Script:
{
set newname for tempfile 1 to
"/opt/oracle/oradata/STDBYDB/temp01.dbf";
set newname for tempfile 2 to
"/opt/oracle/oradata/STDBYDB/pdbseed/temp012019-05-04_23-32-15-038-PM.dbf";
set newname for tempfile 3 to
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/temp01.dbf";
switch clone tempfile all;
set newname for datafile 1 to
"/opt/oracle/oradata/STDBYDB/system01.dbf";
set newname for datafile 3 to
"/opt/oracle/oradata/STDBYDB/sysaux01.dbf";
set newname for datafile 4 to
"/opt/oracle/oradata/STDBYDB/undotbs01.dbf";
set newname for datafile 5 to
"/opt/oracle/oradata/STDBYDB/pdbseed/system01.dbf";
set newname for datafile 6 to
"/opt/oracle/oradata/STDBYDB/pdbseed/sysaux01.dbf";
set newname for datafile 7 to
"/opt/oracle/oradata/STDBYDB/users01.dbf";
set newname for datafile 8 to
"/opt/oracle/oradata/STDBYDB/pdbseed/undotbs01.dbf";
set newname for datafile 9 to
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/system01.dbf";
set newname for datafile 10 to
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/sysaux01.dbf";
set newname for datafile 11 to
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/undotbs01.dbf";
set newname for datafile 12 to
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/users01.dbf";
backup as copy reuse
datafile 1 auxiliary format
"/opt/oracle/oradata/STDBYDB/system01.dbf" datafile
3 auxiliary format
"/opt/oracle/oradata/STDBYDB/sysaux01.dbf" datafile
4 auxiliary format
"/opt/oracle/oradata/STDBYDB/undotbs01.dbf" datafile
5 auxiliary format
"/opt/oracle/oradata/STDBYDB/pdbseed/system01.dbf" datafile
6 auxiliary format
"/opt/oracle/oradata/STDBYDB/pdbseed/sysaux01.dbf" datafile
7 auxiliary format
"/opt/oracle/oradata/STDBYDB/users01.dbf" datafile
8 auxiliary format
"/opt/oracle/oradata/STDBYDB/pdbseed/undotbs01.dbf" datafile
9 auxiliary format
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/system01.dbf" datafile
10 auxiliary format
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/sysaux01.dbf" datafile
11 auxiliary format
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/undotbs01.dbf" datafile
12 auxiliary format
"/opt/oracle/oradata/STDBYDB/ORCLPDB1/users01.dbf" ;
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /opt/oracle/oradata/STDBYDB/temp01.dbf in control file
renamed tempfile 2 to /opt/oracle/oradata/STDBYDB/pdbseed/temp012019-05-04_23-32-15-038-PM.dbf in control file
renamed tempfile 3 to /opt/oracle/oradata/STDBYDB/ORCLPDB1/temp01.dbf in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting backup at 23-FEB-20
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/opt/oracle/oradata/ORCLCDB/system01.dbf
output file name=/opt/oracle/oradata/STDBYDB/system01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/opt/oracle/oradata/ORCLCDB/sysaux01.dbf
output file name=/opt/oracle/oradata/STDBYDB/sysaux01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00010 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
output file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/sysaux01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00006 name=/opt/oracle/oradata/ORCLCDB/pdbseed/sysaux01.dbf
output file name=/opt/oracle/oradata/STDBYDB/pdbseed/sysaux01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/opt/oracle/oradata/ORCLCDB/undotbs01.dbf
output file name=/opt/oracle/oradata/STDBYDB/undotbs01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00009 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
output file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/system01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=/opt/oracle/oradata/ORCLCDB/pdbseed/system01.dbf
output file name=/opt/oracle/oradata/STDBYDB/pdbseed/system01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00011 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
output file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/undotbs01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00012 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf
output file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/users01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00008 name=/opt/oracle/oradata/ORCLCDB/pdbseed/undotbs01.dbf
output file name=/opt/oracle/oradata/STDBYDB/pdbseed/undotbs01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00007 name=/opt/oracle/oradata/ORCLCDB/users01.dbf
output file name=/opt/oracle/oradata/STDBYDB/users01.dbf tag=TAG20200223T233939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 23-FEB-20

sql statement: alter system archive log current
current log archived

contents of Memory Script:
{
backup as copy reuse
archivelog like "/opt/oracle/archivelog/ORCLCDB/1_41_1007421686.dbf" auxiliary format
"/opt/oracle/archivelog/STDBYDB/1_41_1007421686.dbf" archivelog like
"/opt/oracle/archivelog/ORCLCDB/1_42_1007421686.dbf" auxiliary format
"/opt/oracle/archivelog/STDBYDB/1_42_1007421686.dbf" ;
catalog clone archivelog "/opt/oracle/archivelog/STDBYDB/1_41_1007421686.dbf";
catalog clone archivelog "/opt/oracle/archivelog/STDBYDB/1_42_1007421686.dbf";
switch clone datafile all;
}
executing Memory Script

Starting backup at 23-FEB-20
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=41 RECID=9 STAMP=1033170130
output file name=/opt/oracle/archivelog/STDBYDB/1_41_1007421686.dbf RECID=0 STAMP=0
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=42 RECID=10 STAMP=1033170130
output file name=/opt/oracle/archivelog/STDBYDB/1_42_1007421686.dbf RECID=0 STAMP=0
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
Finished backup at 23-FEB-20

cataloged archived log
archived log file name=/opt/oracle/archivelog/STDBYDB/1_41_1007421686.dbf RECID=1 STAMP=1033170133

cataloged archived log
archived log file name=/opt/oracle/archivelog/STDBYDB/1_42_1007421686.dbf RECID=2 STAMP=1033170133

datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/system01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=5 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/sysaux01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=6 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/undotbs01.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=7 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/pdbseed/system01.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=8 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/pdbseed/sysaux01.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=9 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/users01.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=10 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/pdbseed/undotbs01.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=11 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/system01.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=12 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/sysaux01.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=13 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/undotbs01.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=14 STAMP=1033170134 file name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/users01.dbf

contents of Memory Script:
{
set until scn 4658614;
recover
standby
clone database
delete archivelog
;
}
executing Memory Script

executing command: SET until clause

Starting recover at 23-FEB-20
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 41 is already on disk as file /opt/oracle/archivelog/STDBYDB/1_41_1007421686.dbf
archived log for thread 1 with sequence 42 is already on disk as file /opt/oracle/archivelog/STDBYDB/1_42_1007421686.dbf
archived log file name=/opt/oracle/archivelog/STDBYDB/1_41_1007421686.dbf thread=1 sequence=41
archived log file name=/opt/oracle/archivelog/STDBYDB/1_42_1007421686.dbf thread=1 sequence=42
media recovery complete, elapsed time: 00:00:01
Finished recover at 23-FEB-20

contents of Memory Script:
{
delete clone force archivelog all;
}
executing Memory Script

released channel: ORA_DISK_1
released channel: ORA_AUX_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=274 device type=DISK
deleted archived log
archived log file name=/opt/oracle/archivelog/STDBYDB/1_41_1007421686.dbf RECID=1 STAMP=1033170133
deleted archived log
archived log file name=/opt/oracle/archivelog/STDBYDB/1_42_1007421686.dbf RECID=2 STAMP=1033170133
Deleted 2 objects

Finished Duplicate Db at 23-FEB-20

RMAN>


Note :  For simplicity, I didn't use the SPFILE specification in the DUPLICATE command to create and update an SPFILE at the Standby.  I am using a simple initSTDBYDB.ora pfile.
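
For illustration only, the sort of additions described in step 1 might look like this sketch -- the paths match the directories used above, but the exact destination attributes are assumptions, not the values actually used in this environment:

# initSTDBYDB.ora -- additions / changes for the Standby (sketch)
db_name='ORCLCDB'
db_unique_name='STDBYDB'
control_files=('/opt/oracle/oradata/STDBYDB/control01.ctl','/opt/oracle/oradata/STDBYDB/control02.ctl')
fal_server='ORCLCDB'
log_archive_dest_2='SERVICE=ORCLCDB ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCLCDB'
db_file_name_convert=('/opt/oracle/oradata/ORCLCDB','/opt/oracle/oradata/STDBYDB')
log_file_name_convert=('/opt/oracle/oradata/ORCLCDB','/opt/oracle/oradata/STDBYDB')

And the Standby Redo Log from step 4 can be added on the Primary with a command like the one below (the size is an assumption here; it should match your online redo log size):

alter database add standby logfile size 200M;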


In the next blog post, I will be covering how to begin (and then monitor) shipping of redo from the Primary to the Standby.


Categories: DBA Blogs

Create Windows Symlinks to Vagrant Software Folder

Michael Dinh - Sat, 2020-02-22 06:45

Most Vagrant implementations have some derivative of a software folder with a placeholder file – put_software_here.txt

It does not make sense to have to copy software to the folder and then delete it afterwards, repeatedly, for every new Vagrant build.

Luckily, there is a way to create symlinks to a centralized software location.

Starting vagrant failed with ERROR: gi_software does not exist, because the software folder was purposely deleted for this demo.
Note: Using Git BASH for Windows.

resetlogs@ghost MINGW64 /g/oracle/vagrant-boxes/OracleFPP (master)
$ ls -l
total 28
drwxr-xr-x 1 dinh 197121     0 Feb 22 06:35 config/
drwxr-xr-x 1 dinh 197121     0 Feb 22 06:35 images/
-rw-r--r-- 1 dinh 197121  1896 Feb 22 06:35 LICENSE.txt
-rw-r--r-- 1 dinh 197121 10645 Feb 22 06:35 README.md
drwxr-xr-x 1 dinh 197121     0 Feb 22 06:35 scripts/
-rw-r--r-- 1 dinh 197121  5449 Feb 22 06:35 THIRD_PARTY_LICENSES.txt
drwxr-xr-x 1 dinh 197121     0 Feb 22 06:35 userscripts/
-rw-r--r-- 1 dinh 197121 16610 Feb 22 06:45 Vagrantfile

resetlogs@ghost MINGW64 /g/oracle/vagrant-boxes/OracleFPP (master)
$ vagrant up
ERROR: gi_software does not exist

resetlogs@ghost MINGW64 /g/oracle/vagrant-boxes/OracleFPP (master)

Create symlinks from Windows command line.

C:\Windows\System32>mklink
Creates a symbolic link.

MKLINK [[/D] | [/H] | [/J]] Link Target

        /D      Creates a directory symbolic link.  Default is a file
                symbolic link.
        /H      Creates a hard link instead of a symbolic link.
        /J      Creates a Directory Junction.
        Link    Specifies the new symbolic link name.
        Target  Specifies the path (relative or absolute) that the new link
                refers to.

C:\Windows\System32>mklink /J "G:\oracle\vagrant-boxes\OracleFPP\ORCL_software" "E:\ORCL_software"
Junction created for G:\oracle\vagrant-boxes\OracleFPP\ORCL_software <<===>> E:\ORCL_software

C:\Windows\System32>

Read more about it: The Complete Guide to Creating Symbolic Links (aka Symlinks) on Windows
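
If you ever need to undo this, deleting the junction with rmdir from a command prompt removes only the link itself; the real E:\ORCL_software folder and its contents are untouched.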

Review symlinks and start vagrant successfully:

resetlogs@ghost MINGW64 /g/oracle/vagrant-boxes/OracleFPP (master)
$ ls -l
total 28
drwxr-xr-x 1 dinh 197121     0 Feb 22 06:35 config/
drwxr-xr-x 1 dinh 197121     0 Feb 22 06:35 images/
-rw-r--r-- 1 dinh 197121  1896 Feb 22 06:35 LICENSE.txt
lrwxrwxrwx 1 dinh 197121    16 Feb 22 07:05 ORCL_software -> /e/ORCL_software/
-rw-r--r-- 1 dinh 197121 10645 Feb 22 06:35 README.md
drwxr-xr-x 1 dinh 197121     0 Feb 22 06:35 scripts/
-rw-r--r-- 1 dinh 197121  5449 Feb 22 06:35 THIRD_PARTY_LICENSES.txt
drwxr-xr-x 1 dinh 197121     0 Feb 22 06:35 userscripts/
-rw-r--r-- 1 dinh 197121 16610 Feb 22 06:45 Vagrantfile

resetlogs@ghost MINGW64 /g/oracle/vagrant-boxes/OracleFPP (master)
$ vagrant up
getting Proxy Configuration from Host...
Bringing machine 'host1' up with 'virtualbox' provider...
Bringing machine 'host2' up with 'virtualbox' provider...
==> host1: Importing base box 'ol74'...

Plan Change Monitor goes off on SQL Analyze query

Bobby Durrett's DBA Blog - Fri, 2020-02-21 16:17

Last night, the plan change monitor script we have set up on one of our production databases alerted us about a query with a SQL Analyze hint. It looked weird.

The query has sql_id 48tfz5d37xk28 and starts like this:

/* SQL Analyze(3141,1) */ select ...

The weird thing is the output of my sqlstat3.sql script for this sql_id value:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms
------------- --------------- ------------------------- ---------------- ------------------
48tfz5d37xk28      2311822442 20-FEB-20 10.00.28.096 PM                1          15836.016
48tfz5d37xk28      3597180003 20-FEB-20 10.00.28.096 PM                1           4398.618
48tfz5d37xk28      3992115155 20-FEB-20 11.00.01.054 PM                4         57198.8163
48tfz5d37xk28      3597180003 20-FEB-20 11.00.01.054 PM                3          74803.004
48tfz5d37xk28      2311822442 20-FEB-20 11.00.01.054 PM                4         65471.2243
48tfz5d37xk28      2172763952 20-FEB-20 11.00.01.054 PM                4         57206.2353
48tfz5d37xk28      2331870706 20-FEB-20 11.00.01.054 PM                4         57479.7398

It looks like one of the database’s built-in processes was running this query with various plans. Plan_hash_value 3992115155 tripped the alert. I guess it was 10 times as long as plan 3597180003. The plan change monitor script could be improved in many ways to eliminate false positives like this, but I just added a condition to exclude all queries run by user SYS:

258c258,259
< m.sql_id not in ('dkqs29nsj23jq','58r2wgh5xx1bg','3m2su959r6tp8','3z60x9386n1v8','bx81s3m9zxutb','czwbdwxx5aqsk','001wbydnwaa7n','6quugay8yt73u')
---
> m.sql_id not in ('dkqs29nsj23jq','58r2wgh5xx1bg','3m2su959r6tp8','3z60x9386n1v8','bx81s3m9zxutb','czwbdwxx5aqsk','001wbydnwaa7n','6quugay8yt73u')
> and s.username <> 'SYS'

This is the first time I remember seeing something like this cause an alert, but the workaround was easy.

Bobby

Categories: DBA Blogs
