DBA Blogs

The future of work

RDBMS Insight - Wed, 2019-10-02 10:48

The people behind the American tech industry blog TechDirt have put together an anthology of stories about the future of work called (appropriately) Working Futures, and I’m excited to have a story in it.

My story, “The Auditor and the Exorcist,” was heavily influenced by the years I spent working remotely for Oracle. It tells a near-future story of working in a world with a social credit system that depends on weak AI. Of course, poor security, hacking, and bugs are all present in the future, too…

Quick summary: Pat is stuck in a soul-deadening job as a social credit auditor. Her thoroughly modern home suddenly shows every sign of being haunted. Pat doesn’t believe in ghosts, but the only thing that seems to restore her home to normalcy is the help of an online exorcist. Is Pat’s house really haunted… or is something more sinister going on?

The book is now available in both ebook and paperback format at Amazon, and if you have a Kindle Unlimited subscription, you can read it for free. Plus, you can repost any of the stories in it (non-commercially): the stories in the anthology are all released under CC licenses. My story is released as CC-BY-NC-ND, meaning anyone is free to recopy it non-commercially, but not to re-release it commercially or to remix it without permission.

Image credit: Remixed from a public domain image found on publicdomainimages.net. Feel free to reuse/remix further.

Categories: DBA Blogs

Volume Icon Missing? Get it Back! (Windows 7, 8, 10)

VitalSoftTech - Tue, 2019-10-01 10:01

So you’re using your Windows PC, and you want to turn the volume up to better hear the characters in the movie you’re watching, only to discover that the volume icon is missing. Where did it go? Before you start stressing out about a potential virus or some malware infecting your PC, take a look at this […]

The post Volume Icon Missing? Get it Back! (Windows 7, 8, 10) appeared first on VitalSoftTech.

Categories: DBA Blogs

How to enlarge an #Exasol database by adding a node

The Oracle Instructor - Mon, 2019-09-30 08:46

Adding a cluster node will not only increase the available storage capacity but also the total compute power of your cluster. This scale-out is a common operation for Exasol customers.

My example shows how to change an existing 2+1 cluster into a 3+0 cluster. Before you can enlarge the database with an active node, this node has to be a reserve node first. See here how to add a reserve to a 2+0 cluster. Of course you can add another reserve node to change from 3+0 to 3+1 afterwards. See here if you wonder why you may want to have a reserve node at all.

Initial state – reserve node is present

I start with a 2+1 cluster – 2 active nodes and 1 reserve node:

For later comparison, let’s look at the distribution of rows of one of my tables:

The rows are distributed roughly evenly across the two active nodes.
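
As a rough sketch of how to check this yourself (not from the original post; MY_TABLE is a placeholder, and I'm assuming Exasol's IPROC() function, which returns the node a row is stored on):

-- Count rows per cluster node for one table
SELECT IPROC() AS node_number, COUNT(*) AS row_count
FROM my_table
GROUP BY IPROC()
ORDER BY node_number;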

Before you continue, it would be a good idea to take a backup on a remote archive volume now – just in case.

Shutdown database before volume modification

A data volume used by a database cannot be modified while that database is up, so shut it down first:

After going to the Storage branch in EXAoperation, click on the data volume:

Then click on Edit:

Decrease volume redundancy to 1

Change the redundancy from 2 to 1, then click Apply:

Why is the redundancy reduced from 2 to 1 here? Let’s try to explain that. Initially, I had 2 active nodes with a volume using redundancy 2:

A and B are master segments while A’ and B’ are mirrored segments. If I could add a node to this volume keeping the existing segments, it would look like this:

Of course this would be a bad idea. The redundancy is reduced to 1 before the new node is added to the volume:

At first there are only distributed master segments with no mirrors. Then the redundancy is increased to 2 again:

This way, every master segment can be mirrored on a neighbor node. That’s why the redundancy needs to be reduced to 1.

Add new node to volume

After decreasing the volume redundancy to 1, click Edit on the volume detail page again, add n13 as a new master node to the volume, and click Apply:

Increase redundancy to 2

Now click Edit again and increase the redundancy to 2:

The state of the volume now shows as RECOVERING – don’t worry, it just means that the mirrored segments are now being created.

Enlarge the database

Now click on the database link on the EXASolution screen:

Select the Action Enlarge and click Submit:

Enter 1 and click Apply:

The database detail page looks like this now:

Reorganize

Technically, this is a 3+0 cluster now – but the third node doesn’t contain any data yet. If we look at the same table as before, we see that no rows are on the new node:

To change that, a REORGANIZE needs to be done – on the database layer, the schema layer, or the table layer. The easiest to perform is REORGANIZE DATABASE:

This took about 10 minutes on my tiny database. The command redistributes every table across all cluster nodes and can be time-consuming with high data volumes. While a table is being reorganized, that table is locked against DML. You can monitor the ongoing reorganization by selecting from EXA_DBA_PROFILE_RUNNING in another session.
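
As a minimal sketch (the REORGANIZE statement and the EXA_DBA_PROFILE_RUNNING view are as named in the post; the rest is generic):

-- Redistribute all tables across all cluster nodes
-- (each table is locked against DML while it is being reorganized)
REORGANIZE DATABASE;

-- From another session: watch the reorganization while it runs
SELECT * FROM EXA_DBA_PROFILE_RUNNING;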

Final state

Let’s check the distribution of the previous table again:

As you can see above, now there are rows on the added node. Also EXAoperation confirms that the new node is not empty any more:

On a larger database, you would see that the volume usage per node is lower than before and that every node holds roughly the same amount of data. For fail-safety, you could add another reserve node now.

Summary of steps
  1. Add a reserve node (if not yet existing)
  2. Take a backup on a remote archive volume
  3. Shutdown database
  4. Decrease volume redundancy to 1
  5. Add former reserve node as new master node to the volume
  6. Increase redundancy to 2
  7. Enlarge database by 1 active node
  8. Reorganize
  9. Add another reserve node (optional)
Categories: DBA Blogs

Basic Replication -- 6 : COMPLETE and ATOMIC_REFRESH

Hemant K Chitale - Sun, 2019-09-29 09:36
Till 9i, if you did a COMPLETE Refresh of a Single Materialized View, Oracle would do a TRUNCATE followed by an INSERT.
If you did a COMPLETE Refresh of a *group* of Materialized Views, Oracle would execute DELETE and INSERT so that all the MVs would be consistent to the same point in time.  Thus, if one of the MVs failed to refresh (e.g. the SELECT on the Source Table failed, or the INSERT failed), Oracle would be able to ROLLBACK all the MVs to revert them to the status (i.e. all rows that were present) as of the time before the Refresh began.  This would also allow all MVs to be available for queries with the rows as of before the Refresh began, even as the Refresh was running (because the Refresh of the multiple MVs was a single transaction).

In 10g, the behaviour for a *group* of Materialized Views remained the same.  However, for a single MV, the default was now to do a DELETE and INSERT as well.  This would allow the MV to be queryable as well while the Refresh was running.
This change came as a surprise to many customers (including me at a site where I was managing multiple single MVs) !
This change meant that the single MV took longer to run (because DELETEing all the rows takes a long time !) and required much more Undo and Redo space !!
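
As a hedged sketch (the MV names are placeholders; the list form of DBMS_MVIEW.REFRESH takes one method character per MV), the two styles of COMPLETE Refresh call look like this:

-- Refresh a group of MVs as a single transaction
-- (DELETE + INSERT, all MVs consistent to the same point in time)
exec dbms_mview.refresh(list=>'MV_A,MV_B', method=>'CC');

-- Refresh a single MV the pre-10g way: TRUNCATE + Direct Path INSERT
exec dbms_mview.refresh('MV_A', method=>'C', atomic_refresh=>FALSE);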

Here's a demonstration in 19c (as in the previous posts in this series) :

First, I start with a new, larger, Source Table  and then build two MVs on it :

SQL> create table source_table_2
2 as select *
3 from dba_objects
4 where object_id is not null
5 /

Table created.

SQL> alter table source_table_2
2 add constraint source_table_2_pk
3 primary key (object_id)
4 /

Table altered.

SQL> select count(*)
2 from source_table_2
3 /

COUNT(*)
----------
72366

SQL>
SQL> create materialized view new_mv_2_1
2 as select object_id, owner, object_name, object_type
3 from source_table_2
4 /

Materialized view created.

SQL> create materialized view new_mv_2_2
2 as select object_id, owner, object_name, object_type
3 from source_table_2
4 /

Materialized view created.

SQL>
SQL> select mview_name, refresh_mode, refresh_method, last_refresh_type, fast_refreshable
2 from user_mviews
3 where mview_name like 'NEW_MV%'
4 order by 1
5 /

MVIEW_NAME REFRESH_M REFRESH_ LAST_REF FAST_REFRESHABLE
---------------- --------- -------- -------- ------------------
NEW_MV_2_1 DEMAND FORCE COMPLETE DIRLOAD_DML
NEW_MV_2_2 DEMAND FORCE COMPLETE DIRLOAD_DML

SQL>


Note that it *IS* possible to have two Materialized Views with exactly the same QUERY co-existing.  They may have different REFRESH_METHODs (here both are the same) and/or different frequencies of Refresh calls when the REFRESH_MODE is 'DEMAND'.

Note also that I did not specify any "refresh on demand" clause so both defaulted to FORCE and DEMAND.

(Question 1 : Why might I have two MVs with the same QUERY and the same REFRESH_METHOD but different frequency or different times when the Refresh is called ?)

(Question 2 : What is DIRLOAD_DML ?)


Now, let me issue two different COMPLETE Refresh calls and trace them.

SQL> execute dbms_mview.refresh('NEW_MV_2_1','C');
SQL> execute dbms_mview.refresh('NEW_MV_2_2','C',atomic_refresh=>FALSE); -- from a different session


Now, I look at the trace files.

For the NEW_MV_2_1  (where ATOMIC_REFRESH defaulted to TRUE), I see :

/* MV_REFRESH (DEL) */ delete from "HEMANT"."NEW_MV_2_1"

/* MV_REFRESH (INS) */INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO "HEMANT"."NEW_MV_2_1"("OBJECT_ID","OWNER","OBJECT_NAME","OBJECT_TYPE") SELECT "SOURCE_TABLE_2"."OBJECT_ID","SOURCE_TABLE_2"."OWNER","SOURCE_TABLE_2"."OBJECT_NAME","SOURCE_TABLE_2"."OBJECT_TYPE" FROM "SOURCE_TABLE_2" "SOURCE_TABLE_2"



And for the NEW_MV_2_2 (where ATOMIC_REFRESH was set to FALSE), I see :

LOCK TABLE "HEMANT"."NEW_MV_2_2" IN EXCLUSIVE MODE  NOWAIT

/* MV_REFRESH (DEL) */ truncate table "HEMANT"."NEW_MV_2_2" purge snapshot log

/* MV_REFRESH (INS) */INSERT /*+ BYPASS_RECURSIVE_CHECK APPEND SKIP_UNQ_UNUSABLE_IDX */ INTO "HEMANT"."NEW_MV_2_2"("OBJECT_ID","OWNER","OBJECT_NAME","OBJECT_TYPE") SELECT "SOURCE_TABLE_2"."OBJECT_ID","SOURCE_TABLE_2"."OWNER","SOURCE_TABLE_2"."OBJECT_NAME","SOURCE_TABLE_2"."OBJECT_TYPE" FROM "SOURCE_TABLE_2" "SOURCE_TABLE_2"


So, the default ATOMIC_REFRESH=TRUE caused a DELETE followed by a conventional INSERT, while ATOMIC_REFRESH=FALSE caused a TRUNCATE followed by an INSERT APPEND (a Direct Path Insert).  The second method is much faster.



More information from a tkprof for the NEW_MV_2_1 (ATOMIC_REFRESH=TRUE) is :

INSERT INTO "HEMANT"."NEW_MV_2_1"("OBJECT_ID","OWNER","OBJECT_NAME",
"OBJECT_TYPE") SELECT "SOURCE_TABLE_2"."OBJECT_ID","SOURCE_TABLE_2"."OWNER",
"SOURCE_TABLE_2"."OBJECT_NAME","SOURCE_TABLE_2"."OBJECT_TYPE" FROM
"SOURCE_TABLE_2" "SOURCE_TABLE_2"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 66 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.01 0.01 0 66 0 0




delete from "HEMANT"."NEW_MV_2_1"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 1.47 1.77 151 173 224377 72366
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 1.47 1.77 151 173 224377 72366

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 DELETE NEW_MV_2_1 (cr=178 pr=151 pw=0 time=1783942 us starts=1)
72366 72366 72366 INDEX FAST FULL SCAN SYS_C_SNAP$_82SOURCE_TABLE_2_PK (cr=157 pr=150 pw=0 time=54982 us starts=1 cost=42 size=361830 card=72366)(object id 73111)




INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO "HEMANT"."NEW_MV_2_1"("OBJECT_ID",
"OWNER","OBJECT_NAME","OBJECT_TYPE") SELECT "SOURCE_TABLE_2"."OBJECT_ID",
"SOURCE_TABLE_2"."OWNER","SOURCE_TABLE_2"."OBJECT_NAME",
"SOURCE_TABLE_2"."OBJECT_TYPE" FROM "SOURCE_TABLE_2" "SOURCE_TABLE_2"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 4 0
Execute 1 0.71 0.71 0 2166 152128 72366
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.71 0.71 0 2166 152132 72366

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD TABLE CONVENTIONAL NEW_MV_2_1 (cr=2257 pr=0 pw=0 time=723103 us starts=1)
72366 72366 72366 TABLE ACCESS FULL SOURCE_TABLE_2 (cr=1410 pr=0 pw=0 time=30476 us starts=1 cost=392 size=3980130 card=72366)




Note that the first INSERT was only Parsed but *not* Executed.


While that for NEW_MV_2_2 (ATOMIC_REFRESH=FALSE) shows :

LOCK TABLE "HEMANT"."NEW_MV_2_2" IN EXCLUSIVE MODE  NOWAIT


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 0




truncate table "HEMANT"."NEW_MV_2_2" purge snapshot log



call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 1 0
Execute 1 0.06 0.56 13 15 511 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.06 0.57 13 15 512 0



INSERT /*+ BYPASS_RECURSIVE_CHECK APPEND SKIP_UNQ_UNUSABLE_IDX */ INTO
"HEMANT"."NEW_MV_2_2"("OBJECT_ID","OWNER","OBJECT_NAME","OBJECT_TYPE")
SELECT "SOURCE_TABLE_2"."OBJECT_ID","SOURCE_TABLE_2"."OWNER",
"SOURCE_TABLE_2"."OBJECT_NAME","SOURCE_TABLE_2"."OBJECT_TYPE" FROM
"SOURCE_TABLE_2" "SOURCE_TABLE_2"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.09 0 43 0 0
Execute 1 0.22 0.56 3 1487 1121 72366
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.24 0.65 3 1530 1121 72366

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD AS SELECT NEW_MV_2_2 (cr=3688 pr=7 pw=586 time=953367 us starts=1)
72366 72366 72366 OPTIMIZER STATISTICS GATHERING (cr=3337 pr=0 pw=0 time=142500 us starts=1 cost=392 size=3980130 card=72366)
72366 72366 72366 TABLE ACCESS FULL SOURCE_TABLE_2 (cr=1410 pr=0 pw=0 time=40841 us starts=1 cost=392 size=3980130 card=72366)




ALTER INDEX "HEMANT"."SYS_C_SNAP$_83SOURCE_TABLE_2_PK" REBUILD NOPARALLEL


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.08 0 1 1 0
Execute 1 0.11 0.48 586 626 680 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.11 0.56 586 627 681 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106 (recursive depth: 2)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 INDEX BUILD UNIQUE SYS_C_SNAP$_83SOURCE_TABLE_2_PK (cr=632 pr=586 pw=150 time=392351 us starts=1)(object id 0)
72366 72366 72366 SORT CREATE INDEX (cr=590 pr=586 pw=0 time=148023 us starts=1)
72366 72366 72366 MAT_VIEW ACCESS FULL NEW_MV_2_2 (cr=590 pr=586 pw=0 time=86149 us starts=1 cost=166 size=361830 card=72366)



So, the ATOMIC_REFRESH=FALSE caused
a. TRUNCATE
b. INSERT APPEND (i.e. Direct Path Insert, minimizing Undo and reducing Redo)
c. REBUILD INDEX

I am not comparing Execution Time for the two Refreshes.  I would rather that you focus on the fact that the DELETE (in ATOMIC_REFRESH=TRUE) can be very expensive (think Undo and Redo) when it has to delete, say, millions of rows.  Note also that the INSERT is a conventional operation that likewise generates Undo and Redo.

ATOMIC_REFRESH=FALSE makes a significant difference to the Undo and Redo generation and will be faster for large Materialized Views.

What is the downside of ATOMIC_REFRESH=FALSE ?  Firstly, the MV will not present any rows to a query that executes against it while the Refresh is in progress.  Secondly, if the Refresh fails, the MV is left in a Truncated state (without rows) until another Refresh is executed.
ATOMIC_REFRESH=TRUE avoids these two pitfalls, at the expense of resources (Undo and Redo) and time to refresh.

For more information, see Oracle Support Document #553464.1


Categories: DBA Blogs

60Hz vs. 144Hz vs. 240Hz: What you should know in 2019

VitalSoftTech - Thu, 2019-09-26 09:53

60Hz vs. 144Hz vs. 240Hz; sounds like quite a mouthful, even for the average gamer. Which one’s the best? Which one’s the highest quality? If you’ve been interested in gaming for any length of time, you’re probably familiar with these terms. But what exactly do they mean? In this article, we’ll discuss the debate of […]

The post 60Hz vs. 144Hz vs. 240Hz: What you should know in 2019 appeared first on VitalSoftTech.

Categories: DBA Blogs

Pivot with list of rows

Tom Kyte - Thu, 2019-09-26 06:46
We have a table which contains db_name and usernames. In the output we need a list of users per DB, i.e. the number of columns will equal the number of distinct db_name values. Sample output format: DB1 DB2 USER1 USER4 USER2 USER5 USER3 Database version:...
Categories: DBA Blogs

confuse at the order of execution plan table

Tom Kyte - Thu, 2019-09-26 06:46
As we were told that "The execution order in EXPLAIN PLAN output begins with the line that is the furthest indented to the right. The next step is the parent of that line. If two lines are indented equally, then the top line is executed first." howe...
Categories: DBA Blogs

Converting data types in where clause

Tom Kyte - Thu, 2019-09-26 06:46
Hi Tom, My question is regarding when a query does not have the right datatype in the where clause. Example: -- Create table CREATE TABLE mytable ( mynumber varchar2(20), primary key(mynumber)); -- Insert some rows ...
Categories: DBA Blogs

Resetting a live sequence

Tom Kyte - Thu, 2019-09-26 06:46
A sequence was about to finish, so I had to make it bigger. I work on database 11.2, so I had to use a workaround described here (https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1119633817597). The procedure shown in LiveSQL was ru...
Categories: DBA Blogs

Strange behavior in analytic functions with partitions

Tom Kyte - Thu, 2019-09-26 06:46
Hello, I have encountered strange behavior that I can't understand. I have this table: CREATE TABLE test_table (id NUMBER(10,0) NOT NULL, register_date DATE DEFAULT sysdate NOT NULL, row...
Categories: DBA Blogs

Issue in generating Custom Reference ID and Mapping with Form's field

Tom Kyte - Thu, 2019-09-26 06:46
Building an app for blocking a demo calendar for particular product setup. I wanted to create a custom reference ID ( CURRENT_MONTH-CURRENT_YEAR-SEQUENCE like SEPT-2019-003) which will be the Primary Key (Column BOOKING_REF) for the table (table ...
Categories: DBA Blogs

Inserting without a full list for field names

Tom Kyte - Thu, 2019-09-26 06:46
We have an issue when we perform an insert like this: INSERT INTO STS_RESP_LOG(STS_REQ_LOG_SYSTEM_ID, HSTRY_FLG, SNGL_STS, FRQNCY, CRT_DATE) VALUES (?, ?, ?, ?, sysdate); ?...
Categories: DBA Blogs

RMAN MAXPIECESIZE VS SECTION SIZE

Tom Kyte - Thu, 2019-09-26 06:46
Hello, I've made some tests to try to parallelize the backup of a big database as well as possible, and I wanted to know if it's possible to parallelize the backup of a backup piece (multiple backupsets) on multiple channels with MAXPIECESIZE, because our SBT media is ...
Categories: DBA Blogs

Where is my StreamAdmin account?

DBASolved - Wed, 2019-09-25 15:10

One of the huge benefits of Oracle GoldenGate Microservices is the security framework which comes standard when you install GoldenGate. As you set up the ServiceManager and first deployment, you are prompted to build an administration account. As a best practice, we recommend that the account be named “oggadmin” with the password you define. Although it […]

The post Where is my StreamAdmin account? appeared first on DBASolved.

Categories: DBA Blogs

Basic Replication -- 5 : REFRESH_METHOD : FAST or FORCE ?

Hemant K Chitale - Wed, 2019-09-25 10:14
In the previous blog post, I had a remark "We'll explore the implications of "REFRESH FAST" and just "REFRESH" alone in a subsequent blog post."

This is in the context of whether it is a FORCE or FAST that shows up as the REFRESH_METHOD.  A FORCE attempts a FAST and, if it can't do so (e.g. the Materialized View Log is not accessible), attempts a COMPLETE Refresh from all the rows of the Source Table.

Other than an MV Log being a requirement, there are constraints on which types of Materialized Views can do a FAST Refresh.

SQL> create materialized view mv_fast_not_possible
2 refresh fast on demand
3 as select id, data_element_2, sysdate
4 from source_table
5 /
as select id, data_element_2, sysdate
*
ERROR at line 3:
ORA-12015: cannot create a fast refresh materialized view from a complex query


SQL> !oerr ora 12015
12015, 00000, "cannot create a fast refresh materialized view from a complex query"
// *Cause: Neither ROWIDs and nor primary key constraints are supported for
// complex queries.
// *Action: Reissue the command with the REFRESH FORCE or REFRESH COMPLETE
// option or create a simple materialized view.

SQL>


Thus, a "complex" query -- here one that adds a SYSDATE column -- cannot use a FAST Refresh.
(For all the restrictions, see Paragraph "5.3.8.4 General Restrictions on Fast Refresh" in the 19c documentation.)
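
As a hedged aside (not in the original post): you can ask Oracle itself whether a Materialized View is FAST refreshable, using DBMS_MVIEW.EXPLAIN_MVIEW. The MV_CAPABILITIES_TABLE it populates is created by the utlxmv.sql script; the MV name below is a placeholder.

-- Create MV_CAPABILITIES_TABLE once per schema, e.g. by running ?/rdbms/admin/utlxmv.sql
exec dbms_mview.explain_mview('MY_MV');

select capability_name, possible, msgtxt
from mv_capabilities_table
where capability_name like 'REFRESH_FAST%';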

SQL> create materialized view mv_fast_not_possible
2 refresh force on demand
3 as select id, data_element_2, sysdate
4 from source_table
5 /

Materialized view created.

SQL> select refresh_mode, refresh_method, last_refresh_type
2 from user_mviews
3 where mview_name = 'MV_FAST_NOT_POSSIBLE'
4 /

REFRESH_M REFRESH_ LAST_REF
--------- -------- --------
DEMAND FORCE COMPLETE

SQL>
SQL> insert into source_table
2 values (2000,'TwoThousand','NewTwoTh',sysdate);

1 row created.

SQL> select * from source_table order by date_col ;

ID DATA_ELEMENT_1 DATA_ELEMENT_2 DATE_COL
---------- --------------- --------------- ---------
101 First One 18-AUG-19
103 Third Three 18-AUG-19
104 Fourth Updated 09-SEP-19
5 Fifth Five 16-SEP-19
6 Sixth TwoHundred 19-SEP-19
7 Seventh ThreeHundred 19-SEP-19
2000 TwoThousand NewTwoTh 25-SEP-19

7 rows selected.

SQL>
SQL> commit;

Commit complete.

SQL> exec dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL> exec dbms_mview.refresh('MV_2');

PL/SQL procedure successfully completed.

SQL> exec dbms_mview.refresh('MV_FAST_NOT_POSSIBLE');

PL/SQL procedure successfully completed.

SQL>
SQL> select mview_name, refresh_mode,refresh_method,last_refresh_type, last_refresh_date
2 from user_mviews
3 order by last_refresh_date
4 /

MVIEW_NAME REFRESH_M REFRESH_ LAST_REF LAST_REFR
--------------------- --------- -------- -------- ---------
MV_OF_SOURCE DEMAND FAST FAST 25-SEP-19
MV_2 DEMAND FORCE FAST 25-SEP-19
MV_FAST_NOT_POSSIBLE DEMAND FORCE COMPLETE 25-SEP-19

SQL>


MV_FAST_NOT_POSSIBLE will always undergo a COMPLETE Refresh using REFRESH_METHOD='FORCE'.

MV_2 has REFRESH_METHOD='FORCE' because it was created with "refresh on demand", with the "fast" keyword missing.  Nevertheless, it is a "simple" Materialized View, so it does a FAST Refresh.

MV_OF_SOURCE was created with "refresh fast on demand", so it is already configured as REFRESH_METHOD='FAST'.



Categories: DBA Blogs

The Product Concept: What does it mean in 2019?

VitalSoftTech - Tue, 2019-09-24 10:41

If you’ve ever taken a marketing course, you’ve likely heard about the “product concept.” However, there may be many who are unaware of what it is and what it entails. The advertising lexicon is getting broader by the day, and with each passing minute, more words for different concepts are being introduced. Which brings us […]

The post The Product Concept: What does it mean in 2019? appeared first on VitalSoftTech.

Categories: DBA Blogs

Impact of Force Logging

Bobby Durrett's DBA Blog - Mon, 2019-09-23 17:29

I am working on upgrading an Oracle database from 11.2.0.4 to 19c and migrating it from HP Unix to Linux. This 15-terabyte database is too large to copy from the old to the new system during our normal weekend downtime window. It also has a ton of weekend batch updates that overlap the normal weekend change window so it would be best for our business processing if the cut over from the old to the new system was as fast as possible.

I want to use GoldenGate to minimize the downtime for the cutover using an approach similar to what is described in this Oracle document:

Zero Downtime Database Upgrade Using Oracle GoldenGate

You start GoldenGate collecting changes on the current production system and then take your time copying the 15 TB of data from the old to new system. Once you are done with the initial load you apply the changes that happened in the meanwhile. Finally, you cut over to the new system. You could even switch the direction of the replication to push changes on the new production system back to the old system to allow for a mid-week back out several days after your upgrade. Pretty cool. A teammate of mine successfully used this approach on an important database some years back.

But the database that I am working on now, unlike the one that my colleague worked on, has a lot of tables set to nologging. Under the right conditions inserts into tables set to nologging are not written to the redo logs and will be missed by GoldenGate. This Oracle article recommends setting your database to FORCE LOGGING so GoldenGate will not miss any updates:

In order to ensure that the required redo information is contained in the Oracle redo logs for segments being replicated, it is important to override any NOLOGGING operations which would prevent the required redo information from being generated. If you are replicating the entire database, enable database force logging mode.

Oracle GoldenGate Performance Best Practices

We could also switch all our application tables and partitions in the source system to LOGGING, but we have so many that I think we would just set the whole database to force logging.
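
As a minimal sketch (standard commands, not quoted from the post), flipping the database to force logging and verifying it looks like this:

-- Override NOLOGGING on all segments so every change generates redo
ALTER DATABASE FORCE LOGGING;

-- Verify the setting
SELECT force_logging FROM v$database;

-- Back out the change if needed
ALTER DATABASE NO FORCE LOGGING;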

But the big question which I touched on in my previous post is whether force logging will slow down our weekend batch processing so much that we miss our deadlines for weekend processing to complete and affect our business in a negative way. The more I investigate it the more convinced I am that force logging will have minimal impact on our weekend jobs. This is an unexpected and surprising result. I really thought that our batch processing relied heavily on nologging writes to get the performance they need. It makes me wonder why we are using nologging in the first place. It would be a lot better for backup and recovery to have all our inserts logged to the redo logs. Here is a nice Oracle Support document that lays out the pros and cons of using nologging:

The Gains and Pains of Nologging Operations (Doc ID 290161.1)

I have an entry in my notes for this upgrade project dated 8/26/19 in which I wrote “Surely force logging will bog the … DB down”. Now I think the opposite. So, what changed my mind? It started with the graph from the previous post:

Graph From Previous Post with Little Direct Writes I/O

I was really surprised that the purple line was so low compared to the other two. But I felt like I needed to dig deeper to make sure that I was not just misunderstanding these metrics. The last thing I want to do is make some production change that slows down our weekend processes that already struggle to meet their deadlines. I was not sure what other metrics to look at since I could not find something that directly measures non-logged writes. But then I got the idea of using ASH data.

In my “Fast way to copy data into a table” post I said that to copy data quickly between two Oracle tables “you want everything done nologging, in parallel, and using direct path”. I may have known then and forgotten but working on this now has me thinking about the relationship between these three ways of speeding up inserts into tables. I think there are the following two dependencies:

  • Nologging requires direct path
  • Parallel requires direct path

Oracle document “Check For Logging / Nologging On DB Object(s) (Doc ID 269274.1)” says the first one. In the second case, if you have a target table set to parallel degree > 1 and you enable parallel DML, you get direct path writes when you insert into the target table, as in the sketch below.
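
A hedged illustration of that second dependency (the table names are placeholders):

-- Parallel DML must be enabled at the session level
ALTER SESSION ENABLE PARALLEL DML;

-- With the target set to a parallel degree > 1, the insert runs in parallel
-- and therefore uses direct path writes
ALTER TABLE target_table PARALLEL 8;

INSERT /*+ PARALLEL(target_table) */ INTO target_table
SELECT * FROM source_table;

COMMIT;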

From all this I got the idea to look for direct path write waits in the ASH views. I could use ASH to identify insert statements that are using direct path. Then I could check that the target tables or partitions are set to nologging. Then I would know they are doing non-logged writes even if I did not have a metric that said so directly.

directwritesql.sql looked at all the SQL statements that had direct write waits over the entire 6 weeks of our AWR history. The output looks like this:

     select
  2  sql_id,count(*) active
  3  from DBA_HIST_ACTIVE_SESS_HISTORY a
  4  where
  5  event = 'direct path write'
  6  group by sql_id
  7  order by active desc;

SQL_ID            ACTIVE
------------- ----------
2pfzwmtj41guu         99
g11qm73a4w37k         88
6q4kuj30agxak         58
fjxzfp4yagm0w         53
bvtzn333rp97k         39
6as226jb93ggd         38
0nx4fsb5gcyzb         36
6gtnb9t0dfj4w         31
3gatgc878pqxh         31
cq433j04qgb18         25

These numbers startled me because they were so low. Each entry in DBA_HIST_ACTIVE_SESS_HISTORY represents 10 seconds of activity. So over 6 weeks our top direct path write waiter waited 990 seconds. Given that we have batch processes running full out for a couple of days every weekend 990 seconds over 6 weekends is nothing.

I took the top SQL ids and dumped out the SQL text to see what tables they were inserting into. Then I queried the LOGGING column of dba_tables and dba_tab_partitions to see which insert was going into a table or partition set to nologging.

select logging,table_name
from dba_tables
where owner='MYOWNER' and
table_name in
(
... tables inserted into ...
)
order by table_name;


select logging,table_name,count(*) cnt
from dba_tab_partitions
where table_owner='MYOWNER' and
table_name in
(
... tables inserted into ...
)
group by logging,table_name
order by table_name,cnt desc;

This simple check for LOGGING or NOLOGGING status eliminated several of the top direct path write waiters. This process reduced the list of SQL ids down to three top suspects:

SQL_ID            ACTIVE
------------- ----------
cq433j04qgb18         25
71sr61v1rmmqc         17
0u0drxbt5qtqk         11

These are all inserts that are not logged. Notice that the most active one has 250 seconds of direct path write waits over the past 6 weeks. Surely enabling force logging could not cause more than about that much additional run time over the same length of time.

I got the idea of seeing what percentage of the total ASH time was direct path write waits for each of these SQL statements. In every case it was small:

cq433j04qgb18

TOTAL_SAMPLE_COUNT DW_SAMPLE_COUNT DW_SAMPLE_PCT
------------------ --------------- -------------
              2508              25    .996810207
 
71sr61v1rmmqc

TOTAL_SAMPLE_COUNT DW_SAMPLE_COUNT DW_SAMPLE_PCT
------------------ --------------- -------------
              1817              17    .935608145

0u0drxbt5qtqk

TOTAL_SAMPLE_COUNT DW_SAMPLE_COUNT DW_SAMPLE_PCT
------------------ --------------- -------------
              8691              11    .126567714

TOTAL_SAMPLE_COUNT was all the samples for that SQL_ID value for the past 6 weeks. DW_SAMPLE_COUNT is the count of samples that are direct path write waits, which we already talked about. DW_SAMPLE_PCT is the percentage of the total samples that were direct path write wait events. They were all around 1% or lower, which means that write I/O time was only about 1% of the entire run time of these inserts. The rest was query processing, as best I can tell.
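
A sketch of the kind of query that could produce those numbers (the actual script isn't shown in the post; the SQL_ID literal is one of the suspects above):

select
  count(*) total_sample_count,
  sum(case when event = 'direct path write' then 1 else 0 end) dw_sample_count,
  100 * sum(case when event = 'direct path write' then 1 else 0 end) / count(*) dw_sample_pct
from DBA_HIST_ACTIVE_SESS_HISTORY
where sql_id = 'cq433j04qgb18';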

Also I used my sqlstat3 script to look at the average run time for these inserts:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
71sr61v1rmmqc      3333358322 01-SEP-19 12.00.46.381 PM                1         2768879.28         892080    207085.624                  0                      0                  3.817             9028323            1045428               19337954
71sr61v1rmmqc      3333358322 08-SEP-19 10.00.43.551 AM                0         264428.594          98840     28257.339                  0                      0                  3.657              177736             143345                      0
71sr61v1rmmqc      3333358322 08-SEP-19 11.00.49.648 AM                1          2352509.9         767440    160933.191                  0                      0                      0             8729437             791837               19110340
71sr61v1rmmqc      3333358322 15-SEP-19 11.00.03.027 AM                1         3090070.21         904310    190593.062                  0                      0                  2.192             9095421             949579               19470026

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
0u0drxbt5qtqk       382840242 01-SEP-19 02.00.23.436 AM                1         29281391.6        3211050    16624311.7                  0                      0              99532.905            37076159           14440303               24479240
0u0drxbt5qtqk       382840242 08-SEP-19 02.00.11.424 AM                1         3871668.37         424670    2563007.61                  0                      0               1236.003             4622248            2457057                2468983
0u0drxbt5qtqk       382840242 15-SEP-19 03.00.12.349 AM                0         5161808.16         615520    3358994.55                  0                      0              20656.365             6251060            2801828                      0
0u0drxbt5qtqk       382840242 15-SEP-19 04.00.33.661 AM                1         2412910.02         240650    1741053.89                  0                      0                699.717             3050529            1542895                4638794

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
cq433j04qgb18      1267318024 02-SEP-19 10.00.57.871 PM                1          77132.892          51180     10719.692                  0                      0                  2.003              460346              47055                 772468
cq433j04qgb18      1267318024 03-SEP-19 10.00.55.601 PM                1         116064.154          68350      9808.483                  0                      0              15746.609              911571              20422                1256808
cq433j04qgb18      1267318024 04-SEP-19 10.00.31.071 PM                1         106594.074          64030      6328.462                  0                      0              15603.102              777779              14945                1561172
cq433j04qgb18      1267318024 05-SEP-19 10.00.13.265 PM                0          44435.247          31810      2760.438                  0                      0                365.132              139637               5111                 257770
cq433j04qgb18      1267318024 09-SEP-19 10.00.18.346 PM                1         791385.928         324050    171504.931                  0                      0               7484.358             6430665             600703               14262960
cq433j04qgb18      1267318024 10-SEP-19 10.00.29.224 PM                0         1685763.14         676210    304045.354                  0                      0                283.296            11884045             838290               16268667
cq433j04qgb18      1267318024 11-SEP-19 10.01.00.800 PM                0         369644.825         172120     42679.357                  0                      0                  3.929             2443772             151369                3901044
cq433j04qgb18      1267318024 12-SEP-19 10.00.28.499 PM                0          30381.614          25630      1191.884                  0                      0                 422.55               98580               3389                 184812
cq433j04qgb18      1267318024 13-SEP-19 10.00.07.502 PM                0         173286.567         109990     11461.865                  0                      0                 359.37             1475324              63073                2360818
cq433j04qgb18      1267318024 16-SEP-19 10.00.52.175 PM                1         190203.822          93680     47585.666                  0                      0                122.658             1221886             348327                2955258

These queries run at most a couple of hours. If direct path writes are 1% of their total run time, I estimated that force logging would add about 1% to the elapsed time or about 2 minutes per execution.

The final step was to try to run one of these top nologging I/O inserts in a test environment with and without force logging to see if the test matches the expected performance slowdown. I was not able to run 0u0drxbt5qtqk without setting up a more elaborate test with the development team. My test of cq433j04qgb18 ran considerably faster with force logging than without it so I think other factors were hiding whatever effect force logging had. But 71sr61v1rmmqc had some nice results that matched my estimates well. This is on a Delphix clone of production so the data was up to date with prod but the underlying I/O was slower.

71sr61v1rmmqc results running 5 times normal 5 times force logging

The individual run times are in seconds and the averages are listed in seconds and in minutes. I ran the insert 5 times with no force logging and 5 times with it alternating. I dropped the primary key and unique index of the target table to keep from getting constraint errors. I rolled back the insert each time. It averaged about 1.2 minutes more out of 40 minutes of run time which is about a 3% increase. My estimate from ASH was about 1% so this test matches that well.

The final test remains. In some upcoming production weekend, I will put in a change to flip the database to force logging and see how it goes. My tests were run on a test system with a different storage system and with no other activity. We might see different results on a heavily loaded system with a queue for the CPU. But, after all this analysis and testing I feel confident that we won’t be able to tell that force logging is enabled. Unfortunately, we sometimes have performance issues anyway due to plan changes or data volume so the force logging switch might get blamed. But I feel confident enough to push for the final test and I think we ultimately will pass that test and be able to use force logging to enable GoldenGate to support a short cut over time for our migration and upgrade project.

Bobby

P.S. A good question came in as a comment about direct path write waits and asynchronous I/O. The system I am testing on does not support async I/O because it is HP Unix and a filesystem. This older blog post talks a bit about async and direct I/O on HP-UX:

https://www.bobbydurrettdba.com/2013/04/26/db_writer_processes-dbwr_io_slaves-with-no-asynch-io-on-hp-ux/

So, your mileage may vary (YMMV) if you do these same queries on a system with asynchronous writes. Linux filesystems support async writes and on HP-UX our RAC system on ASM supports it. It is one of the challenges of writing blog posts. Other people may be in different situations than I am.

Categories: DBA Blogs

Understanding/Modifying Oracle GoldenGate Microservices Settings

DBASolved - Fri, 2019-09-20 10:40

Oracle GoldenGate Microservices provide a wide range of options from administration and security to enhance the replication setup and experience. The Microservices architecture makes interaction with Oracle GoldenGate much easier compared to the traditional Classic architecture. This architecture provides you with not one but four avenues to interact with GoldenGate to ensure replication is configured, […]

The post Understanding/Modifying Oracle GoldenGate Microservices Settings appeared first on DBASolved.

Categories: DBA Blogs

Improving URLs for Oracle GoldenGate Microservices using a Reverse Proxy

DBASolved - Fri, 2019-09-20 06:16

Oracle GoldenGate finally has a GUI/web page interface for working with the product. This is a long overdue and welcome feature that was initially released with Oracle GoldenGate Microservices in 12.3. Since 12.3 and through 19.1, the Oracle GoldenGate team has been preaching the simplicity and security benefits of using a reverse proxy […]

The post Improving URLs for Oracle GoldenGate Microservices using a Reverse Proxy appeared first on DBASolved.

Categories: DBA Blogs

The Top 10 Biggest Sports Brands (2019)

VitalSoftTech - Thu, 2019-09-19 10:03

The sportswear market is one of the most profitable and lucrative niches in the entire world. Some of the biggest sports brands have successfully capitalized on the demand for sports apparel and equipment, turning them into internationally acknowledged top sports brands. It is also incredibly competitive, and newer and more innovative designs are manufactured each […]

The post The Top 10 Biggest Sports Brands (2019) appeared first on VitalSoftTech.

Categories: DBA Blogs
