DBA Blogs

Basic Replication -- 4 : Data Dictionary Queries

Hemant K Chitale - Tue, 2019-09-17 08:58
Now that we have two Materialized Views against a Source table, how can we identify the relationship via the data dictionary ?

This is the query to the data dictionary in the database where the Source Table exists :

SQL> l
1 select v.owner MV_Owner, v.name MV_Name, v.snapshot_site, v.refresh_method,
2 l.log_table MV_Log_Name, l.master MV_Source,
3 to_char(l.current_snapshots,'DD-MON-RR HH24:MI:SS') Last_Refresh_Date
4 from dba_registered_snapshots v, dba_snapshot_logs l
5 where v.snapshot_id = l.snapshot_id
6* and l.log_owner = 'HEMANT'
SQL> /

MV_OWNER MV_NAME          SNAPSHOT_SITE      REFRESH_MET MV_LOG_NAME        MV_SOURCE             LAST_REFRESH_DATE
-------- ---------------- ------------------ ----------- ------------------ --------------------- ------------------
HEMANT   MV_OF_SOURCE     ORCLPDB1           PRIMARY KEY MLOG$_SOURCE_TABLE SOURCE_TABLE          16-SEP-19 22:41:04
HEMANT   MV_2             ORCLPDB1           PRIMARY KEY MLOG$_SOURCE_TABLE SOURCE_TABLE          16-SEP-19 22:44:37

SQL>


I have run the query against DBA_REGISTERED_SNAPSHOTS and DBA_SNAPSHOT_LOGS because the join on SNAPSHOT_ID is not available between DBA_REGISTERED_MVIEWS and DBA_MVIEW_LOGS.  Similarly, the CURRENT_SNAPSHOTS column is not available in DBA_MVIEW_LOGS.  These two columns are important when you have *multiple* MViews against the same Source Table.
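
For reference, here is a rough sketch (not from the original post) of what the newer pair of views can still show on the master side:

-- DBA_REGISTERED_MVIEWS identifies each registered MView and its site,
-- but DBA_MVIEW_LOGS has no SNAPSHOT_ID / CURRENT_SNAPSHOTS columns,
-- so the per-MView last-refresh date cannot be obtained from that pair
select owner, name, mview_site, refresh_method, updatable
from dba_registered_mviews
where owner = 'HEMANT';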

Note that the "Snapshot_Site" column is important because the Materialized View can be in a different database.  In this example, the MViews are in the same database as the Source Table.

The target database containing the MViews will not have the Source Table "registered" in a data dictionary view.  The Source Table will be apparent from the QUERY column of DBA_MVIEWS (also, if the Source Table is in a different database, look at the MASTER_LINK column to identify the Database Link that connects to the source database).


UPDATE :  In case you are wondering what query you'd write against the database containing the Materialized View(s), you can simply query DBA_MVIEWS.

SQL> l
1 select mview_name, query, master_link, refresh_mode, refresh_method,
2 last_refresh_type, to_char(last_refresh_date,'DD-MON-RR HH24:MI:SS') Last_Refresh_Date
3 from dba_mviews
4 where owner = 'HEMANT'
5* order by 1 desc
SQL> /

MVIEW_NAME
------------
QUERY
--------------------------------------------------------------------------------
MASTER_LINK REFRESH_M REFRESH_ LAST_REF LAST_REFRESH_DATE
------------ --------- -------- -------- ---------------------------
MV_OF_SOURCE
SELECT "SOURCE_TABLE"."ID" "ID","SOURCE_TABLE"."DATA_ELEMENT_1" "DATA_ELEMENT_1"
,"SOURCE_TABLE"."DATA_ELEMENT_2" "DATA_ELEMENT_2","SOURCE_TABLE"."DATE_COL" "DAT
E_COL" FROM "SOURCE_TABLE" "SOURCE_TABLE"
             DEMAND    FAST     FAST     16-SEP-19 22:41:04

MV_2
select id, data_element_2
from source_table
             DEMAND    FORCE    FAST     16-SEP-19 22:44:37


SQL>


Here, the MASTER_LINK would specify the name of the Database Link used to connect to the Master (i.e. Source) table, if it were in a different database.

REFRESH_MODE is ON DEMAND so that the MVs can be refreshed by either scheduled jobs or manually initiated calls -- as I've done in previous blog posts.  (The alternative can be ON COMMIT, if the Source Table and MV are in the same database).
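
For comparison, here is a minimal sketch (not part of this demo; the MV name is hypothetical) of an ON COMMIT Materialized View, which refreshes automatically whenever a transaction against the Source Table commits, so no job or DBMS_MVIEW.REFRESH call is needed.  Whether FAST ON COMMIT is permitted depends on the MV query meeting the fast-refresh restrictions, which DBMS_MVIEW.EXPLAIN_MVIEW can verify.

-- assumes the same SOURCE_TABLE and its existing MV Log
create materialized view mv_on_commit
refresh fast on commit
as select id, data_element_1
from source_table;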

LAST_REFRESH_TYPE is FAST, meaning that the refresh was able to use the MV Log on the Source Table to identify changes and merge them into the MV.  See the entries from the trace file that I've shown in the previous blog post.

Note the difference in the two REFRESH_METHOD values for the two MVs.
MV_OF_SOURCE was created as "refresh fast on demand" while "MV_2" was created as "refresh on demand".

We'll explore the implications of "REFRESH FAST" and just "REFRESH" alone in a subsequent blog post.

Question : Why does the QUERY look so different between MV_OF_SOURCE and MV_2 ?



Categories: DBA Blogs

One More Thing: New Oracle Cloud free tier better than AWS free tier

Iggy Fernandez - Mon, 2019-09-16 19:20
Larry Ellison just concluded his Oracle OpenWorld keynote with the announcement of an Oracle Cloud free tier that is better than the AWS free tier. The Oracle Cloud free tier never expires and includes the crown jewels. The slides say it all.
Categories: DBA Blogs

How to Make a Pandora Submission? An Easy Guide!

VitalSoftTech - Mon, 2019-09-16 10:12

How exactly can you make a Pandora submission? Getting your music on Pandora might seem intimidating. However, if you’re an aspiring musician or an independent artist, this music streaming platform can be more beneficial for you than you might expect. More and more musicians are entering the booming music industry. In fact, it’s hard to […]

The post How to Make a Pandora Submission? An Easy Guide! appeared first on VitalSoftTech.

Categories: DBA Blogs

Basic Replication -- 3 : Multiple Materialized Views

Hemant K Chitale - Mon, 2019-09-16 09:53
You can define multiple Materialized Views against the same Source Table with differences in :
a) the SELECT clause column list
b) Predicates in the WHERE clause
c) Joins to one or more other Source Table(s) in the FROM clause
d) Aggregates in the SELECT clause
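
For example, here are rough sketches of variations (b) and (d) against the same Source Table (hypothetical MV names, not actually created in this series):

-- (b) a predicate in the WHERE clause
create materialized view mv_recent_rows
refresh on demand
as select id, data_element_1, date_col
from source_table
where date_col >= to_date('01-JAN-2019','DD-MON-YYYY');

-- (d) aggregates in the SELECT clause
create materialized view mv_daily_counts
refresh on demand
as select trunc(date_col) day_col, count(*) row_count
from source_table
group by trunc(date_col);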

Thus, for my Source Table, I can add another Materialized View :

SQL> create materialized view mv_2
2 refresh on demand
3 as select id, data_element_2
4 from source_table;

Materialized view created.

SQL>
SQL> select count(*) from mlog$_source_table;

COUNT(*)
----------
0

SQL> insert into source_table
2 values (5, 'Fifth','Five',sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select count(*) from mlog$_source_table;

COUNT(*)
----------
1

SQL>
SQL> execute dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL> select * from mv_of_source;

        ID DATA_ELEMENT_1  DATA_ELEMENT_2  DATE_COL
---------- --------------- --------------- ---------
         5 Fifth           Five            16-SEP-19
       101 First           One             18-AUG-19
       103 Third           Three           18-AUG-19
       104 Fourth          Updated         09-SEP-19

SQL> select count(*) from mlog$_source_table;

COUNT(*)
----------
1

SQL>


Now that there are two MVs referencing the Source Table, the MV Log is not completely purged when only one of the two MVs is refreshed.  Oracle still maintains entries in the MV Log for the second MV to be able to execute a Refresh.

SQL> select * from mlog$_source_table;

ID SNAPTIME$ D O
---------- --------- - -
CHANGE_VECTOR$$
--------------------------------------------------------------------------------
XID$$
----------
5 16-SEP-19 I N
FE
5.6299E+14


SQL> execute dbms_mview.refresh('MV_2');

PL/SQL procedure successfully completed.

SQL> select * from mlog$_source_table;

no rows selected

SQL> select * from mv_2;

        ID DATA_ELEMENT_2
---------- ---------------
       101 One
       103 Three
       104 Updated
         5 Five

SQL>


The MV Log is "purged" only when the second (actually the last) MV executes a Refresh.  Of course, if more rows were inserted / updated in the Source Table between the Refresh of MV_OF_SOURCE and MV_2, there would be corresponding entries in the MV Log.

So, Oracle does use some mechanism to track MVs that execute Refreshes and does continue to "preserve" rows in the MV Log for MVs that haven't been refreshed yet.

As I've noted in two earlier posts, in 2007 and 2012, the MV Log (called "Snapshot Log" in the 2007 post) can keep growing for a long time if you have one or more Materialized Views that just aren't executing their Refresh calls.
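
A rough monitoring sketch for the master site (mine, not from the original posts): any registered MView whose last refresh date is old is still holding rows in its MV Log.

-- the 7-day threshold is arbitrary; adjust to your refresh schedule
select l.log_owner, l.log_table, r.owner mv_owner, r.name mv_name,
       to_char(l.current_snapshots,'DD-MON-RR HH24:MI:SS') last_refresh
from dba_snapshot_logs l, dba_registered_snapshots r
where r.snapshot_id = l.snapshot_id
and l.current_snapshots < sysdate - 7
order by l.current_snapshots;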


Categories: DBA Blogs

import a single table from a full export backup in oracle

Learn DB Concepts with me... - Fri, 2019-09-13 11:01

import a single table from a full export backup and remap it


impdp USERNAME/PASSWORD tables=SCHEMA.TABLE_NAME directory=DPUMP dumpfile=DUMPFILE_%U.dmp \
  remap_schema=SOURCE:TARGET \
  remap_table=TABLE_NAME:TABLE_NAME_NEW


Optional adjustments to the command above :

  1. Remove the remap_schema / remap_table parameters if you do not need to remap.
  2. Add ENCRYPTION_PASSWORD=<password> if the export was taken with encryption.
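
The same import can also be driven from a parameter file, which is easier to keep under version control. A minimal sketch using the same placeholder names as above (the parfile name is hypothetical):

# single_table_import.par
tables=SCHEMA.TABLE_NAME
directory=DPUMP
dumpfile=DUMPFILE_%U.dmp
remap_schema=SOURCE:TARGET
remap_table=TABLE_NAME:TABLE_NAME_NEW

impdp USERNAME/PASSWORD parfile=single_table_import.par
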
Categories: DBA Blogs

Presenting at UKOUG Techfest19 Conference in Brighton, UK

Richard Foote - Thu, 2019-09-12 19:07
I’m very excited to be attending my 3rd UKOUG Conference, this year re-badged as Techfest19. The fact it’s being held in Brighton is a little disconcerting for a Crystal Palace fan, but really looking forward nonetheless to what has always been one of the very best Oracle conferences on the yearly calendar. I have a […]
Categories: DBA Blogs

Estimating how much write I/O is not logged

Bobby Durrett's DBA Blog - Thu, 2019-09-12 11:30

I am trying to figure out how much non-logged write I/O an Oracle database is doing. I want to run an ALTER DATABASE FORCE LOGGING command on the database so that I can use Oracle GoldenGate (GGS), which reads updates from Oracle’s logs. GGS will miss writes that are not logged. But if I turn on force logging, it may slow down applications that depend on non-logged writes for good performance. So, I want to find some Oracle performance metrics that indicate how much non-logged write I/O we have, to estimate how much force logging will degrade performance.

I created SQL*Plus and PythonDBAGraphs reports based on DBA_HIST_IOSTAT_FUNCTION that give some insight into the write I/O that is not logged. Here is the Python based graphical version of the report for one recent weekend:

[Graph: Possible NOLOGGING Write I/O]

The purple-blue line represents Direct Writes. These may or may not be logged. The red-orange line represents writes through the DBWR process. These are always logged. The light green line represents log I/O through the LGWR process. My theory is that if the purple line is above the green line, the difference must be write I/O that is not logged. If the green line is equal to or greater than the purple line, you cannot tell whether there was any write I/O that was not logged; either way, non-logged write I/O cannot be more than the amount indicated by the purple line. So, this graph does not directly answer my question about how much write I/O was not logged, but it does show some numbers that relate to the question.
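
As a rough illustration (this is not Bobby's actual script; it assumes the standard AWR views and licensing), the report boils down to differencing the cumulative write megabytes per FUNCTION_NAME between AWR snapshots:

-- the AWR counters are cumulative, so LAG() takes the per-interval delta;
-- deltas go negative across an instance restart and should be discarded
select sn.end_interval_time, f.function_name,
       (f.small_write_megabytes + f.large_write_megabytes)
         - lag(f.small_write_megabytes + f.large_write_megabytes)
           over (partition by f.dbid, f.instance_number, f.function_name
                 order by f.snap_id) write_diff_megabytes
from dba_hist_iostat_function f, dba_hist_snapshot sn
where sn.snap_id = f.snap_id
and sn.dbid = f.dbid
and sn.instance_number = f.instance_number
and f.function_name in ('Direct Writes','DBWR','LGWR')
order by sn.end_interval_time, f.function_name;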

I did some experiments with the V$IOSTAT_FUNCTION view that populates DBA_HIST_IOSTAT_FUNCTION to see what values it gives for Direct Writes, DBWR, and LGWR using different scenarios. Here is the zip of these scripts and their output: nologgingscriptsandlogs09122018.zip. I tested four scenarios:

  1. Insert append nologging
  2. Insert append logging
  3. Insert noappend logging
  4. Insert noappend nologging

1 and 2 did Direct Writes. 3 and 4 did DBWR writes. 2, 3, and 4 all did LGWR writes.

Here are the relevant sections of the output that correspond to these statements.

Insert append nologging:

FUNCTION_NAME      WRITE_DIFF_MEGABYTES
------------------ --------------------
Direct Writes                      4660
LGWR                                 46
DBWR                                 27

Insert append logging:

FUNCTION_NAME      WRITE_DIFF_MEGABYTES
------------------ --------------------
LGWR                               4789
Direct Writes                      4661
DBWR                                 37

Insert noappend logging:

FUNCTION_NAME      WRITE_DIFF_MEGABYTES
------------------ --------------------
DBWR                               6192
LGWR                               4528
Direct Writes                         2

Insert noappend nologging:

FUNCTION_NAME      WRITE_DIFF_MEGABYTES
------------------ --------------------
DBWR                               6213
LGWR                               4524
Direct Writes                         2
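
The WRITE_DIFF_MEGABYTES figures above presumably come from subtracting two point-in-time readings of the cumulative counters (the actual scripts are in the zip linked earlier). A minimal sketch of such a reading, taken before and after each test:

select function_name,
       small_write_megabytes + large_write_megabytes total_write_mb
from v$iostat_function
where function_name in ('Direct Writes','DBWR','LGWR')
order by function_name;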

This pattern is similar to that in an Ask Tom post that I wrote about a while back. That post showed the different situations in which writes were logged or not. I also got some ideas about direct writes and logging from this Oracle support document:

Check For Logging / Nologging On DB Object(s) (Doc ID 269274.1)

It sounds like conventional inserts go through the normal processing and eventually get written to disk by DBWR, with their redo always written out by LGWR, whereas inserts with the append hint write directly to the datafiles and their changes may or may not be logged.
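
For instance, this is the kind of statement that shows up under Direct Writes (hypothetical table names; whether redo is actually skipped also depends on the table's LOGGING attribute and the database FORCE LOGGING setting):

-- direct-path insert: bypasses the buffer cache and DBWR, and on a
-- NOLOGGING table without FORCE LOGGING it generates only minimal redo
insert /*+ append */ into target_table
select * from source_table;
commit;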

These tests and documents gave me the idea of building a report and graph based on DBA_HIST_IOSTAT_FUNCTION showing the values for the Direct Writes, DBWR, and LGWR FUNCTION_NAME values. The graph above shows an example of a real system. I was surprised to see how high the DBWR and LGWR values were and how low the Direct Writes were. That made me think that it would be safe to try turning on FORCE LOGGING because it likely will have minimal impact on the overall weekend processing. It gave me enough evidence to push for approval to do a controlled test of enabling FORCE LOGGING in production over an upcoming weekend. I will update this post with the results if we move forward with the test.

Bobby

Categories: DBA Blogs

Announcement: Australia/NZ “Let’s Talk Database” Events October 2019 !!

Richard Foote - Wed, 2019-09-11 22:49
I’ve very excited to announce the next series of Oracle “Let’s Talk Database” events to be run throughout Australia and New Zealand in October 2019. I’ll be discussing two exciting topics this series, “Oracle Database 19c New Features” and “Oracle Exadata X8“. As always, these sessions run between 9am-1pm, include a networking lunch and are free, […]
Categories: DBA Blogs

Oracle Database 19c Automatic Indexing: Default Index Column Order Part II (Future Legend)

Richard Foote - Tue, 2019-09-10 20:59
In Part I, we explored some options that Oracle might adopt when ordering the columns within an Automatic Index by default, in the absence of other factors where there is only the one SQL statement to be concerned with. A point worth making is that if all columns of an index are specified within SQL […]
Categories: DBA Blogs

Jami (Gnu Ring) review

RDBMS Insight - Tue, 2019-09-10 15:35

An unavoidable fact of database support life is webconferences with clients or users. Most of the time, we’re more interested in what’s going on onscreen than in each other’s faces. But every now and then we need to have a face-to-face. Skype is popular, but I recently had the chance to try out a FOSS alternative with better security: Jami.

Jami (formerly Gnu Ring) is a FOSS alternative to Skype that advertises a great featureset and some terrific privacy features. I suggested to a small group that we try it out for an upcoming conference call.

Just going by its specs, Jami (https://jami.net/) looks amazing. It’s free, open-source software that’s available on all the major platforms, including all the major Linux distros. It boasts the following advantages over Skype and many other Skype alternatives:

  • Distributed: Uniquely, there aren’t any central servers. Jami uses distributed hash table technology to distribute directory functions, authentication, and encryption across all devices connected to it.
  • Secure: All communications are end-to-end encrypted.
  • FOSS: Jami’s licensed under a GPLv3+ license, is a GNU package and a project of the Free Software Foundation.
  • Ad-free: If you’re not paying for commercial software, then you are the product. Not so with Jami, which is non-commercial and ad-free. Jami is developed and maintained by Savoir Faire Linux, a Canadian open-source consulting company.

And its listed features include pretty much everything you’d use Skype for: text messaging, voice calls, video calls, file and photo sharing, even video conference calls.

I wanted to use it for a video conference call, and my group was willing to give it a try. I had high hopes for this FOSS Skype alternative.

Installation

Jami is available for: Windows, Linux, OS X, iOS, Android, and Android TV. (Not all clients support all features; there’s a chart in the wiki.) I tried the OS X and iOS variants.

First, I installed Jami on OS X and set it up. The setup was straightforward, although I had to restart Jami after setting up my account, in order for it to find that account.

Adding contacts

One particularly cool feature of Jami is that your contact profile is stored locally, not centrally. Your profile’s unique identifier is a cumbersomely long 40-digit hexadecimal string, such as “7a639b090e1ab9b9b54df02af076a23807da7299” (not an actual Jami account afaik). According to the documentation, you can also register a username for your account, such as “natalkaroshak”.

Contacts are listed as hex strings. Unfortunately, I wasn’t able to actually find any of my group using their registered usernames, nor were they able to find me under my username. We had to send each other 40-digit hex strings, and search for the exact hex strings in Jami, in order to find each other.

The only way to add a contact, once you’ve located them, is to interact with them, eg. by sending a text or making a call. This was mildly annoying when trying to set up my contact list a day ahead of the conference call.

Once I’d added the contacts, some of them showed up in my contact list with their profile names… and some of them didn’t, leaving me guessing which hex string corresponded to which member of my group.

Sending messages, texts, emojis

Sending and receiving Skype-style messages and emojis worked very well in Jami. Group chat isn’t available.

Making and taking calls

The documented process for a conference call in Jami is pretty simple: call one person, then add the other participants to the ongoing call.

Only the Linux and Windows versions currently support making conference calls. Another member of our group tried to make the conference call. As soon as I answered his incoming call, my Jami client crashed. So I wasn’t able to actually receive a call using Jami for OS X.

The caller and one participant were able to hear each other briefly, before the caller’s Jami crashed as well.

Linking another device to the same account

I then tried installing Jami on my iPhone. Again, the installation went smoothly, and this let me try another very cool feature of Jami.

In Jami, your account information is all stored in a folder on your device. There’s no central storage. Password creation is optional, because you don’t log in to any server when you join Jami. If you do create a password, you can (1) register a username with the account and (2) use the same account on another device.

The process of linking my iPhone’s Jami to the same account I used with my OSX Jami was very smooth. In the OSX install, I generated an alphanumeric PIN, entered the PIN into my device, and entered the account password. I may have mis-entered the first alphanumeric PIN, because it worked on the second try.

Unfortunately, my contacts from the OSX install didn’t appear in the iOS install, even though they were linked to the same account. I had to re-enter the 40-digit hex strings and send a message to each conference call participant.

Making calls on iOS

The iOS client doesn’t support group calling, but I tried video calling one person. We successfully connected. However, that’s where the success ended. I could see the person I called, but was unable to hear her. And she couldn’t see OR hear me. After a few minutes, the video of the other party froze up too.

Conclusion

Jami looked very promising, but didn’t actually work.

All of the non-call stuff worked: installation, account creation, adding contacts (though having to use the 40-digit hex codes is a big drawback), linking my account to another device.

But no one in my group was able to successfully make a video call that lasted longer than a few seconds. The best result was that two people could hear each other for a couple of seconds.

Jami currently has 4.5/5 stars on alternativeto.net. I have to speculate that most of the reviews are from Linux users, and that the technology is more mature on Linux. For OSX and iOS, Jami’s not a usable alternative to Skype yet.

Big thanks to my writing group for gamely trying Jami with me!

Categories: DBA Blogs

Basic Replication -- 2b : Elements for creating a Materialized View

Hemant K Chitale - Mon, 2019-09-09 09:02
Continuing the previous post, what happens when there is an UPDATE to the source table ?

SQL> select * from source_table;

        ID DATA_ELEMENT_1  DATA_ELEMENT_2  DATE_COL
---------- --------------- --------------- ---------
         1 First           One             18-AUG-19
         3 Third           Three           18-AUG-19
         4 Fourth          Four            18-AUG-19

SQL> select * from mlog$_source_table;

no rows selected

SQL> select * from rupd$_source_table;

no rows selected

SQL>
SQL> update source_table
2 set data_element_2 = 'Updated', date_col=sysdate
3 where id=4;

1 row updated.

SQL> select * from rupd$_source_table;

no rows selected

SQL> commit;

Commit complete.

SQL> select * from rupd$_source_table;

no rows selected

SQL> select * from mlog$_source_table;

ID SNAPTIME$ D O
---------- --------- - -
CHANGE_VECTOR$$
--------------------------------------------------------------------------------
XID$$
----------
4 01-JAN-00 U U
18
8.4443E+14


SQL>

So, it is clear that UPDATES, too, go to the MLOG$ table.

What about multi-row operations ?

SQL> update source_table set id=id+100;

3 rows updated.

SQL> select * from rupd$_source_table;

no rows selected

SQL> select * from mlog$_source_table;

ID SNAPTIME$ D O
---------- --------- - -
CHANGE_VECTOR$$
--------------------------------------------------------------------------------
XID$$
----------
4 01-JAN-00 U U
18
8.4443E+14

1 01-JAN-00 D O
00
1.4075E+15

101 01-JAN-00 I N
FF
1.4075E+15

3 01-JAN-00 D O
00
1.4075E+15

103 01-JAN-00 I N
FF
1.4075E+15

4 01-JAN-00 D O
00
1.4075E+15

104 01-JAN-00 I N
FF
1.4075E+15


7 rows selected.

SQL>



Wow ! Three rows updated in the Source Table translated to 6 rows in the MLOG$ table ! Each updated row was represented by a DMLTYPE$$='D' and OLD_NEW$$='O' entry followed by a DMLTYPE$$='I' and OLD_NEW$$='N' entry.  So that should mean "delete the old row from the materialized view and insert the new row into the materialized view" ??

(For the time being, we'll ignore SNAPTIME$$ being '01-JAN-00').

So an UPDATE to the Source Table of a Materialized View can be expensive both during the UPDATE itself (here, each updated row created two entries in the MLOG$ table) and during subsequent refreshes as well !
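
A quick way to gauge that pending overhead (a sketch against the same demo objects) is to summarise the outstanding change records in the MV Log:

-- pending change records per DML type and old/new flag
select dmltype$$, old_new$$, count(*) pending_rows
from mlog$_source_table
group by dmltype$$, old_new$$
order by dmltype$$, old_new$$;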

What happens when I refresh the Materialized View ?

SQL> execute dbms_session.session_trace_enable;

PL/SQL procedure successfully completed.

SQL> execute dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL> execute dbms_session.session_trace_disable;

PL/SQL procedure successfully completed.

SQL>


The session trace file shows these operations (I've excluded a large number of recursive SQLs and SQLs that were sampling the data for optimisation of execution plans):

update "HEMANT"."MLOG$_SOURCE_TABLE" 
set snaptime$$ = :1
where snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')

/* QSMQ VALIDATION */ ALTER SUMMARY "HEMANT"."MV_OF_SOURCE" COMPILE

select 1 from "HEMANT"."MLOG$_SOURCE_TABLE"
where snaptime$$ > :1
and ((dmltype$$ IN ('I', 'D')) or (dmltype$$ = 'U' and old_new$$ in ('U', 'O')
and sys.dbms_snapshot_utl.vector_compare(:2, change_vector$$) = 1))
and rownum = 1

SELECT /*+ NO_MERGE(DL$) ROWID(MAS$) ORDERED USE_NL(MAS$) NO_INDEX(MAS$) PQ_DISTRIBUTE(MAS$,RANDOM,NONE) */
COUNT(*) cnt
FROM ALL_SUMDELTA DL$, "HEMANT"."SOURCE_TABLE" MAS$
WHERE DL$.TABLEOBJ# = :1 AND DL$.TIMESTAMP > :2 AND DL$.TIMESTAMP <= :3
AND MAS$.ROWID BETWEEN DL$.LOWROWID AND DL$.HIGHROWID

select dmltype$$, count(*) cnt from "HEMANT"."MLOG$_SOURCE_TABLE"
where snaptime$$ > :1 and snaptime$$ <= :2
group by dmltype$$ order by dmltype$$

delete from "HEMANT"."MLOG$_SOURCE_TABLE"
where snaptime$$ <= :1


and this is the refresh of the target MV, a DELETE followed by a MERGE update :
DELETE FROM "HEMANT"."MV_OF_SOURCE" SNAP$ 
WHERE "ID" IN
(SELECT * FROM (SELECT MLOG$."ID"
FROM "HEMANT"."MLOG$_SOURCE_TABLE" MLOG$
WHERE "SNAPTIME$$" > :1 AND ("DMLTYPE$$" != 'I'))
AS OF SNAPSHOT(:B_SCN) )

/* MV_REFRESH (MRG) */ MERGE INTO "HEMANT"."MV_OF_SOURCE" "SNA$" USING
(SELECT * FROM (SELECT CURRENT$."ID",CURRENT$."DATA_ELEMENT_1",CURRENT$."DATA_ELEMENT_2",CURRENT$."DATE_COL"
FROM (SELECT "SOURCE_TABLE"."ID" "ID","SOURCE_TABLE"."DATA_ELEMENT_1" "DATA_ELEMENT_1","SOURCE_TABLE"."DATA_ELEMENT_2" "DATA_ELEMENT_2","SOURCE_TABLE"."DATE_COL" "DATE_COL"
FROM "SOURCE_TABLE" "SOURCE_TABLE") CURRENT$,
(SELECT DISTINCT MLOG$."ID" FROM "HEMANT"."MLOG$_SOURCE_TABLE" MLOG$ WHERE "SNAPTIME$$" > :1
AND ("DMLTYPE$$" != 'D')) LOG$ WHERE CURRENT$."ID" = LOG$."ID") AS OF SNAPSHOT(:B_SCN) )"AV$" ON ("SNA$"."ID" = "AV$"."ID")
WHEN MATCHED THEN UPDATE SET "SNA$"."DATA_ELEMENT_1" = "AV$"."DATA_ELEMENT_1","SNA$"."DATA_ELEMENT_2" = "AV$"."DATA_ELEMENT_2","SNA$"."DATE_COL" = "AV$"."DATE_COL"
WHEN NOT MATCHED THEN INSERT (SNA$."ID",SNA$."DATA_ELEMENT_1",SNA$."DATA_ELEMENT_2",SNA$."DATE_COL")
VALUES (AV$."ID",AV$."DATA_ELEMENT_1",AV$."DATA_ELEMENT_2",AV$."DATE_COL")


So, we see a large number of intensive operations against the MLOG$ Materialized View Log object.

And on the MV, there is a DELETE followed by a MERGE (UPDATE/INSERT).


Two takeaways :
1.  Updating the Source Table of a Materialized View can have noticeable overheads
2.  Refreshing a Materialized View takes some effort on the part of the database

(Did you notice the strange year 2100 date in the update of the MLOG$ table?)
.
.
.
.
.
.
Categories: DBA Blogs

Oracle GoldenGate Microservices Upgrade – 12.3.0.x/18.1.0.x to 19.1.0.0.x

DBASolved - Sun, 2019-09-08 16:45

Oracle GoldenGate Microservices has been out for a few years now. Customers across many different industries have adopted the architecture and run it in many different use-cases and architectures. But what do you do when you want to upgrade your Oracle GoldenGate Microservices Architecture?

In a previous post, Upgrading GoldenGate Microservices Architecture – GUI Based (January 2018), I wrote about how to upgrade Oracle GoldenGate Microservices using the GUI or HTML5 approach. Today, many of the steps are exactly the same as they were a year ago. The good news is that Oracle has documented the process a bit more clearly in the latest upgrade document (here).

So why a new post on upgrading the architecture? Over the last few days, I’ve been looking into a problem that has been reported by customers. This problem affects the upgrade process, not so much in how to do the upgrade but in what happens once the upgrade is done.

In a nutshell, the upgrade process for Oracle GoldenGate Microservices is done in these few steps:

1. Download the latest version of Oracle GoldenGate Microservices -> In this case: 19.1.0.0.1 (here); however, this approach will work with 19.1.0.0.2 as well.
2. Upload the software, if needed, to a staging area on the server where Oracle GoldenGate Microservices is running. Ideally, you should be upgrading from OGG 12c (12.3.x) or 18c (18.1.x).
3. Unzip the downloaded zip file to a temporary folder in the staging area
4. Execute runInstaller from the directory in the staging area. This will start the Oracle Universal Installer for Oracle GoldenGate.
5. Within the installation process, provide the Oracle GoldenGate Home for the Software Location.
6. Click Install to begin the installation into a New Oracle GoldenGate Home.

Note: At this point, you should have two Oracle GoldenGate Microservices Homes. One for the older version and one for the 19c version.

7. Login to the ServiceManager
8. Under Deployments -> select ServiceManager
9. Under Deployment Details -> select the pencil icon. This will open the edit field for the GoldenGate Home.
10. Edit the GoldenGate Home -> change to the new Oracle GoldenGate Microservices Home then click Apply.
This will force the ServiceManager to reboot.

At this point, you may be asking yourself, I’ve done everything but the ServiceManager has not come back up. What is going on?

If you have configured the ServiceManager as a daemon, you can try to start the ServiceManager by using the systemctl commands.

systemctl start OracleGoldenGate

 

This command will just return with nothing important. In order to find out whether it started successfully or not, check the status of the service.

systemctl status OracleGoldenGate
OracleGoldenGate.service - Oracle GoldenGate Service Manager
   Loaded: loaded (/etc/systemd/system/OracleGoldenGate.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Sun 2019-09-08 21:27:59 UTC; 2s ago
  Process: 3430 ExecStart=/opt/app/oracle/product/12.3.0/oggcore_1/bin/ServiceManager (code=killed, signal=SEGV)
 Main PID: 3430 (code=killed, signal=SEGV)


Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: Unit OracleGoldenGate.service entered failed state.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: OracleGoldenGate.service failed.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: OracleGoldenGate.service holdoff time over, scheduling restart.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: Stopped Oracle GoldenGate Service Manager.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: start request repeated too quickly for OracleGoldenGate.service
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: Failed to start Oracle GoldenGate Service Manager.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: Unit OracleGoldenGate.service entered failed state.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: OracleGoldenGate.service failed.

 

As you can tell the ServiceManager has failed to start. Why is this?

If you look at the output of the last systemctl status command, you see that the service is still referencing the old Oracle GoldenGate Microservices home.

Now the question becomes, how do I fix this?

The solution here is simple. Go to the deployment home for the ServiceManager and look under the bin directory. You will see the registerServiceManager.sh script. Edit this script and change the OGG_HOME variable to match the new Oracle GoldenGate Home for 19c.

$ cd /opt/app/oracle/gg_deployments/ServiceManager/bin
$ ls
registerServiceManager.sh
$ vi registerServiceManager.sh


#!/bin/bash

# Check if this script is being run as root user
if [[ $EUID -ne 0 ]]; then
  echo "Error: This script must be run as root."
  exit
fi


# OGG Software Home location
OGG_HOME="/opt/app/oracle/product/12.3.0/oggcore_1"      # <-- change this to the new 19c OGG_HOME

With the registerServiceManager.sh file edited, go back and re-run it as the root user.

# cd /opt/app/oracle/gg_deployments/ServiceManager/bin
# ./registerServiceManager.sh
Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.
----------------------------------------------------
     Oracle GoldenGate Install As Service Script
----------------------------------------------------
OGG_HOME=/opt/app/oracle/product/19.1.0/oggcore_1
OGG_CONF_HOME=/opt/app/oracle/gg_deployments/ServiceManager/etc/conf
OGG_VAR_HOME=/opt/app/oracle/gg_deployments/ServiceManager/var
OGG_USER=oracle
Running OracleGoldenGateInstall.sh…

With the service now updated, you can start and check the service.

# systemctl start OracleGoldenGate
# systemctl status OracleGoldenGate
OracleGoldenGate.service - Oracle GoldenGate Service Manager
   Loaded: loaded (/etc/systemd/system/OracleGoldenGate.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-09-08 21:39:58 UTC; 2s ago
 Main PID: 21946 (ServiceManager)
    Tasks: 13
   CGroup: /system.slice/OracleGoldenGate.service
           └─21946 /opt/app/oracle/product/19.1.0/oggcore_1/bin/ServiceManager

Sep 08 21:39:58 OGG12c219cUpgrade systemd[1]: Started Oracle GoldenGate Service Manager.
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: 2019-09-08T21:39:58.509+0000 INFO | Configuring user authorization secure store path as '/opt/app/oracle/gg_deployments/Serv...ureStore/'.
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: 2019-09-08T21:39:58.510+0000 INFO | Configuring user authorization as ENABLED.
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Oracle GoldenGate Service Manager for Oracle
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Version 19.1.0.0.0 OGGCORE_19.1.0.0.0_PLATFORMS_190508.1447
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Linux, x64, 64bit (optimized) on May  8 2019 18:17:50
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Operating system character set identified as UTF-8.
Hint: Some lines were ellipsized, use -l to show in full.


At this point, you can now log back into the ServiceManager and confirm that the upgrade was done successfully.

Note: If you have your ServiceManager configured to be manually started and stopped, then you will need to edit the startSM.sh and stopSM.sh files. The OGG_HOME has to be changed in these files as well.
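
A rough sketch of that edit (my own, assuming the start/stop scripts set OGG_HOME= the same way registerServiceManager.sh does; verify the variable in your files first):

cd /opt/app/oracle/gg_deployments/ServiceManager/bin
for f in startSM.sh stopSM.sh; do
  cp -p "$f" "$f".bak     # keep a backup of each script
  sed -i 's|^OGG_HOME=.*|OGG_HOME="/opt/app/oracle/product/19.1.0/oggcore_1"|' "$f"
done
grep OGG_HOME startSM.sh stopSM.sh     # confirm the new 19c home is in place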

Enjoy!!!

Categories: DBA Blogs

What does EP Stand for? A Simple Answer!

VitalSoftTech - Thu, 2019-09-05 06:52

If you’re an independent artist or a beginner musician in the competitive music industry, you’ve probably heard of the acronym EP before, but the real question is: what does EP stand for in music? Many new independent artists and musicians have now entered the market, and the music industry has become more competitive than ever. […]

The post What does EP Stand for? A Simple Answer! appeared first on VitalSoftTech.

Categories: DBA Blogs

September 27 Arizona Oracle User Group Meeting

Bobby Durrett's DBA Blog - Wed, 2019-09-04 10:30

The Arizona Oracle User Group (AZORA) is cranking up its meeting schedule again now that the blazing hot summer is starting to come to an end. Our next meeting is Friday, September 27, 2019 from 12:00 PM to 4:00 PM MST.

Here is the Meetup link: Meetup

Thank you to Republic Services for allowing us to meet in their fantastic training rooms.

Thanks also to OneNeck IT Solutions for sponsoring our lunch.

OneNeck’s Biju Thomas will speak about three highly relevant topics:

  • Oracle’s Autonomous Database — “What’s the Admin Role?”
  • Oracle Open World #OOW 19 Recap
  • Let’s Talk AI, ML, and DL

I am looking forward to learning something new about these areas of technology. We work in a constantly evolving IT landscape so learning about the latest trends can only help us in our careers. Plus, it should be interesting and fun.

I hope to see you there.

Bobby

Categories: DBA Blogs

London March 2020: “Oracle Indexing Internals and Best Practices” and “Oracle Performance Diagnostics and Tuning” Seminars !!

Richard Foote - Tue, 2019-09-03 06:44
It’s with great excitement that I announce I’ll finally be returning to London, UK in March 2020 to run both of my highly acclaimed seminars. The dates and registration links are as follows: 23-24 March 2020: “Oracle Indexing Internals and Best Practices” seminar – Tickets and Registration Link 25-26 March 2020: “Oracle Performance Diagnostics and […]
Categories: DBA Blogs

How are Rich Media Ads Different from Other Ad Formats?

VitalSoftTech - Tue, 2019-09-03 05:36

If you have clicked here, you’re probably wondering how rich media ads are different from other ad formats. You’ve probably heard the term “rich media ads” at least once in your foray across the advertising realm. But do you know how these interactive ads are different from other ad layouts? If you’re new to advertising, […]

The post How are Rich Media Ads Different from Other Ad Formats? appeared first on VitalSoftTech.

Categories: DBA Blogs

Oracle Database 19c Automatic Indexing: Default Index Column Order Part I (Anyway Anyhow Anywhere)

Richard Foote - Mon, 2019-09-02 08:06
The next thing I was curious about regarding Automatic Indexing was in which order would Oracle by default order the columns within an index. This can be a crucial decision with respect to the effectiveness of the index (but then again, may not be so crucial as well). Certainly one would expect the index column […]
Categories: DBA Blogs

Announcement: New “Oracle Indexing Internals and Best Practices” Webinar – 19-23 November 2019 in USA Friendly Time Zone

Richard Foote - Sun, 2019-09-01 23:54
I’m very excited to announce a new Webinar series for my highly acclaimed “Oracle Indexing Internals and Best Practices” training event, running between 19-23 November 2019 !! Indexes are fundamental to every Oracle database and are crucial for optimal performance. However, there’s an incredible amount of misconception, misunderstanding and pure myth regarding how Oracle indexes function […]
Categories: DBA Blogs

Getting started with Hyper-V on Windows 10

The Oracle Instructor - Fri, 2019-08-30 03:27

Microsoft Windows 10 comes with its own virtualization software called Hyper-V. Not for the Windows 10 Home edition, though.

Check if you fulfill the requirements by opening a CMD shell and typing in systeminfo:

The below part of the output from systeminfo should look like this:

If you see No there instead, you need to enable virtualization in your BIOS settings.

Next you go to Programs and Features and click on Turn Windows features on or off:

You need Administrator rights for that. Then tick the checkbox for Hyper-V:

That requires a restart at the end:

Afterwards you can use the Hyper-V Manager:

Hyper-V can do similar things to VMware or VirtualBox. It doesn’t play well together with VirtualBox in my experience, though: VirtualBox VMs refused to start with errors like “VT-x is not available” after I installed Hyper-V. I also found it a bit trickier to handle than VirtualBox, but that may just be because I’m less familiar with it.

The reason I use it now is that one of our customers who wants to do an Exasol Administration training cannot use VirtualBox – but Hyper-V is okay for them, so now it looks like that’s also an option. My testing so far shows that our educational cluster installation and management labs also work with Hyper-V.

Categories: DBA Blogs

Oracle 19c Automatic Indexing: How Many Executions Does It Take? (One Shot)

Richard Foote - Wed, 2019-08-28 23:16
One of the first questions I asked when playing with the new Oracle Database 19c Automatic Indexing feature was how many executions of an SQL does it take for a new index to be considered? To find out, I create the following table: I then ran the following query just once and checked to see […]
Categories: DBA Blogs
