Feed aggregator

Review: Blockchain for dummies

Dietrich Schroff - Fri, 2020-02-21 16:11
The book "blockchain for dummies" provided by IBM contains 6 chapters on 41 pages. (Download from IBM)

Chapters 1 & 2 describe the basics of blockchain technology.
From my point of view, this part is a bit too shallow, because there is not a single formula in this book ;-)
But some nice illustrations are given, and the differences between a public blockchain and corporate blockchains are well explained, including their consequences.

Chapter 3 is about where companies can use blockchains. I did not understand this chapter: it is all about frictions which companies have to overcome, but these frictions are not specific to blockchain (like "innovation restrictions").

Chapter 4 shows some examples, but they are very universal and, in my eyes, too abstract.

Chapter 5 contains information about a project of the Linux Foundation: Hyperledger.
It is a very nice introduction to https://www.hyperledger.org/.

Chapter 6 comes up with ten steps to your first blockchain application.
Skippable.

My conclusion: a very nice book, and really OK for free. But I wouldn't spend any money on it...

Speed up datapump export for migrating big databases

Yann Neuhaus - Fri, 2020-02-21 07:39
Introduction

Big Oracle databases (several TB) are still tough to migrate to another version on a new server. For most of them, you'll probably use an RMAN restore or Data Guard, but datapump is always a cleaner way to migrate. With datapump, you can easily migrate to a new filesystem (ASM for example), rethink your tablespace organization, reorganize all the segments, exclude unneeded components, and so on, all in one operation. But a datapump export can take hours and hours to complete. This blog post describes a method I used on several projects: it helped me a lot to optimize migration time.

Why does a datapump export take so much time?

First of all, exporting data with datapump means extracting all the objects from the database, so it's easy to understand why it's much slower than copying datafiles. Datapump speed mainly depends on the speed of the disks where the datafiles reside, and on the parallelism level. Increasing parallelism does not always speed up the export: on mechanical disks, reading multiple objects on the same disks concurrently can be slower than reading them serially. So there is some kind of limit, and for big databases the export can last hours. Another problem is that a long-lasting export needs more undo data. If your datapump export lasts 10 hours, you'll need 10 hours of undo_retention if you want a consistent dump (at least when testing the migration, because the application is still running). You're also risking DDL changes on the database during the export, and undo_retention cannot do anything about that. Be careful: an incomplete dump is still perfectly usable for import, but several objects will be missing, which is presumably not what you want.

The solution is to reduce the time needed for the datapump export, to avoid such problems.
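If you need a consistent dump from a live source, a minimal sketch could look like this (the directory name and parallel level are assumptions to adapt; flashback_time pins the whole export to a single point in time, which is exactly what consumes undo):

expdp \'/ as sysdba\' full=y directory=migration parallel=16 \
  flashback_time=systimestamp \
  dumpfile=expfull_%U.dmp logfile=expfull.log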

SSD is the solution

SSD is probably the best choice for today's databases: no more I/O bottleneck, which is all we were waiting for. But your source database, an old 11gR2 or 12cR1, probably doesn't run on SSDs, especially if it's a big database: SSDs were quite small and expensive several years ago. So what? You probably didn't plan an SSD migration on the source server, as you will decommission it as soon as the migration is finished.

The solution is to use a temporary server fitted with fast SSDs. You don't need a real production-grade server with a fully redundant configuration. You don't even need RAID to protect your data, because this server is for a single use only: JBOD is OK.

How to configure this server?

This server will have:

  • exactly the same OS as the source server, or something really similar
  • the exact same Oracle version
  • the same filesystem configuration
  • enough free space to restore the source database
  • SSD-only storage for datafiles, without redundancy
  • enough cores to maximise the parallelism level
  • a shared folder to put the dump in; this folder would also be mounted on the target server
  • a shared folder to pick up the latest backups from the source database
  • enough bandwidth for the shared folders: a 1Gbps network is only about 100MB/s, so don't expect very high speed from that kind of network
  • no listener (you don't need one)
  • no application connections: you'll never use this database for your application
  • if you're reusing a server, make sure it is dedicated to this purpose (no other running processes)
And regarding the license?

As you may know, this server would need a license. But you also know that during a migration project you'll have twice the licenses in use on your environment for several weeks: the old servers are still in use, while the new servers already host the migrated databases. To avoid any problem, you can use a server that previously ran Oracle databases and is already decommissioned. Fit it with SSDs and it will be fine. And please make sure to be fully compliant with the Oracle license on your target environment.

How to proceed?

We won't use this server as a one-shot path for the migration, because we need to check whether the method is good enough and also find the best settings for datapump.

To proceed, the steps are:

  • declare the database in /etc/oratab
  • create a pfile on the source server and copy it to $ORACLE_HOME/dbs on the temporary server
  • edit the parameters to remove references to the source environment, for example local_listener, remote_listener and Data Guard settings. The goal is to make sure that starting this database will have no impact on production
  • start the instance with this pfile
  • restore the controlfile from the very latest controlfile autobackup
  • restore the database
  • recover the database and check the SCN
  • take a new archivelog backup on the source database (to simulate the real scenario)
  • catalog the backup folder on the temporary database with RMAN
  • do another recover of the database on the temporary server; it should apply the archivelogs of the day, then check the SCN again
  • open the database with RESETLOGS
  • create the directory object for datapump in the database
  • run the datapump export with the maximum parallelism level (2 times the number of cores available on your server; it will be too many at the beginning, but not enough at the end). No need for flashback_scn here.

You can try various parallelism levels to find the best value. Once you've found it, you can schedule the real migration.
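As a rough sketch of how such tests could be scripted (a hypothetical loop; each full export takes hours on a big database, so in practice you would rather spread these runs over your test restores):

for PAR in 8 16 24 32; do
  expdp \'/ as sysdba\' full=y directory=migration parallel=${PAR} \
    dumpfile=test_par${PAR}_%U.dmp logfile=test_par${PAR}.log
done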

Production migration

Now that you've mastered the method, let's imagine that you plan to migrate production tonight at 18:00.

09:00 – have a cup of coffee first, you’ll need it!
09:15 – remove all the datafiles on the temporary server, also remove the redologs and controlfiles, and empty the FRA. Only keep the pfile.
09:30 – startup force your temporary database; it should stop in nomount mode
09:45 – restore the latest controlfile autobackup on the temporary database. Make sure no datafile will be added to production today
10:00 – restore the database on the temporary server. During the restore, production is still available on the source server. At the end of the restore, do a first recover, but DON'T open your database with RESETLOGS yet
18:00 – your restore should be finished by now; you can disconnect everyone from the source database and take the very latest archivelog backup on it. From now on, your application should be down.
18:20 – on your temporary database, catalog the backup folder with RMAN. It will discover the latest archivelog backups.
18:30 – do a recover of your temporary database again. It should apply the latest archivelogs (generated during the day). To make sure that everything is OK, check the current_scn on the source database: it should be nearly the same as on your temporary database
18:45 – open the temporary database with RESETLOGS
19:00 – run the datapump export with your optimal settings

Once done, you now have to run the datapump import on your target database. The parallelism level will depend on the cores available on the target server, and on the resources you want to preserve for the other databases already running on this server.
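A matching import on the target could look like the sketch below (directory, dump file pattern and parallel level are assumptions; the dumpfile pattern must match the pieces actually produced by the export):

impdp \'/ as sysdba\' full=y directory=migration \
  dumpfile=expfull_BP3_%U.dmp parallel=8 logfile=impfull_BP3.log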

Benefits and drawbacks

The obvious benefit is that applying the day's archivelogs on the temporary database probably takes less than 30 minutes, while the total duration of the export can be cut by several hours.

The first drawback is that you'll need a server of this kind, or you'll need to build one. The second drawback concerns Standard Edition: don't expect to save that many hours, as it has no parallelism at all. As you may know, big databases are not very well served by Standard Edition.

Real world example

This is a recent case. The source database is 12.1, about 2TB on mechanical disks. The datapump export was not working correctly: it lasted more than 19 hours, with lots of errors. One of the big problems with this database is a 1.8TB bigfile tablespace. Who did this kind of configuration?

The temporary server is an already decommissioned DEV server running the same version of Oracle and the same Linux kernel. It is fitted with enough terabytes of SSD; the mount paths were changed to match the source database filesystems.

On source server:

su - oracle
. oraenv <<< BP3
sqlplus / as sysdba
create pfile='/tmp/initBP3.ora' from spfile;
exit
scp /tmp/initBP3.ora oracle@db32-test:/tmp

On temporary server:
su - oracle
cp /tmp/initBP3.ora /opt/orasapq/oracle/product/12.1.0.2/dbs/
echo "BP3:/opt/orasapq/oracle/product/12.1.0.2:N" >> /etc/oratab
. oraenv <<< BP3
vi $ORACLE_HOME/dbs/initBP3.ora
# remove db_unique_name, dg_broker_start, fal_server, local_listener, log_archive_config, log_archive_dest_2, log_archive_dest_state_2, service_names from this pfile
sqlplus / as sysdba
startup force nomount;
exit
ls -lrt /backup/db42-prod/BP3/autobackup | tail -n 1
/backup/db42-prod/BP3/autobackup/c-2226533455-20200219-01
rman target /
restore controlfile from '/backup/db42-prod/BP3/autobackup/c-2226533455-20200219-01';
alter database mount;
CONFIGURE DEVICE TYPE DISK PARALLELISM 8 BACKUP TYPE TO BACKUPSET;
restore database;
...
recover database;
exit;

On source server:
Take a last backup of the archivelogs with your own script: the one used in your scheduled tasks.
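If no such script is at hand, a minimal placeholder could look like this (the format path is an assumption; it must point to the shared backup folder that the temporary server catalogs below):

rman target / <<EOF
backup as compressed backupset archivelog all not backed up
format '/backup/db42-prod/BP3/backupset/arch_%d_%T_%U';
EOF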

On temporary server:
su - oracle
. oraenv <<< BP3
rman target /
select current_scn from v$database;
CURRENT_SCN
-----------
11089172427
catalog start with '/backup/db42-prod/BP3/backupset/';
recover database;
select current_scn from v$database;
CURRENT_SCN
-----------
11089175474
alter database open resetlogs;
exit;
sqlplus / as sysdba
create or replace directory migration as '/backup/dumps/';
exit
expdp \'/ as sysdba\' full=y directory=migration dumpfile=expfull_BP3_`date +%Y%m%d_%H%M`_%U.dmp parallel=24 logfile=expfull_BP3_`date +%Y%m%d_%H%M`.log

The export was done in less than 5 hours, 4 times faster than on the source server. The database migration can now fit in one night. Much better, isn't it?

Other solutions

If you're used to Data Guard, you can create a standby on this temporary server dedicated to this purpose. There is no need to manually apply the latest archivelog backup of the day, because the standby is already in sync. Just convert this standby to a primary without impacting the source database, or do a simple switchover, then run the datapump export.

Transportable tablespaces are a mixed solution where the datafiles are copied to the destination database, and only the metadata is exported and imported. But don't expect any kind of reorganization here.

If you cannot afford several hours of downtime for the migration, you should consider logical replication. Solutions like Oracle GoldenGate are perfect for keeping the application running. But as you probably know, this comes at a cost.

Conclusion

If several hours of downtime is acceptable, datapump is still a good option for migration. Downtime is all about disk speed and parallelism.

The article Speed up datapump export for migrating big databases appeared first on Blog dbi services.

Downloading ppts from sessions

Tom Kyte - Thu, 2020-02-20 21:12
Hi all, what is the URL to the ppt for "BCS Office Hours - Multitenant Fundamentals & Hands On Lab"? Thanks, Paul
Categories: DBA Blogs

We Are so Appreciative for the Show of Support!

Oracle Press Releases - Thu, 2020-02-20 05:00

Ken Glueck, Executive Vice President, Oracle—Feb 20, 2020

NOTE: Before we turn to the more than 30 amicus briefs filed in support of Oracle at the Supreme Court, we are obligated to highlight the conduct of Google’s head of Global Affairs and Chief Legal Officer, Kent Walker. Over the past few months, Walker led a coercion campaign against companies and organizations that were likely to file on Oracle’s behalf to persuade them to stay silent.  We are aware of more than half a dozen contacts by Mr. Walker (or his representatives) to likely amici, but we probably only heard of a small piece of his efforts.

In our previous posts we detailed the facts in Google v. Oracle: Google copied verbatim 11,000 lines of Java code and then broke Java’s interoperability. We explained that Google knew fully that the Java code was subject to copyright but decided to copy it anyway and “make enemies along the way.”  We discussed IBM’s Jailbreak initiative, which was aborted because everyone understood—including Google and IBM—that Sun’s code was subject to a copyright license. 

We explained how there was never any confusion in the industry about how copyright was applied to software and no contemporaneous discussion whatsoever distinguishing between some code that’s copyrightable and other code that isn’t. All of this parsing of code was invented after the fact by Google. We discussed the impossibility of the Supreme Court drawing lines between some code and not other code (on a case-by-case basis), without undermining copyright protection for all computer programs, which is exactly Google’s intent. Lastly, we explained that Google’s business model is predicated on monetizing the content of others so its economic interests are correlated to weak intellectual property protection. And that is exactly why most members of the technology community declined to file briefs on Google’s behalf. 

More than 30 businesses, organizations, and individuals filed amicus briefs with the Supreme Court. The numerous amicus briefs filed on our behalf largely reflect actual owners of copyrights that have a direct stake in the outcome of this matter, and I wanted to highlight a few of them here. Most importantly, the totality of the briefs make an overwhelming case for the court to reject Google’s attempt to retroactively carve itself out of the law.

To start, the United States Solicitor General filed a brief in support of Oracle on behalf of the United States Government. The Solicitor General’s office will also participate in oral arguments before the Supreme Court, making clear that longstanding U.S. intellectual property policy is fundamentally at odds with Google’s position. It’s really hard to overstate how strong the Solicitor General’s brief is on Oracle’s behalf.  For example, the SG states “Contrary to [Google]’s contention, the Copyright Office has never endorsed the kind of copying in which [Google] engaged." … "[Google] declined to take [the open source] or any other license, despite 'lengthy licensing negotiations' with [Oracle].  Instead, [Google] simply appropriated the material it wanted." And, “[T]he fair use doctrine does not permit a new market entrant to copy valuable parts of an established work simply to attract fans to its own competing commercial product. To the contrary, copying ‘to get attention or to avoid the drudgery in working up something fresh’ actively disserves copyright’s goals.”

“[Google’s] approach [to copyrightability] is especially misguided because the particular post-creation changed circumstance on which it relies—i.e., developers’ acquired familiarity with the calls used to invoke various methods in the Java Standard Library—is a direct result of the Library’s marketplace success.” The SG continued, “Google designed its Android platform in a manner that made it incompatible with the Java platform. Pet. App. 46a n.11. Petitioner thus is not seeking to ensure that its new products are compatible with a ‘legacy product’ (Pet. Br. 26). Petitioner instead created a competing platform and copied thousands of lines of code from the Java Standard Library in order to attract software developers familiar with respondent’s work.”

And the SG stated, “The court of appeals correctly held that petitioner’s verbatim copying of respondent’s original computer code into a competing commercial product was not fair use.” Lastly, “the record contained  ‘overwhelming’  evidence that petitioner’s copying harmed the market for the Java platform.”

A brief by several songwriters and the Songwriters Guild explains that much like Oracle’s Java software, a large portion of music streams on YouTube are misappropriated for the good of Google and Google alone—“Through YouTube, Google profits directly from verbatim copies of Amici’s own works. These copies are unauthorized, unlicensed, and severely under-monetized.”

A brief filed by Recording Industry Association of America, National Music Publishers Association, and the American Association of Independent Music makes clear that its “members depend on an appropriately balanced fair use doctrine that furthers the purposes of copyright law, including the rights to control the reproduction and distribution of copyrighted works, to create derivative works, and to license the creation of derivative works.”

Briefs were filed expressing similar concerns from a broad spectrum of the creative community, including journalists, book publishers, photographers, authors, and the motion picture industry. Google’s attempts to retroactively justify a clear act of infringement with novel theories of software copyright and fair use have alarmed nearly every segment of the artistic and creative community.

Another amicus brief from the News Media Alliance (over 2,000 news media organizations), explains how Google Search, Google News and other online platforms appropriate vast quantities of its members’ journalistic output, and reproduces it to displace the original creative content. They point out that, as journalists, they often sit on both sides of the “fair use” defense, but warn that they “cannot stand silent when entire digital industries are built, and technology companies seek to achieve and maintain dominance, by the overly aggressive assertion of fair use as Google does in this case.”

USTelecom, the national trade association representing the nation's broadband industry, including technology providers, innovators, suppliers, and manufacturers, notes that its members are poised to invest $350 billion in their software-driven networks over the next several years, laying the foundation for 5G. Software interfaces are also important for network providers to “enable interoperability among technologies, networks, and devices,” and “while telecommunications providers must share access to their software interfaces, they also must retain their exclusive property rights in their implementation of these interfaces if they are to ensure network security and resiliency, protect their customers’ privacy, innovate and compete.”

We were pleased that some of the most prominent names in technology—who were contemporaneous witnesses to Google’s theft—have filed amicus briefs in support of Oracle’s position, including Scott McNealy, the longtime CEO of Sun Microsystems, and Joe Tucci, the longtime CEO of EMC Corporation. Mr. Tucci states, “as the numbers and ever-increasing success show, the system is working. Accepting Google’s invitation to upend that system by eliminating copyright protection for creative and original computer software code would not make the system better—it would instead have sweeping and harmful effects throughout the software industry.”

Several of our amici note in their briefs that the Constitution includes copyright protection in Article I, Section 8. As Consumers’ Research explains in their brief, “to the Founders, copyrights were not just a way to encourage innovation, but also to protect people’s inherent rights in the fruits of their labor. Any conception of copyright that ignores the latter is both incomplete and inconsistent with the original understanding of the Copyright Clause.”

One of the key points Oracle makes in our brief to the Court is the clear Congressional intent and action to provide full copyright protection to software, and the longstanding refusal by Congress to create any distinctions between different types of software code (such as “interfaces”).

Several of our amici reinforce this fact, none less authoritative than the former Senate and House Judiciary Chairmen. Former Senators Orrin Hatch and Dennis DeConcini, and former Congressman Bob Goodlatte make it clear that Google’s invitation to the Court to carve out some ill-defined category of “interfaces” from the Copyright Act’s full protection of all software code is contrary to the intent of Congress and plain language of the statute. According to the former Chairmen, “[B]oth the text and history of the Copyright Act show that Congress accorded computer programs full copyright protection, with no carve-out for some undefined subset of software.”

Furthermore, the former Members state, it would be beyond the purview of the Court to respond to Google and its amici’s policy arguments in favor of creating new standards of copyrightability and fair use for different, loosely defined categories of software. “This Court should not undermine [Congress’s] legislative judgment … by creating the loopholes to copyrightability and fair use that Google requests.”

The Members further point out, “to the extent that Google has a different, less-protective vision for the federal copyright regime, it is ‘free to seek action from Congress.’ (quoting the Solicitor General). Thus far, Congress has not seen fit to take such action, notwithstanding its recent comprehensive review of the federal copyright laws, which directly examined the scope of copyright protection and technological innovation. This Court should not diminish copyright protections for computer programs where Congress, as is its constitutional prerogative, has chosen to refrain from doing so for four decades.”

The Members’ points are given further emphasis by the extremely important brief from Professor Arthur Miller, who was a Presidential appointee to the National Commission on New Technological Uses of Copyrighted Works (“CONTU”), where he served on the Software Subcommittee. Professor Miller forcefully rebuts Google’s contention that the Java code it copied should be denied protection either because it was so popular or because it was in some category of un-protectable software it refers to as “interfaces.”

Congress had good reason not to enact a popularity exception to copyright. As an initial matter, such an exception would lure the courts into a hopeless exercise in line-drawing: Just how popular must a work become before the creator is penalized with loss of protection?… Nor does calling the copied material an “interface” aid in the line-drawing exercise. Though that term “may seem precise * * * it really has no specific meaning in programming. Certainly, it has no meaning that has any relevance to copyright principles.” (citing his seminal Harvard Law Review article on software copyright). “Any limitation on the protection of ‘interfaces’ thus would be a limitation on the protection of much of the valuable expression in programs, and would invite plagiarists to label as an ‘interface’ whatever they have chosen to copy without permission.” Ibid. More importantly, a popularity exception would eviscerate the goal of the Copyright Act, which is to promote advancements. “The purpose of copyright is to create incentives for creative effort.” Sony v. Universal City Studios. But advance too far and create widely desired work, petitioner warns, and risk losing copyright protection altogether; anyone will be able to copy the previously protected material by claiming that doing so was “necessary.” That logic is head-scratching. “[P]romoting the unauthorized copying of interfaces penalizes the creative effort of the original designer, something that runs directly counter to the core purposes of copyright law because it may freeze or substantially impede human innovation and technological growth.” (citing Miller Harvard Law Review article).

This history of strong copyright protection is further explained in the brief by the Committee for Justice: “The framers of the U.S. Constitution designed that document to protect the right to property. It was understood that strong property rights were fundamental to freedom and prosperity. The Constitution’s Copyright Clause is a critical part of this project. The clause empowers Congress to enact laws to protect intellectual property, which was understood to be worthy of protection in the same sense and to the same degree that tangible property is. Congress has taken up the task by enacting a series of Copyright Acts that have steadily expanded the protection afforded intellectual property. This, in turn, has led to a robust and thriving market for intellectual property.”

Likewise, the American Conservative Union Foundation, Internet Accountability Project, and American Legislative Exchange Council all recount the long history of copyright protections, going back to the Constitution, and the importance of maintaining a system of strong intellectual property rights. They also weigh in against Google’s fair use defense.

Similarly the Hudson Institute makes the point that if the Supreme Court were to adopt Google’s breathtakingly expansive view of fair use, it would “provide a roadmap to foreign actors like China to circumvent U.S. and international copyright protection for computer code and other works. Such a roadmap, if adopted by this Court, will remove the brighter lines and greater clarity provided by the decision below, and would eliminate a significant tool for private and governmental enforcement of IP rights.”

In separate briefs, two large software companies, Synopsys and SAS Institute, explain how the use of software code and “interfaces” actually works in the real world. Synopsys explains that the purpose of its brief is “to challenge the notion, offered by Google and its amici, that the copying of someone else’s code is a mainstay of the computer programming world. It is simply not true that ‘everybody does it,’ and that software piracy allows for lawful innovative entrepreneurship, as Google suggests.”

SAS takes head-on the absurdity of Google’s professed interest in “interoperability” as the pretense for its unlicensed use of Oracle’s code. “Google copied the software interfaces not because it wanted Android applications to interoperate with Java, but so it could attract Java programmers for Android to replace Java. ‘Unrebutted evidence’ showed ‘that Google specifically designed Android to be incompatible with the Java platform and not allow for interoperability with Java programs.’ (citing Fed. Circuit decision). No case has found fair use where the defendant copied to produce an incompatible product.”

SAS also provides a powerful rebuttal to Google’s request for the Court to create new judicial carve-out from the Copyright Act for software “interfaces.” “There are unlimited ways to write interfaces, and nothing justifies removing them from what the Copyright Act expressly protects. To the contrary, the user-friendly expressive choices Sun made became critical to Java’s success. The thousands of lines of Java declaring code and the organization Google copied are intricate, creative expression. … The creativity is undeniable. ‘Google’s own ‘Java guru’ conceded that there can be ‘creativity and artistry even in a single method declaration.’” (citing Fed. Circuit decision) SAS goes on to provide detailed examples of the creative expression in declaring code.

I’ll conclude with the powerful brief of the former Register of Copyrights, Ralph Oman.  Mr. Oman forcefully rebuts the “sky is falling” rhetoric of Google and its amici regarding copyrights and software.

Copyright protection has spurred greater creativity, competition, and technological advancement, fueling an unprecedented period of intellectual growth and one of America’s greatest economic sectors today—software development. While Congress is of course free to revisit the application of copyright to software if it believes changes to the current regime are warranted, there is no basis for this Court to assume that policymaking role here. Instead, this Court should give effect to Congress’s intent, as embodied in the 1976 Act and its subsequent amendments, that traditional copyright principles apply to software just as these principles apply to other works. Applying those principles to the record in this case, the Federal Circuit properly concluded that Google’s conceded copying of the APIs infringed Oracle’s copyrights. While the technology at issue may be novel, the result that such free riding is not allowed is as old as copyright law itself.

We are grateful for this diverse, influential group of more than 30 amici, which we are certain will provide important, valuable insight to the Court in its deliberations.

We know that many of them have spoken up despite Google’s campaign of intimidation, which makes us even more appreciative.

Handy TensorFlow.js API for Client-Side ML Development

Andrejus Baranovski - Thu, 2020-02-20 01:45
Let's look into the TensorFlow.js API for training data handling, training execution, and inference. TensorFlow.js is awesome because it brings Machine Learning into the hands of Web developers, and the benefit is mutual: the Machine Learning field gets more developers and supporters, while Web development becomes more powerful with the support of Machine Learning.


Read more - Handy TensorFlow.js API for Client-Side ML Development.

OCID & Its Importance In Oracle Cloud (OCI)

Online Apps DBA - Wed, 2020-02-19 23:31

Most Oracle Cloud Infrastructure (OCI) resources are assigned a unique ID called the Oracle Cloud Identifier (OCID), which is used to uniquely identify each resource of an Oracle Cloud Infrastructure tenancy. Check out K21 Academy’s blog post at https://k21academy.com/oci62 that covers: • Overview Of OCID • OCID Syntax • How To Find The OCID Of […]

The post OCID & Its Importance In Oracle Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Simplify RMAN Restore With Meaningful Tag

Michael Dinh - Wed, 2020-02-19 15:44

Here is a simple demo of how to restore an RMAN backup in case of a failed migration, using an RMAN tag.

List backup from File System:

[oracle@db-fs-1 ~]$ ls -alrt /u01/backup/*MIGRATION*
-rw-r----- 1 oracle oinstall  12886016 Feb 18 21:56 /u01/backup/HAWK_3291419015_20200218_1cuosf50_1_1_MIGRATION_44
-rw-r----- 1 oracle oinstall   1073152 Feb 18 21:56 /u01/backup/HAWK_3291419015_20200218_1duosf58_1_1_MIGRATION_45
-rw-r----- 1 oracle oinstall 112263168 Feb 18 21:57 /u01/backup/HAWK_3291419015_20200218_1buosf50_1_1_MIGRATION_43
-rw-r----- 1 oracle oinstall 212926464 Feb 18 21:57 /u01/backup/HAWK_3291419015_20200218_1auosf50_1_1_MIGRATION_42
-rw-r----- 1 oracle oinstall   2946560 Feb 18 21:57 /u01/backup/HAWK_3291419015_20200218_1euosf61_1_1_MIGRATION_46
-rw-r----- 1 oracle oinstall    114688 Feb 18 21:57 /u01/backup/HAWK_3291419015_20200218_1fuosf63_1_1_MIGRATION_47
-rw-r----- 1 oracle oinstall   1114112 Feb 18 21:57 /u01/backup/HAWK_3291419015_20200218_1guosf64_1_1_MIGRATION_48
-rw-r----- 1 oracle oinstall      3584 Feb 18 21:57 /u01/backup/HAWK_3291419015_20200218_1iuosf67_1_1_MIGRATION_50
-rw-r----- 1 oracle oinstall   2946560 Feb 18 21:57 /u01/backup/HAWK_3291419015_20200218_1huosf67_1_1_MIGRATION_49
-rw-r----- 1 oracle oinstall   1114112 Feb 18 21:57 /u01/backup/CTL_HAWK_3291419015_20200218_1juosf6a_1_1_MIGRATION_51
-rw-r----- 1 oracle oinstall      3584 Feb 18 21:57 /u01/backup/CTL_HAWK_3291419015_20200218_1kuosf6c_1_1_MIGRATION_52
-rw-r----- 1 oracle oinstall    114688 Feb 18 21:57 /u01/backup/CTL_HAWK_3291419015_20200218_1luosf6e_1_1_MIGRATION_53
-rw-r----- 1 oracle oinstall   1114112 Feb 18 21:57 /u01/backup/CTL_HAWK_3291419015_20200218_1muosf6f_1_1_MIGRATION_54
[oracle@db-fs-1 ~]$

List backup from RMAN:

[oracle@db-fs-1 ~]$ rman target /

Recovery Manager: Release 12.2.0.1.0 - Production on Wed Feb 19 04:21:17 2020

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

connected to target database: HAWK (DBID=3291419015)

RMAN> list backup summary tag='MIGRATION';


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
39      B  0  A DISK        2020-FEB-18 21:56:52 1       1       YES        MIGRATION
40      B  0  A DISK        2020-FEB-18 21:56:56 1       1       YES        MIGRATION
41      B  0  A DISK        2020-FEB-18 21:57:11 1       1       YES        MIGRATION
42      B  0  A DISK        2020-FEB-18 21:57:17 1       1       YES        MIGRATION
43      B  A  A DISK        2020-FEB-18 21:57:22 1       1       YES        MIGRATION
44      B  F  A DISK        2020-FEB-18 21:57:23 1       1       YES        MIGRATION
45      B  F  A DISK        2020-FEB-18 21:57:25 1       1       YES        MIGRATION
46      B  A  A DISK        2020-FEB-18 21:57:27 1       1       YES        MIGRATION
47      B  A  A DISK        2020-FEB-18 21:57:27 1       1       YES        MIGRATION
48      B  F  A DISK        2020-FEB-18 21:57:31 1       1       YES        MIGRATION
49      B  A  A DISK        2020-FEB-18 21:57:32 1       1       YES        MIGRATION
50      B  F  A DISK        2020-FEB-18 21:57:34 1       1       YES        MIGRATION
52      B  F  A DISK        2020-FEB-18 21:57:36 1       1       YES        MIGRATION

RMAN> list backup of database summary tag='MIGRATION';


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
39      B  0  A DISK        2020-FEB-18 21:56:52 1       1       YES        MIGRATION
40      B  0  A DISK        2020-FEB-18 21:56:56 1       1       YES        MIGRATION
41      B  0  A DISK        2020-FEB-18 21:57:11 1       1       YES        MIGRATION
42      B  0  A DISK        2020-FEB-18 21:57:17 1       1       YES        MIGRATION

RMAN> list backup of archivelog all summary tag='MIGRATION';


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
43      B  A  A DISK        2020-FEB-18 21:57:22 1       1       YES        MIGRATION
46      B  A  A DISK        2020-FEB-18 21:57:27 1       1       YES        MIGRATION
47      B  A  A DISK        2020-FEB-18 21:57:27 1       1       YES        MIGRATION
49      B  A  A DISK        2020-FEB-18 21:57:32 1       1       YES        MIGRATION

RMAN> list backup of controlfile summary tag='MIGRATION';


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
45      B  F  A DISK        2020-FEB-18 21:57:25 1       1       YES        MIGRATION
48      B  F  A DISK        2020-FEB-18 21:57:31 1       1       YES        MIGRATION
52      B  F  A DISK        2020-FEB-18 21:57:36 1       1       YES        MIGRATION

RMAN> list backup of spfile summary tag='MIGRATION';


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
44      B  F  A DISK        2020-FEB-18 21:57:23 1       1       YES        MIGRATION
50      B  F  A DISK        2020-FEB-18 21:57:34 1       1       YES        MIGRATION

RMAN> list backupset 42,49,50,52;


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
42      Incr 0  203.05M    DISK        00:00:29     2020-FEB-18 21:57:17
        BP Key: 42   Status: AVAILABLE  Compressed: YES  Tag: MIGRATION
        Piece Name: /u01/backup/HAWK_3291419015_20200218_1auosf50_1_1_MIGRATION_42
        Keep: BACKUP_LOGS        Until: 2020-AUG-18 21:56:48
  List of Datafiles in backup set 42
  File LV Type Ckp SCN    Ckp Time             Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- -------------------- ----------- ------ ----
  1    0  Incr 1428959    2020-FEB-18 21:56:48              NO    /u02/oradata/HAWK/datafile/o1_mf_system_h4s874gt_.dbf

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
49      3.00K      DISK        00:00:00     2020-FEB-18 21:57:32
        BP Key: 49   Status: AVAILABLE  Compressed: YES  Tag: MIGRATION
        Piece Name: /u01/backup/CTL_HAWK_3291419015_20200218_1kuosf6c_1_1_MIGRATION_52
        Keep: BACKUP_LOGS        Until: 2020-AUG-18 21:57:32

  List of Archived Logs in backup set 49
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    3       1429002    2020-FEB-18 21:57:26 1429040    2020-FEB-18 21:57:32

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
50      Full    96.00K     DISK        00:00:00     2020-FEB-18 21:57:34
        BP Key: 50   Status: AVAILABLE  Compressed: YES  Tag: MIGRATION
        Piece Name: /u01/backup/CTL_HAWK_3291419015_20200218_1luosf6e_1_1_MIGRATION_53
        Keep: BACKUP_LOGS        Until: 2020-AUG-18 21:57:33
  SPFILE Included: Modification time: 2020-FEB-18 21:51:45
  SPFILE db_unique_name: HAWK

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
52      Full    1.05M      DISK        00:00:01     2020-FEB-18 21:57:36
        BP Key: 52   Status: AVAILABLE  Compressed: YES  Tag: MIGRATION
        Piece Name: /u01/backup/CTL_HAWK_3291419015_20200218_1muosf6f_1_1_MIGRATION_54
        Keep: BACKUP_LOGS        Until: 2020-AUG-18 21:57:35
  Control File Included: Ckp SCN: 1429047      Ckp time: 2020-FEB-18 21:57:35

RMAN>

You are probably wondering why BS 49, with Piece Name /u01/backup/CTL_HAWK_3291419015_20200218_1kuosf6c_1_1_MIGRATION_52, contains an archivelog. The reason is the KEEP clause in the backup script below: as the log snippet shows, the archived logs required to recover from a KEEP backup are backed up along with it.

RMAN backup script:

[oracle@db-fs-1 ~]$ cat /u01/backup/backup_keep.rman
spool log to /u01/backup/rman_keep_backup_migration.log
connect target;
set echo on
show all;
run {
allocate channel c1 device type disk format '/u01/backup/%d_%I_%T_%U_MIGRATION_%s' MAXPIECESIZE 2G MAXOPENFILES 1;
allocate channel c2 device type disk format '/u01/backup/%d_%I_%T_%U_MIGRATION_%s' MAXPIECESIZE 2G MAXOPENFILES 1;
allocate channel c3 device type disk format '/u01/backup/%d_%I_%T_%U_MIGRATION_%s' MAXPIECESIZE 2G MAXOPENFILES 1;
backup as compressed backupset incremental level 0 filesperset 1 check logical database
keep until time 'ADD_MONTHS(SYSDATE,6)' TAG='MIGRATION';
backup as compressed backupset archivelog from time 'trunc(sysdate)'
filesperset 8
keep until time 'ADD_MONTHS(SYSDATE,6)' TAG='MIGRATION';
}
run {
allocate channel c1 device type disk format '/u01/backup/CTL_%d_%I_%T_%U_MIGRATION_%s';
backup as compressed backupset current controlfile
keep until time 'ADD_MONTHS(SYSDATE,6)' TAG='MIGRATION';
}
LIST BACKUP OF DATABASE SUMMARY TAG='MIGRATION';
LIST BACKUP OF ARCHIVELOG ALL SUMMARY TAG='MIGRATION';
LIST BACKUP OF CONTROLFILE TAG='MIGRATION';
report schema;
exit
[oracle@db-fs-1 ~]$

RMAN backup log snippet:

allocated channel: c1
channel c1: SID=57 device type=DISK

Starting backup at 2020-FEB-18 21:57:30

backup will be obsolete on date 2020-AUG-18 21:57:30
archived logs required to recover from this backup will be backed up
channel c1: starting compressed full datafile backup set
channel c1: specifying datafile(s) in backup set
including current control file in backup set
channel c1: starting piece 1 at 2020-FEB-18 21:57:31
channel c1: finished piece 1 at 2020-FEB-18 21:57:32
piece handle=/u01/backup/CTL_HAWK_3291419015_20200218_1juosf6a_1_1_MIGRATION_51 tag=MIGRATION comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
current log archived

backup will be obsolete on date 2020-AUG-18 21:57:32
archived logs required to recover from this backup will be backed up
channel c1: starting compressed archived log backup set
channel c1: specifying archived log(s) in backup set

******* input archived log thread=1 sequence=3 RECID=30 STAMP=1032731852 *******

channel c1: starting piece 1 at 2020-FEB-18 21:57:32
channel c1: finished piece 1 at 2020-FEB-18 21:57:33
piece handle=/u01/backup/CTL_HAWK_3291419015_20200218_1kuosf6c_1_1_MIGRATION_52 tag=MIGRATION comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01

Restore backup from RMAN:

RMAN> startup force nomount;

Oracle instance started

Total System Global Area     805306368 bytes

Fixed Size                     8625856 bytes
Variable Size                314573120 bytes
Database Buffers             478150656 bytes
Redo Buffers                   3956736 bytes

RMAN> restore controlfile from '/u01/backup/CTL_HAWK_3291419015_20200218_1muosf6f_1_1_MIGRATION_54';

Starting restore at 2020-FEB-19 03:41:37
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u02/fra/HAWK/controlfile/o1_mf_h4r8xqh6_.ctl
Finished restore at 2020-FEB-19 03:41:38

RMAN> alter database mount;

RMAN> catalog start with '/u01/backup' noprompt;

RMAN> restore database preview summary from tag='MIGRATION';

Starting restore at 2020-FEB-19 03:43:05
using channel ORA_DISK_1


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
42      B  0  A DISK        2020-FEB-18 21:57:17 1       1       YES        MIGRATION
41      B  0  A DISK        2020-FEB-18 21:57:11 1       1       YES        MIGRATION
39      B  0  A DISK        2020-FEB-18 21:56:52 1       1       YES        MIGRATION
40      B  0  A DISK        2020-FEB-18 21:56:56 1       1       YES        MIGRATION


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
47      B  A  A DISK        2020-FEB-18 21:57:27 1       1       YES        MIGRATION
recovery will be done up to SCN 1428959
Media recovery start SCN is 1428959
Recovery must be done beyond SCN 1428964 to clear datafile fuzziness
Finished restore at 2020-FEB-19 03:43:05

RMAN> restore database from tag='MIGRATION';
RMAN> recover database until scn 1428965;

Starting recover at 2020-FEB-19 03:44:45
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 02/19/2020 03:44:45
RMAN-20208: UNTIL CHANGE is before RESETLOGS change

RMAN> list incarnation of database;


List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       HAWK     3291419015       PARENT  1          2017-JAN-26 13:52:29
2       2       HAWK     3291419015       PARENT  1408558    2020-FEB-18 18:49:45
3       3       HAWK     3291419015       PARENT  1424305    2020-FEB-18 20:02:49
4       4       HAWK     3291419015       PARENT  1425161    2020-FEB-18 20:19:50
5       5       HAWK     3291419015       PARENT  1425162    2020-FEB-18 20:33:05
6       6       HAWK     3291419015       PARENT  1426203    2020-FEB-18 21:13:15
7       7       HAWK     3291419015       CURRENT 1428966    2020-FEB-18 22:05:54

RMAN> recover database until scn 1428967;

Starting recover at 2020-FEB-19 03:47:41
using channel ORA_DISK_1

starting media recovery

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=1
channel ORA_DISK_1: reading from backup piece /u01/backup/HAWK_3291419015_20200218_1huosf67_1_1_MIGRATION_49
channel ORA_DISK_1: piece handle=/u01/backup/HAWK_3291419015_20200218_1huosf67_1_1_MIGRATION_49 tag=MIGRATION
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u02/oradata/HAWK/archivelog/2020_02_19/o1_mf_1_1_h4s8gfjc_.arc thread=1 sequence=1
archived log file name=/u02/oradata/HAWK/archivelog/2020_02_19/o1_mf_1_1_h4rx8c8b_.arc thread=1 sequence=1
channel default: deleting archived log(s)
archived log file name=/u02/oradata/HAWK/archivelog/2020_02_19/o1_mf_1_1_h4s8gfjc_.arc RECID=32 STAMP=1032752861
media recovery complete, elapsed time: 00:00:00
Finished recover at 2020-FEB-19 03:47:42

RMAN> alter database open resetlogs;

Statement processed

RMAN> report schema;

Report of database schema for database with db_unique_name HAWK

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    800      SYSTEM               YES     /u02/oradata/HAWK/datafile/o1_mf_system_h4s874gt_.dbf
3    470      SYSAUX               NO      /u02/oradata/HAWK/datafile/o1_mf_sysaux_h4s86of7_.dbf
4    70       UNDOTBS1             YES     /u02/oradata/HAWK/datafile/o1_mf_undotbs1_h4s86kbl_.dbf
7    5        USERS                NO      /u02/oradata/HAWK/datafile/o1_mf_users_h4s86ncz_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       /u02/oradata/HAWK/datafile/o1_mf_temp_h4s8jc3n_.tmp

RMAN> delete force noprompt backup tag='MIGRATION';

using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=53 device type=DISK

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
39      39      1   1   AVAILABLE   DISK        /u01/backup/HAWK_3291419015_20200218_1cuosf50_1_1_MIGRATION_44
40      40      1   1   AVAILABLE   DISK        /u01/backup/HAWK_3291419015_20200218_1duosf58_1_1_MIGRATION_45
41      41      1   1   AVAILABLE   DISK        /u01/backup/HAWK_3291419015_20200218_1buosf50_1_1_MIGRATION_43
42      42      1   1   AVAILABLE   DISK        /u01/backup/HAWK_3291419015_20200218_1auosf50_1_1_MIGRATION_42
43      43      1   1   AVAILABLE   DISK        /u01/backup/HAWK_3291419015_20200218_1euosf61_1_1_MIGRATION_46
44      44      1   1   AVAILABLE   DISK        /u01/backup/HAWK_3291419015_20200218_1fuosf63_1_1_MIGRATION_47
45      45      1   1   AVAILABLE   DISK        /u01/backup/HAWK_3291419015_20200218_1guosf64_1_1_MIGRATION_48
46      46      1   1   AVAILABLE   DISK        /u01/backup/HAWK_3291419015_20200218_1iuosf67_1_1_MIGRATION_50
47      47      1   1   AVAILABLE   DISK        /u01/backup/HAWK_3291419015_20200218_1huosf67_1_1_MIGRATION_49
48      48      1   1   AVAILABLE   DISK        /u01/backup/CTL_HAWK_3291419015_20200218_1juosf6a_1_1_MIGRATION_51
49      49      1   1   AVAILABLE   DISK        /u01/backup/CTL_HAWK_3291419015_20200218_1kuosf6c_1_1_MIGRATION_52
50      50      1   1   AVAILABLE   DISK        /u01/backup/CTL_HAWK_3291419015_20200218_1luosf6e_1_1_MIGRATION_53
52      52      1   1   AVAILABLE   DISK        /u01/backup/CTL_HAWK_3291419015_20200218_1muosf6f_1_1_MIGRATION_54
deleted backup piece
backup piece handle=/u01/backup/HAWK_3291419015_20200218_1cuosf50_1_1_MIGRATION_44 RECID=39 STAMP=1032731809
deleted backup piece
backup piece handle=/u01/backup/HAWK_3291419015_20200218_1duosf58_1_1_MIGRATION_45 RECID=40 STAMP=1032731816
deleted backup piece
backup piece handle=/u01/backup/HAWK_3291419015_20200218_1buosf50_1_1_MIGRATION_43 RECID=41 STAMP=1032731808
deleted backup piece
backup piece handle=/u01/backup/HAWK_3291419015_20200218_1auosf50_1_1_MIGRATION_42 RECID=42 STAMP=1032731808
deleted backup piece
backup piece handle=/u01/backup/HAWK_3291419015_20200218_1euosf61_1_1_MIGRATION_46 RECID=43 STAMP=1032731841
deleted backup piece
backup piece handle=/u01/backup/HAWK_3291419015_20200218_1fuosf63_1_1_MIGRATION_47 RECID=44 STAMP=1032731843
deleted backup piece
backup piece handle=/u01/backup/HAWK_3291419015_20200218_1guosf64_1_1_MIGRATION_48 RECID=45 STAMP=1032731845
deleted backup piece
backup piece handle=/u01/backup/HAWK_3291419015_20200218_1iuosf67_1_1_MIGRATION_50 RECID=46 STAMP=1032731847
deleted backup piece
backup piece handle=/u01/backup/HAWK_3291419015_20200218_1huosf67_1_1_MIGRATION_49 RECID=47 STAMP=1032731847
deleted backup piece
backup piece handle=/u01/backup/CTL_HAWK_3291419015_20200218_1juosf6a_1_1_MIGRATION_51 RECID=48 STAMP=1032731851
deleted backup piece
backup piece handle=/u01/backup/CTL_HAWK_3291419015_20200218_1kuosf6c_1_1_MIGRATION_52 RECID=49 STAMP=1032731852
deleted backup piece
backup piece handle=/u01/backup/CTL_HAWK_3291419015_20200218_1luosf6e_1_1_MIGRATION_53 RECID=50 STAMP=1032731854
deleted backup piece
backup piece handle=/u01/backup/CTL_HAWK_3291419015_20200218_1muosf6f_1_1_MIGRATION_54 RECID=52 STAMP=1032752561
Deleted 13 objects


RMAN> exit


Recovery Manager complete.

[oracle@db-fs-1 ~]$ ls -alrt /u01/backup/
total 28
drwxrwxr-x 6 oracle oinstall  4096 Feb 18 19:11 ..
-rw-r--r-- 1 oracle oinstall  1104 Feb 18 20:40 backup_keep.rman
-rw-r--r-- 1 oracle oinstall 12346 Feb 18 21:57 rman_keep_backup_migration.log
drwxr-xr-x 2 oracle oinstall  4096 Feb 19 04:28 .
[oracle@db-fs-1 ~]$

Just a crazy idea: keep the same backup tag for all backups until the next level 0.

Backup TAG for daily level 0 backup:

[oracle@db-fs-1 ~]$ echo "$(date +%Y%b%d)"
2020Feb19
[oracle@db-fs-1 ~]$

Backup TAG for weekly level 0 backup:

[oracle@db-fs-1 ~]$ echo "$(date +%Y%b)_WK$(date +%U)"
2020Feb_WK07
[oracle@db-fs-1 ~]$

Backup TAG for monthly level 0 backup:

[oracle@db-fs-1 ~]$ echo "$(date +%Y%b)"
2020Feb
[oracle@db-fs-1 ~]$
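A hypothetical wrapper showing how such a tag could be passed to RMAN (paths and backup options are assumptions, adapt to your own script):

TAG="$(date +%Y%b%d)"
rman target / <<EOF
backup as compressed backupset incremental level 0 database tag='${TAG}';
backup archivelog all not backed up tag='${TAG}';
EOF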

The tag ARCH for archivelog backups may not be very useful, since the LV=A column in the summary already indicates an archivelog backup.

RMAN> list backup summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
69      B  A  A DISK        2020-FEB-19 13:29:02 1       1       NO         ARCH
70      B  A  A DISK        2020-FEB-19 13:29:03 1       1       NO         ARCH
71      B  F  A DISK        2020-FEB-19 13:29:04 1       1       NO         TAG20200219T132904

RMAN>

In writing this post, I realized that my own backup script needs some improvements.

Oracle 20c Data Guard : Validating a Fast Start Failover Configuration

Yann Neuhaus - Wed, 2020-02-19 14:59

In Oracle 20c, we can now validate a Fast Start Failover configuration with the new command VALIDATE FAST_START FAILOVER. This command helps identify issues in the configuration. I tested this new feature.
The Fast Start Failover is configured and the observer is running fine, as we can see below.

DGMGRL> show configuration verbose

Configuration - prod20

  Protection Mode: MaxPerformance
  Members:
  prod20_site1 - Primary database
    prod20_site2 - (*) Physical standby database 

  (*) Fast-Start Failover target
  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'
    ExternalDestination1            = ''
    ExternalDestination2            = ''
    PrimaryLostWriteAction          = 'CONTINUE'
    ConfigurationWideServiceName    = 'prod20_CFG'
    ConfigurationSimpleName         = 'prod20'

Fast-Start Failover: Enabled in Potential Data Loss Mode
  Lag Limit:          30 seconds
  Threshold:          30 seconds
  Active Target:      prod20_site2
  Potential Targets:  "prod20_site2"
    prod20_site2 valid
  Observer:           oraadserver2
  Shutdown Primary:   TRUE
  Auto-reinstate:     TRUE
  Observer Reconnect: (none)
  Observer Override:  FALSE

Configuration Status:
SUCCESS

DGMGRL> 

If we run the command, we can see that everything is working and that a failover will happen if needed:

DGMGRL> VALIDATE FAST_START FAILOVER;
  Fast-Start Failover:  Enabled in Potential Data Loss Mode
  Protection Mode:      MaxPerformance
  Primary:              prod20_site1
  Active Target:        prod20_site2

DGMGRL>

Now let's stop the observer:

DGMGRL> stop observer

Observer stopped.

And if we run the VALIDATE command again, we get the following message:

DGMGRL> VALIDATE FAST_START FAILOVER;

  Fast-Start Failover:  Enabled in Potential Data Loss Mode
  Protection Mode:      MaxPerformance
  Primary:              prod20_site1
  Active Target:        prod20_site2

Fast-start failover not possible:
  Fast-start failover observer not started.

DGMGRL> 
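Restarting the observer brings the configuration back to a valid state. A minimal sketch (the IN BACKGROUND clause and file locations are assumptions depending on your setup):

DGMGRL> START OBSERVER IN BACKGROUND FILE IS '/tmp/fsfo.dat' LOGFILE IS '/tmp/observer.log';
DGMGRL> VALIDATE FAST_START FAILOVER;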

The article Oracle 20c Data Guard : Validating a Fast Start Failover Configuration appeared first on Blog dbi services.

Broad Coalition Files Supreme Court Briefs Supporting Oracle

Oracle Press Releases - Wed, 2020-02-19 11:07
Press Release
Broad Coalition Files Supreme Court Briefs Supporting Oracle
Amici urge Court to reject Google’s attempt to weaken copyright protection in the United States

Redwood Shores, Calif.—Feb 19, 2020

A wide array of individuals and organizations from across technology, arts and culture, government, advocacy, and academia filed amicus briefs this week supporting Oracle in the Supreme Court. This diverse group is speaking out to defend copyright protection and to reject Google’s attempts to excuse its theft of more than 11,000 lines of Oracle’s original code.

Commenting on the briefs, Oracle General Counsel Dorian Daley said: “Google is attempting to rewrite the fundamental copyright protections that fuel innovation in this country. The amicus briefs make clear that to avoid significant consequences well beyond the software industry, Google’s self-serving arguments and attempts to rewrite long-settled law must be rejected.”

The fallacies in Google’s arguments, as well as additional context about the case are set forth in Executive Vice President Ken Glueck’s recent blog posts about the importance of copyright protection and the absence of support for Google’s position in the technology community.

Excerpts from the amicus briefs filed today upend Google’s claim that upholding the current ruling and existing law will crush innovation in the software industry. As the briefs demonstrate, that stance is not just inaccurate – the reverse is true.

“Google wields a variety of weaponized copyright exceptions on top of rhetoric that is both deceptively public-spirited (letting Google win is “promoting innovation”) and ominous (impeding Google would “break the internet”). Google further seeks to justify these exceptions by trying to hide behind small players. It engages in astroturfing tactics to give the impression that it has more public support than it does.” (Songwriters Guild of America, pp. 16-17) (unless otherwise noted, all emphasis is added)

“[W]hat is good for Google is not synonymous with what is good for the public. […] In fact, a ruling for Google would be “promoting” software innovation only in that the purported “innovation” would be furthering Google’s private interest—i.e., using works without permission or a license fee.” (Songwriters Guild of America, p. 32)

“No reasonable person would invest resources in creating an original work if another person could lawfully extract material portions of that work and incorporate them into a marketplace replacement.” (Internet Accountability Project, p. 4)

Oracle’s code was undoubtedly creative and copyright protected.

“Google has appointed itself the world’s ‘organize[r]’ of other people’s information…and in this case it copied verbatim substantial amounts of Oracle’s software to do so.” (Internet Accountability Project, p. 1)

“[M]ore than enough creative choices were made by [Oracle’s programmers] in creating the 7,000 lines of declaring code…to satisfy the copyright requirements.” (Interdisciplinary Research Team on Programmer Creativity, p. 19)

“[I]t is clear that there were thousands of different ways [that Oracle’s] APIs could have been written when they were created . . . [and] that they are protected from copying by the Copyright Act.” (Interdisciplinary Research Team on Programmer Creativity, pp. 19-20)

Google could have licensed Oracle’s software but chose to copy it instead.

“The inconvenience of not copying does not excuse copying.” (American Conservative Union, p. 14)

“Google and its amici seek to establish a rule of general applicability in the software industry that will justify future unauthorized copying whenever it saves the copier time and money.” (American Conservative Union, p. 18)

“Google asserts [that its] choice to copy proves that Google had no choice other than to copy. Yet Google’s assertion that it “reused” the declarations “only because it had no other choice,” finds no support in the record. The obvious other choice was licensing.” (American Conservative Union, p. 12)

“Through YouTube, Google profits directly from verbatim copies of Amici’s own works. These copies are often unauthorized, unlicensed, and severely undermonetized.” (Songwriters Guild of America, p. 26)

The argument that Google’s use was protected by exceptions to the copyright laws falls flat.

“This brazen commercial use in competition with [Oracle] and the indisputable harm to the market doom [Google’s] fair use claim.” (Copyright Thought Leaders, p. 21)

“Google’s verbatim copying of Oracle’s code for use in the Android platform had a measurable negative impact on Oracle’s bottom line. Under existing law…such use of another’s work is categorically not ‘fair.’” (Internet Accountability Project, p. 3)

“[Google] has done nothing that qualifies as transformative. [Google] engaged in verbatim copying to use the copied code in commercial competition with others who either licensed the works or avoided infringement by applying their own creativity to write different code to perform those functions.” (Copyright Thought Leaders, p. 20)

“Google and many of its amici seek to upend [the] well-ordered system of private property rights in software through either an unworkably complex, nearly metaphysical, interpretation of copyrightability of software, or a broad “fair use” exemption, both based on some conjured up “special status” as players in the software industry.” (American Conservative Union, p. 4)

This case threatens copyright protection across the board.

“Copyright is, in fact, of existential importance to…creators, who would be utterly lacking in market power and the ability to earn their livings without it.” (Songwriters Guild of America, p. 3)

“It is not empty rhetoric to say that without the statutory and constitutional protections of copyright, professional creators could not earn their livings and simply would not produce new works, and the world would be poorer for it. The reason is simple but profound: copyright protection allows for a vibrant creative environment in which artists can predictably recover the gains of their creative labors.” (Songwriters Guild of America, p. 5)

Contact Info
Deborah Hellinger
Oracle
+1 212-508-7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Oracle Cloud Applications Achieves FedRAMP Moderate Authorization

Oracle Press Releases - Wed, 2020-02-19 08:00
Press Release
Oracle Cloud Applications Achieves FedRAMP Moderate Authorization
New authorization marks key milestone for the Federal Government, extending agency access to the suite of Oracle Cloud Applications covering ERP, HCM, and CX

Redwood Shores, Calif.—Feb 19, 2020

Oracle today announced that Oracle Cloud Applications has achieved FedRAMP Moderate Authorization. FedRAMP is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. With this new authorization, U.S. Federal Government customers can benefit from Oracle’s complete and integrated suite of cloud applications for finance, human resources, supply chain, and customer experience.

To outpace accelerating change in technology, government agencies need to break down data silos, embrace the latest innovations, and improve digital experiences, collaboration, and service. Built on Oracle’s industry-leading cloud platform and infrastructure, Oracle Cloud Applications enables customers to benefit from best-in-class security, high-end scalability, and performance, in addition to strong integration capabilities.

“FedRAMP Authorization for Oracle Cloud Applications is a critical step in meeting the growing demands and compliance requirements of our public sector customers,” said Tamara Greenspan, group vice president, Oracle Public Sector. “By achieving this authorization, we are able to help the Federal Government tap into our complete and innovative cloud applications suite to not just keep pace, but stay ahead of the evolving business and technology landscape.”

Oracle has been a long-standing strategic technology partner of the U.S. Federal Government. In fact, a component of the U.S. Intelligence Community was the first customer to use Oracle’s flagship database software 35 years ago. Today, more than 500 government organizations take advantage of the superior performance of Oracle’s industry-leading technologies.

Contact Info
Celina Bertallee
Oracle
559-283-2425
celina.bertallee@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


[New Feature] Oracle Cloud Shell in Oracle Cloud Infrastructure (OCI)

Online Apps DBA - Wed, 2020-02-19 03:22

Cloud Shell gives you access to a Linux shell in the Oracle Cloud Infrastructure (OCI) Console. It can be used to interact with resources and quickly run OCI CLI commands. Cloud Shell is now available in all commercial regions, is free to use, and is easily accessible from the console. Check out K21 Academy’s blog […]
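For example, once the shell opens the OCI CLI is already installed and pre-authenticated as the console user, so commands work with no key setup at all. A minimal sketch (the namespace value shown is a made-up example):

$ oci os ns get
{
  "data": "my-tenancy-namespace"
}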

The post [New Feature] Oracle Cloud Shell in Oracle Cloud Infrastructure (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

XS$NULL - Can we login to it and does it really have no privileges?

Pete Finnigan - Tue, 2020-02-18 15:11
I have read online about XS$NULL over the years and particularly the documentation that states that it has no privileges. The documentation states the following: “An internal account that represents the absence of a user in a session.” Because....[Read More]
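For a quick first look at the account yourself, a simple dictionary query is enough (a sketch; it assumes you can read DBA_USERS). On a typical installation the account shows up as EXPIRED & LOCKED:

SQL> select username, account_status from dba_users where username = 'XS$NULL';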

Posted by Pete On 17/02/20 At 01:09 PM

Categories: Security Blogs

Oracle 20c : The new PREPARE DATABASE FOR DATA GUARD

Yann Neuhaus - Tue, 2020-02-18 15:02

As you may know, Oracle 20c is available in the cloud with new features. The one I have tested is the new PREPARE DATABASE FOR DATA GUARD command.
This command configures a database for use as a primary database in a Data Guard broker configuration. Database initialization parameters are set to recommended values.
Let’s see what this command will do for us.
The db_unique_name of the primary database is prod20; in the Data Guard configuration I will build, the db_unique_name will be changed to prod20_site1.

SQL> show parameter db_unique_name

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_unique_name			     string	 prod20
SQL> 

Now let’s connect to the broker and run the help command to see the syntax:

[oracle@oraadserver ~]$ dgmgrl
DGMGRL for Linux: Release 20.0.0.0.0 - Production on Tue Feb 18 21:36:39 2020
Version 20.2.0.0.0

Copyright (c) 1982, 2020, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /
Connected to "prod20_site1"
Connected as SYSDG.
DGMGRL> 
 
DGMGRL> help prepare    

Prepare a primary database for a Data Guard environment.

Syntax:

  PREPARE DATABASE FOR DATA GUARD
    [WITH [DB_UNIQUE_NAME IS <value>]
          [DB_RECOVERY_FILE_DEST IS <value>]
          [DB_RECOVERY_FILE_DEST_SIZE IS <value>]
          [BROKER_CONFIG_FILE_1 IS <value>]
          [BROKER_CONFIG_FILE_2 IS <value>]];

And then run the command:

DGMGRL> PREPARE DATABASE FOR DATA GUARD with DB_UNIQUE_NAME is prod20_site1;
Preparing database "prod20" for Data Guard.
Initialization parameter DB_UNIQUE_NAME set to 'prod20_site1'.
Initialization parameter DB_FILES set to 1024.
Initialization parameter LOG_BUFFER set to 268435456.
Primary database must be restarted after setting static initialization parameters.
Shutting down database "prod20_site1".
Database closed.
Database dismounted.
ORACLE instance shut down.
Starting database "prod20_site1" to mounted mode.
ORACLE instance started.
Database mounted.
Initialization parameter DB_FLASHBACK_RETENTION_TARGET set to 120.
Initialization parameter DB_LOST_WRITE_PROTECT set to 'TYPICAL'.
RMAN configuration archivelog deletion policy set to SHIPPED TO ALL STANDBY.
Adding standby log group size 209715200 and assigning it to thread 1.
Adding standby log group size 209715200 and assigning it to thread 1.
Adding standby log group size 209715200 and assigning it to thread 1.
Initialization parameter STANDBY_FILE_MANAGEMENT set to 'AUTO'.
Initialization parameter DG_BROKER_START set to TRUE.
Database set to FORCE LOGGING.
Database set to FLASHBACK ON.
Database opened.
DGMGRL> 

The output shows the changes done by the PREPARE command. We can do some checks:

SQL> show parameter db_unique_name

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_unique_name			     string	 prod20_site1
SQL> select flashback_on,force_logging from v$database;

FLASHBACK_ON	   FORCE_LOGGING
------------------ ---------------------------------------
YES		   YES

SQL> 

SQL> show parameter standby_file

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
standby_file_management 	     string	 AUTO
SQL> 
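The other parameters reported by the PREPARE command can be verified the same way; for example (output as expected from the TYPICAL value set above):

SQL> show parameter db_lost_write_protect

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_lost_write_protect                string      TYPICAL
SQL>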

But here I can see that I have only 3 standby redo log groups instead of the recommended 4 (one more than my 3 online redo log groups):

SQL> select bytes,group# from v$log;

     BYTES     GROUP#
---------- ----------
 209715200	    1
 209715200	    2
 209715200	    3

SQL> 


SQL> select group#,bytes from v$standby_log;

    GROUP#	BYTES
---------- ----------
	 4  209715200
	 5  209715200
	 6  209715200

SQL> 
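If you want to follow the usual recommendation of one more standby redo log group than online redo log groups, the missing group is easy to add by hand; a minimal sketch using the same 200MB size the command chose:

SQL> alter database add standby logfile thread 1 size 209715200;

Database altered.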

After building the Data Guard configuration I did some checks (note that the build steps are not shown here; they are the same as in previous versions).
For the configuration:

DGMGRL> show configuration verbose;

Configuration - prod20

  Protection Mode: MaxPerformance
  Members:
  prod20_site1 - Primary database
    prod20_site2 - Physical standby database 

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'
    ExternalDestination1            = ''
    ExternalDestination2            = ''
    PrimaryLostWriteAction          = 'CONTINUE'
    ConfigurationWideServiceName    = 'prod20_CFG'
    ConfigurationSimpleName         = 'prod20'

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS

For the primary database:

DGMGRL> show database verbose 'prod20_site1';

Database - prod20_site1

  Role:                PRIMARY
  Intended State:      TRANSPORT-ON
  Instance(s):
    prod20

  Properties:
    DGConnectIdentifier             = 'prod20_site1'
    ObserverConnectIdentifier       = ''
    FastStartFailoverTarget         = ''
    PreferredObserverHosts          = ''
    LogShipping                     = 'ON'
    RedoRoutes                      = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyLagThreshold               = '30'
    TransportLagThreshold           = '30'
    TransportDisconnectedThreshold  = '30'
    ApplyParallel                   = 'AUTO'
    ApplyInstances                  = '0'
    ArchiveLocation                 = ''
    AlternateLocation               = ''
    StandbyArchiveLocation          = ''
    StandbyAlternateLocation        = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    LogXptStatus                    = '(monitor)'
    SendQEntries                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    HostName                        = 'oraadserver'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oraadserver)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=prod20_site1_DGMGRL)(INSTANCE_NAME=prod20)(SERVER=DEDICATED)))'
    TopWaitEvents                   = '(monitor)'
    SidName                         = '(monitor)'

  Log file locations:
    Alert log               : /u01/app/oracle/diag/rdbms/prod20_site1/prod20/trace/alert_prod20.log
    Data Guard Broker log   : /u01/app/oracle/diag/rdbms/prod20_site1/prod20/trace/drcprod20.log

Database Status:
SUCCESS

DGMGRL> 

For the standby database:

DGMGRL> show database verbose 'prod20_site2';

Database - prod20_site2

  Role:                PHYSICAL STANDBY
  Intended State:      APPLY-ON
  Transport Lag:       0 seconds (computed 1 second ago)
  Apply Lag:           0 seconds (computed 1 second ago)
  Average Apply Rate:  2.00 KByte/s
  Active Apply Rate:   0 Byte/s
  Maximum Apply Rate:  0 Byte/s
  Real Time Query:     OFF
  Instance(s):
    prod20

  Properties:
    DGConnectIdentifier             = 'prod20_site2'
    ObserverConnectIdentifier       = ''
    FastStartFailoverTarget         = ''
    PreferredObserverHosts          = ''
    LogShipping                     = 'ON'
    RedoRoutes                      = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyLagThreshold               = '30'
    TransportLagThreshold           = '30'
    TransportDisconnectedThreshold  = '30'
    ApplyParallel                   = 'AUTO'
    ApplyInstances                  = '0'
    ArchiveLocation                 = ''
    AlternateLocation               = ''
    StandbyArchiveLocation          = ''
    StandbyAlternateLocation        = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    LogXptStatus                    = '(monitor)'
    SendQEntries                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    HostName                        = 'oraadserver2'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oraadserver2)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=PROD20_SITE2_DGMGRL)(INSTANCE_NAME=prod20)(SERVER=DEDICATED)))'
    TopWaitEvents                   = '(monitor)'
    SidName                         = '(monitor)'

  Log file locations:
    Alert log               : /u01/app/oracle/diag/rdbms/prod20_site2/prod20/trace/alert_prod20.log
    Data Guard Broker log   : /u01/app/oracle/diag/rdbms/prod20_site2/prod20/trace/drcprod20.log

Database Status:
SUCCESS

DGMGRL> 
Conclusion

I am sure that you will adopt this nice command.

The article Oracle 20c : The new PREPARE DATABASE FOR DATA GUARD appeared first on Blog dbi services.

Interval Partition(s)

Jonathan Lewis - Tue, 2020-02-18 07:45

A quirky little feature of interval partitioning showed up on Twitter today – a parallel insert that would only use a single PX slave to do the inserting. With 1.1 billion rows and the option for running parallel 32 this made the loading process rather slower than it ought to have been.

Fortunately it’s quite easy to model (and work around) the oddity. So here’s a small data set and an empty partitioned table to work with:


rem
rem     Script:         pt_int_load_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem

create table t1 
nologging 
as
select 
        ao.* 
from 
        all_Objects ao, 
        (select rownum id from dual connect by level <= 20)
;

create table pt1
partition  by range (object_id) interval (1000000) (
        partition p1 values less than (1)
)
as
select  * 
from    all_Objects
where   rownum = 0
/

I’ve created a table by copying all_objects 20 times which, for my little sandbox, has given me a total of about 1.2M rows. Then I’ve created an empty interval-partitioned clone of all_objects, with the first partition defined to hold all rows where the object_id is less than 1 (and there’s no object in the database that could match that criterion). I’ve defined the interval to be 1,000,000 and since the highest object_id in my database is about 90,000 the first partition that gets added to this table will be able to hold all the data from t1.
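As a side note, you can see which interval partitions a load actually creates by querying the dictionary afterwards; a quick sketch (interval partition names are system-generated, so yours will differ):

select  partition_name, high_value
from    user_tab_partitions
where   table_name = 'PT1'
order by
        partition_position
;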

So now we try to do a parallel insert from t1 into pt1, and check the execution plan and parallel execution statistics:


set serveroutput off

insert /*+ append enable_parallel_dml parallel(6) */ into pt1 select * from t1;

select * from table(dbms_xplan.display_cursor);

start pq_tqstat

Note how I’ve used the hint /*+ enable_parallel_dml */ (possibly a 12c hint back-ported to 11.2.0.4) rather than using an “alter session”, it’s just a little convenience to be able to embed the directive in the SQL. The pq_tqstat script is one I published some time ago to report the contents of the session-specific dynamic performance view v$pq_tqstat immediately after running a parallel statement.
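The script itself isn’t reproduced here, but a minimal equivalent (my sketch, not the published script) is just an ordered select from the view:

select
        dfo_number, tq_id, server_type, instance, process,
        num_rows, bytes, waits, timeouts
from
        v$pq_tqstat
order by
        dfo_number, tq_id, server_type desc, instance, process
;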

Here’s the plan:


SQL_ID  25hub68pf1z1s, child number 0
-------------------------------------
insert /*+ append enable_parallel_dml parallel(6) */ into pt1 select *
from t1

Plan hash value: 2888707464

-------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                            |          |       |       |   631 (100)|          |        |      |            |
|   1 |  PX COORDINATOR                             |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)                       | :TQ10001 |  1235K|   159M|   631  (10)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT (HIGH WATER MARK BROKERED)| PT1      |       |       |            |          |  Q1,01 | PCWP |            |
|   4 |     OPTIMIZER STATISTICS GATHERING          |          |  1235K|   159M|   631  (10)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE                             |          |  1235K|   159M|   631  (10)| 00:00:01 |  Q1,01 | PCWP |            |
|   6 |       PX SEND RANDOM LOCAL                  | :TQ10000 |  1235K|   159M|   631  (10)| 00:00:01 |  Q1,00 | P->P | RANDOM LOCA|
|   7 |        PX BLOCK ITERATOR                    |          |  1235K|   159M|   631  (10)| 00:00:01 |  Q1,00 | PCWC |            |
|*  8 |         TABLE ACCESS FULL                   | T1       |  1235K|   159M|   631  (10)| 00:00:01 |  Q1,00 | PCWP |            |
-------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   8 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
   - Degree of Parallelism is 6 because of hint

The most important detail of this plan is that the PX slaves do the load as select (operation 3), then send a message to the query coordinator (PX send QC, operation 2) to tell it about the data load. They do not send their data to the QC for the QC to do the load.

So the plan says we will be doing parallel DML, but here’s what v$pq_tqstat tells us:


DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES ROW_SHARE DATA_SHARE      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- --------- ---------- ---------- ---------- -----------
         1          0 Producer               1 P006                215880   34785363     17.47      16.86         16          0           0
                                             1 P007                202561   34436325     16.39      16.69         17          0           0
                                             1 P008                207519   34564496     16.79      16.75         17          0           0
                                             1 P009                208408   34594770     16.86      16.77         17          0           0
                                             1 P00A                198915   33529627     16.10      16.25         16          0           0
                                             1 P00B                202537   34430603     16.39      16.69         16          0           0
                      Consumer               1 P000                     0        144      0.00       0.00         51         47           0
                                             1 P001                     0        144      0.00       0.00         51         47           0
                                             1 P002               1235820  206340464    100.00     100.00         75         47           0
                                             1 P003                     0        144      0.00       0.00         51         47           0
                                             1 P004                     0        144      0.00       0.00       1138       1134           0
                                             1 P005                     0        144      0.00       0.00       1137       1133           0

                    1 Producer               1 P000                     0         24      0.00       5.91         51         42           0
                                             1 P001                     0         24      0.00       5.91         50         41           0
                                             1 P002                     2        286    100.00      70.44         58         14           0
                                             1 P003                     0         24      0.00       5.91         51         43           0
                                             1 P004                     0         24      0.00       5.91         51         42           0
                                             1 P005                     0         24      0.00       5.91         51         43           0
                      Consumer               1 QC                       2        406    100.00     100.00        311        179           0

19 rows selected.

The query did run parallel 6 as hinted – and 6 PX slaves scanned the t1 table; but they all sent all their data to one PX slave in the second slave set and that one PX slave did all the inserts. The plan was parallel, but the execution was effectively serial. (You’ll note there is something a little odd about the waits and timeouts for p004 and p005 when they are acting as consumers. I may worry about that later, but it could be a host-based side effect of running parallel 6 on a VM with 4 CPUs).

The serialization leads to two questions:

  1. What went wrong?
  2. How do we work around this and make the insert “truly” parallel?

My answer to (1) is “I don’t know – but I’ll look at it if necessary” combined with the guess – it’s something to do with the table having only one partition at the outset and this has an unexpected side effect on the randomising function for the PX distribution.

My answer to (2) is “if I’m right about (1), why not try pre-defining two partitions, and I’ll even let both of them stay empty”.

So here’s my new definition for pt1:


create table pt1
partition  by range (object_id) interval (1000000) (
        partition p0 values less than (0),
        partition p1 values less than (1)
)
as
select  * 
from    all_Objects
where   rownum = 0
/

Re-running the test with the completely redundant and permanently empty p0 partition, the plan doesn’t change but the results from v$pq_tqstat change dramatically:


DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES ROW_SHARE DATA_SHARE      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- --------- ---------- ---------- ---------- -----------
         1          0 Producer               1 P006                207897   34581153     16.82      16.76         23          4           0
                                             1 P007                215669   34786429     17.45      16.86         30          5           0
                                             1 P008                221474   36749626     17.92      17.81         28          5           0
                                             1 P009                204959   34497164     16.58      16.72         22          2           0
                                             1 P00A                177755   30141002     14.38      14.61         21          0           0
                                             1 P00B                208066   35585810     16.84      17.25         25          2           0
                      Consumer               1 P000                213129   35612973     17.25      17.26         82         57           0
                                             1 P001                200516   33570586     16.23      16.27         84         55           0
                                             1 P002                203395   33950449     16.46      16.45         83         56           0
                                             1 P003                205458   34235575     16.63      16.59         82         54           0
                                             1 P004                204111   33999932     16.52      16.48        581        555           0
                                             1 P005                209211   34971669     16.93      16.95        580        553           0

                    1 Producer               1 P000                     2        286     16.67      16.67        422        149           0
                                             1 P001                     2        286     16.67      16.67        398        130           0
                                             1 P002                     2        286     16.67      16.67        405        128           0
                                             1 P003                     2        286     16.67      16.67        437        161           0
                                             1 P004                     2        286     16.67      16.67        406        116           0
                                             1 P005                     2        286     16.67      16.67        440        148           0
                      Consumer               1 QC                      12       1716    100.00     100.00        242        111           0



19 rows selected.

Every consumer receives and inserts roughly 200,000 rows – it’s a totally fair parallel DML. Timings are pretty irrelevant for such a small data set but the execution time did drop from 7 seconds to 4 seconds when parallelism was working “properly”.

I’ve tested this script on 12.2.0.1 and 19.3.0.0 – the same anomaly appears in both versions though it might be worth noting that the strange skew in the waits and timeouts doesn’t appear in 19.3.0.0.

How to fetch part of a string for LONG datatype

Tom Kyte - Tue, 2020-02-18 06:11
Hi, I am writing a query to find missing table partitions for next year using the all_tab_partitions view. I am able to fetch the records with the help of the partition position column, but I have to extract the last partition date (YYYY-MM-DD) from HIG...
Categories: DBA Blogs

Finding when someone dropped an object

Tom Kyte - Tue, 2020-02-18 06:11
Hi Team, I have a DB where a table was accidentally dropped from the schema. We are trying to find out whether it was dropped by manual execution of a DROP statement, and by whom. Is there any way that we can find the DROP query which was executed ...
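A first check worth making is the recycle bin; a sketch (it assumes the recycle bin was enabled and the table was not dropped with PURGE; YOUR_TABLE is a placeholder name):

select original_name, operation, droptime, can_undrop
from   dba_recyclebin
where  original_name = 'YOUR_TABLE';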
Categories: DBA Blogs

Scheduling Compilation or Execution of Stored Procedures

Tom Kyte - Tue, 2020-02-18 06:11
Hello. We have an Oracle 11g Release 2 database with five identical working schemas being accessed by a VB client-server business application. The db server, with 32GB RAM, just hosts this one database. Copies of client applications are installed a...
Categories: DBA Blogs

Sqoop ojdbc8.jar throws error ORA-06502:PL/SQL:: numeric or value error

Tom Kyte - Tue, 2020-02-18 06:11
Hi Tom, We are trying to do a sqoop import to Hive from Oracle and are stuck with a weird error below: WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child: java.io.IOException: java.sql.SQLException: ORA-00606: error occurred ...
Categories: DBA Blogs

Oracle 19C database issue with table types and pipelining

Tom Kyte - Tue, 2020-02-18 06:11
I have a package working fine in the 11g version. But when I deploy the same package in the 19c version, the behavior is different. PFB the description. The package specification has a cursor, and I created a table type with cursor%rowtype. Having a pipel...
Categories: DBA Blogs

Synchronous refresh in mview ORA-31922: Foreign key must contain partition key in table

Tom Kyte - Tue, 2020-02-18 06:11
Team, here is my test case, which failed during synchronous refresh of the mview: create table products as select rownum as prod_id, object_name as prod_name, object_type as prod_category, object_id as prod_category_id, data_object...
Categories: DBA Blogs
