Yann Neuhaus

dbi services technical blog

DevOpsDays Geneva – 2022

Sun, 2022-05-22 15:40

We started DevOpsDays Geneva on Thursday May 12th with dbi services colleagues Pascal ZANETTE, Pierre-Yves BREHIER, Jean-Philippe CLAPOT and Chay TE, with the registration and a few cups of coffee to prepare for this first day.

After the welcome speech given by the event hosts, Matteo MAZZERI and Matthieu ROBIN, I followed the first main stream session, presented by Julia GIACINTI and Xavier NICOLOVICI from PICTET, on “How to support the emergence of a DevOps culture within a large company”. In a well-crafted sketch, Julia and Xavier played out a traditional production manager’s discovery of DevOps and his incomprehension of this new way of working. They detailed this journey within their company, gave us a few tips and tricks, shared the challenges they faced and concluded with the current situation as well as the next steps.

It was then David BARBARIN’s turn to present “Why we migrated the DB monitoring stack to Prometheus and Grafana”. David first detailed the current Migros Online architecture and explained the constraints and challenges which led to the decision to use these tools. He gave a lot of details and explanations throughout his technical demonstration and concluded with the achievements and results of this migration.

The next presentation was given by Giovanni GALLORO, who gave a deep technical demonstration of Tekton pipelines, named “Tekton: from source to production inside Kubernetes”.

The last session of this busy morning was given by Denis JANNOT, who demonstrated how to implement “Advanced authentication patterns from the edge”. Denis showed the new challenges faced when properly securing a K8s cluster and detailed several available solutions which can be evaluated based on infrastructure needs and constraints (Envoy proxy / Gloo Edge, API server…).

After a well-deserved lunch break, we had a very interesting talk from Max ANDERSSON, who explained to the audience how to “close the feedback loop for infrastructure development”. Max recalled the well-known “3 ways” DevOps pillars and provided a refreshing, dynamic and interactive session with the attendees.

We then reached the last session of this first day by following Hervé SCHWEITZER, dbi services founder and CTO, for his session around YaK, a powerful internally developed tool for multi-cloud deployment. YaK is derived from the IaC acronym (Infrastructure as Code) and was designed around Ansible and Docker to allow a host deployment with the same setup independently of the destination (on-prem, AWS, Azure, …), using only one specific command. Hervé concluded with a Geneva DevOpsDays exclusive announcement: the YaK core component will be shared with the community in the coming months, probably around September this year. Stay tuned on this blog for the next announcements on the topic.

We ended this well-filled first day with a cocktail dinner, where passionate discussions and exchanges continued until late at night, in a very good ambiance.

After this short night (and a few cups of coffee), we started the second and last day of this event with Ankur MARFATIA, who explained how to “Turn an enterprise into a learning community”. Ankur provided a real case example of learning-sharing sessions put in place in his company, and underlined the benefits of such internal events. He also linked this kind of internal practice with what can be found at external events, such as the DevOpsDays. Interestingly, the importance of having food during this kind of meet-up was agreed on by the whole assembly, which proves we are all the same.
He then explained some key points to create a safe and inclusive learning environment for everyone, and reminded us of the two golden points for successful coaching sessions: middle/top management involvement and people’s willingness to follow the training leaders.
Ankur concluded his talk with three important points to keep in mind:
1) learning is a never-ending journey
2) we all have different learning curves
3) changing culture takes time and effort.

The next session was “Tips to fight the imposter syndrome” by Aurélie VACHE, a brilliant talk around this perception bias which leads a person to think he/she does not belong in his/her role or position. The person thinks his/her position is due to luck and not hard work or knowledge. It leads to a feeling of self-deprecation and the fear that other people will “realize” this imposture sooner or later. Aurélie gave a very frank and dynamic talk, with a lot of examples and tips to work around this syndrome, and received a well-deserved round of applause from everyone present in the room.

Right after, we listened carefully to Courtney HIBA on how to “build mastery into your daily practice”. Courtney explained what mastery is, why it is crucial for personal fulfillment, and the different mastery categories (Emotional, Business, Wealth and Relationship). She concluded by proposing an action plan to the audience: define one or two goals to master this year, identify the compelling reasons (why do you want to master it) and take action to start your mastery journey to achieve your goals.

Dr. Joe PEREZ followed with a real show around “Driving decisions with data: delight or disaster”. He gave a very energetic talk on the difference between the value of data and its usage or pertinence. He gave us key points to help us gather enough material to use data as accurately as possible, in order to improve our data-driven decisions, with all the necessary rules and safeguards to make them really useful and relevant.

We then jumped into the “Docs-as-code: fix a poor documentation culture in your organization” presentation, given by Sandro CIRULLI. Sandro first listed the consequences of poor documentation, which can lead to big issues such as slow onboarding of new employees, decreased productivity, longer recovery time during production outages, technical debt, slow implementation of new features, poor communication, …
He then explained documentation-as-code and why we should treat our documentation as we do our code: versioned and trackable.
He showed the audience a few tools which can be used for this purpose, such as Hugo, Read the Docs and Antora. He concluded that although it requires effort from everyone at first, there are huge benefits to applying documentation-as-code, and that we need to choose the tool that fits the company’s needs.
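As a small, hedged illustration of the docs-as-code idea (the repository name and commands below are ours, assuming Hugo is installed locally), the documentation lives in Git and is built like any other artefact:

# minimal docs-as-code sketch, assuming Hugo is installed locally
hugo new site company-docs && cd company-docs   # documentation skeleton, written in Markdown
git init && git add . && git commit -m "initial documentation skeleton"
hugo server                                     # preview locally while writing
hugo --minify                                   # build the static site in CI, like any other build step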

Stéphanie FISHER then shared her personal experience and “lessons learned during an Agile transformation”. She highlighted five key points she learned during her career as an Agile coach:
1st lesson: avoid the word “Agile”, so that people do not block on a single word instead of embracing the concept and idea.
2nd: resist the urge to fill the gaps; the risk is to end up taking over a role in the company instead of resolving the root cause.
3rd: adapt yourself to the client context; you need to listen to your customer’s needs rather than pushing your own.
4th: accept the frustration that comes with change; embrace the conflict if needed and accept the tension as part of a necessary step in the change.
5th: use your own advice and “Agile” yourself! The world is changing and we need to be flexible, accept uncertainty, not over-analyze, prefer testing over thinking, and prefer learning objectives over performance objectives. It requires patience and resilience, but it is worth it.

Scott GRAFFIUS, remotely from the USA, detailed Tuckman’s model to explain the five phases of team development, Forming/Storming/Norming/Performing/Adjourning, and provided advice and guidance to help team members during those phases.
As a conclusion, Scott mentioned that all these steps are inevitable and seen in most teams, regardless of their activity or technical knowledge. Following the guidance he shared will help teams and individuals prevent frustration and keep a good working spirit throughout the organization.

This presentation was followed by a participative open session with Matteo and Matthieu on what we thought about the organization of this event, whether we had improvement ideas or proposals, and what we liked the most, and the least, during these two days.
This concluded this DevOpsDays Geneva, and we hope to see you there next year; we will be present for sure!



dbi services at the DevOpsDays Geneva 2022

Sun, 2022-05-22 15:37

dbi services was at DevOpsDays Geneva 2022 this year, which took place at the Haute Ecole de Gestion campus in Carouge. Pierre-Yves Brehier, Pascal Zanette, Emmanuel Wagner, Chay Te and myself were present.

An opportunity to learn, discover new topics or technologies, hold in-person meetings and initiate business, this major 2-day DevOps event in the Romandie part of Switzerland gathered more than 400 people, 18 companies on site, and 16 additional sponsors.


With a mix of small dedicated rooms and more global sessions, this event covered a wide range of topics, technical or organizational.

Among all the sessions we attended, here is a small subset.
The first session I had the pleasure to attend was an interesting one from Etienne Studer of Gradle, who presented Developer Productivity Engineering to us.
This new engineering approach aims to increase developer productivity. Using automation and acceleration technologies, it is based on five pillars:

  • Faster feedback cycles
  • Faster Troubleshootings
  • Reliable Builds and tests
  • Continuous Learning and Improvement
  • CI Cost and Resource efficiency


Another good session was the one given by Giovanni Galloro, Specialist Customer Engineer at Google Cloud.
He presented Tekton to us. An open-source framework, Tekton is used to create CI/CD systems such as Jenkins, Jenkins X (the one we use daily), Skaffold and more. In his session, he described all the basic components, such as Tasks, Pipelines, Steps, …
It was followed by a demo of the tool. Even though we already knew it, it was very interesting to see Tekton shown and explained by somebody from Google.
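For readers who did not attend, a Tekton Task is essentially a sequence of container steps. Here is a minimal sketch, assuming Tekton is already installed in the cluster (the task name and image are ours, not from the session):

# minimal Tekton Task sketch (v1beta1 API); name and image are illustrative only
cat <<'EOF' | kubectl apply -f -
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-task
spec:
  steps:
    - name: say-hello
      image: alpine
      script: |
        #!/bin/sh
        echo "Hello from a Tekton step"
EOF

A Pipeline then chains such Tasks together, which is what the source-to-production demo orchestrated.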


At the end of the first day, we were all there to support our CTO, Herve Schweitzer. He was presenting a new product from dbi services, YaK.

YaK is an IaC product. IaC stands for Infrastructure as Code. It helps you describe and deploy your infrastructure on specified targets. But in fact it’s way more: by using Ansible as a back-end, together with all of our knowledge and expertise, it helps you deploy virtual machines (Linux or Windows based) and databases (such as MariaDB, PostgreSQL, Oracle, …) pain-free on cloud providers (AWS, Azure or Oracle Cloud for instance) or on-premises.

The beauty of it is that changing where your DBs or VMs are deployed, i.e. moving from one provider to another, is just a matter of setting the correct target! The room was packed for that session, the product and its concept were very well received by the audience, and the talk was followed by a set of constructive discussions on the product itself and its application at customers. dbi services is acting in favor of the community: a part of the product will be released publicly in September 2022!

On the second day, we all attended, in the amphitheater of the campus, a session by Aurelie Vache from OVH Cloud.
With warm support from the whole audience, Aurelie didn’t talk about any technical topic here. No fancy DevOps tools, “just” a talk about who we are: humans. Instead of being technical, she took us on a discovery of a psychological pattern called the impostor syndrome.

People affected feel that they don’t deserve their success or their position in a company. They perceive the situation as if it were only due to chance. She gave some clues, like accepting that we have knowledge, sharing and contributing, getting feedback, and being positive. Thanks Aurelie for this very nice and refreshing presentation.

We can’t reduce this event to a set of sessions we attended or topics we had the pleasure to learn about. It was also the opportunity, after last year’s cancellation, to meet people and share a coffee or a talk with customers, potential new employees or other DevOps fans like us!



Helvetia used AWS SCT & DMS to migrate to AWS RDS for PostgreSQL

Thu, 2022-05-19 04:32

One of our long-term customers, Helvetia, successfully migrated on-prem Oracle databases to AWS, not only because of the licenses, but more importantly to deploy faster, innovate faster, and use the state-of-the-art open source database system.


When you plan such a project, you need to know which tools you want to use and what the target architecture should look like. There are several options to choose from, but finally Helvetia decided to use the AWS native services AWS DMS and AWS RDS for PostgreSQL.

AWS DMS gives you the option to initially populate the target instance from the source and, right afterwards, logically replicate ongoing changes from the source to the target. However, before you can do that, you need the schema to be ready in the target. To prepare this, there is AWS SCT. This is not an AWS service, but a free tool you can use to convert a schema from one database system to another. If you want to go from Oracle to PostgreSQL, this tool also performs an automatic conversion from Oracle’s PL/SQL to PostgreSQL’s PL/pgSQL. Although this tool does a great job, you have to be very careful with the result, and invest a good amount of time in testing. Autonomous transactions, for example, do not exist in PostgreSQL, and the AWS schema conversion tool implements a workaround using database links. This can be fine if you rarely use it (because it needs to establish a new connection), but if you rely heavily on this feature, you’d better re-implement it in a way that is native to PostgreSQL.

Another area you need to pay attention to is data types. PostgreSQL comes with many of them. A NUMBER in Oracle can mean many things in PostgreSQL: it could be an integer or a numeric. Depending on what you go for, this comes with space and performance impacts in PostgreSQL. PostgreSQL also comes with a boolean data type; in Oracle, this is usually implemented as a character or a numeric value. Do you want to keep it that way, or do you want to convert it to a boolean? Converting means that you also need to adjust the business logic in the database.
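As a hedged illustration (the table and column names are ours, and this assumes a reachable PostgreSQL instance for psql), the same Oracle NUMBER column can land in PostgreSQL either as an integer or as a numeric, and an Oracle 'Y'/'N' flag can be kept as a character or converted to a native boolean:

psql <<'EOF'
-- illustrative only: two possible mappings of an Oracle NUMBER(10,0) column
CREATE TABLE orders_int (order_id integer);        -- 4 bytes, fast integer arithmetic
CREATE TABLE orders_num (order_id numeric(10,0));  -- variable length, exact but heavier
-- an Oracle-style flag kept as a character versus converted to a native boolean
CREATE TABLE flags_chr  (active char(1));          -- 'Y'/'N' kept as-is from Oracle
CREATE TABLE flags_bool (active boolean);          -- requires adjusting the business logic
EOF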

Another issue took quite some time to solve. The very simplified test case attached to the initial email showed massive performance drops in PostgreSQL compared to Oracle. The reason is that Oracle’s PL/SQL is a compiled language while PostgreSQL’s PL/pgSQL is interpreted. If you have a case that more or less matches what is described in the thread linked above, you need to re-write it. The same applies when you have commits or rollbacks in PL/SQL functions: PostgreSQL does not allow you to commit or roll back in a function. You need to use procedures for that.
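To make that last point concrete, here is a minimal sketch (object names are ours, assuming PostgreSQL 11 or later and a reachable instance): transaction control is rejected inside a function but allowed inside a procedure invoked with CALL:

psql <<'EOF'
CREATE TABLE t (id int);
-- valid: a procedure may COMMIT when invoked with CALL outside an explicit transaction block
CREATE PROCEDURE insert_and_commit() LANGUAGE plpgsql AS $$
BEGIN
  INSERT INTO t VALUES (1);
  COMMIT;
END $$;
CALL insert_and_commit();
-- the equivalent COMMIT inside a function fails at runtime with
-- "invalid transaction termination"
EOF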

These are just a few hints of what might come along the way when migrating to AWS RDS for PostgreSQL. Once you have solved all this, the migration can be really smooth and will most probably be a success. Here are some posts that describe how to set this up using an Oracle sample schema as the source:

If you follow that, you should have enough knowledge to get started with your journey to AWS RDS.


Installing MySQL InnoDB Cluster in OKE using a MySQL Operator

Tue, 2022-05-10 01:20

During the previous months, I’ve had some time to satisfy my curiosity about databases in containers, and I started testing MySQL in Kubernetes a little bit.
This is how it all began…

In January I had the chance to be trained on Kubernetes attending the Docker and Kubernetes essentials Workshop of dbi services. So I decided to prepare a session on this topic at our internal dbi xChange event. And as if by magic, at the same time, a customer asked for our support to migrate a MySQL database to their Kubernetes cluster.

In general, I would like to raise two points before going into the technical details:
1. Is it a good idea to move databases into containers? Here I would use a typical IT answer: “it depends”. I suggest you think about your needs and constraints: whether you have small images to deploy, storage and persistence, performance, …
2. There are various solutions for installing, orchestrating and administering MySQL in K8s: MySQL single instance vs MySQL InnoDB Cluster, using the MySQL Operator for Kubernetes or Helm Charts, on-premises but also through Oracle Container Engine for Kubernetes on OCI, … I recommend you think about (again) your needs and skills, whether you are already working with cloud technologies, and whether you have already set up DevOps processes and which ones, …

Here I will show you how to install a MySQL InnoDB Cluster in OKE using a MySQL Operator.

The first thing is to have an account on Oracle OCI and to have deployed an Oracle Container Engine for Kubernetes cluster in your compartment. You can do it in an easy way using the Quick Create option under “Developer Services > Containers & Artifacts > Kubernetes Clusters (OKE)”:

In this way all the resources you need (VCN, Internet and NAT gateways, a K8s cluster with worker nodes and a node pool) are there in one click:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl cluster-info
Kubernetes control plane is running at https://xxx.xx.xxx.xxx:6443
CoreDNS is running at https://xxx.xx.xxx.xxx:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

elisa@cloudshell:~ (eu-zurich-1)$ kubectl get nodes -o wide
NAME         STATUS   ROLES   AGE    VERSION   INTERNAL-IP   EXTERNAL-IP       OS-IMAGE                  KERNEL-VERSION                      CONTAINER-RUNTIME
10.0.10.36   Ready    node    6m7s   v1.22.5   10.0.10.36    yyy.yyy.yyy.yyy   Oracle Linux Server 7.9   5.4.17-2136.304.4.1.el7uek.x86_64   cri-o://1.22.3-1.ci.el7
10.0.10.37   Ready    node    6m1s   v1.22.5   10.0.10.37    kkk.kkk.kkk.kk    Oracle Linux Server 7.9   5.4.17-2136.304.4.1.el7uek.x86_64   cri-o://1.22.3-1.ci.el7
10.0.10.42   Ready    node    6m     v1.22.5   10.0.10.42    jjj.jj.jjj.jj     Oracle Linux Server 7.9   5.4.17-2136.304.4.1.el7uek.x86_64   cri-o://1.22.3-1.ci.el7

As a second step, you can install the MySQL Operator for Kubernetes using kubectl:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-crds.yaml
customresourcedefinition.apiextensions.k8s.io/innodbclusters.mysql.oracle.com created
customresourcedefinition.apiextensions.k8s.io/mysqlbackups.mysql.oracle.com created
customresourcedefinition.apiextensions.k8s.io/clusterkopfpeerings.zalando.org created
customresourcedefinition.apiextensions.k8s.io/kopfpeerings.zalando.org created
elisa@cloudshell:~ (eu-zurich-1)$ kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-operator.yaml
serviceaccount/mysql-sidecar-sa created
clusterrole.rbac.authorization.k8s.io/mysql-operator created
clusterrole.rbac.authorization.k8s.io/mysql-sidecar created
clusterrolebinding.rbac.authorization.k8s.io/mysql-operator-rolebinding created
clusterkopfpeering.zalando.org/mysql-operator created
namespace/mysql-operator created
serviceaccount/mysql-operator-sa created
deployment.apps/mysql-operator created

You can check the health of the MySQL Operator:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl get deployment -n mysql-operator mysql-operator
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
mysql-operator   1/1     1            1           24s
elisa@cloudshell:~ (eu-zurich-1)$ kubectl get pods --show-labels -n mysql-operator
NAME                              READY   STATUS    RESTARTS   AGE    LABELS
mysql-operator-869d4b4b8d-slr4t   1/1     Running   0          113s   name=mysql-operator,pod-template-hash=869d4b4b8d

To isolate resources, you can create a dedicated namespace for the MySQL InnoDB Cluster:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl create namespace mysql-cluster
namespace/mysql-cluster created

You should also create a Secret using kubectl to store the MySQL user credentials that will be created and then required by pods to access the MySQL server:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl create secret generic elisapwd --from-literal=rootUser=root --from-literal=rootHost=% --from-literal=rootPassword="pwd" -n mysql-cluster
secret/elisapwd created

You can check that the Secret was correctly created:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl get secrets -n mysql-cluster
NAME                  TYPE                                  DATA   AGE
default-token-t2c47   kubernetes.io/service-account-token   3      2m
elisapwd              Opaque                                3      34s
elisa@cloudshell:~ (eu-zurich-1)$ kubectl describe secret/elisapwd -n mysql-cluster
Name:         elisapwd
Namespace:    mysql-cluster
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
rootHost:      1 bytes
rootPassword:  7 bytes
rootUser:      4 bytes

Now you have to write a .yaml configuration file to define how the MySQL InnoDB Cluster should be created. Here is a simple example:

elisa@cloudshell:~ (eu-zurich-1)$ vi InnoDBCluster_config.yaml
apiVersion: mysql.oracle.com/v2alpha1
kind: InnoDBCluster
metadata:
  name: elisacluster
  namespace: mysql-cluster 
spec:
  secretName: elisapwd
  instances: 3
  router:
    instances: 1

At this point you can run a MySQL InnoDB Cluster by applying the configuration file that you just created:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl apply -f InnoDBCluster_config.yaml
innodbcluster.mysql.oracle.com/elisacluster created

You can finally check if the MySQL InnoDB Cluster has been successfully created:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl get innodbcluster --watch --namespace mysql-cluster
NAME           STATUS    ONLINE   INSTANCES   ROUTERS   AGE
elisacluster   PENDING   0        3           1         12s
elisacluster   PENDING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         104s
elisacluster   INITIALIZING   0        3           1         106s
elisacluster   ONLINE         1        3           1         107s
elisa@cloudshell:~ (eu-zurich-1)$ kubectl get all -n mysql-cluster
NAME                                       READY   STATUS    RESTARTS   AGE
pod/elisacluster-0                         2/2     Running   0          4h44m
pod/elisacluster-1                         2/2     Running   0          4h42m
pod/elisacluster-2                         2/2     Running   0          4h41m
pod/elisacluster-router-7686457f5f-hwfcv   1/1     Running   0          4h42m

NAME                             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                               AGE
service/elisacluster             ClusterIP   10.96.9.203           6446/TCP,6448/TCP,6447/TCP,6449/TCP   4h44m
service/elisacluster-instances   ClusterIP   None                  3306/TCP,33060/TCP,33061/TCP          4h44m

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/elisacluster-router   1/1     1            1           4h44m

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/elisacluster-router-7686457f5f   1         1         1       4h44m

NAME                            READY   AGE
statefulset.apps/elisacluster   3/3     4h44m

You can use port forwarding in the following way:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl port-forward service/elisacluster mysql --namespace=mysql-cluster
Forwarding from 127.0.0.1:6446 -> 6446

to access your MySQL InnoDB Cluster on a second terminal in order to check its health:

elisa@cloudshell:~ (eu-zurich-1)$ mysqlsh -h127.0.0.1 -P6446 -uroot -p
Please provide the password for 'root@127.0.0.1:6446': *******
Save password for 'root@127.0.0.1:6446'? [Y]es/[N]o/Ne[v]er (default No): N
MySQL Shell 8.0.28-commercial

Copyright (c) 2016, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@127.0.0.1:6446'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 36651
Server version: 8.0.28 MySQL Community Server - GPL
No default schema selected; type \use  to set one.
 MySQL  127.0.0.1:6446 ssl  JS > dba.getCluster().status();
{
    "clusterName": "elisacluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306": {
                "address": "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
                "memberRole": "PRIMARY", 
                "memberState": "(MISSING)", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2005: Could not open connection to 'elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306': Unknown MySQL server host 'elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local' (-2)", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }, 
            "elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local:3306": {
                "address": "elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
                "memberRole": "SECONDARY", 
                "memberState": "(MISSING)", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2005: Could not open connection to 'elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local:3306': Unknown MySQL server host 'elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local' (-2)", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }, 
            "elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local:3306": {
                "address": "elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
                "memberRole": "SECONDARY", 
                "memberState": "(MISSING)", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2005: Could not open connection to 'elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local:3306': Unknown MySQL server host 'elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local' (-2)", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306"
}

 MySQL  127.0.0.1:6446 ssl  JS > \sql
Switching to SQL mode... Commands end with ;
 MySQL  127.0.0.1:6446 ssl  SQL > select @@hostname;
+----------------+
| @@hostname     |
+----------------+
| elisacluster-0 |
+----------------+
1 row in set (0.0018 sec)
 MySQL  127.0.0.1:6446 ssl  SQL > SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-----------------------------------------------------------------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST                                                           | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-----------------------------------------------------------------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 717dbe17-ba71-11ec-8a91-3665daa9c822 | elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local |        3306 | ONLINE       | PRIMARY     | 8.0.28         | XCom                       |
| group_replication_applier | b02c3c9a-ba71-11ec-8b65-5a93db09dda5 | elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
| group_replication_applier | eb06aadd-ba71-11ec-8aac-aa31e5d7e08b | elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
+---------------------------+--------------------------------------+-----------------------------------------------------------------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.0036 sec)

Easy, right?
Yes, but databases in containers are still a tricky subject. As we said above, many topics need to be addressed: deployment type, performance, backups, storage and persistence, … So stay tuned, more blog posts about MySQL on K8s will come soon…

By Elisa Usai


How to upgrade Patroni

Mon, 2022-05-02 11:10

It’s been a while since I wrote the blog post about PostgreSQL 13 and Patroni 2.0. On that blog I received a comment asking if it’s possible to explain how to upgrade from Patroni 1.6 to 2. So let’s write a short blog about the upgrade procedure of Patroni.

For my initial installation I used python pip, not the RPM packages. The upgrade with pip is also really simple and straightforward. There is no need to stop Patroni. But as usual, be sure you have a backup of your database or a snapshot of your server, if applicable.

As a first step, let’s check the current version of Patroni and upgrade pip itself and setuptools, to be sure they are on the newest version as well.

postgres@patroni1:/u02/pgdata/ [PG1] patronictl version
patronictl version 1.6.4
postgres@patroni1:/u02/pgdata/ [PG1] python3 -m pip install --upgrade pip
Defaulting to user installation because normal site-packages is not writeable
Collecting pip
  Downloading pip-21.3.1-py3-none-any.whl (1.7 MB)
     |████████████████████████████████| 1.7 MB 2.0 MB/s
Installing collected packages: pip
Successfully installed pip-21.3.1
postgres@patroni1:/u02/pgdata/ [PG1] python3 -m pip install --upgrade --user setuptools
Requirement already satisfied: setuptools in /home/postgres/.local/lib/python3.6/site-packages (45.2.0)
Collecting setuptools
  Downloading setuptools-59.6.0-py3-none-any.whl (952 kB)
     |████████████████████████████████| 952 kB 1.6 MB/s
Installing collected packages: setuptools
  Attempting uninstall: setuptools
    Found existing installation: setuptools 45.2.0
    Uninstalling setuptools-45.2.0:
      Successfully uninstalled setuptools-45.2.0
Successfully installed setuptools-59.6.0
postgres@patroni3:/u01/app/postgres/product/ [PG1]

Afterwards we upgrade Patroni itself, which is also done using python pip with the --upgrade option.

postgres@patroni1:/u02/pgdata/ [PG1]  python3 -m pip install --upgrade  --user patroni[etcd]
Requirement already satisfied: patroni[etcd] in /home/postgres/.local/lib/python3.6/site-packages (1.6.4)
Collecting patroni[etcd]
  Downloading patroni-2.1.3-py3-none-any.whl (222 kB)
     |████████████████████████████████| 222 kB 1.6 MB/s
Requirement already satisfied: psutil>=2.0.0 in /home/postgres/.local/lib/python3.6/site-packages (from patroni[etcd]) (5.7.0)
Requirement already satisfied: six>=1.7 in /home/postgres/.local/lib/python3.6/site-packages (from patroni[etcd]) (1.14.0)
Requirement already satisfied: urllib3!=1.21,>=1.19.1 in /home/postgres/.local/lib/python3.6/site-packages (from patroni[etcd]) (1.25.8)
Collecting ydiff>=1.2.0
  Downloading ydiff-1.2.tar.gz (42 kB)
     |████████████████████████████████| 42 kB 2.1 MB/s
  Preparing metadata (setup.py) ... done
Requirement already satisfied: click>=4.1 in /home/postgres/.local/lib/python3.6/site-packages (from patroni[etcd]) (7.0)
Requirement already satisfied: PyYAML in /home/postgres/.local/lib/python3.6/site-packages (from patroni[etcd]) (5.3)
Requirement already satisfied: prettytable>=0.7 in /home/postgres/.local/lib/python3.6/site-packages (from patroni[etcd]) (0.7.2)
Requirement already satisfied: python-dateutil in /home/postgres/.local/lib/python3.6/site-packages (from patroni[etcd]) (2.8.1)
Requirement already satisfied: python-etcd=0.4.3 in /home/postgres/.local/lib/python3.6/site-packages (from patroni[etcd]) (0.4.5)
Requirement already satisfied: dnspython>=1.13.0 in /home/postgres/.local/lib/python3.6/site-packages (from python-etcd=0.4.3->patroni[etcd])                                                                                                                                                            (1.16.0)
Using legacy 'setup.py install' for ydiff, since package 'wheel' is not installed.
Installing collected packages: ydiff, patroni
    Running setup.py install for ydiff ... done
  Attempting uninstall: patroni
    Found existing installation: patroni 1.6.4
    Uninstalling patroni-1.6.4:
      Successfully uninstalled patroni-1.6.4
Successfully installed patroni-2.1.3 ydiff-1.2
postgres@patroni3:/u01/app/postgres/product/ [PG1] 

To be sure everything works as expected, check the version and the cluster itself as a last step:

postgres@patroni1:/u02/pgdata/ [PG1]  patronictl list
+ Cluster: PG1 (7093128570847303467) -+---------+----+-----------+
| Member   | Host           | Role    | State   | TL | Lag in MB |
+----------+----------------+---------+---------+----+-----------+
| patroni1 | 192.168.22.111 | Leader  | running |  4 |           |
| patroni2 | 192.168.22.112 | Replica | running |  4 |         0 |
| patroni3 | 192.168.22.113 | Replica | running |  4 |         0 |
+----------+----------------+---------+---------+----+-----------+

postgres@patroni1:/u02/pgdata/ [PG1] patroni --version
patroni 2.1.3

Once this is finished successfully on one node, you can go on with the other nodes in the cluster.
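A minimal sketch for the remaining nodes, assuming passwordless SSH as the postgres user (node names taken from the patronictl output above):

# repeat the pip upgrade on the other cluster members, one after the other
for node in patroni2 patroni3; do
  ssh "postgres@${node}" 'python3 -m pip install --upgrade --user "patroni[etcd]" && patronictl version'
done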


How to setup a Consul Cluster on RHEL 8, Rocky Linux 8, AlmaLinux 8 part 2

Mon, 2022-05-02 03:49

In the first part I described the setup of Consul as a replacement for etcd.
Here now is the setup of keepalived, haproxy and Patroni.

I already installed the needed packages in the first part, so let’s start with the configuration of keepalived.

First we need to open firewalld for the VRRP protocol:

$ [root@patroni-01 ~]# firewall-cmd --add-rich-rule='rule protocol value="vrrp" accept' --permanent
$ success
$ [root@patroni-01 ~]# firewall-cmd --reload
$ success
$ [root@patroni-01 ~]#

Next part will be the configuration of keepalived:

$ [root@patroni-01 /]# cat /etc/keepalived/keepalived.conf
$ vrrp_script haproxy {
$         script "killall -0 haproxy"
$         interval 2
$         weight 2
$ }
$ vrrp_instance VI_1 {
$         state MASTER
$         interface ens160
$         virtual_router_id 51
$         priority 255
$         advert_int 1
$         authentication {
$               auth_type PASS
$               auth_pass new_password
$         }
$         virtual_ipaddress {
$               192.168.198.200/24
$         }
$         track_script {
$         haproxy
$         }
$ }
$ [root@patroni-01 /]#

The priority defines the default role, in my case 255 for the master role.

The keepalived.conf for the backup role on patroni-02:

$ [root@patroni-02 /]# cat /etc/keepalived/keepalived.conf
$ vrrp_script haproxy {
$         script "killall -0 haproxy"
$         interval 2
$         weight 2
$ }
$ vrrp_instance VI_1 {
$         state BACKUP
$         interface ens160
$         virtual_router_id 51
$         priority 254
$         advert_int 1
$         authentication {
$               auth_type PASS
$               auth_pass new_password
$         }
$         virtual_ipaddress {
$               192.168.198.200/24
$         }
$         track_script {
$         haproxy
$         }
$ }
[root@patroni-02 /]#

The keepalived.conf for the backup role on patroni-03:

$ [root@patroni-03 /]# cat /etc/keepalived/keepalived.conf
$ vrrp_script haproxy {
$         script "killall -0 haproxy"
$         interval 2
$         weight 2
$ }
$ vrrp_instance VI_1 {
$         state BACKUP
$         interface ens160
$         virtual_router_id 51
$         priority 254
$         advert_int 1
$         authentication {
$               auth_type PASS
$               auth_pass new_password
$         }
$         virtual_ipaddress {
$               192.168.198.200/24
$         }
$         track_script {
$         haproxy
$         }
$ }
$ [root@patroni-03 /]#

Checking the status on all three nodes:
patroni-01 as MASTER:

$ [root@patroni-01 /]# journalctl -u keepalived
$ Mar 25 13:04:45 patroni-01.patroni.test Keepalived_vrrp[11468]: (VI_1) Entering MASTER STATE

patroni-02 as BACKUP:

$ journalctl -u keepalived
$ Mar 25 14:20:18 patroni-02.patroni.test Keepalived_vrrp[1484]: (VI_1) Entering BACKUP STATE

patroni-03 as BACKUP:

$ journalctl -u keepalived
$ Mar 25 14:21:56 patroni-03.patroni.test Keepalived_vrrp[1465]: (VI_1) Entering BACKUP STATE

Next step: haproxy.
First we need to adapt SELinux for haproxy or switch it off:

$ [root@patroni-01 /]# setsebool -P haproxy_connect_any=1

haproxy.cfg is the same on all three servers:

$ [root@patroni-01 /]# cat /etc/haproxy/haproxy.cfg
$ global
$     maxconn 100
$ 
$ defaults
$     log global
$     mode tcp
$     retries 2
$     timeout client 30m
$     timeout connect 4s
$     timeout server 30m
$     timeout check 5s
$ 
$ listen stats
$     mode http
$     bind *:7000
$     stats enable
$     stats uri /
$     # stats auth haproxy:haproxy
$     # stats refresh 10s
$ 
$ listen PG1
$     bind *:5000
$     option httpchk
$     http-check expect status 200
$     default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
$     server postgresql_192.168.198.132_5432 192.168.198.132:5432 maxconn 100 check port 8008
$     server postgresql_192.168.198.133_5432 192.168.198.133:5432 maxconn 100 check port 8008
$     server postgresql_192.168.198.134_5432 192.168.198.134:5432 maxconn 100 check port 8008
$ [root@patroni-01 /]#
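The check port 8008 in this configuration refers to the Patroni REST API: once Patroni is running (see below), the primary answers HTTP 200 while replicas answer a non-200 code (typically 503), which is how haproxy keeps only the leader in the PG1 backend. You can verify it manually, for example:

$ [root@patroni-01 /]# curl -s -o /dev/null -w "%{http_code}\n" http://192.168.198.132:8008/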

Starting and enabling haproxy:

$ [root@patroni-01 /]# systemctl start haproxy
$ [root@patroni-01 /]# systemctl enable haproxy

Now the interesting part: Patroni.
First, there is a missing dependency when installing Patroni from the RPM packages: python3-urllib3 is missing:

$ [root@patroni-01 pgdata]# dnf install python3-urllib3
$ Last metadata expiration check: 5:18:38 ago on Mon 11 Apr 2022 11:06:33 AM CEST.
$ Dependencies resolved.
$ ==========================================================================================================================================================================================================================================================================================
$  Package                                                                   Architecture                                                     Version                                                                Repository                                                        Size
$ ==========================================================================================================================================================================================================================================================================================
$ Installing:
$  python3-urllib3                                                           noarch                                                           1.24.2-5.el8                                                           baseos                                                           176 k
$ Installing dependencies:
$  python3-pysocks                                                           noarch                                                           1.6.8-3.el8                                                            baseos                                                            33 k
$ 
$ Transaction Summary
$ ==========================================================================================================================================================================================================================================================================================
$ Install  2 Packages
$ 
$ Total download size: 209 k
$ Installed size: 681 k
$ Is this ok [y/N]: y
$ Downloading Packages:
$ (1/2): python3-pysocks-1.6.8-3.el8.noarch.rpm                                                                                                                                                                                                             274 kB/s |  33 kB     00:00
$ (2/2): python3-urllib3-1.24.2-5.el8.noarch.rpm                                                                                                                                                                                                            1.0 MB/s | 176 kB     00:00
$ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
$ Total                                                                                                                                                                                                                                                     469 kB/s | 209 kB     00:00
$ Running transaction check
$ Transaction check succeeded.
$ Running transaction test
$ Transaction test succeeded.
$ Running transaction
$   Preparing        :                                                                                                                                                                                                                                                                  1/1
$   Installing       : python3-pysocks-1.6.8-3.el8.noarch                                                                                                                                                                                                                               1/2
$   Installing       : python3-urllib3-1.24.2-5.el8.noarch                                                                                                                                                                                                                              2/2
$   Running scriptlet: python3-urllib3-1.24.2-5.el8.noarch                                                                                                                                                                                                                              2/2
$   Verifying        : python3-pysocks-1.6.8-3.el8.noarch                                                                                                                                                                                                                               1/2
$   Verifying        : python3-urllib3-1.24.2-5.el8.noarch                                                                                                                                                                                                                              2/2
$ 
$ Installed:
$   python3-pysocks-1.6.8-3.el8.noarch                                                                                                          python3-urllib3-1.24.2-5.el8.noarch
$ 
$ Complete!

Patroni needs the information about which one of the consul nodes is the master at start.
This information comes from the parameter “bootstrap”: true, set only on the master node at start.

$ [root@patroni-01 consul.d]# cat consul.json-dist.hcl
$ {
$     "bootstrap": true,
$     "server": true,
$     "data_dir": "/pgdata/consul",
$     "log_level": "INFO",
$     "disable_update_check": true,
$     "disable_anonymous_signature": true,
$     "advertise_addr": "192.168.198.132",
$     "bind_addr": "192.168.198.132",
$     "bootstrap_expect": 3,
$     "client_addr": "0.0.0.0",
$     "domain": "patroni.test",
$     "enable_script_checks": true,
$     "dns_config": {
$         "enable_truncate": true,
$         "only_passing": true
$     },
$     "enable_syslog": true,
$     "encrypt": "ueX3vI8HI63FR/VE+Yv1T4+x7mrrNIU7F2bDNfPVR9g=",
$     "leave_on_terminate": true,
$     "log_level": "INFO",
$     "rejoin_after_leave": true,
$     "retry_join": [
$         "patroni-01",
$         "patroni-02",
$         "patroni-03"
$     ],
$     "server": true,
$     "start_join": [
$         "patroni-01",
$         "patroni-02",
$         "patroni-03"
$     ],
$     "ui_config.enabled": true
$ }
$ [root@patroni-01 consul.d]#

Now Patroni; this is similar to Patroni using etcd.
When using etcd there is an etcd section within the patroni.yml file; here it is replaced with a consul section:

$ [root@patroni-01 patroni]# cat patroni.yml
$ name: "patroni-01.patroni.test"
$ scope: PG1
$ namespace: /patroni.test/
$ consul:
$   url: http://127.0.0.1:8500
$   register_service: true
$ postgresql:
$   connect_address: "patroni-01.patroni.test:5432"
$   bin_dir: /usr/pgsql-14/bin
$   data_dir: /pgdata/14/data
$   authentication:
$     replication:
$       username: replicator
$       password: replicator
$     superuser:
$       username: postgres
$       password: postgres
$   listen: 192.168.198.132:5432
$ restapi:
$   connect_address: "patroni-01.patroni.test:8008"
$   listen: "patroni-01.patroni.test:8008"
$ bootstrap:
$   dcs:
$     postgresql:
$       use_pg_rewind: true
$       use_slots: true
$       parameters:
$         wal_level: 'hot_standby'
$         hot_standby: "on"
$         wal_keep_segments: 8
$         max_replication_slots: 10
$         wal_log_hints: "on"
$         listen_addresses: '*'
$         port: 5432
$         logging_collector: 'on'
$         log_truncate_on_rotation: 'on'
$         log_filename: 'postgresql-%a.log'
$         log_rotation_age: '1440'
$         log_line_prefix: '%m - %l - %p - %h - %u@%d - %x'
$         log_directory: 'pg_log'
$         log_min_messages: 'WARNING'
$         log_autovacuum_min_duration: '60s'
$         log_min_error_statement: 'NOTICE'
$         log_min_duration_statement: '30s'
$         log_checkpoints: 'on'
$         log_statement: 'ddl'
$         log_lock_waits: 'on'
$         log_temp_files: '0'
$         log_timezone: 'Europe/Zurich'
$         log_connections: 'on'
$         log_disconnections: 'on'
$         log_duration: 'on'
$         client_min_messages: 'WARNING'
$         wal_level: 'replica'
$         hot_standby_feedback: 'on'
$         max_wal_senders: '10'
$         shared_buffers: '1024MB'
$         work_mem: '8MB'
$         effective_cache_size: '3072MB'
$         maintenance_work_mem: '64MB'
$         wal_compression: 'off'
$         max_wal_senders: '20'
$         shared_preload_libraries: 'pg_stat_statements'
$         autovacuum_max_workers: '6'
$         autovacuum_vacuum_scale_factor: '0.1'
$         autovacuum_vacuum_threshold: '50'
$         archive_mode: 'on'
$         archive_command: '/bin/true'
$         wal_log_hints: 'on'
$         ssl: "on"
$         ssl_ciphers: "TLSv1.2:!aNULL:!eNULL"
$         ssl_cert_file: /pgdata/certs/server.crt
$         ssl_key_file: /pgdata/certs/server.key
$   users:
$     app_user:
$       password: "aZ5QrESZ"
$   pg_hba:
$     - local all all  scram-sha-256
$     - hostssl all all 127.0.0.1/32 scram-sha-256
$     - hostssl all all ::1/128 scram-sha-256
$     - hostssl all all 0.0.0.0/0 scram-sha-256
$     - hostssl replication replicator patroni-01.patroni.test scram-sha-256
$     - hostssl replication replicator patroni-02.patroni.test scram-sha-256
$     - hostssl replication replicator patroni-03.patroni.test scram-sha-256
$   initdb:
$     - encoding: UTF8
$ [root@patroni-01 patroni]#
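For comparison, on an etcd-based setup the consul: block shown above would simply be replaced by an etcd: block; a minimal sketch (the host and port values are assumptions for a local etcd member) could look like this:

$ etcd:
$   host: 127.0.0.1:2379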

The only differences within patroni.yml on the three nodes in this example setup are:
name: “patroni-01.patroni.test” needs to be adapted to “patroni-02.patroni.test” or “patroni-03.patroni.test”.
Under postgresql:
connect_address: “patroni-01.patroni.test:5432” needs to be adapted to “patroni-02.patroni.test:5432” or “patroni-03.patroni.test:5432”.
listen: 192.168.198.132:5432 needs to be adapted to the corresponding IPs 192.168.198.133:5432 or 192.168.198.134:5432.
Under restapi:
connect_address: “patroni-01.patroni.test:8008” to “patroni-02.patroni.test:8008” or “patroni-03.patroni.test:8008”.
listen: “patroni-01.patroni.test:8008” to “patroni-02.patroni.test:8008” or “patroni-03.patroni.test:8008”.

In my example patroni-01 is the consul leader, so here we need to start Patroni on it first so that it becomes the leader within the Patroni cluster.
This means the consul leader will be the Patroni leader in any case, also in case of failover.
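Assuming Patroni is managed by a patroni systemd unit (as provided by the RPM packages), the start order is therefore the consul leader node first, then the two others:

$ [root@patroni-01 ~]# systemctl enable --now patroni
$ [root@patroni-02 ~]# systemctl enable --now patroni
$ [root@patroni-03 ~]# systemctl enable --now patroni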

$ postgres@patroni-01: patronictl list
$ + Cluster: PG1 (7358967191570897068) -----------------+---------+----+-----------+
$ | Member                  | Host            | Role    | State   | TL | Lag in MB |
$ +-------------------------+-----------------+---------+---------+----+-----------+
$ | patroni-01.patroni.test | 192.168.198.132 | Leader  | running |  2 |           |
$ | patroni-02.patroni.test | 192.168.198.133 | Replica | running |  2 |         0 |
$ | patroni-03.patroni.test | 192.168.198.134 | Replica | running |  2 |         0 |
$ +-------------------------+-----------------+---------+---------+----+-----------+


Configure Data Guard between 2 DB Systems with Oracle 21c

Fri, 2022-04-29 10:23

In this previous blog, I configured 2 DB systems with Oracle 21c on an ODA 19.14. Now let’s continue and show how we can configure Data Guard between these 2 servers.
One option is to manually create the Data Guard configuration as we do in a non-ODA environment. The second option, which I am showing here, is to use the odacli command to configure Data Guard.

All steps are described in the Oracle documentation.

The first step is to create the backup configuration. The /u01/backup location can be an NFS share or not. In my case it’s just a local directory that I created on each server.

[root@mdidbi42 u01]# odacli create-backupconfig -n nfsbkup -w 10 -d NFS -c /u01/backup/
{
  "jobId" : "b051745b-702a-48ff-92c2-3c80a16e73f2",
  "status" : "Created",
  "message" : "backup config creation",
  "reports" : [ ],
  "createTimestamp" : "April 29, 2022 09:28:37 AM CEST",
  "resourceList" : [ {
    "resourceId" : "8da9a983-6afa-45b9-bd66-a66d1420e048",
    "resourceType" : null,
    "resourceNewType" : "BackupConfig",
    "jobId" : "b051745b-702a-48ff-92c2-3c80a16e73f2",
    "updatedTime" : null
  } ],
  "description" : "create backup config:nfsbkup",
  "updatedTime" : "April 29, 2022 09:28:37 AM CEST"
}
[root@mdidbi42 u01]#

Check the job status

[root@mdidbi42 u01]# odacli describe-job -i "b051745b-702a-48ff-92c2-3c80a16e73f2"

Job details
----------------------------------------------------------------
                     ID:  b051745b-702a-48ff-92c2-3c80a16e73f2
            Description:  create backup config:nfsbkup
                 Status:  Success
                Created:  April 29, 2022 9:28:37 AM CEST
                Message:  backup config creation

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Backup config metadata persist           April 29, 2022 9:28:37 AM CEST      April 29, 2022 9:28:37 AM CEST      Success

[root@mdidbi42 u01]#

Afterwards, we have to associate the backup configuration with the database:

[root@mdidbi42 orabackups]# odacli modify-database -in MDIDB1 -bin nfsbkup
{
  "jobId" : "1220f80f-098e-4a3f-98db-e83f00a260a4",
  "status" : "Created",
  "message" : "Modify database",
  "reports" : [ ],
  "createTimestamp" : "April 29, 2022 09:30:03 AM CEST",
  "resourceList" : [ {
    "resourceId" : "f964b35e-7209-4d2d-9fca-5387c61c2148",
    "resourceType" : "DB",
    "resourceNewType" : null,
    "jobId" : "1220f80f-098e-4a3f-98db-e83f00a260a4",
    "updatedTime" : "April 29, 2022 09:30:03 AM CEST"
  } ],
  "description" : "Modify database : MDIDB1",
  "updatedTime" : "April 29, 2022 09:30:03 AM CEST"
}
[root@mdidbi42 orabackups]#
[root@mdidbi42 orabackups]# odacli describe-job -i "1220f80f-098e-4a3f-98db-e83f00a260a4"

Job details
----------------------------------------------------------------
                     ID:  1220f80f-098e-4a3f-98db-e83f00a260a4
            Description:  Modify database : MDIDB1
                 Status:  Success
                Created:  April 29, 2022 9:30:03 AM CEST
                Message:  Modify database

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate OMF parameter values            April 29, 2022 9:30:09 AM CEST      April 29, 2022 9:30:11 AM CEST      Success
update db with backupconfig attributes   April 29, 2022 9:30:28 AM CEST      April 29, 2022 9:30:31 AM CEST      Success
Enable Database Autobackup               April 29, 2022 9:30:31 AM CEST      April 29, 2022 9:30:31 AM CEST      Success
Enable Archivelog Autobackup             April 29, 2022 9:30:31 AM CEST      April 29, 2022 9:30:31 AM CEST      Success
Configure Control file Auto Backup Format April 29, 2022 9:30:31 AM CEST      April 29, 2022 9:30:35 AM CEST      Success
Backup Current Control file              April 29, 2022 9:30:35 AM CEST      April 29, 2022 9:30:49 AM CEST      Success
Update metadata for database:MDIDB1      April 29, 2022 9:30:50 AM CEST      April 29, 2022 9:30:50 AM CEST      Success

[root@mdidbi42 orabackups]#

Now it’s time to take a backup of your primary database

[root@mdidbi42 orabackups]# odacli create-backup --backupType Regular-L0 -in MDIDB1
{
  "jobId" : "be3e4d32-12f7-4b6e-8fc2-0d1d3f7d982e",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "April 29, 2022 09:31:51 AM CEST",
  "resourceList" : [ ],
  "description" : "Create Regular-L0 Backup[TAG:auto][Db:MDIDB1][NFS:/u01/backup/orabackups/dbs0fab9fed3/database/3671723533/MDIDB1_42]",
  "updatedTime" : "April 29, 2022 09:31:51 AM CEST"
}
[root@mdidbi42 orabackups]#
[root@mdidbi42 orabackups]# odacli describe-job -i "be3e4d32-12f7-4b6e-8fc2-0d1d3f7d982e"

Job details
----------------------------------------------------------------
                     ID:  be3e4d32-12f7-4b6e-8fc2-0d1d3f7d982e
            Description:  Create Regular-L0 Backup[TAG:auto][Db:MDIDB1][NFS:/u01/backup/orabackups/dbs0fab9fed3/database/3671723533/MDIDB1_42]
                 Status:  Success
                Created:  April 29, 2022 9:31:51 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate backup config                   April 29, 2022 9:31:55 AM CEST      April 29, 2022 9:31:55 AM CEST      Success
NFS location existence validation        April 29, 2022 9:31:55 AM CEST      April 29, 2022 9:31:55 AM CEST      Success
Backup Validations                       April 29, 2022 9:31:55 AM CEST      April 29, 2022 9:32:02 AM CEST      Success
Recovery Window validation               April 29, 2022 9:32:02 AM CEST      April 29, 2022 9:32:04 AM CEST      Success
Archivelog deletion policy configuration April 29, 2022 9:32:04 AM CEST      April 29, 2022 9:32:07 AM CEST      Success
Database backup                          April 29, 2022 9:32:07 AM CEST      April 29, 2022 9:33:32 AM CEST      Success

[root@mdidbi42 orabackups]#

When the backup has finished, save the backup report to a JSON file. First, look for the backup ID

[root@mdidbi42 ~]# odacli list-backupreports | grep Regular-L0
ce3b0eb9-3976-45f2-829e-c39cc5d21e43     e5fbd9e1-ccb6-4d3e-88cb-ece45007a549     3671906653    MDIDB1     MDIDB1_42      Regular-L0   auto         April 29, 2022 3:16:36 PM CEST      April 29, 2022 3:16:36 PM CEST      Configured
[root@mdidbi42 ~]

And save it

[root@mdidbi42 ~]# odacli describe-backupreport -i ce3b0eb9-3976-45f2-829e-c39cc5d21e43 > backup_report_mdidb1.json 
[root@mdidbi42 ~]#

Below is the content of the JSON file:

[root@mdidbi42 ~]# cat backup_report_mdidb1.json
{
  "id" : "ce3b0eb9-3976-45f2-829e-c39cc5d21e43",
  "dbResId" : "e5fbd9e1-ccb6-4d3e-88cb-ece45007a549",
  "tag" : "auto",
  "dbId" : "3671906653",
  "dbName" : "MDIDB1",
  "dbUniqueName" : "MDIDB1_42",
  "backupType" : "Regular-L0",
  "keepDays" : null,
  "backupLocation" : "/u01/backup/orabackups/dbs0fab9fed3/database/3671906653/MDIDB1_42/db",
  "cfBackupHandle" : "/u01/backup/orabackups/dbs0fab9fed3/database/3671906653/MDIDB1_42/db/c-3671906653-20220429-05",
  "spfBackupHandle" : "/u01/backup/orabackups/dbs0fab9fed3/database/3671906653/MDIDB1_42/db/c-3671906653-20220429-05",
  "pitrTimeStamp" : "April 29, 2022 15:17:33 PM CEST",
  "pitrSCN" : "1842740",
  "resetLogsTimeStamp" : "April 29, 2022 14:05:17 PM CEST",
  "resetLogsSCN" : "1712336",
  "oraHomeVersion" : "21.5.0.0.220118",
  "sqlPatches" : null,
  "backupLogLoc" : "/u01/backup/orabackups/dbs0fab9fed3/database/3671906653/MDIDB1_42/rmanlog/2022-04-29/rman_backup_auto_2022-04-29_15-16-40.0579.log",
  "tdeWalletLoc" : null,
  "dbConfigLoc" : "/u01/backup/orabackups/dbs0fab9fed3/database/3671906653/MDIDB1_42/dbconfig/2022-04-29/DBCONFIG_auto_2022-04-29_15-17-50.0845.tar.gz",
  "name" : "Backup_Report_MDIDB1",
  "createTime" : "April 29, 2022 15:16:36 PM CEST",
  "state" : {
    "status" : "CONFIGURED"
  },
  "updatedTime" : "April 29, 2022 15:16:36 PM CEST",
  "backupReportLogDetail" : "/u01/backup/orabackups/dbs0fab9fed3/database/3671906653/MDIDB1_42/rmandetaillogreport/2022-04-29/rman_list_backup_detail_auto_2022-04-29_15-17-44.0692.log",
  "dbInfo" : {
    "dbClass" : "OLTP",
    "dbType" : "SI",
    "dbShape" : "odb2",
    "dbEdition" : "EE",
    "dbStorage" : "ASM",
    "dbRedundancy" : null,
    "pdbName" : "PDB1",
    "isCdb" : true
  },
  "dbDataSize" : "4202M",
  "dbRedoSize" : "12306M",
  "rmanBackupPieces" : "/u01/backup/orabackups/dbs0fab9fed3/database/3671906653/MDIDB1_42/backuppieces/2022-04-29/backupPieces_auto_e5fbd9e1-ccb6-4d3e-88cb-ece45007a549_20220429151749.json",
  "compressionAlgo" : "BASIC",
  "cpuPool" : null,
  "numberOfCores" : null
}
[root@mdidbi42 ~]#

Now copy the backups to the same location on the standby DB system.
Also copy the backup report to the standby system, so the backup can be restored as a standby database. Don’t forget to give the oracle user access to the backup files (a sketch follows the copy commands below).

[root@mdidbi42 backup]# pwd
/u01/backup
[root@mdidbi42 backup]# scp -r orabackups/ mdidbi43:$PWD
FIPS mode initialized
root@mdidbi43's password:
scp -r backup_report_mdidb1.json  mdidbi43:$PWD
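
For the file permissions, a minimal sketch (assuming the oracle user belongs to the oinstall group; adjust to your environment):

# on the standby host mdidbi43
chown -R oracle:oinstall /u01/backup/orabackups
chmod -R g+rX /u01/backup/orabackups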

Now it’s time to restore the backup on the standby server. Personally, I deleted the existing database that comes with the DB system creation on the standby server (a sketch of this cleanup follows).
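
A minimal sketch of this cleanup with odacli (the database name is a placeholder for the one created during DB system provisioning):

# on the standby DB system mdidbi43
odacli list-databases
odacli delete-database -in <initial_db_name>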

[root@mdidbi43 ~]#  odacli irestore-database -r backup_report_mdidb1.json -u MDIDB2_43 -ro STANDBY -dh b3da570b-acd1-41fa-8c8c-1c16fd66abb3
Enter SYS user password:
Retype SYS user password:
{
  "jobId" : "aa280ddf-f215-452a-833c-4912d68aa2e5",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "April 29, 2022 15:26:56",
  "resourceList" : [ ],
  "description" : "Database service recovery with db name: MDIDB1",
  "updatedTime" : "April 29, 2022 15:26:56"
}
[root@mdidbi43 ~]# odacli describe-job -i "aa280ddf-f215-452a-833c-4912d68aa2e5"

Job details
----------------------------------------------------------------
                     ID:  aa280ddf-f215-452a-833c-4912d68aa2e5
            Description:  Database service recovery with db name: MDIDB1
                 Status:  Success
                Created:  April 29, 2022 3:26:56 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Check if cluster ware is running         April 29, 2022 3:26:58 PM CEST      April 29, 2022 3:26:58 PM CEST      Success
Creating DbStorage for DbRestore         April 29, 2022 3:26:58 PM CEST      April 29, 2022 3:27:00 PM CEST      Success
Validating DiskSpace for DATA            April 29, 2022 3:26:58 PM CEST      April 29, 2022 3:26:59 PM CEST      Success
Generating SSH key                       April 29, 2022 3:26:59 PM CEST      April 29, 2022 3:27:00 PM CEST      Success
SSH key                                  April 29, 2022 3:27:00 PM CEST      April 29, 2022 3:27:00 PM CEST      Success
SSH key scan                             April 29, 2022 3:27:00 PM CEST      April 29, 2022 3:27:00 PM CEST      Success
Audit directory creation                 April 29, 2022 3:27:00 PM CEST      April 29, 2022 3:27:00 PM CEST      Success
Create pfile for Auxiliary Instance      April 29, 2022 3:27:01 PM CEST      April 29, 2022 3:27:01 PM CEST      Success
Deleting FRA                             April 29, 2022 3:27:01 PM CEST      April 29, 2022 3:27:02 PM CEST      Success
Rman duplicate                           April 29, 2022 3:27:02 PM CEST      April 29, 2022 3:29:10 PM CEST      Success
Creating pfile from spfile               April 29, 2022 3:29:11 PM CEST      April 29, 2022 3:29:11 PM CEST      Success
Set PFile Ownership                      April 29, 2022 3:29:11 PM CEST      April 29, 2022 3:29:11 PM CEST      Success
Customize Db Parameters                  April 29, 2022 3:29:11 PM CEST      April 29, 2022 3:29:12 PM CEST      Success
Shutdown And Start database              April 29, 2022 3:29:12 PM CEST      April 29, 2022 3:29:33 PM CEST      Success
Create spfile for restore db             April 29, 2022 3:29:33 PM CEST      April 29, 2022 3:29:33 PM CEST      Success
Set PFile Ownership                      April 29, 2022 3:29:33 PM CEST      April 29, 2022 3:29:33 PM CEST      Success
Shutdown And Mount database              April 29, 2022 3:29:33 PM CEST      April 29, 2022 3:29:51 PM CEST      Success
Register Database taskflow               April 29, 2022 3:29:55 PM CEST      April 29, 2022 3:31:33 PM CEST      Success
Create SPFile in shared loc              April 29, 2022 3:29:55 PM CEST      April 29, 2022 3:30:03 PM CEST      Success
Delete Local Spfile                      April 29, 2022 3:30:03 PM CEST      April 29, 2022 3:30:03 PM CEST      Success
Register DB with clusterware             April 29, 2022 3:30:03 PM CEST      April 29, 2022 3:30:31 PM CEST      Success
Set SysPassword and Create PwFile        April 29, 2022 3:30:31 PM CEST      April 29, 2022 3:30:33 PM CEST      Success
Enable block change tracking             April 29, 2022 3:30:33 PM CEST      April 29, 2022 3:30:37 PM CEST      Success
Creating pfile                           April 29, 2022 3:30:37 PM CEST      April 29, 2022 3:30:38 PM CEST      Success
Updating db env                          April 29, 2022 3:30:38 PM CEST      April 29, 2022 3:30:39 PM CEST      Success
Enable DbSizing Template                 April 29, 2022 3:30:39 PM CEST      April 29, 2022 3:31:06 PM CEST      Success
Create tns entry                         April 29, 2022 3:31:06 PM CEST      April 29, 2022 3:31:07 PM CEST      Success
Running datapatch                        April 29, 2022 3:31:07 PM CEST      April 29, 2022 3:31:08 PM CEST      Success
Set CPU pool                             April 29, 2022 3:31:08 PM CEST      April 29, 2022 3:31:08 PM CEST      Success
Reset Associated Networks                April 29, 2022 3:31:34 PM CEST      April 29, 2022 3:31:38 PM CEST      Success
Set log_archive_dest for Database        April 29, 2022 3:31:38 PM CEST      April 29, 2022 3:31:43 PM CEST      Success
Copy Pwfile to Shared Storage            April 29, 2022 3:31:43 PM CEST      April 29, 2022 3:31:49 PM CEST      Success

[root@mdidbi43 ~]#

After the restore, we can connect to the standby database and see that it is in MOUNT state:

[oracle@mdidbi43 ~]$ . oraenv
ORACLE_SID = [MDIDB1] ?
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@mdidbi43 ~]$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Fri Apr 29 10:01:23 2022
Version 21.5.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.5.0.0.0

SQL> select db_unique_name ,open_mode from v$database;

DB_UNIQUE_NAME                 OPEN_MODE
------------------------------ --------------------
MDIDB2_43                      MOUNTED

SQL>

Now we can configure Data Guard by running the configure-dataguard command from the primary.
You will have to answer a few questions:

[root@mdidbi42 ~]# odacli configure-dataguard
Standby site address: mdidbi43
BUI username for Standby site. If Multi-user Access is disabled on Standby site, enter 'oda-admin'; otherwise, enter the name of the use who has irestored the Standby database (default: oda-admin):
BUI password for Standby site:
root@mdidbi43's password:
Database name for Data Guard configuration: MDIDB1
Primary database SYS password:
*******************************************************************************************
Data Guard default settings
Primary site network for Data Guard configuration: Public-network
Standby site network for Data Guard configuration: Public-network
Primary database listener port: 1521
Standby database listener port: 1521
Transport type: ASYNC
Protection mode: MAX_PERFORMANCE
Data Guard configuration name: MDIDB1_42_MDIDB2_43
Active Data Guard: disabled
Do you want to edit this Data Guard configuration? (Y/N, default:N): Y
*******************************************************************************************
Primary site network for Data Guard configuration [Public-network] (default: Public-network):
Standby site network for Data Guard configuration [Public-network] (default: Public-network):
Primary database listener port (default: 1521):
Standby database listener port (default: 1521):
Transport type [ASYNC, FASTSYNC, SYNC] (default: ASYNC):
Protection mode [MAX_PROTECTION, MAX_PERFORMANCE, MAX_AVAILABILITY] (default: MAX_PERFORMANCE):
Data Guard configuration name (default: MDIDB1_42_MDIDB2_43):
Primary database is missing certain archivelogs for Data Guard configuration. We need to restore those from backup.
Enter RMAN backup encryption password:
Do you want to provide another RMAN backup encryption password? [y/n] (default 'n'): n
Enable Active Data Guard? (Y/N, default:N):
Standby database's SYS password will be set to Primary database's after Data Guard configuration. Ignore warning and proceed with Data Gard configuration? (Y/N, default:N): Y
*******************************************************************************************
Configure Data Guard MDIDB1_42_MDIDB2_43 started
*******************************************************************************************
Step 1: Validate Data Guard configuration request (Primary site)
Description: Validate DG Config Creation for db MDIDB1
Job ID: eb1c048d-4a26-44f6-972d-d0923b43a30c
Started April 29, 2022 15:36:13 PM CEST
Validate create Data Guard configuration request
Finished April 29, 2022 15:36:17 PM CEST
*******************************************************************************************
Step 2: Validate Data Guard configuration request (Standby site)
Description: Validate DG Config Creation for db MDIDB1
Job ID: 81d48002-8478-4c4b-a49c-89f7babe8764
Started April 29, 2022 15:36:18 PM CEST
Validate create Data Guard configuration request
Finished April 29, 2022 15:36:23 PM CEST
*******************************************************************************************
Step 3: Restore missing archivelog (Primary site)
Description: Create Archivelog Restore for db:MDIDB1
Job ID: ca35287b-e306-4372-a1ad-84ed61185b55
Started April 29, 2022 15:36:24 PM CEST
Restore Archivelog validation
Restore Archivelog
Finished April 29, 2022 15:36:43 PM CEST
*******************************************************************************************
Step 4: Download password file from Primary database (Primary site)
Description: Download orapwd file from Primary database
Started April 29, 2022 15:36:43 PM CEST
Prepare orapwd file for Primary database MDIDB1
Finished April 29, 2022 15:36:45 PM CEST
*******************************************************************************************
Step 5: Upload password file to Standby database (Standby site)
Description: Upload orapwd file to Standby database
Started April 29, 2022 15:36:45 PM CEST
Write orapwd file to Standby database MDIDB1
Finished April 29, 2022 15:36:58 PM CEST
*******************************************************************************************
Step 6: Configure Primary database (Primary site)
Description: DG Config service for db MDIDB1 - ConfigurePrimary
Job ID: 55c4622f-3190-4ac6-9344-9e551075bc21
Started April 29, 2022 15:36:59 PM CEST
Configure host DNS on primary env
Configure Data Guard Tns on primary env
Enable Data Guard related Db parameters for primary env
Enable force logging and archivelog mode in primary env
Enable FlashBack
Configure network parameters for local listener on primary env
Restart listener on primary env
Create services for primary db
Finished April 29, 2022 15:37:26 PM CEST
*******************************************************************************************
Step 7: Configure Standby database (Standby site)
Description: DG Config service for db MDIDB1 - ConfigureStandby
Job ID: 642866cc-c2e3-49fd-99af-7017bcf23ae2
Started April 29, 2022 15:37:27 PM CEST
Configure Data Guard Tns on standby env
Configure host DNS on standby env
Clear Data Guard related Db parameters for standby env
Enable Data Guard related Db parameters for standby env
Enable force logging and archivelog mode in standby env
Populate standby database metadata
Configure network parameters for local listener on standby env
Reset Db sizing and hidden parameters for ODA best practice
Restart Listener on standby env
Create services for standby db
Finished April 29, 2022 15:38:39 PM CEST
*******************************************************************************************
Step 8: Configure and enable Data Guard (Primary site)
Description: DG Config service for db MDIDB1 - ConfigureDg
Job ID: e6882484-c946-4b4e-b51e-ddca8886449d
Started April 29, 2022 15:38:40 PM CEST
Config and enable Data Guard
Post check Data Guard configuration
Finished April 29, 2022 15:40:22 PM CEST
*******************************************************************************************
Step 9: Enable Flashback (Standby site)
Description: DG Config service for db MDIDB1 - EnableFlashback
Job ID: e9e0d091-7bc2-4185-901d-7914168f32bb
Started April 29, 2022 15:40:23 PM CEST
Enable FlashBack
Finished April 29, 2022 15:40:36 PM CEST
*******************************************************************************************
Step 10: Re-enable Data Guard (Primary site)
Description: DG Config service for db MDIDB1 - ReenableDg
Job ID: d83d2779-f7fd-4d90-b5eb-58fda4792f15
Started April 29, 2022 15:40:37 PM CEST
Re-enable Data Guard if inconsistent properties found
Post check Data Guard configuration
Finished April 29, 2022 15:40:39 PM CEST
*******************************************************************************************
Step 11: Create Data Guard status (Primary site)
Description: DG Status operation for db MDIDB1 - NewDgconfig
Job ID: cb590d03-3795-4dfe-b374-69212f8aecd8
Started April 29, 2022 15:40:40 PM CEST
Create Data Guard status
Finished April 29, 2022 15:40:42 PM CEST
*******************************************************************************************
Step 12: Create Data Guard status (Standby site)
Description: DG Status operation for db MDIDB1 - NewDgconfig
Job ID: ba68bda7-f705-4a6e-9e96-d37d042603f0
Started April 29, 2022 15:40:43 PM CEST
Create Data Guard status
Finished April 29, 2022 15:40:44 PM CEST
*******************************************************************************************
Configure Data Guard MDIDB1_42_MDIDB2_43 completed
*******************************************************************************************
[root@mdidbi42 ~]#

The configuration finished successfully; we can now list the Data Guard status:

[root@mdidbi42 ~]# odacli list-dataguardstatus
Updated about 4 minute(s) ago
ID                                       Name                             Database Name        Role       Protection Mode    Apply Lag       Transport Lag   Apply Rate      Status
---------------------------------------- -------------------------------- -------------------- ---------- ------------------ --------------- --------------- --------------- ----------
b297b32d-987e-407d-9422-6d3748481907     MDIDB1_42_MDIDB2_43              MDIDB1               PRIMARY    MAX_PERFORMANCE    18 seconds      0 seconds       19.00 KByte/s   CONFIGURED
[root@mdidbi42 ~]#
[root@mdidbi42 ~]# odacli describe-dataguardstatus -i b297b32d-987e-407d-9422-6d3748481907
Updated about 4 minute(s) ago
Dataguard Status details
----------------------------------------------------------------
                     ID: b297b32d-987e-407d-9422-6d3748481907
                   Name: MDIDB1_42_MDIDB2_43
          Database Name: e5fbd9e1-ccb6-4d3e-88cb-ece45007a549
                   Role: PRIMARY
        Protection Mode: MAX_PERFORMANCE
              Apply Lag: 0 seconds
          Transport Lag: 0 seconds
             Apply Rate: 1.00 KByte/s
                 Status: CONFIGURED
           Updated Time: April 29, 2022 3:45:05 PM CEST

[root@mdidbi42 ~]#

We can also connect via dgmgrl and verify
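
A minimal sketch of the connection, as the oracle user on one of the DB systems (assuming the TNS entries created by odacli configure-dataguard; dgmgrl prompts for the SYS password):

dgmgrl sys@MDIDB1_42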


DGMGRL> show configuration

Configuration - MDIDB1_42_MDIDB2_43

  Protection Mode: MaxPerformance
  Members:
  MDIDB1_42 - Primary database
    MDIDB2_43 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 58 seconds ago)

DGMGRL>

To perform a switchover, run the following command on the primary server:

[root@mdidbi42 ~]#  odacli switchover-dataguard -i b297b32d-987e-407d-9422-6d3748481907 -u MDIDB2_43
Password for target database:
{
  "jobId" : "758d015e-c208-4313-b989-5ffb4fd9fa7c",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "April 29, 2022 15:59:47 PM CEST",
  "resourceList" : [ ],
  "description" : "Dataguard operation for MDIDB1_42_MDIDB2_43 - SwitchoverDg",
  "updatedTime" : "April 29, 2022 15:59:47 PM CEST"
}
[root@mdidbi42 ~]#
[root@mdidbi42 ~]# odacli describe-job -i "758d015e-c208-4313-b989-5ffb4fd9fa7c"

Job details
----------------------------------------------------------------
                     ID:  758d015e-c208-4313-b989-5ffb4fd9fa7c
            Description:  Dataguard operation for MDIDB1_42_MDIDB2_43 - SwitchoverDg
                 Status:  Success
                Created:  April 29, 2022 3:59:47 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Precheck switchover DataGuard            April 29, 2022 3:59:47 PM CEST      April 29, 2022 3:59:58 PM CEST      Success
Switchover DataGuard                     April 29, 2022 3:59:58 PM CEST      April 29, 2022 4:01:07 PM CEST      Success
Postcheck switchover DataGuard           April 29, 2022 4:01:07 PM CEST      April 29, 2022 4:01:08 PM CEST      Success
Check if DataGuard config is updated     April 29, 2022 4:01:09 PM CEST      April 29, 2022 4:01:19 PM CEST      Success

[root@mdidbi42 ~]#

We can validate the switchover while connected to the broker

DGMGRL> show configuration

Configuration - MDIDB1_42_MDIDB2_43

  Protection Mode: MaxPerformance
  Members:
  MDIDB2_43 - Primary database
    MDIDB1_42 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 19 seconds ago)

DGMGRL>

To switch back to MDIDB1_42, just run the switchover command on the current primary server:

[root@mdidbi43 ~]#  odacli switchover-dataguard -i b297b32d-987e-407d-9422-6d3748481907 -u MDIDB1_42
Password for target database:
{
  "jobId" : "17eef433-7e8e-4d4d-95e3-defe8724cb99",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "April 29, 2022 16:05:49 PM CEST",
  "resourceList" : [ ],
  "description" : "Dataguard operation for MDIDB1_42_MDIDB2_43 - SwitchoverDg",
  "updatedTime" : "April 29, 2022 16:05:49 PM CEST"
}
[root@mdidbi43 ~]#

After the switchback

DGMGRL> show configuration

Configuration - MDIDB1_42_MDIDB2_43

  Protection Mode: MaxPerformance
  Members:
  MDIDB1_42 - Primary database
    MDIDB2_43 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 70 seconds ago)

DGMGRL>

We can also perform a failover. The command must be run on the standby server; running it on the primary by mistake returns an error:

[root@mdidbi42 ~]# odacli failover-dataguard -i 96e5c5cf-f710-4a14-a506-9024e4952ba4 -u MDIDB2_43
Password for target database:
DCS-10001:Internal error encountered: Invalid role for FailoverDg:Primary. Execute this command on the Standby site mdidbi43.dbi-lab.ch.
[root@mdidbi42 ~]#

The failover therefore has to be run on the standby host mdidbi43, after which we can check the job status
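
A minimal sketch of the command, reusing the Data Guard configuration ID returned by odacli list-dataguardstatus:

# to be run on the standby host mdidbi43
odacli failover-dataguard -i <dataguard_config_id> -u MDIDB2_43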

[root@mdidbi43 ~]# odacli describe-job -i "7c0dd208-1fca-4c03-99e1-7d896f79c1b5"

Job details
----------------------------------------------------------------
                     ID:  7c0dd208-1fca-4c03-99e1-7d896f79c1b5
            Description:  Dataguard operation for MDIDB1_42_MDIDB2_43 - FailoverDg
                 Status:  Success
                Created:  April 29, 2022 4:11:09 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Precheck failover DataGuard              April 29, 2022 4:11:09 PM CEST      April 29, 2022 4:11:12 PM CEST      Success
Failover DataGuard                       April 29, 2022 4:11:12 PM CEST      April 29, 2022 4:11:33 PM CEST      Success
Postcheck DataGuard status               April 29, 2022 4:11:33 PM CEST      April 29, 2022 4:11:37 PM CEST      Success
Check if DataGuard config is updated     April 29, 2022 4:11:37 PM CEST      April 29, 2022 4:11:47 PM CEST      Success

[root@mdidbi43 ~]#

Check the Data Guard status
On mdidbi43

DGMGRL> show configuration

Configuration - MDIDB1_42_MDIDB2_43

  Protection Mode: MaxPerformance
  Members:
  MDIDB2_43 - Primary database
    MDIDB1_42 - Physical standby database (disabled)
      ORA-16661: the standby database needs to be reinstated

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 59 seconds ago)

DGMGRL>

On the former primary mdidbi42

DGMGRL> show configuration

Configuration - MDIDB1_42_MDIDB2_43

  Protection Mode: MaxPerformance
  Members:
  MDIDB1_42 - Primary database
    MDIDB2_43 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
DISABLED
DGM-17290: Role change detected. This database may no longer be the primary database.

DGMGRL>

Let’s reinstate the former primary database MDIDB1_42. Connected to mdidbi43, run the following command:

[root@mdidbi43 ~]# odacli reinstate-dataguard   -i b297b32d-987e-407d-9422-6d3748481907 -u MDIDB1_42
Password for target database:
{
  "jobId" : "cfe917c4-85f6-4469-b8e0-e6c6e602c54e",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "April 29, 2022 16:16:56 PM CEST",
  "resourceList" : [ ],
  "description" : "Dataguard operation for MDIDB1_42_MDIDB2_43 - ReinstateDg",
  "updatedTime" : "April 29, 2022 16:16:56 PM CEST"
}
[root@mdidbi43 ~]#
[root@mdidbi43 ~]# odacli describe-job -i "cfe917c4-85f6-4469-b8e0-e6c6e602c54e"

Job details
----------------------------------------------------------------
                     ID:  cfe917c4-85f6-4469-b8e0-e6c6e602c54e
            Description:  Dataguard operation for MDIDB1_42_MDIDB2_43 - ReinstateDg
                 Status:  Success
                Created:  April 29, 2022 4:16:56 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Precheck reinstate DataGuard             April 29, 2022 4:16:56 PM CEST      April 29, 2022 4:16:59 PM CEST      Success
Reinstate DataGuard                      April 29, 2022 4:16:59 PM CEST      April 29, 2022 4:18:13 PM CEST      Success
Postcheck DataGuard status               April 29, 2022 4:18:13 PM CEST      April 29, 2022 4:18:17 PM CEST      Success
Check if DataGuard config is updated     April 29, 2022 4:18:17 PM CEST      April 29, 2022 4:18:27 PM CEST      Success

[root@mdidbi43 ~]#

And the new status:

DGMGRL> show configuration

Configuration - MDIDB1_42_MDIDB2_43

  Protection Mode: MaxPerformance
  Members:
  MDIDB2_43 - Primary database
    MDIDB1_42 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 19 seconds ago)

DGMGRL>
Conclusion

We have configured Oracle Data Guard between two DB systems with odacli and validated the switchover, failover and reinstate operations. I hope this helps.

The article Configure Data Guard between 2 DB Systems with Oracle 21c first appeared on Blog dbi services.

Configure DB System Oracle 21c in an ODA 19.14

Fri, 2022-04-29 10:15

One question we may ask is whether it is possible to deploy an Oracle 21c database on an ODA, since the latest ODA release is currently 19.x.
The answer is yes: since ODA release 19.12, we can install a DB system with Oracle 21c while the bare metal system runs Oracle Grid Infrastructure 19.x.

To see the versions supported for DB systems, just run the odacli describe-dbsystem-image command:

[root@dbi-oda-x8 ~]# odacli  describe-dbsystem-image
DB System Image details
--------------------------------------------------------------------------------
Component Name        Supported Versions    Available Versions
--------------------  --------------------  --------------------

DBVM                  19.14.0.0.0           19.14.0.0.0

GI                    19.14.0.0.220118      19.14.0.0.220118
                      19.13.0.0.211019      19.13.0.0.211019
                      19.12.0.0.210720      not-available
                      19.11.0.0.210420      not-available
                      21.5.0.0.220118       not-available
                      21.4.0.0.211019       not-available
                      21.3.0.0.210720       not-available

DB                    19.14.0.0.220118      19.14.0.0.220118
                      19.13.0.0.211019      19.13.0.0.211019
                      19.12.0.0.210720      not-available
                      19.11.0.0.210420      not-available
                      21.5.0.0.220118       not-available
                      21.4.0.0.211019       not-available
                      21.3.0.0.210720       not-available

[root@dbi-oda-x8 ~]#

In this blog post I am showing some tests I did to install a DB system with Oracle 21c on a bare metal server running ODA 19.14.

[root@dbi-oda-x8 ~]# odacli describe-component
System Version
---------------
19.14.0.0.0

System node Name
---------------
dbi-oda-x8

Local System Version
---------------
19.14.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK
                                          19.14.0.0.0           up-to-date

GI
                                          19.14.0.0.220118      up-to-date

DB {
[ OraDB19000_home2,OraDB19000_home4 ]
                                          19.14.0.0.220118      up-to-date
[ OraDB12201_home1 ]
                                          12.2.0.1.220118       up-to-date
[ OraDB19000_home3 ]
                                          19.13.0.0.211019      19.14.0.0.220118
}

DCSCONTROLLER
                                          19.14.0.0.0           up-to-date

DCSCLI
                                          19.14.0.0.0           up-to-date

DCSAGENT
                                          19.14.0.0.0           up-to-date

DCSADMIN
                                          19.14.0.0.0           up-to-date

OS
                                          7.9                   up-to-date

ILOM
                                          5.0.2.24.r141466      up-to-date

BIOS
                                          52050300              up-to-date

SHARED CONTROLLER FIRMWARE
                                          VDV1RL04              up-to-date

LOCAL DISK FIRMWARE
                                          1132                  up-to-date

SHARED DISK FIRMWARE
                                          1132                  up-to-date

HMP
                                          2.4.8.0.600           up-to-date


[root@dbi-oda-x8 ~]#

To be able to create a DB system with Oracle 21c (21.5), we first have to update the repository with the required patches.
So let’s download the following patches:
– p33152235_1914000_Linux-x86-64.zip : Oracle Database Appliance 21.5.0.0.220118 GI Clone for DB Systems
– p33152237_1914000_Linux-x86-64.zip : Oracle Database Appliance 21.5.0.0.220118 Database Clone File for DB Systems

After the download, we unpack the zip files:

[root@dbi-oda-x8 mdi]# unzip p33152235_1914000_Linux-x86-64.zip
Archive:  p33152235_1914000_Linux-x86-64.zip
 extracting: odacli-dcs-19.14.0.0.0-220127-GI-21.5.0.0.zip
  inflating: README.txt
[root@dbi-oda-x8 mdi]# unzip p33152237_1914000_Linux-x86-64.zip
Archive:  p33152237_1914000_Linux-x86-64.zip
 extracting: odacli-dcs-19.14.0.0.0-220127-DB-21.5.0.0.zip
replace README.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
  inflating: README.txt
[root@dbi-oda-x8 mdi]#

Then let’s update the repository with these files.

For the GI stack

[root@dbi-oda-x8 ~]# /opt/oracle/dcs/bin/odacli update-repository -f /u03/app/oracle/mdi/odacli-dcs-19.14.0.0.0-220127-GI-21.5.0.0.zip
{
  "jobId" : "c93133b6-be21-43cd-9a3a-9e58af6798fc",
  "status" : "Created",
  "message" : "/u03/app/oracle/mdi/odacli-dcs-19.14.0.0.0-220127-GI-21.5.0.0.zip",
  "reports" : [ ],
  "createTimestamp" : "April 27, 2022 08:46:00 AM CEST",
  "resourceList" : [ ],
  "description" : "Repository Update",
  "updatedTime" : "April 27, 2022 08:46:00 AM CEST"
}
[root@dbi-oda-x8 ~]#

Make sure the job finished successfully:

[root@dbi-oda-x8 ~]# odacli describe-job -i "c93133b6-be21-43cd-9a3a-9e58af6798fc"

Job details
----------------------------------------------------------------
                     ID:  c93133b6-be21-43cd-9a3a-9e58af6798fc
            Description:  Repository Update
                 Status:  Success
                Created:  April 27, 2022 8:46:00 AM CEST
                Message:  /u03/app/oracle/mdi/odacli-dcs-19.14.0.0.0-220127-GI-21.5.0.0.zip

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Unzip bundle                             April 27, 2022 8:46:00 AM CEST      April 27, 2022 8:46:26 AM CEST      Success

[root@dbi-oda-x8 ~]#

For the DB

[root@dbi-oda-x8 ~]# /opt/oracle/dcs/bin/odacli update-repository -f /u03/app/oracle/mdi/odacli-dcs-19.14.0.0.0-220127-DB-21.5.0.0.zip
{
  "jobId" : "413f472f-7351-46f1-92e4-44d464b268bc",
  "status" : "Created",
  "message" : "/u03/app/oracle/mdi/odacli-dcs-19.14.0.0.0-220127-DB-21.5.0.0.zip",
  "reports" : [ ],
  "createTimestamp" : "April 27, 2022 08:47:59 AM CEST",
  "resourceList" : [ ],
  "description" : "Repository Update",
  "updatedTime" : "April 27, 2022 08:47:59 AM CEST"
}
[root@dbi-oda-x8 ~]#
[root@dbi-oda-x8 ~]# odacli describe-job -i "413f472f-7351-46f1-92e4-44d464b268bc"

Job details
----------------------------------------------------------------
                     ID:  413f472f-7351-46f1-92e4-44d464b268bc
            Description:  Repository Update
                 Status:  Success
                Created:  April 27, 2022 8:47:59 AM CEST
                Message:  /u03/app/oracle/mdi/odacli-dcs-19.14.0.0.0-220127-DB-21.5.0.0.zip

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Unzip bundle                             April 27, 2022 8:48:00 AM CEST      April 27, 2022 8:48:28 AM CEST      Success

[root@dbi-oda-x8 ~]#

If we then check the available images, we should see that the 21c images are now available:

[root@dbi-oda-x8 mdi]# odacli describe-dbsystem-image
DB System Image details
--------------------------------------------------------------------------------
Component Name        Supported Versions    Available Versions
--------------------  --------------------  --------------------

DBVM                  19.14.0.0.0           19.14.0.0.0

GI                    19.14.0.0.220118      19.14.0.0.220118
                      19.13.0.0.211019      19.13.0.0.211019
                      19.12.0.0.210720      not-available
                      19.11.0.0.210420      not-available
                      21.5.0.0.220118       21.5.0.0.220118
                      21.4.0.0.211019       not-available
                      21.3.0.0.210720       not-available

DB                    19.14.0.0.220118      19.14.0.0.220118
                      19.13.0.0.211019      19.13.0.0.211019
                      19.12.0.0.210720      not-available
                      19.11.0.0.210420      not-available
                      21.5.0.0.220118       21.5.0.0.220118
                      21.4.0.0.211019       not-available
                      21.3.0.0.210720       not-available

[root@dbi-oda-x8 mdi]#

The DB system is created from a JSON configuration file. You can find some JSON templates in the Oracle documentation, and of course you can also create the DB system directly through the GUI. Here is the JSON file I used:

[root@dbi-oda-x8 mdi]# cat mdi-dbi-ip42.json
{
    "system": {
        "name": "mdi-dbi-ip42",
        "systemPassword": "******",
        "timeZone": "Europe/Zurich",
        "diskGroup": "DATA",
        "cpuPoolName": "cpuninja",
        "enableRoleSeparation": true
    },
    "database": {
        "name": "MDIDB1",
        "uniqueName": "MDIDB1_42",
        "domainName": "dbi-lab.ch",
        "adminPassword": null,
        "version": "21.5.0.0.220118",
        "edition": "EE",
        "type": "SI",
        "dbClass": "OLTP",
        "shape": "odb2",
        "role": "PRIMARY",
        "targetNodeNumber": null,
        "enableDbConsole": false,
        "enableFlashStorage": false,
        "redundancy": null,
        "characterSet": {
            "characterSet": "AL32UTF8",
            "nlsCharacterset": "AL16UTF16",
            "dbTerritory": "AMERICA",
            "dbLanguage": "AMERICAN"
        },
        "rmanBackupPassword": null,
        "enableTDE": false,
        "isCdb": true,
        "pdbName": "PDB1",
        "pdbAdminUser": "pdb1admin",
        "tdePassword": null
    },
    "network": {
        "domainName": "dbi-lab.ch",
        "ntpServers": [
            "216.239.35.0"
        ],
        "dnsServers": [
            "8.8.8.8",
            "8.8.4.4"
        ],
        "nodes": [
            {
                "name": "mdidbi42",
                "ipAddress": "10.36.0.242",
                "netmask": "255.255.255.0",
                "gateway": "10.36.0.1",
                "number": 0
            }
        ],
        "publicVNetwork": "pubnet"
    },
    "grid": {
        "language": "en",
        "enableAFD": false
    }
}
[root@dbi-oda-x8 mdi]#

To create the DB system, the following command is used:

[root@dbi-oda-x8 mdi]#  odacli create-dbsystem -p /u03/app/oracle/mdi/mdi-dbi-ip42.json
Enter password for system "mdi-dbi-ip42":
Retype password for system "mdi-dbi-ip42":
Enter administrator password for DB "MDIDB1":
Retype administrator password for DB "MDIDB1":

Job details
----------------------------------------------------------------
                     ID:  65360c5e-cdc0-4fe4-98f4-d1422070f980
            Description:  DB System mdi-dbi-ip42 creation
                 Status:  Created
                Created:  April 27, 2022 10:52:36 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@dbi-oda-x8 mdi]#

A few minutes later, we can validate that the creation was successful:

[root@dbi-oda-x8 ~]# odacli describe-job -i 65360c5e-cdc0-4fe4-98f4-d1422070f980

Job details
----------------------------------------------------------------
                     ID:  65360c5e-cdc0-4fe4-98f4-d1422070f980
            Description:  DB System mdi-dbi-ip42 creation
                 Status:  Success
                Created:  April 27, 2022 10:52:36 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Create DB System metadata                April 27, 2022 10:52:36 AM CEST     April 27, 2022 10:52:36 AM CEST     Success
Persist new DB System                    April 27, 2022 10:52:36 AM CEST     April 27, 2022 10:52:36 AM CEST     Success
Validate DB System prerequisites         April 27, 2022 10:52:36 AM CEST     April 27, 2022 10:52:40 AM CEST     Success
Setup DB System environment              April 27, 2022 10:52:40 AM CEST     April 27, 2022 10:52:41 AM CEST     Success
Create DB System ASM volume              April 27, 2022 10:52:41 AM CEST     April 27, 2022 10:52:48 AM CEST     Success
Create DB System ACFS filesystem         April 27, 2022 10:52:48 AM CEST     April 27, 2022 10:52:57 AM CEST     Success
Create DB System VM ACFS snapshots       April 27, 2022 10:52:57 AM CEST     April 27, 2022 10:53:26 AM CEST     Success
Create temporary SSH key pair            April 27, 2022 10:53:26 AM CEST     April 27, 2022 10:53:27 AM CEST     Success
Create DB System cloud-init config       April 27, 2022 10:53:27 AM CEST     April 27, 2022 10:53:27 AM CEST     Success
Provision DB System VM(s)                April 27, 2022 10:53:27 AM CEST     April 27, 2022 10:53:28 AM CEST     Success
Attach disks to DB System                April 27, 2022 10:53:28 AM CEST     April 27, 2022 10:53:29 AM CEST     Success
Add DB System to Clusterware             April 27, 2022 10:53:29 AM CEST     April 27, 2022 10:53:29 AM CEST     Success
Start DB System                          April 27, 2022 10:53:29 AM CEST     April 27, 2022 10:53:30 AM CEST     Success
Wait DB System VM first boot             April 27, 2022 10:53:30 AM CEST     April 27, 2022 10:54:42 AM CEST     Success
Setup Mutual TLS (mTLS)                  April 27, 2022 10:54:42 AM CEST     April 27, 2022 10:55:01 AM CEST     Success
Export clones repository                 April 27, 2022 10:55:01 AM CEST     April 27, 2022 10:55:01 AM CEST     Success
Setup ASM client cluster config          April 27, 2022 10:55:01 AM CEST     April 27, 2022 10:55:04 AM CEST     Success
Install DB System                        April 27, 2022 10:55:04 AM CEST     April 27, 2022 11:22:46 AM CEST     Success
Cleanup temporary SSH key pair           April 27, 2022 11:22:46 AM CEST     April 27, 2022 11:22:47 AM CEST     Success
Set DB System as configured              April 27, 2022 11:22:47 AM CEST     April 27, 2022 11:22:47 AM CEST     Success

[root@dbi-oda-x8 ~]#

We can describe the newly created DB system

[root@dbi-oda-x8 mdi]# odacli describe-dbsystem -n mdi-dbi-ip42
DB System details
--------------------------------------------------------------------------------
                       ID:  731cdd55-7094-44fa-87c4-6941a422931b
                     Name:  mdi-dbi-ip42
                    Image:  19.14.0.0.0
                    Shape:  odb2
             Cluster name:  dbs0fab9fed3
             Grid version:  21.5.0.0.220118
                   Memory:  16.00 GB
             NUMA enabled:  YES
                   Status:  CONFIGURED
                  Created:  2022-04-27 10:52:36 CEST
                  Updated:  2022-04-27 11:22:47 CEST

 CPU Pool
--------------------------
                     Name:  cpuninja
          Number of cores:  4

                     Host:  dbi-oda-x8
        Effective CPU set:  21-24,31-34
              Online CPUs:  21, 22, 23, 24, 31, 32, 33, 34
             Offline CPUs:  NONE

 VM Storage
--------------------------
               Disk group:  DATA
              Volume name:  S0FAB9FED3
            Volume device:  /dev/asm/s0fab9fed3-390
                     Size:  200.00 GB
              Mount Point:  /u05/app/sharedrepo/mdi-dbi-ip42

 VMs
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  x0fab9fed3
             VM Host Name:  mdidbi42.dbi-lab.ch
            VM image path:  /u05/app/sharedrepo/mdi-dbi-ip42/.ACFS/snaps/vm_x0fab9fed3/x0fab9fed3
             Target State:  ONLINE
            Current State:  ONLINE

 VNetworks
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  x0fab9fed3
                   Public:  10.36.0.242     / 255.255.255.0   / ens3 / BRIDGE(pubnet)
                      ASM:  192.168.17.10   / 255.255.255.128 / ens4 / BRIDGE(privasm) VLAN(priv0.100)

 Extra VNetworks
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  x0fab9fed3
                   pubnet:  10.36.0.242     / 255.255.255.0   / PUBLIC

 Databases
--------------------------
                     Name:  MDIDB1
              Resource ID:  f964b35e-7209-4d2d-9fca-5387c61c2148
              Unique name:  MDIDB1_42
              Database ID:  3671723533
              Domain name:  dbi-lab.ch
               DB Home ID:  51c91da9-af48-42b4-86b5-01e1cafe7711
                    Shape:  odb2
                  Version:  21.5.0.0.220118
                  Edition:  EE
                     Type:  SI
                     Role:  PRIMARY
                    Class:  OLTP
                  Storage:  ASM
               Redundancy:
         Target node name:
            Character set:  AL32UTF8
        NLS character set:
                 Language:  AMERICAN
                Territory:  AMERICA
          Console enabled:  false
             SEHA enabled:  false
      Associated networks:  Public-network
         Backup config ID:
       Level 0 Backup Day:  sunday
       Autobackup enabled:  true
              TDE enabled:  false
                 CDB type:  true
                 PDB name:  PDB1
           PDB admin user:  pdb1admin

[root@dbi-oda-x8 mdi]#

And that’s all. While connected to the DB system server, we can manage the database like any other one.

[root@mdidbi42 trace]# ps -ef | grep pmon
root      6218 23103  0 11:36 pts/0    00:00:00 grep --color=auto pmon
oracle   90406     1  0 11:21 ?        00:00:00 ora_pmon_MDIDB1
[root@mdidbi42 trace]#

We can connect to the database and validate that the version is 21c

[oracle@mdidbi42 ~]$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Wed Apr 27 11:36:30 2022
Version 21.5.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.5.0.0.0

SQL>

The database component version

SQL>  select comp_name,version,status from dba_registry;

COMP_NAME                           VERSION    STATU
----------------------------------- ---------- -----
Oracle Database Catalog Views       21.0.0.0.0 VALID
Oracle Database Packages and Types  21.0.0.0.0 VALID
Oracle Real Application Clusters    21.0.0.0.0 VALID
JServer JAVA Virtual Machine        21.0.0.0.0 VALID
Oracle XDK                          21.0.0.0.0 VALID
Oracle Database Java Packages       21.0.0.0.0 VALID
OLAP Analytic Workspace             21.0.0.0.0 VALID
Oracle XML Database                 21.0.0.0.0 VALID
Oracle Workspace Manager            21.0.0.0.0 VALID
Oracle Text                         21.0.0.0.0 VALID
Oracle Multimedia                   21.0.0.0.0 VALID
Oracle OLAP API                     21.0.0.0.0 VALID
Spatial                             21.0.0.0.0 VALID
Oracle Locator                      21.0.0.0.0 VALID
Oracle Label Security               21.0.0.0.0 VALID
Oracle Database Vault               21.0.0.0.0 VALID

16 rows selected.

The RU version

SQL> select description from dba_registry_sqlpatch;

DESCRIPTION
--------------------------------------------------------------------------------
Database Release Update : 21.5.0.0.220118 (33516412)

SQL>
Conclusion

We have seen that even though my ODA is running a 19.x release, we can deploy an Oracle 21c database in a DB system.

The article Configure DB System Oracle 21c in an ODA 19.14 first appeared on Blog dbi services.

How to allow users having monitoring role to view and manage messages in WebLogic JMS queues.

Thu, 2022-04-28 07:03

Recently I got a request to provide JMS queue monitoring access to a group of users, with the privileges to view and manage the messages in the queues. This can be done through the WebLogic console, but for this customer everything is done via Ansible scripts, so I had to find a scripting solution.

I followed the Oracle documentation to create the policy.

Extract of the WLST code:

try:
   print "applying JMS access policy for domain", domainName
   xacmlFile = open('XACMLAuthorizer.xml','r')
   xacmlDoc = xacmlFile.read()
   print(xacmlDoc)
   print "cd('/SecurityConfiguration/" + domainName + "/DefaultRealm/myrealm/Authorizers/XACMLAuthorizer')"
   cd('/SecurityConfiguration/' + domainName + '/DefaultRealm/myrealm/Authorizers/XACMLAuthorizer')
   print "add policy"
   cmo.addPolicy(xacmlDoc)
   print "done applying policy"
   return True
except Exception, inst:
   print inst
   print sys.exc_info()[0]
   dumpStack()
   sys.stderr.write("unable to apply JMS access policy for domain " + domainName)
   return False

The XACMLAuthorizer.xml policy file:

<?xml version="1.0" encoding="UTF-8"?>
<Policy PolicyId="urn:bea:xacml:2.0:entitlement:resource:type@E@Fjmx@G@M@Ooperation@Einvoke@M@Oapplication@E@M@OmbeanType@Eweblogic.management.runtime.JMSDestinationRuntimeMBean" RuleCombiningAlgId="urn:oasis:names:tc:xacml:1.0:rule-combining-algorithm:first-applicable">
  <Description>Rol(Admin,MonitorJMSQueues)</Description>
  <Target>
    <Resources>
      <Resource>
        <ResourceMatch MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
          <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">type=&lt;jmx&gt;, operation=invoke, application=, mbeanType=weblogic.management.runtime.JMSDestinationRuntimeMBean</AttributeValue>
          <ResourceAttributeDesignator AttributeId="urn:oasis:names:tc:xacml:2.0:resource:resource-ancestor-or-self" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
        </ResourceMatch>
      </Resource>
    </Resources>
  </Target>
  <Rule RuleId="primary-rule" Effect="Permit">
    <Condition>
      <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-at-least-one-member-of">
        <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-bag">
          <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Admin</AttributeValue>
          <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">MonitorJMSQueues</AttributeValue>
        </Apply>
        <SubjectAttributeDesignator AttributeId="urn:oasis:names:tc:xacml:2.0:subject:role" DataType="http://www.w3.org/2001/XMLSchema#string"/>
      </Apply>
    </Condition>
  </Rule>
  <Rule RuleId="deny-rule" Effect="Deny"/>
</Policy>

The script works fine but I get the following error when trying to see the messages stored in the JMS Queue.


<Administration Console encountered the following error:
weblogic.management.NoAccessRuntimeException: Access not allowed for subject:
principals=[<LIST-OF-PRINCIPALS],
on Resource weblogic.management.runtime.JMSDestinationRuntimeMBeanOperation: invoke , Target: getMessages
at weblogic.rmi.internal.ServerRequest.sendReceive(ServerRequest.java:295)
at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:299)
at javax.management.remote.rmi.RMIConnectionImpl_12214_WLStub.invoke(UnknownSource)
at javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.invoke(RMIConnector.java:1020)
at sun.reflect.GeneratedMethodAccessor154.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at weblogic.management.remote.wlx.ClientProvider$WLXRMIConnectorWrapper$1$1.call(ClientProvider.java:715)
at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:287)
at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:272)
at weblogic.management.remote.wlx.ClientProvider$WLXRMIConnectorWrapper$1.invoke(ClientProvider.java:709)
at com.sun.proxy.$Proxy115.invoke(Unknown Source)
at sun.reflect.GeneratedMethodAccessor154.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at weblogic.management.remote.wlx.ClientProvider$WLXRMIConnectorWrapper$1$1.call(ClientProvider.java:715)
at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:287)
...

The policy was not applied until I went to the WebLogic console and saved the policy following the steps below:

  • Under Security Realms -> “Roles and Policies” -> Realm Policies, in the Policy table, select “JMX Policy Editor”.
  • Select “Global Scope” and click Next.
  • From MBean Types, open “weblogic.management.runtime”.
  • Select “JMSDestinationRuntimeMBean” and click Next.
  • In Attributes and Operations, select the View/edit link for “Operations: Permission to Invoke”.
  • There I see the policy created with the WLST script. If I click on the Save button, then the policy works.

    The documentation doesn’t mention it, but a setPolicyExpression call is needed after addPolicy in order to apply the loaded policy.

    try:
       print "applying JMS access policy for domain", domainName
       xacmlFile = open('XACMLAuthorizer.xml','r')
       xacmlDoc = xacmlFile.read()
       print(xacmlDoc)
       print "cd('/SecurityConfiguration/" + domainName + "/DefaultRealm/myrealm/Authorizers/XACMLAuthorizer')"
       cd('/SecurityConfiguration/' + domainName + '/DefaultRealm/myrealm/Authorizers/XACMLAuthorizer')
       print "add policy"
       cmo.addPolicy(xacmlDoc)
       print "Applying policy done"
       cmo.setPolicyExpression('type=<jmx>','{Rol(Admin) | Rol(MonitorJMSQueues)}')
       print "Set policy done"
       return True
    except Exception, inst:
       print inst
       print sys.exc_info()[0]
       dumpStack()
       sys.stderr.write("unable to apply JMS access policy for domain " + domainName)
       return False
    

    After this, the users or groups of users having the MonitorJMSQueues and Monitor roles assigned are able to monitor the JMS queues and manage the messages in them.

    The article How to allow users having monitoring role to view and manage messages in WebLogic JMS queues first appeared on Blog dbi services.

    How to create an Oracle GoldenGate EXTRACT in Multitenant

    Sat, 2022-04-23 07:42

    Creating an EXTRACT process in a container database has some specificities:

    From the CDB$ROOT, create a common user and configure the database to be ready to extract data via GoldenGate:

    SQL> create user c##gg_admin identified by "*****" default tablespace goldengate temporary tablespace temp;
    
    User created.
    
    SQL>
    
    SQL> alter user c##gg_admin quota unlimited on goldengate;
    
    User altered.
    
    SQL>
    
    
    SQL> grant create session, connect,resource,alter system, select any dictionary, flashback any table to c##gg_admin container=all;
    
    Grant succeeded.
    
    SQL>
    
    SQL> exec dbms_goldengate_auth.grant_admin_privilege(grantee => 'c##gg_admin',container=>'all');
    
    PL/SQL procedure successfully completed.
    
    SQL> alter user c##gg_admin set container_data=all container=current;
    
    User altered.
    
    SQL>
    
    SQL> grant alter any table to c##gg_admin container=ALL;
    
    Grant succeeded.
    
    SQL>
    
    alter system set enable_goldengate_replication=true scope=both;
    
    
    SQL> alter database force logging;
    
    
    SQL> alter pluggable database add supplemental log data;
    
    Pluggable database altered.
    
    SQL>
    
    

    Add the schematrandata for the schema concerned:

    GGSCI (vmld-01726 as c##gg_admin@MYCDB) 3> add schematrandata schema_source
    
    2022-04-13 18:06:55  INFO    OGG-01788  SCHEMATRANDATA has been added on schema "schema_source".
    
    2022-04-13 18:06:55  INFO    OGG-01976  SCHEMATRANDATA for scheduling columns has been added on schema "schema_source".
    
    2022-04-13 18:06:55  INFO    OGG-10154  Schema level PREPARECSN set to mode NOWAIT on schema "schema_source".
    
    2022-04-13 18:07:00  INFO    OGG-10471  ***** Oracle Goldengate support information on table schema_source.ZZ_DUMMY *****
    Oracle Goldengate support native capture on table schema_source.ZZ_DUMMY.
    Oracle Goldengate marked following column as key columns on table schema_source.ZZ_DUMMY: SCN, D, COMMENT_TXT
    No unique key is defined for table schema_source.ZZ_DUMMY.
    
    2022-04-13 18:07:00  INFO    OGG-10471  ***** Oracle Goldengate support information on table schema_source.ZZ_DUMMY2 *****
    Oracle Goldengate support native capture on table schema_source.ZZ_DUMMY2.
    Oracle Goldengate marked following column as key columns on table schema_source.ZZ_DUMMY2: SCN, D, COMMENT_TXT
    No unique key is defined for table schema_source.ZZ_DUMMY2.
    
    2022-04-13 18:07:00  INFO    OGG-10471  ***** Oracle Goldengate support information on table schema_source.ZZ_SURVEILLANCE *****
    Oracle Goldengate support native capture on table schema_source.ZZ_SURVEILLANCE.
    Oracle Goldengate marked following column as key columns on table schema_source.ZZ_SURVEILLANCE: I.
    
    2022-04-13 18:07:00  INFO    OGG-10471  ***** Oracle Goldengate support information on table schema_source.ZZ_SURVEILLANCE_COPY *****
    Oracle Goldengate support native capture on table schema_source.ZZ_SURVEILLANCE_COPY.
    Oracle Goldengate marked following column as key columns on table schema_source.ZZ_SURVEILLANCE_COPY: I, SURV_DATE, ELLAPSED_1, ELLAPSED_2, CLIENT_HOST, CLIENT_TERMINAL, OS_USER, CLIENT_PROGRAM, INFO
    No unique key is defined for table schema_source.ZZ_SURVEILLANCE_COPY.
    
    GGSCI (vmld-01726 as c##gg_admin@MYCDB/CDB$ROOT) 3> dblogin userid c##gg_admin@MYPDB password xxxx
    Successfully logged into database.
    
    GGSCI (vmld-01726 as c##gg_admin@MYCDB) 4> info schematrandata schema_source
    
    2022-04-13 18:32:43  INFO    OGG-06480  Schema level supplemental logging, excluding non-validated keys, is enabled on schema "schema_source".
    
    2022-04-13 18:32:43  INFO    OGG-01980  Schema level supplemental logging is enabled on schema "schema_source" for all scheduling columns.
    
    2022-04-13 18:32:43  INFO    OGG-10462  Schema "schema_source" have 4 prepared tables for instantiation.
    
    GGSCI (vmld-01726 as c##gg_admin@MYCDB) 5>
    

    Create a new alias connection to the container database and register the extract. The extract must be registered in the root container (CDB$ROOT) even though the data to capture come from the PDB:

    GGSCI (myserver) 10> alter credentialstore add user c##gg_admin@MYCDB_X1 alias ggadmin_exacc
    Password:
    
    Credential store altered.
    
    GGSCI (myserver) 11> dblogin useridalias ggadmin
    Successfully logged into database CDB$ROOT.
    
    GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 2>
    
    GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 2> register extract E3 database container (MYPDB)
    
    2022-04-13 18:31:19  INFO    OGG-02003  Extract E3 successfully registered with database at SCN 3386436450080
    
    
    GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 3>
    

    Save the SCN --> 3386436450080

    Create the EXTRACT, connected to the CDB:

    [oracle@myserver:/u01/app/oracle/product/19.1.0.0.4/gg_1]$ mkdir -p /u01/gs_x/ogg/
    
    GGSCI (myserver) 7> add extract E3, integrated tranlog, begin now
    EXTRACT (Integrated) added.
    
    
    GGSCI (myserver) 8> INFO ALL
    
    Program     Status      Group       Lag at Chkpt  Time Since Chkpt
    
    MANAGER     RUNNING
    EXTRACT     STOPPED     E3		    00:00:00      00:00:06
    
    
    GGSCI (myserver) 9>
    
    
    GGSCI (myserver) 9> add exttrail /u01/gs_x/ogg/gz, extract E3
    EXTTRAIL added.
    
    
    GGSCI (myserver) 2> edit param E3
    
    
    GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 13> edit param E3
    Extract E3
    useridalias ggadmin
    Exttrail /u01/gs_x/ogg/gz
    LOGALLSUPCOLS
    UPDATERECORDFORMAT COMPACT
    DDL  &
    INCLUDE MAPPED OBJNAME MYPDB.SCHEMA.*
    Sequence MYPDB.SCHEMA.*;
    Table MYPDB.SCHEMA.* ;
    

    The Table parameter must be prefixed with the PDB name.

     

    Always start the Extract from the CDB$ROOT:

    GGSCI (myserver as c##gg_admin@MY_CDB/CDB$ROOT) 12> START EXTRACT E3 atcsn 3386436450080
    
    Sending START request to MANAGER ...
    EXTRACT E3 starting
    
    
    GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 15> info all
    
    Program     Status      Group       Lag at Chkpt  Time Since Chkpt
    
    MANAGER     RUNNING
    EXTRACT     RUNNING     E3      00:00:05      00:00:03
    

    Check that the extract is running.
    Now you are ready to create the Pump process, do the initial load and create the Replicat process on the target (a rough sketch of the Pump follows).
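
    As a teaser for those next steps, here is a minimal sketch of a Pump reading the local trail and shipping it to the target; the process name P3, the remote trail path and the host alias target_host are placeholder assumptions, not values from this environment.

    GGSCI (myserver) 12> edit param P3
    Extract P3
    RMTHOST target_host, MGRPORT 7809
    RMTTRAIL /u01/gs_x/ogg/rt
    Table MYPDB.SCHEMA.* ;

    GGSCI (myserver) 13> add extract P3, exttrailsource /u01/gs_x/ogg/gz
    GGSCI (myserver) 14> add rmttrail /u01/gs_x/ogg/rt, extract P3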

    The post How to create an Oracle GoldenGate EXTRACT in Multitenant appeared first on Blog dbi services.

    Near Zero Downtime Migration and failback with GoldenGate

    Sat, 2022-04-23 06:49

    Oracle GoldenGate allows migrating an Oracle database with Near Zero Downtime and with failback capability.

    Near Zero Downtime migration means very minimal application switchover downtime.

    The failback consists of rolling back the migration from 19c to 12c.

    The goal of this blog is to describe how to migrate an Oracle database from 12c to 19c via Oracle GoldenGate with Near Zero Downtime and with failback capability.

     

    STEP 1 – CHECK GOLDENGATE SYNCHRONIZATION

    The Oracle GoldenGate processes must be up and running (EXTRACT and PUMP on the source, REPLICAT on the target) and the source and target databases must be synchronized (same data between source and target).

    Since GoldenGate is a logical replication, data comparison must be done regularly via Oracle Veridata (under license) or manually (my preferred method):

    • Compare the number of rows between source and target via the SQL function COUNT(*) --> must return the same number of rows
    • Compare the data between source and target via the SQL operator MINUS --> must return “no rows selected”
      • MINUS does not support CLOB/BLOB columns
    • Compare the data for CLOB/BLOB columns between source and target via dbms_lob.compare --> must return 0
    --COMPARE NB OF ROWS
    SQL>
    select count(*) from TABLE_A@dblink_source
    union all
    select count(*) from TABLE_A;
    
    COUNT(*)
    ----------
    3754
    3754
    
    --COMPARE DATA BETWEEN SOURCE AND TARGET
    SQL>
    select * from TABLE_A@dblink_source
    minus
    select * from TABLE_A;
    
    no rows selected
    
    --COMPARE DATA BETWEEN TARGET AND SOURCE
    SQL>
    select * from TABLE_A
    minus
    select * from TABLE_A@dblink_source;
    
    no rows selected
    
    --COMPARE DATA FOR CLOB/BLOB COLUMNS
    CREATE OR REPLACE FUNCTION compare_clob(clob_src IN clob, clob_trg IN clob)
    RETURN NUMBER
    IS v_result number;
    clob_src1 long;
    clob_trg1 long;
    BEGIN
    --only the first 32,000 characters of each LOB are compared
    clob_src1 := dbms_lob.substr( clob_src, 32000, 1 );
    clob_trg1 := dbms_lob.substr( clob_trg, 32000, 1 );
    
    --dbms_lob.compare returns 0 when both values are identical
    select dbms_lob.compare(clob_src1,clob_trg1)
    into v_result
    from dual;
    
    RETURN(v_result);
    END;
    /
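
    A minimal usage sketch of this function. The table TABLE_B, its primary key ID, its CLOB column DOC and the staging copy are hypothetical names; depending on the Oracle version, LOB locators selected over a database link may not be usable directly, hence the local staging copy.

    --pull a local copy of the source rows over the dblink (hypothetical objects)
    SQL> create table table_b_src as select id, doc from TABLE_B@dblink_source;

    --row-by-row CLOB comparison, must return no rows
    SQL>
    select s.id
    from table_b_src s
         join TABLE_B t on t.id = s.id
    where compare_clob(s.doc, t.doc) <> 0;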

    STEP 2 – STOP THE APPLICATION ON SOURCE

    At this step, the downtime starts…

    • Stop the Application on the source database (12c)
    • If the application cannot be stopped:
      • Lock the applicative schemas
      • Check that there is no more transaction in v$transaction/gv$transaction
      • Kill the remaining sessions
    • Stop the listener
    • Save the job_queue_processes value (needed later when we start the application on 19c)
    • Disable job_queue_processes (“alter system set job_queue_processes = 0 scope = both”), as shown in the sketch below
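
    A minimal sketch of those pre-switchover statements; the schema name APP_USER is a placeholder for your applicative schemas.

    --lock the applicative schema (hypothetical name)
    SQL> alter user app_user account lock;

    --check that no transaction is still running
    SQL> select count(*) from gv$transaction;

    --note the current job_queue_processes value, then disable it
    SQL> show parameter job_queue_processes
    SQL> alter system set job_queue_processes = 0 scope = both;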

    To be sure that no transaction is still running, create a dummy table on the 12c database and insert 1 row. Later we will check in the GoldenGate trail files that this transaction is the last one, which ensures that no “real” transaction from the source database is lost before the switchover.

    SQL> create table source_schema.zz_dummy2 (SCN varchar2(100), d date, comment_txt varchar2(50));
    
    Table created.
    
    SQL>
    
    --on target
    SQL> desc source_schema.zz_dummy2
    Name                                      Null?    Type
    ----------------------------------------- -------- ----------------------------
    SCN                                                VARCHAR2(100)
    D                                                  DATE
    COMMENT_TXT                                        VARCHAR2(50)
    
    SQL>
    alter session set nls_date_format='dd.mm.yyyy hh24:mi:ss';
    insert into source_schema.zz_dummy2 select (select to_char(current_scn) from v$database),sysdate, 'RECORD TEST' from dual;
    
    1 row created.
    
    SQL> commit;
    
    Commit complete.
    
    SQL> select * from source_schema.zz_dummy2;
    
    SCN
    --------------------------------------------------------------------------------
    D
    -------------------
    COMMENT_TXT
    --------------------------------------------------------------------------------
    3386436290352
    13.04.2022 14:36:41
    RECORD TEST
    
    SQL>
    

    Check that the table is synchronized by GoldenGate on the target database (19c).

    STEP 3 – CREATE RESTORE POINT

    On the source database, create a restore point :

    SQL> CREATE RESTORE POINT rp_before_switch GUARANTEE FLASHBACK DATABASE;
    
    Restore point created.
    

    If we want to rollback the migration, we will replicate the 12c database from the 19c database via GoldenGate (including the new transactions created since the start of the application on 19c). If the rollback with GoldenGate fails (for any reason), this restore point will allow restoring the database to this point in time.
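
    For reference, the fallback to that restore point is the standard flashback sequence; a sketch for a single-instance database (to be run only if the GoldenGate-based rollback cannot be used):

    SQL> shutdown immediate
    SQL> startup mount
    SQL> flashback database to restore point rp_before_switch;
    SQL> alter database open resetlogs;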

    STEP 4 – CHECK EXTRACT HAS NO MORE TRANSACTION

    Check the EXTRACT process has no record to handle.

    GGSCI (source_server as ggadmin@db_source) 42> send extract e1 logend
    
    Sending LOGEND request to EXTRACT E1 ...
    YES
    
    GGSCI (source_server) 8> send extract e1 showtrans
    
    Sending SHOWTRANS request to EXTRACT E1...
    No transactions found.
    
    GGSCI (source_server) 16> send extract e1 logend
        Sending LOGEND request to EXTRACT E1 ...
        YES
    

    The command “send extract e1 logend” goes to the end of the current trail file and checks the last transaction.

    If the command “send extract e1 logend” returns YES, it means the last record has been handled.

    If the command “send extract e1 logend” returns NO, it means there are still records to be handled by the EXTRACT process.

    If there is no more transaction to be handled by GoldenGate, we are ready to stop the EXTRACT process:

    GGSCI (source_server) 9> stop extract e1
    Sending STOP request to EXTRACT E1...
    Request processed.
    
    GGSCI (source_server) 9> info all
    
    Program     Status      Group       Lag at Chkpt  Time Since Chkpt

    MANAGER     RUNNING
    EXTRACT     STOPPED     E1          00:00:02      00:00:02
    

    STEP 5 – CHECK REPLICAT HAS NO MORE TRANSACTION

    Check that the REPLICAT process has no record to handle:

    GGSCI (target_server) 32> send replicat R1 logend
    
    Sending LOGEND request to REPLICAT R1...
    YES
    

    The command “send replicat R1 logend” must return “YES”

    STEP 6 – CHECK THE LAST TRANSACTION INTO TRAIL FILE

    Using the logdump utility, check in the current trail file (the last trail file used by the EXTRACT or the REPLICAT) whether the last transaction is the insert into the dummy table (remember step 2):

    --got the trail file name with the command “view report E1”
    Logdump 25 >ghdr on
    Logdump 26 >detail data
    Logdump 27 >ggstoken detail
    Logdump 28 >open /u01/xs/ogg/xs000000000
    Current LogTrail is /u01/xs/ogg/xs000000000
    Logdump 29 >pos last
    Reading forward from RBA 348547
    Logdump 30 >pos rev
    Reading in reverse from RBA 348547
    Logdump 31 >n
    ___________________________________________________________________
    Hdr-Ind    :     E  (x45)     Partition  :     .  (x0c)
    UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
    RecLength  :    65  (x0041)   IO Time    : 2022/04/13 14:36:44.000.000
    IOType     :     5  (x05)     OrigNode   :   255  (xff)
    TransInd   :     .  (x03)     FormatType :     R  (x52)
    SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
    AuditRBA   :      61367       AuditPos   : 522820
    Continued  :     N  (x00)     RecCount   :     1  (x01)
    
    2022/04/13 14:36:44.000.000 Insert Len 65 RBA 348360
    Name: SOURCE_SCHEMA.ZZ_DUMMY2 (TDR Index: 3)
    After Image: Partition x0c G s
    0000 1100 0000 0d00 3333 3836 3433 3632 3930 3335 | ........338643629035
    3201 0015 0000 0032 3032 322d 3034 2d31 333a 3134 | 2......2022-04-13:14
    3a33 363a 3431 0200 0f00 0000 0b00 5245 434f 5244 | :36:41........RECORD
    2054 4553 54 | TEST
    Column 0 (x0000), Len 17 (x0011)
    0000 0d00 3333 3836 3433 3632 3930 3335 32 | ....3386436290352
    Column 1 (x0001), Len 21 (x0015)
    0000 3230 3232 2d30 342d 3133 3a31 343a 3336 3a34 | ..2022-04-13:14:36:4
    31 | 1
    Column 2 (x0002), Len 15 (x000f)
    0000 0b00 5245 434f 5244 2054 4553 54 | ....RECORD TEST
    
    GGS tokens:
    TokenID x52 'R' ORAROWID Info x00 Length 20
    4141 4347 5159 4141 4541 4141 4154 4d41 4141 0001 | AACGQYAAEAAAATMAAA..
    TokenID x4c 'L' LOGCSN Info x00 Length 13
    3333 3836 3433 3632 3930 3336 36 | 3386436290366
    TokenID x36 '6' TRANID Info x00 Length 15
    302e 3130 2e31 392e 3437 3631 3931 39 | 0.10.19.4761919
    TokenID x69 'i' ORATHREADID Info x01 Length 2
    0001 | ..
    
    Logdump 32 >
    

    The last transaction must be the INSERT INTO SOURCE_SCHEMA.ZZ_DUMMY2. If this is the case, stop the REPLICAT PROCESS:

    GGSCI (target_server) 2> stop replicat R1
    
    Sending STOP request to REPLICAT R1...
    Request processed.
    
    GGSCI (target_server) 3> info all
    
    Program     Status      Group       Lag at Chkpt  Time Since Chkpt

    MANAGER     RUNNING
    REPLICAT    STOPPED     R1          00:00:00      00:00:11
    

    STEP 7 – DISABLE AUTOSTART/AUTORESTART INTO MANAGER PARAMETER FILE

    Remove the AUTOSTART/AUTORESTART parameters from the MANAGER parameter file on source and target:

    --FOR EXTRACT
    AUTORESTART EXTRACT E1, RETRIES 3, WAITMINUTES 1, RESETMINUTES 60
    AUTOSTART EXTRACT E1
    
    --FOR REPLICAT
    AUTORESTART REPLICAT R1, RETRIES 3, WAITMINUTES 1, RESETMINUTES 60
    AUTOSTART REPLICAT R1
    

    STEP 8 – CREATE THE REVERSE EXTRACT (NEEDED IN CASE OF FAILBACK)

    This step is needed in case we want to rollback the migration after the switchover. Rollback means going back to the 12c database without losing data.

    Because new transactions will create new rows in the target database after the switchover, we must resynchronize the 12c database from the first transaction on the 19c database. This way, if we rollback the migration and the application goes back to 12c, no transaction will be lost.

    Configure the 19c database to be ready to capture data via the REVERSE EXTRACT:

    • Add schematrandata on the schema to be replicated

    Register the REVERSE EXTRACT:

    GGSCI (target_server) 10> alter credentialstore add user gg_admin@db_target alias target
    Password:
    
    Credential store altered.
    
    GGSCI (target_server) 11> dblogin useridalias target
    Successfully logged into database db_target.
    
    GGSCI (target_server as gg_admin@db_target) 2>
    
    GGSCI (target_server as gg_admin@db_target) 2>
    register extract E2 database
    
    2022-04-13 18:31:19 INFO OGG-02003 Extract E2 successfully registered with database at SCN 3386436450080
    

     

    Save the SCN returned by the “REGISTER EXTRACT” command and create the REVERSE EXTRACT:

    [oracle@target_server:/u01/app/oracle/product/19.1.0.0.4/gg_1]$ mkdir -p /u01/gs_x/ogg/
    
    GGSCI (target_server) 7> add extract E2, integrated tranlog, begin now
    EXTRACT (Integrated) added.
    
    GGSCI (target_server) 8> INFO ALL
    
    Program     Status      Group       Lag at Chkpt  Time Since Chkpt

    MANAGER     RUNNING
    EXTRACT     STOPPED     E2          00:00:02      00:00:08
    

    The extract E2 captures data from the 19c database, and its parameter file must be configured to extract data from the schema on the 19c database.

    GGSCI (target_server as gg_admin@db_target) 13> edit param e2 
    Extract E2 
    useridalias target 
    Exttrail /u01/gs_x/ogg/gz 
    LOGALLSUPCOLS 
    UPDATERECORDFORMAT COMPACT 
    DDL & INCLUDE MAPPED OBJNAME target_schema.* 
    Sequence target_schema.*; 
    Table target_schema.* ;
    

    Start the EXTRACT with the SCN obtained at the REGISTER EXTRACT step:

    GGSCI (target_server as gg_admin@db_target) 12> 
    START EXTRACT E2 atcsn 3386436450080
    
    Sending START request to MANAGER ...
    EXTRACT E2 starting
    
    GGSCI (TARGET_SERVER as gg_admin@target_db) 15> info all
    
    Program     Status      Group       Lag at Chkpt  Time Since Chkpt

    MANAGER     RUNNING
    EXTRACT     RUNNING     E2          00:00:05      00:00:03
    

     

    STEP 9 – START THE APPLICATION ON 19C DATABASE

    Change the job_queue_processes parameter back to the value we had on the 12c database (step 2), for example as shown below.
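
    A one-line sketch; the value 10 is a placeholder for whatever value was saved in step 2.

    SQL> alter system set job_queue_processes = 10 scope = both;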

    Start the application on 19c database by changing the DNS alias.

    At this step, the downtime ends…

    STEP 10 – GET THE FIRST TRANSACTION ON THE 19C DATABASE

    View the report file related to the REVERSE EXTRACT:

    GGSCI (target_server as gg_admin@db_target) 71> 
    view report e2
    
    . . .
    2022-04-13 18:38:40 INFO OGG-01872 Transaction delivery commencing with Transaction ID 482517513.14.30.1073,
    CSN 3386436508051, 0 transaction(s) skipped.
    
    . .
    
    SCN --> CSN 3386436508051
    

    The SCN related to the first transaction after the switchover is 3386436508051

    STEP 11 – CREATE THE REVERSE REPLICAT – (NEEDED IN CASE OF FAILBACK)

    The REVERSE REPLICAT replicates the data from the 19c database to the 12c database, starting from the first transaction occurring on the 19c database.

    The SCN value is very important because:

    • starting before SCN 3386436508051, we would get duplicate data on the 12c database
    • starting after SCN 3386436508051, we would risk missing data on the 12c database
    GGSCI (source_server as ggadmin@source_db) 7> 
    edit param r2
    
    Replicat R2
    DBOPTIONS INTEGRATEDPARAMS ( parallelism 6 )
    DISCARDFILE /u01/app/oracle/product/19.1.0.0.4/gg_1/dirrpt/R2_discard.txt, append, megabytes 10
    USERIDALIAS ggsource
    MAP target_schema.*, TARGET source_schema.*;
    
    add replicat R2 integrated exttrail /u01/gs_x/ogg/gz
    
    GGSCI (source_server as ggadmin@source_db) 6>
    add replicat R2 integrated exttrail /u01/gs_x/ogg/gz
    REPLICAT (Integrated) added.
    
    GGSCI (source_server as ggadmin@source_db) 7> info all
    
    Program     Status      Group       Lag at Chkpt  Time Since Chkpt

    MANAGER     RUNNING
    EXTRACT     STOPPED     E1          00:00:02      00:00:02
    EXTRACT     STOPPED     P1          00:00:02      00:00:02
    EXTRACT     STOPPED     R2          00:00:02      00:00:02
    

    Start the REVERSE REPLICAT:

    GGSCI (source_server as gg_admin@db_source) 52> 
    start replicat r2 atcsn 3386436508051
    
    GGSCI (source_server as gg_admin@db_source) 57> info all
    
    Program     Status      Group       Lag at Chkpt  Time Since Chkpt

    MANAGER     RUNNING
    EXTRACT     STOPPED     E1          00:00:02      00:00:00
    EXTRACT     STOPPED     P1          00:00:02      00:00:00
    REPLICAT    RUNNING     R2          00:00:02      00:00:00
    

    From now on, the 12c database is synchronized from the 19c database.

    STEP 12 – CHECK THE SYNCHRONIZATION – 19C –> 12C

    Compare the rows as we did in step 1.

    STEP 13 – REMOVE (OR NOT) ALL GOLDENGATE PROCESSES

    If the migration has been validated (the application is working as expected on 19c), remove all GoldenGate processes on the source and target servers.

    If the migration has not been validated (the application is not working as expected) and you want to rollback the migration, replay all the steps from step 1 but in the direction 19c --> 12c, and start the application on 12c.

     

    CONCLUSION

    Near Zero Downtime migration means very minimal application switchover downtime, so steps 1 to 9 must be done as fast as possible in order to minimize the application downtime.

    It is difficult to give a downtime estimation because it depends on the complexity of the application, and there are a lot of components outside the database and outside GoldenGate which can increase the downtime (jdbc/odbc driver compatibility, dependent applications, LDAP connection configuration,…)

    Migrating an Oracle database with near zero downtime using GoldenGate involves several steps, some of which are critical:

    • Check that no transaction is still running before the switchover, otherwise we will lose data in the 19c database
    • In case of rollback, choose the correct SCN when creating the REVERSE EXTRACT (the SCN obtained after registering the REVERSE EXTRACT on 19c) and the REVERSE REPLICAT (the SCN of the first transaction on 19c)
    • If we want a minimum of downtime, this procedure must of course be tested on a non-production environment as close as possible to PROD in terms of activity and architecture, to be ready when you execute it on PROD.

    The post Near Zero Downtime Migration and failback with GoldenGate appeared first on Blog dbi services.

    SQL Server: Find who forced a plan in Query Store with this new XEvent

    Fri, 2022-04-22 01:50

    The latest Cumulative Update for SQL Server 2019 has been released this week on Monday. It brings many bug fixes and some small improvements.
    One of these improvements is the addition of an extended event to identify the users forcing or unforcing an execution plan via the Query Store.

    In this blog post, I will test this new XEvent.

    For details about the latest CU see: KB5011644 – Cumulative Update 16 for SQL Server 2019
    There is a tiny KB dedicated for this new XEvent, see: KB5012964 – Improvement: Add an XEvent for tracking manual user plan forcing and unforcing

    What this new event does is very simple, and it’s what is described in its KB:

    An Extended Event (XEvent), query_store_plan_forcing_user_change, is added to optionally track when users manually force or unforce a plan for a particular query in the Query Store.

    Query Store Extended Events

    This new event is added to an already well-stocked list of extended events available around the Query Store.
    The following query lists 85 XEvents.

    SELECT o.name         AS [Object-Name]
    	, o.description  AS [Object-Descr]
    FROM sys.dm_xe_packages AS p
    	JOIN  sys.dm_xe_objects AS o
    		ON p.guid = o.package_guid
    WHERE o.object_type = 'event'
      AND p.name = 'qds'
      AND o.name LIKE '%query_store%'

    The new extended event, query_store_plan_forcing_user_change, comes with a few fields related to the plan being forced, the query, and most importantly the “is_forced_plan” field.
    When a plan is forced it is set to True; it is set to False when the user unforces a plan.
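
    The event is not part of any default session, so it has to be captured explicitly. Below is a minimal sketch of an event session writing to a file target; the session name and the chosen actions are my own assumptions, and the file name matches the one read later in this post.

    CREATE EVENT SESSION [QS_PlanForcingChanges] ON SERVER
    ADD EVENT qds.query_store_plan_forcing_user_change (
    	ACTION (
    		sqlserver.database_name,
    		sqlserver.username,
    		sqlserver.client_hostname,
    		sqlserver.client_app_name
    	)
    )
    ADD TARGET package0.event_file (SET filename = N'query_store_plan_forcing_user_change.xel');
    go
    ALTER EVENT SESSION [QS_PlanForcingChanges] ON SERVER STATE = START;
    go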

    Parameter Sensitive plan (parameter sniffing) demo

    To test this extended event here is a simple demo of parameter sniffing using the AdventureWorks database.

    Here is the preparation script if you want to follow along:

    ALTER DATABASE AdventureWorks SET QUERY_STORE = ON;
    go
    
    use [AdventureWorks]
    go
    
    DROP PROC IF EXISTS dbo.GetAverageSalary;
    DROP TABLE IF EXISTS dbo.Employees;
    go
    create table dbo.Employees (
    	ID int not null,
    	Number varchar(32) not null,
    	Name varchar(100) not null,
    	Salary money not null,
    	Country varchar(64) not null,
    	constraint PK_Employees	primary key clustered(ID)
    );
    
    ;with N1(C) as (select 0 union all select 0) -- 2 rows
    	,N2(C) as (select 0 from N1 as T1 cross join N1 as T2) -- 4 rows
    	,N3(C) as (select 0 from N2 as T1 cross join N2 as T2) -- 16 rows
    	,N4(C) as (select 0 from N3 as T1 cross join N3 as T2) -- 256 rows
    	,N5(C) as (select 0 from N4 as T1 cross join N4 as T2 ) -- 65,536 rows
    	,Nums(Num) as (select row_number() over (order by (select null)) from N5)
    insert into dbo.Employees(ID, Number, Name, Salary, Country)
    	select 
    		Num, 
    		convert(varchar(5),Num), 
    		'USA Employee: ' + convert(varchar(5),Num), 
    		40000,
    		'USA'
    	from Nums;
    
    ;with N1(C) as (select 0 union all select 0) -- 2 rows
    	,N2(C) as (select 0 from N1 as T1 cross join N1 as T2) -- 4 rows
    	,N3(C) as (select 0 from N2 as T1 cross join N2 as T2) -- 16 rows
    	,Nums(Num) as (select row_number() over (order by (select null)) from N3)
    insert into dbo.Employees(ID, Number, Name, Salary, Country)
    	select 
    		65536 + Num, 
    		convert(varchar(5),65536 + Num), 
    		'Switzerland Employee: ' + convert(varchar(5),Num), 
    		40000,
    		'Switzerland'
    	from Nums;
    
    create nonclustered index IDX_Employees_Country
    on dbo.Employees(Country);
    go
    
    create proc dbo.GetAverageSalary @Country varchar(64)
    as
    	select Avg(Salary) as [Avg Salary]
    	from dbo.Employees
    	where Country = @Country;
    go

    This is a very basic employees table with a Salary column and a Country column.
    To create a parameter sniffing scenario I have inserted way more employees in the USA than in Switzerland.

    select Count(*) AS nbEmployees, Country
    from dbo.Employees
    group by Country;

    So, when executing the stored procedure while alternating the two countries for the @Country parameter, combined with a plan cache eviction (forced with the CLEAR PROCEDURE_CACHE command), we create a parameter sniffing scenario.

    alter database scoped configuration clear procedure_cache
    go
    exec dbo.GetAverageSalary @Country='USA';
    exec dbo.GetAverageSalary @Country='Switzerland';
    go 50
    
    alter database scoped configuration clear procedure_cache
    go
    exec dbo.GetAverageSalary @Country='Switzerland';
    exec dbo.GetAverageSalary @Country='USA';
    go 50

    Looking at the Query Store, we have 2 execution plans for the same query.
    The first plan uses an Index Scan; it is best for larger data sets like the USA parameter, and it is fine for the Switzerland parameter.

    The second plan uses Nested Loops; it is the best plan for a small result set, like the Switzerland parameter, but it is a disaster performance-wise for a larger number of rows like the USA parameter.

    In such cases, one might want to force the first plan (with a Scan) for all parameters.

    Forcing a plan

    A plan can be forced with the following stored procedure or using the SSMS built-in report.

    exec sp_query_store_force_plan 
    	@query_id = 1
    	, @plan_id = 1;

    So I did force the plan manually.

    The Extended Event

    We can retrieve from the Extended Event the query_id, plan_id, and the is_forced_plan field.

    We know who has forced a plan (hostname, username, etc.), what plan it is, and for what query.

    Same thing for Unforcing a plan:
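
    For reference, unforcing uses the matching stored procedure (same query_id and plan_id as in the forcing example above):

    exec sp_query_store_unforce_plan
    	@query_id = 1
    	, @plan_id = 1;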

    Is it working with automatic tuning?

    I enabled the automatic tuning feature which forces the last known good plan automatically.

    ALTER DATABASE AdventureWorks
    	SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

    It successfully forced the best plan for that query but did not fire the XEvent. As the name and the description mention, it only applies to user-triggered changes.
    I am not aware of another XEvent that can capture the fact that a plan has been forced by Automatic Tuning, and I think this is something that should also be available.

    To get this information I think we have no choice but to use the sys.query_store_plan DMV.

    use AdventureWorks
    go
    select plan_id, query_id, query_plan_hash
    	, is_forced_plan
    	, plan_forcing_type_desc
    from sys.query_store_plan

    Querying the XE

    The XEvent data can be queried in SQL to be joined with DMVs to retrieve the query text for example:

    use AdventureWorks
    go
    ;WITH cte AS (
    	SELECT
    		event_data.value(N'(event/@timestamp)[1]', N'datetime') AS EventDatetime
    		, event_data.value('(/event/action[@name=''database_name'']/value)[1]','varchar(200)') AS [DatabaseName]
    		, event_data.value('(/event/data[@name=''query_hash'']/value)[1]','varchar(200)') AS query_hash
    		, event_data.value('(/event/data[@name=''plan_id'']/value)[1]','int') AS plan_id
    		, event_data.value('(/event/data[@name=''query_id'']/value)[1]','int') AS query_id
    		, event_data.value('(/event/data[@name=''is_forced_plan'']/value)[1]','varchar(max)') AS is_forced_plan
    		, event_data.value('(/event/action[@name=''username'']/value)[1]','varchar(200)') AS username
    		, event_data.value('(/event/action[@name=''client_hostname'']/value)[1]','varchar(200)') AS client_hostname
    		, event_data.value('(/event/action[@name=''client_app_name'']/value)[1]','varchar(200)') AS client_app_name
    	FROM (
    		SELECT CAST(event_data as xml) AS event_data
    		FROM sys.fn_xe_file_target_read_file('query_store_plan_forcing_user_change*.xel', null, null, null)
    	) AS xe
    )
    SELECT
    	cte.EventDatetime
    	, cte.DatabaseName
    	, cte.is_forced_plan
    	, cte.username
    	, cte.client_hostname
    	, cte.client_app_name
    	, t.query_sql_text
    	, CAST(p.query_plan AS XML) AS query_plan
    FROM cte
    	JOIN sys.query_store_query AS q
    		on cte.query_id = q.query_id
    	JOIN sys.query_store_query_text AS t
    		on t.query_text_id = q.query_text_id
    	JOIN sys.query_store_plan AS p
    		ON q.query_id = p.query_id
    		AND cte.plan_id = p.plan_id
    ORDER BY cte.EventDatetime DESC

    Conclusion

    Microsoft may periodically add enhancements to SQL Server without waiting for a major version change, through Cumulative Updates.
    In this blog post, I tested an extended event that was just added with CU16 for SQL Server 2019.
    This new XEvent could be used to monitor the forcing of plans on your databases and identify who made the change when several DBAs or Developers can perform this type of action in your environment.

    The post SQL Server: Find who forced a plan in Query Store with this new XEvent appeared first on Blog dbi services.

    SQL Server: Automatic Soft-NUMA and uneven CPU load

    Wed, 2022-04-20 02:39

    SQL Server has CPU limitations depending on the Edition. If your VM is not configured properly you can reach these limits very easily.
    In a previous post, I described the effect VM misconfigurations can have on performance because not all available cores are used as expected.
    See: SQL Server CPU limitation for Express and Standard Edition

    In this article I will share a similar case but this time in a context where the Automatic soft-NUMA feature comes into play.

    Standard Edition limitations

    In the article mentioned just above we saw that the Standard edition is limited to 4 sockets.
    If you decide to give your VM 8 cores but it is also configured with 8 sockets, you will find yourself limited to 4 sockets, therefore 4 cores.
    It can look like this from the Task Manager:

    NUMA architecture

    Non-Uniform Memory Access (NUMA) is a computer system architecture that is used with multiprocessor designs in which some regions of memory have greater access latencies.
    Some memory regions are connected directly to one or more processors, with all processors connected to each other through various types of interconnection fabric. For large multiprocessor systems, this arrangement results in less contention for memory and increased system performance.

    A NUMA architecture divides memory and processors into groups, called “NUMA nodes”.

    As illustrated above accessing local memory is faster than accessing the memory associated with other NUMA nodes.
    SQL Server includes optimizations that recognize and adapt to a computer’s NUMA topology.

    SQL Server Automatic Soft-NUMA

    Starting with version 2016, during startup, SQL Server interrogates the hardware layout and automatically configures Soft NUMA on systems reporting 8 or more CPUs per NUMA node.

    There are benefits to having Soft-NUMA enabled. Some internal processes are partitioned by soft-NUMA: SQL Server creates one I/O Completion Thread per NUMA node.
    If you have hardware NUMA you can have an increased number of Lazy Writer workers. This can be helpful for large systems to improve performance and scalability.
    See: How It Works: Soft NUMA, I/O Completion Thread, Lazy Writer Workers and Memory Nodes

    Soft-NUMA applies only for CPU Scheduling and Connectivity. Memory locality is not impacted.

    Standard Edition limitations and Automatic Soft-NUMA

    The origin of this article is a case that I encountered with a customer.
    Let’s look at sys.dm_os_sys_info to see the resources available for the SQL Server instance.

    SELECT
    	cpu_count, socket_count, hyperthread_ratio
    	, scheduler_count
    	, softnuma_configuration_desc, numa_node_count
    FROM sys.dm_os_sys_info

    The VM is configured with 12 logical cores, which are visible to SQL Server as “cpu_count”. The cores are distributed over 6 sockets.
    Because we have more than 8 cores, Automatic soft-NUMA comes into play: it is enabled and 2 soft-NUMA nodes are created. There is no hardware NUMA here.

    We also get the information from the ErrorLog.

    We can notice we have only 8 schedulers on the instance instead of 12. This is caused by the Standard Edition limitation of 4 sockets (of 2 cores in this case).

    So we have 8 cores and 2 NUMA nodes. We could expect to have 2 NUMA nodes of 4 cores each. This can be checked with sys.dm_os_schedulers.

    SELECT
    	parent_node_id, scheduler_id
    	, status, is_online
    FROM sys.dm_os_schedulers
    WHERE scheduler_id < 100

    We can see that the 2 NUMA nodes are composed of 6 schedulers each but that 4 schedulers are deactivated because of licensing.
    The second NUMA node has only 2 active schedulers: the schedulers with id 6 and 7.
    This situation is very interesting. We would have preferred to have 2 nodes of 4 cores each.

    I was curious about the scheduler load across NUMA nodes. I came up with the following query on sys.dm_os_workers.

    SELECT
    	COUNT(*) AS WorkerCount
    	, parent_node_id
    	, COUNT(DISTINCT scheduler_id) AS SchedulersCount
    	, COUNT(*)/COUNT(DISTINCT scheduler_id) AS AvgWorkersBySchedulers
    FROM sys.dm_os_workers AS w
    	JOIN sys.dm_os_schedulers AS s
    		ON w.scheduler_address = s.scheduler_address
    WHERE s.status = 'VISIBLE ONLINE'
    GROUP BY parent_node_id

    The second NUMA node with only 2 schedulers has about 40% of all workers.
    There seems to be a load difference between the 2 NUMA nodes.

    Compared with another instance having a similar configuration but in Enterprise Edition, the workers seem better distributed on the schedulers.

    Going a little bit further we can have look at tasks and associated sessions per scheduler.

    SELECT COUNT(*) AS Tasks
    	, s.parent_node_id
    	, t.scheduler_id
    	, s.is_idle
    	, s.load_factor
    	, STRING_AGG(COALESCE(CAST(t.session_id AS VARCHAR), 'n/a'), ', ') AS session_id
    FROM sys.dm_os_tasks AS t
    	LEFT JOIN sys.dm_os_workers AS w
    		on w.task_address = t.task_address
    			JOIN sys.dm_os_schedulers AS s
    				on s.scheduler_address = w.scheduler_address
    WHERE s.status = 'VISIBLE ONLINE'
    GROUP BY s.parent_node_id, t.scheduler_id, s.is_idle, s.load_factor

    We can see a larger number of tasks on schedulers 6 and 7 which belong to the second NUMA node. The CPU load is on average higher on these two schedulers.

    Connections round-robin

    What happens is that new connections made to SQL Server are assigned to NUMA nodes in a round-robin fashion.

    Indeed with sys.dm_exec_connections, we can see that the distribution of connections is almost balanced on the 2 NUMA nodes.

    SELECT COUNT(*) AS NodeConnections
    	, node_affinity
    FROM sys.dm_exec_connections
    GROUP BY node_affinity

    Fix

    In this particular case, we simply reconfigured the VM to have only 2 sockets and 6 cores per socket. The 12 cores can thus be used without being constrained by the limitation of 4 sockets maximum in the Standard edition.
    Things look better now.

    The configuration change was done together with a change to the MAXDOP setting. We set the MAXDOP value to the number of cores per NUMA node, so in this case 6 (a sketch follows below).
    A slight decrease in CPU usage has been observed on this VM. As there is little activity on this SQL Server instance the change did not make a glaring difference.
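
    A minimal sketch of such an instance-level MAXDOP change; the same setting can also be applied per database with ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP.

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 6;
    RECONFIGURE;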

    Conclusion

    Since the SQL Server connection assignment is done in a round-robin manner, having an uneven number of schedulers per NUMA node can result in an uneven CPU load.

    In this customer context, the SQL activity is not very heavy and the consequences of this sub-optimal NUMA configuration are negligible.

    I found this subject interesting; beyond the license limitation, it is a reminder that it is important to be aware of the NUMA configuration of your SQL Server environment.
    I hope the few SQL queries above can help you understand a bit more about how the NUMA architecture works from a SQL Server perspective.

    The post SQL Server: Automatic Soft-NUMA and uneven CPU load appeared first on Blog dbi services.

    Oracle Data Guard : Where to Host The Master Observer

    Fri, 2022-04-15 07:33

    An observer is an OCI client that connects to the primary and target standby databases using the same SYS credentials you used when you connected to the Oracle Data Guard configuration with DGMGRL.
    The observer is highly recommended in a Data Guard environment, and it is mandatory if Fast-Start Failover is configured.

    Since Oracle 12.2 we can have up to 3 observers, and the maximum number of observers increases to 4 with Oracle 21c. One important thing: even with multiple observers, only one observer is the master and all the others are backup observers. Only the master observer can initiate a fast-start failover.
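
    For reference, observers are started from DGMGRL on each host and, since 12.2, the master role can also be reassigned manually. A rough sketch (the observer name and the file names are placeholders):

    DGMGRL> start observer oraadserver31 in background file is 'fsfo.dat' logfile is 'observer_oraadserver3.log'
    DGMGRL> set masterobserver to oraadserver31
    DGMGRL> show observer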

    The question we often ask is where to host the observers. Does the support for multiple observers settle this question?
    In this blog I test several scenarios so that we get an idea of where to put the observers.

    I will suppose that I have 3 datacenters:
    -The primary datacenter hosting the primary server oraadserver
    -The secondary datacenter hosting the standby server oraadserver2
    -The third datacenter with the server oraadserver3, which I can use for an observer for example

    The fast-start failover is already configured, and I have 3 observers

    DGMGRL> show configuration verbose
    
    Configuration - db21
    
      Protection Mode: MaxPerformance
      Members:
      DB21_SITE1 - Primary database
        DB21_SITE2 - (*) Physical standby database
    
      (*) Fast-Start Failover target
    
      Properties:
        FastStartFailoverThreshold      = '15'
        OperationTimeout                = '30'
        TraceLevel                      = 'USER'
        FastStartFailoverLagLimit       = '30'
        CommunicationTimeout            = '180'
        ObserverReconnect               = '0'
        ObserverPingInterval            = '0'
        ObserverPingRetry               = '0'
        FastStartFailoverAutoReinstate  = 'TRUE'
        FastStartFailoverPmyShutdown    = 'TRUE'
        BystandersFollowRoleChange      = 'ALL'
        ObserverOverride                = 'FALSE'
        ExternalDestination1            = ''
        ExternalDestination2            = ''
        PrimaryLostWriteAction          = 'CONTINUE'
        ConfigurationWideServiceName    = 'DB21_CFG'
        ConfigurationSimpleName         = 'db21'
        DrainTimeout                    = '0'
    
    Fast-Start Failover: Enabled in Potential Data Loss Mode
      Lag Limit:          30 seconds
      Threshold:          15 seconds
      Ping Interval:      3000 milliseconds
      Ping Retry:         0
      Active Target:      DB21_SITE2
      Potential Targets:  "DB21_SITE2"
        DB21_SITE2 valid
      Observers:      (*) oraadserver1
                          oraadserver21
                          oraadserver31
      Shutdown Primary:   TRUE
      Auto-reinstate:     TRUE
      Observer Reconnect: (none)
      Observer Override:  FALSE
    
    Configuration Status:
    SUCCESS
    
    DGMGRL>
    

    Case 1 : The master observer is running on oraadserver so the observer is located in the primary datacenter

    DGMGRL> show observer
    
    Configuration - db21
    
      Fast-Start Failover:     ENABLED
    
      Primary:            DB21_SITE1
      Active Target:      DB21_SITE2
    
    Observer "oraadserver1" - Master
    
      Host Name:                    oraadserver
      Last Ping to Primary:         0 seconds ago
      Last Ping to Target:          0 seconds ago
      Log File:                     /u01/app/oracle/admin/prod20/broker_files/config_db21/log/observer_oraadserver.log
      State File:                   /u01/app/oracle/admin/prod20/broker_files/config_db21/dat/fsfo.dat
    
    Observer "oraadserver21" - Backup
    
      Host Name:                    oraadserver2
      Last Ping to Primary:         2 seconds ago
      Last Ping to Target:          2 seconds ago
      Log File:                     /u01/app/oracle/admin/prod20/broker_files/config_db21/log/observer_oraadserver2.log
      State File:                   /u01/app/oracle/admin/prod20/broker_files/config_db21/dat/fsfo.dat
    
    Observer "oraadserver31" - Backup
    
      Host Name:                    oraadserver3
      Last Ping to Primary:         0 seconds ago
      Last Ping to Target:          2 seconds ago
      Log File:                     /u01/app/oracle/admin/prod20/broker_files/config_db21/log/observer_oraadserver3.log
      State File:                   /u01/app/oracle/admin/prod20/broker_files/config_db21/dat/fsfo.dat
    
    DGMGRL>
    

    The first test is to simulate the loss of the first datacenter and to see if a fast-start failover happens. The loss of the primary datacenter means that I lose both the primary database and the master observer.

    OK, let's poweroff the primary server

    [root@oraadserver ~]# poweroff
    

    In the logfile of one of the observers located in a remaining datacenter (oraadserver3) we can see the following lines

    [W000 2022-04-15T12:48:16.563+02:00] Primary database cannot be reached.
    [W000 2022-04-15T12:48:16.563+02:00] Fast-Start Failover threshold has not exceeded. Retry for the next 2 seconds
    [W000 2022-04-15T12:48:17.563+02:00] Try to connect to the primary.
    [W000 2022-04-15T12:48:19.891+02:00] Primary database cannot be reached.
    [W000 2022-04-15T12:48:19.891+02:00] Fast-Start Failover threshold has expired.
    [W000 2022-04-15T12:48:19.891+02:00] Try to connect to the standby.
    [W000 2022-04-15T12:48:19.891+02:00] Check if the standby is ready for failover.
    [W000 2022-04-15T12:48:19.899+02:00] Fast-Start Failover is not possible because this observer is not the master.
    [W000 2022-04-15T12:48:20.902+02:00] Try to connect to the primary.
    [W000 2022-04-15T12:48:28.908+02:00] Primary database cannot be reached.
    [W000 2022-04-15T12:48:28.908+02:00] Fast-Start Failover threshold has not exceeded. Retry for the next 7 seconds
    

    As expected, the fast-start failover did not happen because the master observer was down. But the question is why another observer was not promoted to master. With 3 observers, I expected that when the master crashes, a backup observer would become the new master.

    I then restart the primary server and confirm that DB21_SITE1 is still the primary database

    DGMGRL> show configuration
    
    Configuration - db21
    
      Protection Mode: MaxPerformance
      Members:
      DB21_SITE1 - Primary database
        DB21_SITE2 - (*) Physical standby database
    
    Fast-Start Failover: Enabled in Potential Data Loss Mode
    
    Configuration Status:
    SUCCESS   (status updated 31 seconds ago)
    
    DGMGRL>
    

    Ok, we restart everything and still have the master observer in the primary datacenter

    DGMGRL> show fast_start failover
    
    Fast-Start Failover: Enabled in Potential Data Loss Mode
    
      Protection Mode:    MaxPerformance
      Lag Limit:          30 seconds
    
      Threshold:          15 seconds
      Ping Interval:      3000 milliseconds
      Ping Retry:         0
      Active Target:      DB21_SITE2
      Potential Targets:  "DB21_SITE2"
        DB21_SITE2 valid
      Observers:      (*) oraadserver1
                          oraadserver21
                          oraadserver31
      Shutdown Primary:   TRUE
      Auto-reinstate:     TRUE
      Observer Reconnect: (none)
      Observer Override:  FALSE
    
    Configurable Failover Conditions
      Health Conditions:
        Corrupted Controlfile          YES
        Corrupted Dictionary           YES
        Inaccessible Logfile            NO
        Stuck Archiver                  NO
        Datafile Write Errors          YES
    
      Oracle Error Conditions:
        (none)
    
    DGMGRL>
    

    And let's kill the observer without crashing the datacenter (we only kill the observer, not the primary database)

    [oracle@oraadserver ~]$ ps -ef | grep -i observer
    oracle   12816     1  0 12:55 ?        00:00:01 /u01/app/oracle/product/dbhome_1/bin/dgmgrl START OBSERVER NONAME FILE IS 'fsfo.dat'
    oracle   12988 12959  0 12:57 pts/2    00:00:00 grep --color=auto -i observer
    [oracle@oraadserver ~]$
    
    
    [oracle@oraadserver ~]$ kill -9 12816
    [oracle@oraadserver ~]$
    

    We can see in this case that the observer located in another datacenter was promoted to master a few minutes later. A fast-start failover would now happen if we crashed the primary datacenter.

    DGMGRL> show fast_start failover
    
    Fast-Start Failover: Enabled in Potential Data Loss Mode
    
      Protection Mode:    MaxPerformance
      Lag Limit:          30 seconds
    
      Threshold:          15 seconds
      Ping Interval:      3000 milliseconds
      Ping Retry:         0
      Active Target:      DB21_SITE2
      Potential Targets:  "DB21_SITE2"
        DB21_SITE2 valid
      Observers:      (*) oraadserver21
                          oraadserver1
                          oraadserver31
      Shutdown Primary:   TRUE
      Auto-reinstate:     TRUE
      Observer Reconnect: (none)
      Observer Override:  FALSE
    
    Configurable Failover Conditions
      Health Conditions:
        Corrupted Controlfile          YES
        Corrupted Dictionary           YES
        Inaccessible Logfile            NO
        Stuck Archiver                  NO
        Datafile Write Errors          YES
    
      Oracle Error Conditions:
        (none)
    
    DGMGRL>
    

    So it seems that if we lose the master observer and the primary database at the same time, no backup observer is promoted to master.

    Case 2 : The master observer is running on oraadserver2 so the observer is located in the secondary datacenter

    In this second test, the master observer is in the same datacenter as the standby database. Let's simulate a crash of the secondary datacenter by crashing the standby server and see what happens

    DGMGRL> show fast_start failover;
    
    Fast-Start Failover: Enabled in Potential Data Loss Mode
    
      Protection Mode:    MaxPerformance
      Lag Limit:          30 seconds
    
      Threshold:          15 seconds
      Ping Interval:      3000 milliseconds
      Ping Retry:         0
      Active Target:      DB21_SITE2
      Potential Targets:  "DB21_SITE2"
        DB21_SITE2 valid
      Observers:      (*) oraadserver21
                          oraadserver1
                          oraadserver31
      Shutdown Primary:   TRUE
      Auto-reinstate:     TRUE
      Observer Reconnect: (none)
      Observer Override:  FALSE
    
    Configurable Failover Conditions
      Health Conditions:
        Corrupted Controlfile          YES
        Corrupted Dictionary           YES
        Inaccessible Logfile            NO
        Stuck Archiver                  NO
        Datafile Write Errors          YES
    
      Oracle Error Conditions:
        (none)
    
    DGMGRL>
    
    

    Let’s poweroff the standby server

    [root@oraadserver2 ~]# poweroff
    

    As expected, there was no fast-start failover: I lost both the standby database and the master observer, and no backup observer was promoted.
    What is also important is that my primary database was shut down by Oracle. Indeed, in the alert log of the primary database we can see the following lines

    Thread 1 advanced to log sequence 33 (LGWR switch),  current SCN: 77729099
      Current log# 1 seq# 33 mem# 0: /u01/app/oracle/oradata/DB21/onlinelog/o1_mf_1_hx1xy9yc_.log
      Current log# 1 seq# 33 mem# 1: /u01/app/oracle/fast_recovery_area/DB21/onlinelog/o1_mf_1_hx1xybv4_.log
    2022-04-15T13:09:26.832279+02:00
    ARC0 (PID:12144): Archived Log entry 907 added for B-1101901028.T-1.S-32 ID 0x465cfcd1 LAD:1 [krse.c:4912]
    2022-04-15T13:10:01.882983+02:00
    Fast-Start Failover reconfiguration in progress.
    2022-04-15T13:10:04.874482+02:00
    DMON: FSFP network call timeout. Killing process FSFP.
    2022-04-15T13:10:04.898659+02:00
    Process termination requested for pid 12003 , [info = 2] [request issued by pid: 11934, uid: 54323]
    2022-04-15T13:10:07.914848+02:00
    Starting background process FSFP
    2022-04-15T13:10:07.986554+02:00
    FSFP started with pid=7, OS id=13725
    2022-04-15T13:10:11.906564+02:00
    Primary has heard from neither observer nor target standby within FastStartFailoverThreshold seconds.
    It is likely an automatic failover has already occurred. Primary is shutting down.
    2022-04-15T13:10:11.911704+02:00
    Errors in file /u01/app/oracle/diag/rdbms/db21_site1/DB21/trace/DB21_lg00_11908.trc:
    ORA-16830: primary isolated from fast-start failover partners longer than FastStartFailoverThreshold seconds: shutting down
    USER (ospid: 11908): terminating the instance due to ORA error 16830
    2022-04-15T13:10:12.031189+02:00
    System state dump requested by (instance=1, osid=11908 (LG00)), summary=[abnormal instance termination].
    2022-04-15T13:10:12.031406+02:00
    Memory (Avail / Total) = 792.82M / 3789.53M
    Swap (Avail / Total) = 3072.00M /  3072.00M
    
    2022-04-15T13:10:12.125885+02:00
    System State dumped to trace file /u01/app/oracle/diag/rdbms/db21_site1/DB21/trace/DB21_diag_11877.trc
    2022-04-15T13:10:12.699552+02:00
    Dumping diagnostic data in directory=[cdmp_20220415131012], requested by (instance=1, osid=11908 (LG00)), summary=[abnormal instance termination].
    2022-04-15T13:10:13.866769+02:00
    Instance terminated by USER, pid = 11908
    2022-04-15T13:12:58.049262+02:00
    

    This means that if your master observer is located in the same datacenter as the standby server and the standby datacenter crashes:
    -No automatic failover will happen
    -Your primary database will be shut down

    Case 3 : The master observer is running on oraadserver3 so the observer is located in the third datacenter

    DGMGRL> show observer
    
    Configuration - db21
    
      Fast-Start Failover:     ENABLED
    
      Primary:            DB21_SITE1
      Active Target:      DB21_SITE2
    
    Observer "oraadserver31" - Master
    
      Host Name:                    oraadserver3
      Last Ping to Primary:         1 second ago
      Last Ping to Target:          1 second ago
      Log File:                     /u01/app/oracle/admin/prod20/broker_files/config_db21/log/observer_oraadserver3.log
      State File:                   /u01/app/oracle/admin/prod20/broker_files/config_db21/dat/fsfo.dat
    
    Observer "oraadserver1" - Backup
    
      Host Name:                    oraadserver
      Last Ping to Primary:         1 second ago
      Last Ping to Target:          0 seconds ago
      Log File:                     /u01/app/oracle/admin/prod20/broker_files/config_db21/log/observer_oraadserver.log
      State File:                   /u01/app/oracle/admin/prod20/broker_files/config_db21/dat/fsfo.dat
    
    Observer "oraadserver21" - Backup
    
      Host Name:                    oraadserver2
      Last Ping to Primary:         1 second ago
      Last Ping to Target:          0 seconds ago
      Log File:                     /u01/app/oracle/admin/prod20/broker_files/config_db21/log/observer_oraadserver2.log
      State File:                   /u01/app/oracle/admin/prod20/broker_files/config_db21/dat/fsfo.dat
    
    DGMGRL>
    

    Now let's crash the third datacenter, which only hosts the master observer; no primary or standby database is running in this datacenter.

    [root@oraadserver3 ~]# poweroff
    

    A few minutes later, a backup observer was automatically promoted to master.

    DGMGRL> show observer
    
    Configuration - db21
    
      Fast-Start Failover:     ENABLED
    
      Primary:            DB21_SITE1
      Active Target:      DB21_SITE2
    
    Observer "oraadserver1" - Master
    
      Host Name:                    oraadserver
      Last Ping to Primary:         0 seconds ago
      Last Ping to Target:          2 seconds ago
      Log File:                     /u01/app/oracle/admin/prod20/broker_files/config_db21/log/observer_oraadserver.log
      State File:                   /u01/app/oracle/admin/prod20/broker_files/config_db21/dat/fsfo.dat
    
    Observer "oraadserver21" - Backup
    
      Host Name:                    oraadserver2
      Last Ping to Primary:         0 seconds ago
      Last Ping to Target:          2 seconds ago
      Log File:                     /u01/app/oracle/admin/prod20/broker_files/config_db21/log/observer_oraadserver2.log
      State File:                   /u01/app/oracle/admin/prod20/broker_files/config_db21/dat/fsfo.dat
    
    Observer "oraadserver31" - Backup
    
      Host Name:                    oraadserver3
      Last Ping to Primary:         59 seconds ago
      Last Ping to Target:          59 seconds ago
      Log File:                     /u01/app/oracle/admin/prod20/broker_files/config_db21/log/observer_oraadserver3.log
      State File:                   /u01/app/oracle/admin/prod20/broker_files/config_db21/dat/fsfo.dat
    

    To summarize, we can see that:

    Primary database and master observer in the same datacenter
    -loss of the datacenter = no automatic failover, because no master observer is promoted

    Standby database and master observer in the same datacenter
    -loss of the datacenter = no automatic failover, because no master observer is promoted + shutdown of the primary database

    Master observer in a third datacenter
    -loss of the datacenter = a backup observer will be promoted to master.

    Conclusion

    I will conclude with a question:
    Where will you put your master observer if you have
    2 datacenters?
    3 datacenters?

    I hope this blog will help.

    The post Oracle Data Guard : Where to Host The Master Observer appeared first on Blog dbi services.

    Oracle 21c : Dealing with FastStartFailoverActionOnPreCalloutFailure

    Thu, 2022-04-14 11:39

    In a previous blog I talked about the FSFO callout scripts, which are a new feature of the Oracle 21c broker.

    This feature allows executing some tasks before and after a fast-start failover. By default, the automatic failover will not happen if the pre-script fails. Maybe this is not what we want: sometimes we will want to continue the automatic failover even if the pre-tasks did not execute successfully.

    Oracle has a parameter for this: FastStartFailoverActionOnPreCalloutFailure. This parameter has two values:
    STOP: the FSFO will not happen if there is no .suc file (the pre-tasks failed)
    CONTINUE: the FSFO will continue even if the pre-tasks fail

    In this blog I do some tests with this parameter and show the results. Below is the configuration I used, the same as the one used in my previous blog

    DGMGRL> show configuration
    
    Configuration - db21
    
      Protection Mode: MaxPerformance
      Members:
      DB21_SITE1 - Primary database
        DB21_SITE2 - (*) Physical standby database
    
    Fast-Start Failover: Enabled in Potential Data Loss Mode
    
    Configuration Status:
    SUCCESS   (status updated 17 seconds ago)
    
    DGMGRL>
    

    FastStartFailoverActionOnPreCalloutFailure=STOP

    The first tests are done with the parameter set to STOP. Below are my callout scripts

    fsfocallout.ora script with the value=STOP

    oracle@oraadserver3:/u01/app/oracle/admin/prod20/broker_files/config_db21/callout/ [DB21 (CDB$ROOT)] cat fsfocallout.ora | grep -v ^#
    FastStartFailoverPreCallout=fsfo_precallout
    FastStartFailoverPreCalloutTimeout=25
    FastStartFailoverPreCalloutSucFileName=fsfo_precallout.suc
    FastStartFailoverPreCalloutErrorFileName=fsfo_precallout.err
    FastStartFailoverActionOnPreCalloutFailure=STOP
    FastStartFailoverPostCallout=fsfo_postcallout
    oracle@oraadserver3:/u01/app/oracle/admin/prod20/broker_files/config_db21/callout/ [DB21 (CDB$ROOT)]
    

    fsfo_precallout script with errors inside

    oracle@oraadserver3:/u01/app/oracle/admin/prod20/broker_files/config_db21/callout/ [DB21 (CDB$ROOT)] cat fsfo_precallout
    #! /bin/bash
    if [ 1 -lt 100 ]
     then
       touch /temp/test
       echo "starting fun observer" > /temp/test
       echo "starting fun observer" > /temp/test
       touch  /u01/app/oracle/aadmin/prod20/broker_files/config_db21/callout/fsfo_precallout.suc
    else
      touch /u01/app/oracle/admin/prod20/broker_files/config_db21/callout/fsfo_precallout.err
    fi
    oracle@oraadserver3:/u01/app/oracle/admin/prod20/broker_files/config_db21/callout/ [DB21 (CDB$ROOT)]
    

    As you may see, I made some mistakes (/temp instead of /tmp, aadmin instead of admin) in the script on purpose so that the pre-tasks will not finish successfully.

    Now let’s simulate a Failover to validate the expected behavior

    SQL> select db_unique_name,open_mode from v$database;
    
    DB_UNIQUE_NAME                 OPEN_MODE
    ------------------------------ --------------------
    db21_site1                     READ WRITE
    
    SQL> shut abort
    ORACLE instance shut down.
    SQL>
    

    After the shutdown abort of the primary, we can see in the observer logfile that the automatic failover did not happen because of the value of the parameter FastStartFailoverActionOnPreCalloutFailure=STOP

    [W000 2022-04-13T11:07:07.786+02:00] Fast-Start Failover is not enabled or can't be checked. Retry after 15 seconds.
    [W000 2022-04-13T11:07:22.792+02:00] Standby database has changed to DB21_SITE2.
    [W000 2022-04-13T11:07:22.794+02:00] Try to connect to the primary.
    [W000 2022-04-13T11:07:22.794+02:00] Try to connect to the primary DB21_SITE1.
    [W000 2022-04-13T11:07:24.028+02:00] Connection to the primary restored!
    [W000 2022-04-13T11:07:24.034+02:00] The standby DB21_SITE2 is ready to be a FSFO target
    [W000 2022-04-13T11:07:26.036+02:00] Disconnecting from database DB21_SITE1.
    [W000 2022-04-13T11:24:02.493+02:00] Primary database cannot be reached.
    [W000 2022-04-13T11:24:02.494+02:00] Fast-Start Failover threshold has not exceeded. Retry for the next 15 seconds
    [W000 2022-04-13T11:24:03.496+02:00] Try to connect to the primary.
    [W000 2022-04-13T11:24:05.797+02:00] Primary database cannot be reached.
    [W000 2022-04-13T11:24:06.799+02:00] Try to connect to the primary.
    [W000 2022-04-13T11:24:19.665+02:00] Primary database cannot be reached.
    [W000 2022-04-13T11:24:19.665+02:00] Fast-Start Failover threshold has expired.
    [W000 2022-04-13T11:24:19.666+02:00] Succeeded to parse FSFO callout config file '/u01/app/oracle/admin/prod20/broker_files/config_db21/callout/fsfocallout.ora'
    [W000 2022-04-13T11:24:19.666+02:00] Try to connect to the standby.
    [W000 2022-04-13T11:24:19.666+02:00] Check if the standby is ready for failover.
    [W000 2022-04-13T11:24:19.685+02:00] Doing pre-FSFO callout.
    [W000 2022-04-13T11:24:23.746+02:00] Failed to ping the primary.
    [W000 2022-04-13T11:24:29.821+02:00] Failed to ping the primary.
    [W000 2022-04-13T11:24:36.020+02:00] Failed to ping the primary.
    [W000 2022-04-13T11:24:41.040+02:00] Failed to ping the primary.
    [W000 2022-04-13T11:24:41.040+02:00] Failed to detect the pre-FSFO callout suc file '/u01/app/oracle/admin/prod20/broker_files/config_db21/callout/fsfo_precallout.suc', or error file '/u01/app/oracle/admin/prod20/broker_files/config_db21/callout/fsfo_precallout.err', after 25 seconds passed.
    [W000 2022-04-13T11:24:41.040+02:00] Will not continue Fast-Start Failover since pre-FSFO callout failure action is STOP
    [W000 2022-04-13T11:24:41.040+02:00] Returning to primary ping state.
    [W000 2022-04-13T11:24:41.040+02:00] Try to connect to the primary.
    [W000 2022-04-13T11:24:43.274+02:00] Primary database cannot be reached.
    

    FastStartFailoverActionOnPreCalloutFailure=CONTINUE

    If for any reason I want the FSFO to happen even if the pre-tasks fail, I have to explicitly set the value to CONTINUE.

    Let's do the same tests but with the parameter set to CONTINUE:

    oracle@oraadserver3:/u01/app/oracle/admin/prod20/broker_files/config_db21/callout/ [DB21 (CDB$ROOT)] cat fsfocallout.ora | grep -v ^#
    FastStartFailoverPreCallout=fsfo_precallout
    FastStartFailoverPreCalloutTimeout=25
    FastStartFailoverPreCalloutSucFileName=fsfo_precallout.suc
    FastStartFailoverPreCalloutErrorFileName=fsfo_precallout.err
    FastStartFailoverActionOnPreCalloutFailure=CONTINUE
    FastStartFailoverPostCallout=fsfo_postcallout
    oracle@oraadserver3:/u01/app/oracle/admin/prod20/broker_files/config_db21/callout/ [DB21 (CDB$ROOT)]
    

    In the observer logfile, we can see that, as expected, the automatic failover happens because the parameter is set to CONTINUE:

    [W000 2022-04-13T11:36:15.409+02:00] Primary database cannot be reached.
    [W000 2022-04-13T11:36:15.410+02:00] Fast-Start Failover threshold has not exceeded. Retry for the next 15 seconds
    [W000 2022-04-13T11:36:16.410+02:00] Try to connect to the primary.
    [W000 2022-04-13T11:36:18.949+02:00] Primary database cannot be reached.
    [W000 2022-04-13T11:36:19.950+02:00] Try to connect to the primary.
    [W000 2022-04-13T11:36:30.063+02:00] Primary database cannot be reached.
    [W000 2022-04-13T11:36:30.063+02:00] Fast-Start Failover threshold has expired.
    [W000 2022-04-13T11:36:30.072+02:00] Succeeded to parse FSFO callout config file '/u01/app/oracle/admin/prod20/broker_files/config_db21/callout/fsfocallout.ora'
    [W000 2022-04-13T11:36:30.072+02:00] Try to connect to the standby.
    [W000 2022-04-13T11:36:30.072+02:00] Check if the standby is ready for failover.
    [W000 2022-04-13T11:36:30.087+02:00] Doing pre-FSFO callout.
    [W000 2022-04-13T11:36:34.095+02:00] Failed to ping the primary.
    [W000 2022-04-13T11:36:40.146+02:00] Failed to ping the primary.
    [W000 2022-04-13T11:36:46.255+02:00] Failed to ping the primary.
    [W000 2022-04-13T11:36:52.311+02:00] Failed to ping the primary.
    [W000 2022-04-13T11:36:55.352+02:00] Failed to detect the pre-FSFO callout suc file '/u01/app/oracle/admin/prod20/broker_files/config_db21/callout/fsfo_precallout.suc', or error file '/u01/app/oracle/admin/prod20/broker_files/config_db21/callout/fsfo_precallout.err', after 25 seconds passed.
    [W000 2022-04-13T11:36:55.352+02:00] Will continue Fast-Start Failover since pre-FSFO callout failure action is CONTINUE
    [S006 2022-04-13T11:36:55.352+02:00] Fast-Start Failover started...
    
    2022-04-13T11:36:55.352+02:00
    Initiating Fast-Start Failover to database "DB21_SITE2"...
    [S006 2022-04-13T11:36:55.352+02:00] Initiating Fast-start Failover.
    2022-04-13T11:36:55.362+02:00
    Performing failover NOW, please wait...
    
    2022-04-13T11:37:18.566+02:00
    Failover succeeded, new primary is "DB21_SITE2".
    
    2022-04-13T11:37:18.566+02:00
    Failover processing complete, broker ready.
    2022-04-13T11:37:18.566+02:00
    [S006 2022-04-13T11:37:18.566+02:00] Fast-Start Failover finished...
    [W000 2022-04-13T11:37:18.566+02:00] Failover succeeded. Restart pinging.
    [W000 2022-04-13T11:37:18.582+02:00] Primary database has changed to DB21_SITE2.
    
    Conclusion

    When dealing with FSFO callout scripts, make sure that the parameter FastStartFailoverActionOnPreCalloutFailure is set according to your needs.


    PostgreSQL 15: Some new features

    Tue, 2022-04-12 07:55

    PostgreSQL version 15 is on its way and we are getting a better and better look at what will be included in the upcoming release. So it's time to have a look at some of the new features.

    1. Create privilege on public schema removed

    Up to and including PostgreSQL 14, everybody can write into the public schema by default. With PostgreSQL 15 this won't be possible anymore: the public schema is now owned by "pg_database_owner". Let's do a short test.

    postgres=# create user test;
    CREATE ROLE
    postgres=# \du
                                       List of roles
     Role name |                         Attributes                         | Member of
    -----------+------------------------------------------------------------+-----------
     postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
     test      |                                                            | {}
    
    postgres=# \c postgres test
    You are now connected to database "postgres" as user "test".
    postgres=> create table t1 (a int);
    ERROR:  permission denied for schema public
    LINE 1: create table t1 (a int);
                         ^
    postgres=>
    

    If you want to be able to write into the public schema again, the permission has to be granted explicitly.
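
    For example, a minimal sketch (run as a superuser or as the database owner) to give the test user back the CREATE privilege, or to restore the pre-15 default for everybody:

    postgres=# grant create on schema public to test;
    GRANT
    postgres=# grant create on schema public to public;
    GRANT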

    2. Extended pg_basebackup compression

    There are some extensions coming for pg_basebackup, especially around compression. First of all, --compress now accepts a compression method and an (optional) compression level, e.g. gzip:9. Furthermore, --compress accepts client-gzip and server-gzip as compression methods to define where the backup is compressed. Another plus is that compression can now also be used with --format=p. This gives you the opportunity to compress your pg_basebackup on the server side and extract it again on the client side automatically, which can be a benefit especially for slow network connections. Let's have a short look at the new --compress syntax.

    postgres@pgdebian:/u99/backup2/ [PG15] pg_basebackup --help | grep -A 1 compress
      -z, --gzip             compress tar output
      -Z, --compress=[{client|server}-]METHOD[:DETAIL]
                             compress on client or server as specified
      -Z, --compress=none    do not compress tar output
    

    Creating a backup now looks like this. Really simple and easy to use.

    postgres@pgdebian:/u99/backup2/ [PG15] pg_basebackup -h localhost -p 5438 -Ft --compress=client-gzip:9 --pgdata=/u99/backup2/ -Xs
    postgres@pgdebian:/u99/backup2/ [PG15] ll
    total 3136
    -rw------- 1 postgres postgres  136487 Mar 28 15:49 backup_manifest
    -rw------- 1 postgres postgres 3050084 Mar 28 15:49 base.tar.gz
    -rw------- 1 postgres postgres   17649 Mar 28 15:49 pg_wal.tar.gz
    postgres@pgdebian:/u99/backup2/ [PG15]
    

    Or with lz4 (available for client or server side compression):

    postgres@pgdebian:/u99/backup2/ [PG15] pg_basebackup -h localhost -p 5438 -Ft --compress=lz4:9 --pgdata=/u99/backup2/ -Xs
    postgres@pgdebian:/u99/backup2/ [PG15] ll
    total 20096
    -rw------- 1 postgres postgres   136487 Mar 28 15:18 backup_manifest
    -rw------- 1 postgres postgres  3657232 Mar 28 15:18 base.tar.lz4
    -rw------- 1 postgres postgres 16779264 Mar 28 15:18 pg_wal.tar
    
    3. New role: pg_checkpointer

    Until PostgreSQL 14, only superusers were allowed to execute the CHECKPOINT command. Starting with PostgreSQL 15 there is a new role called pg_checkpointer. Once you grant that role to a user, it is able to execute CHECKPOINT commands.

    postgres=# create user checky;
    CREATE ROLE
    postgres=# \c postgres checky
    You are now connected to database "postgres" as user "checky".
    postgres=> checkpoint;
    ERROR:  must be superuser or have privileges of pg_checkpointer to do CHECKPOINT
    postgres=> \c postgres postgres
    You are now connected to database "postgres" as user "postgres".
    postgres=# grant pg_checkpointer to checky;
    GRANT ROLE
    postgres=# \c postgres checky
    You are now connected to database "postgres" as user "checky".
    postgres=> checkpoint;
    CHECKPOINT
    postgres=>
    
    4. MERGE command

    MERGE gives you the opportunity to INSERT, UPDATE or DELETE rows in regular and partitioned tables (and more) with a single SQL statement. There is some overhead if you use it as a single SQL command, because you need a lot of WHEN / THEN expressions.

    Let's have a look at this new feature using a simple example with two similar tables, one of which has some additional and changed entries.

    dvdrental=# select * from category;
     category_id |    name     |     last_update
    -------------+-------------+---------------------
               1 | Action      | 2006-02-15 09:46:27
               2 | Animation   | 2006-02-15 09:46:27
               3 | Children    | 2006-02-15 09:46:27
               4 | Classics    | 2006-02-15 09:46:27
               5 | Comedy      | 2006-02-15 09:46:27
               6 | Documentary | 2006-02-15 09:46:27
               7 | Drama       | 2006-02-15 09:46:27
               8 | Family      | 2006-02-15 09:46:27
               9 | Foreign     | 2006-02-15 09:46:27
              10 | Games       | 2006-02-15 09:46:27
              11 | Horror      | 2006-02-15 09:46:27
              12 | Music       | 2006-02-15 09:46:27
              13 | New         | 2006-02-15 09:46:27
              14 | Sci-Fi      | 2006-02-15 09:46:27
              15 | Sports      | 2006-02-15 09:46:27
              16 | Travel      | 2006-02-15 09:46:27
    (16 rows)
    
    dvdrental=# select * from category_new;
     category_id |    name     |        last_update
    -------------+-------------+----------------------------
               1 | Action      | 2006-02-15 09:46:27
               2 | Animation   | 2006-02-15 09:46:27
               3 | Children    | 2006-02-15 09:46:27
               4 | Classics    | 2006-02-15 09:46:27
               5 | Comedy      | 2006-02-15 09:46:27
               6 | Documentary | 2006-02-15 09:46:27
               7 | Drama       | 2006-02-15 09:46:27
               8 | Family      | 2006-02-15 09:46:27
               9 | Foreign     | 2006-02-15 09:46:27
              10 | Games       | 2006-02-15 09:46:27
              11 | Horror      | 2006-02-15 09:46:27
              12 | Music       | 2006-02-15 09:46:27 
              13 | Biography   | 2022-04-12 11:53:34.986878
              14 | Sci-Fi      | 2006-02-15 09:46:27
              15 | Sports      | 2006-02-15 09:46:27
              16 | Travel      | 2006-02-15 09:46:27
              17 | Dramedy     | 2022-04-12 11:48:49.559058
              18 | Love        | 2022-04-12 11:49:32.072536
    (17 rows)
    
    dvdrental=# MERGE INTO category AS c
    USING category_new AS n
    ON c.category_id = n.category_id
    WHEN MATCHED AND c.name = n.name  THEN
      DO NOTHING
    WHEN MATCHED AND c.name <> n.name THEN
      UPDATE SET name=n.name
    WHEN NOT MATCHED THEN
      INSERT VALUES (n.category_id, n.name, n.last_update)
    ;
    MERGE 17
    dvdrental=# 
    

    Once the MERGE command is done, select the original table again, to see if it added and updated everything as planned:

    dvdrental=# select * from category order by 1;
     category_id |    name     |        last_update
    -------------+-------------+----------------------------
               1 | Action      | 2006-02-15 09:46:27
               2 | Animation   | 2006-02-15 09:46:27
               3 | Children    | 2006-02-15 09:46:27
               4 | Classics    | 2006-02-15 09:46:27
               5 | Comedy      | 2006-02-15 09:46:27
               6 | Documentary | 2006-02-15 09:46:27
               7 | Drama       | 2006-02-15 09:46:27
               8 | Family      | 2006-02-15 09:46:27
               9 | Foreign     | 2006-02-15 09:46:27
              10 | Games       | 2006-02-15 09:46:27
              11 | Horror      | 2006-02-15 09:46:27
              12 | Music       | 2006-02-15 09:46:27
              13 | Biography   | 2022-04-12 13:42:26.187381
              14 | Sci-Fi      | 2006-02-15 09:46:27
              15 | Sports      | 2006-02-15 09:46:27
              16 | Travel      | 2006-02-15 09:46:27
              17 | Dramedy     | 2022-04-12 11:48:49.559058
              18 | Love        | 2022-04-12 11:49:32.072536
    (18 rows)
    
    

    MERGE does not support foreign tables or updatable views. Maybe this will come later if there is a need for it, but for the moment it is not planned.

    Conclusion

    These are only a few of the new features coming with PostgreSQL version 15, with much more new stuff to follow. Especially around replication there is a lot more to check out, and in terms of security PostgreSQL also improves with every new release.


    PostgreSQL 15: More flexibility with setting parameters

    Tue, 2022-04-12 07:14

    Another nice feature just made it into PostgreSQL 15: you can now control who is allowed to set specific parameters. This gives you a lot more flexibility, especially when you are hosting PostgreSQL instances for your customers. You can delegate more control to trusted users or roles and, of course, you can also revoke it later on when it is no longer required. One goal of this is to reduce the number of tasks which require superuser privileges. As always, let's do a simple example to highlight how you might use it.

    To start with, we create a new user and connect with it:

    postgres=# create user u with login password 'u';
    CREATE ROLE
    postgres=# \c postgres u
    You are now connected to database "postgres" as user "u".
    

    Which parameters is this user allowed to set? Nothing on the instance and database level:

    postgres=> alter system set work_mem='12MB';
    ERROR:  permission denied to set parameter "work_mem"
    postgres=> alter database postgres set work_mem='12MB';
    ERROR:  must be owner of database postgres
    

    Of course the user can set a new value for himself (either permanently or in the session):

    postgres=> alter user u set work_mem='12MB';
    ALTER ROLE
    postgres=> set work_mem='10MB';
    SET
    

    This works for all parameters that have a context of “user” in pg_settings:

    postgres=> select name,context from pg_settings where context = 'user' order by 1;
                    name                 | context 
    -------------------------------------+---------
     application_name                    | user
     array_nulls                         | user
     backend_flush_after                 | user
     backslash_quote                     | user
     bytea_output                        | user
     check_function_bodies               | user
     client_connection_check_interval    | user
     client_encoding                     | user
     client_min_messages                 | user
     commit_siblings                     | user
     constraint_exclusion                | user
     cpu_index_tuple_cost                | user
     cpu_operator_cost                   | user
     cpu_tuple_cost                      | user
     cursor_tuple_fraction               | user
     DateStyle                           | user
     debug_pretty_print                  | user
     debug_print_parse                   | user
    ...
     vacuum_cost_limit                   | user
     vacuum_cost_page_dirty              | user
     vacuum_cost_page_hit                | user
     vacuum_cost_page_miss               | user
     vacuum_failsafe_age                 | user
     vacuum_freeze_min_age               | user
     vacuum_freeze_table_age             | user
     vacuum_multixact_failsafe_age       | user
     vacuum_multixact_freeze_min_age     | user
     vacuum_multixact_freeze_table_age   | user
     wal_sender_timeout                  | user
     wal_skip_threshold                  | user
     work_mem                            | user
     xmlbinary                           | user
     xmloption                           | user
    (136 rows)
    

    For all the other parameters it does not work:

    postgres=> select name,context from pg_settings where context != 'user' order by 1;
                      name                  |      context      
    ----------------------------------------+-------------------
     allow_in_place_tablespaces             | superuser
     allow_system_table_mods                | superuser
     archive_cleanup_command                | sighup
     archive_command                        | sighup
     archive_library                        | sighup
     archive_mode                           | postmaster
     archive_timeout                        | sighup
     authentication_timeout                 | sighup
    ...
     wal_sync_method                        | sighup
     wal_writer_delay                       | sighup
     wal_writer_flush_after                 | sighup
     zero_damaged_pages                     | superuser
    (202 rows)
    postgres=> set track_counts='on';
    ERROR:  permission denied to set parameter "track_counts"
    postgres=> 
    

    These times are now over: from PostgreSQL 15 on you can grant the right to set specific parameters to users or roles:

    postgres=# select current_user;
     current_user 
    --------------
     postgres
    (1 row)
    postgres=# grant set on parameter track_counts to u;
    GRANT
    postgres=# \c postgres u
    You are now connected to database "postgres" as user "u".
    postgres=> set track_counts = 'on';
    SET
    postgres=> 
    

    This also works on the instance level:

    postgres=# grant alter system on parameter shared_buffers to u;
    GRANT
    postgres=# \c postgres u
    You are now connected to database "postgres" as user "u".
    postgres=> alter system set shared_buffers = '129MB';
    ALTER SYSTEM
    postgres=> 
    

    A new catalog view, pg_parameter_acl, comes alongside and lists all the grants for parameters:

    postgres=> \c postgres
    You are now connected to database "postgres" as user "u".
    postgres=> select * from pg_parameter_acl;
      oid  |    parname     |               paracl                
    -------+----------------+-------------------------------------
     16392 | track_counts   | {postgres=sA/postgres,u=s/postgres}
     16393 | wal_level      | {postgres=sA/postgres,u=s/postgres}
     16394 | shared_buffers | {postgres=sA/postgres,u=A/postgres}
    (3 rows)
    

    Nice, thanks to all involved.


    Extract all DDL from an Oracle database

    Mon, 2022-04-11 08:30
    Introduction

    Extracting DDL is sometimes useful for creating similar objects in another database without data. Basically everything can be extracted from a running Oracle database.

    The needs

    My customer asked me to replicate a database without any data. The goal is to feed development environments running on Docker, so with a minimal footprint. The precise needs were:

    • All the metadata
    • Data from some tables may be needed (based on a provided list)
    • Imported users should be filtered (based on criteria and an exclude list)
    • Imported users will be created with basic password (password = username)
    • Source and target databases are not on the same network (thus no direct communication between both instances)
    • Logon triggers must be disabled on target database
    • Audit rules must also be disabled

    Additional criteria may be added later.

    How to proceed?

    An Oracle package is dedicated to DDL extraction: dbms_metadata.get_ddl. While it is very convenient for a few objects, it does not do the job for a complete DDL extraction.
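
    For a single object it is as simple as this (a quick sketch; the table and owner names are only an illustration, taken from the include list shown later):

    SQL> set long 100000 pagesize 0 lines 200
    SQL> select dbms_metadata.get_ddl('TABLE','PARAM_BASE','APPCN') from dual;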

    For years now, Data Pump has also been able to do this kind of extraction. Actually, it was already possible with the older exp/imp on 9i and earlier versions.

    The export can be done as a normal expdp with metadata-only extraction. Then impdp is used with the sqlfile directive: a Data Pump import with the sqlfile directive won't import anything, but will parse the dumpfile and generate a SQL script from it.
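
    In its simplest form, the two-step approach looks like this (a minimal sketch, without the schema and object filtering added later; the dumpfile and logfile names are arbitrary):

    expdp \"/ as sysdba\" directory=DDL_TARGET_DIR dumpfile=ddl.dmp logfile=ddl_exp.log content=metadata_only full=y
    impdp \"/ as sysdba\" directory=DDL_TARGET_DIR dumpfile=ddl.dmp logfile=ddl_imp.log sqlfile=ddl_extracted.sql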

    The SQL script will then be parsed and several actions will be applied:

    • password change for all users (reset to username)
    • logon trigger disabling
    • audit rules disabling

    Once done, the SQL script will be ready to send to the target server.

    Another expdp will be done for selected tables, for example parameter tables and, of course, only tables without any sensitive data. It takes a text file (the list of tables to export) as input.

    Before creating the metadata on the target database, the tablespaces must exist, but with minimal sizes. This is why a script is also used to generate the tablespace creation, using a single datafile with minimum size and autoextend. The data volume will be low, so users should not be annoyed by space exhaustion on the tablespaces.

    Prerequisites

    These are the prerequisites to use these scripts:

    • 12c or later source database (should also work with older versions)
    • target database in same or higher version and configured for OMF (db_create_file_dest)
    • connection to oracle user on both systems (or a user in the dba system group)
    • 1+GB free space on source and target server
    • nfs share between the 2 servers is recommended
    • users list for exclusion is provided by the customer
    • tables list for inclusion is provided by the customer

    Here is an example of both lists:

    cat ddl_tmp/excl_users.txt | more
    ABELHATR
    ACALENTI
    ACTIVITY_R
    ADELANUT
    ALOMMET
    AMERAN
    AOLESG
    APEREAN
    APP_CON_MGR
    ...
    
    cat ddl_tmp/incl_tables.txt
    DERAN.TRAD_GV_CR
    DERAN.TRAD_GV_PS
    APPCN.PARAM_BASE
    APPCN.PARAM_EXTENDED
    OAPPLE.MCUST_INVC
    ...
    Output files

    The script will generate 3 files prefixed with a step number to identify the execution sequence on the target database:

    • 01_${ORACLE_SID}_create_tablespace.sql: tablespace creation script using OMF
    • 02_${ORACLE_SID}_create_ddl.sql: main SQL script to create the DDL
    • 03_impdp_${ORACLE_SID}_tables.sh: import shell script for importing tables with data
    Complete script explained

    The first part of the script defines the variables: basically the source database SID, the working folder and the file names:

    # Set source database
    export ORACLE_SID=MARCP01
    
    # Set environment variables, main folder and file names
    export DDL_TARGET_DIR=/home/oracle/ddl_tmp
    export DDL_TARGET_DUMPFILE=ddl_${ORACLE_SID}_`date +"%Y%m%d_%H%M"`.dmp
    export DDL_TARGET_LOGFILE_EXP=ddl_${ORACLE_SID}_exp_`date +"%Y%m%d_%H%M"`.log
    export DDL_TARGET_LOGFILE_IMP=ddl_${ORACLE_SID}_imp_`date +"%Y%m%d_%H%M"`.log
    export DDL_TARGET_TABLES_DUMPFILE=tables_${ORACLE_SID}_`date +"%Y%m%d_%H%M"`_%U.dmp
    export DDL_TARGET_TABLES_LOGFILE_EXP=tables_${ORACLE_SID}_exp_`date +"%Y%m%d_%H%M"`.log
    export DDL_TARGET_SCRIPT=ddl_${ORACLE_SID}_extracted_`date +"%Y%m%d_%H%M"`.sql
    export DDL_TBS_SCRIPT=01_${ORACLE_SID}_create_tablespace.sql
    export DDL_CREATE_SCRIPT=02_${ORACLE_SID}_create_ddl.sql
    export DDL_IMPORT_TABLES_CMD=03_impdp_${ORACLE_SID}_tables.sh
    export DDL_EXCLUDE_USER_LIST=excl_users.txt
    export DDL_INCLUDE_TABLE_LIST=incl_tables.txt

    The second part creates the target folder and deletes temporary files from a possible previous run:

    # Create target directory and clean up the folder
    # Directory should include a user list to exclude: $DDL_EXCLUDE_USER_LIST
    #  => User list is basically 1 username per line
    # Directory may include a table list to include: $DDL_INCLUDE_TABLE_LIST
    #  => Table list is 1 table per line, prefixed with the username (owner)
    mkdir $DDL_TARGET_DIR 2>/dev/null
    rm $DDL_TARGET_DIR/ddl_*.par 2>/dev/null
    rm $DDL_TARGET_DIR/tables_*.par 2>/dev/null
    rm $DDL_TARGET_DIR/0*.sql 2>/dev/null
    rm $DDL_TARGET_DIR/0*.sh 2>/dev/null
    rm $DDL_TARGET_DIR/ddl_*.dmp 2>/dev/null
    rm $DDL_TARGET_DIR/tables_*.dmp 2>/dev/null
    rm $DDL_TARGET_DIR/ddl_*.log 2>/dev/null
    rm $DDL_TARGET_DIR/tables_*.log 2>/dev/null
    rm $DDL_TARGET_DIR/ddl_*.sql 2>/dev/null

    A parameter file is needed for the first expdp, so it must be created beforehand. All users will be included, except the Oracle-maintained default ones. Excluding unneeded users will be done later:

    # Create parameter file for metadata export
    # No need to parallelize as DDL extraction runs on a single thread
    . oraenv <<< $ORACLE_SID
    sqlplus -s / as sysdba <<EOF
     create or replace directory DDL_TARGET_DIR as '$DDL_TARGET_DIR';
     set pages 0
     set lines 200
     set feedback off
     spool $DDL_TARGET_DIR/ddl_extract.par
     SELECT 'dumpfile=$DDL_TARGET_DUMPFILE' FROM DUAL; 
     SELECT 'logfile=$DDL_TARGET_LOGFILE_EXP' FROM DUAL;
     SELECT 'directory=DDL_TARGET_DIR' FROM DUAL; 
     SELECT 'content=metadata_only' FROM DUAL;
     SELECT 'cluster=N' FROM DUAL;
     SELECT 'exclude=fga_policy' FROM DUAL;
     SELECT 'exclude=AUDIT_OBJ' FROM DUAL;
     SELECT 'exclude=DB_LINK' FROM DUAL;
     SELECT 'schemas='||username FROM DBA_USERS WHERE oracle_maintained='N' ORDER BY username;
     spool off;
    exit;
    EOF

    In this parameter file, let’s exclude the users from the txt file:

    # Exclude users' list from parameter file
    cp $DDL_TARGET_DIR/ddl_extract.par $DDL_TARGET_DIR/par1.tmp
    for a in `cat $DDL_TARGET_DIR/$DDL_EXCLUDE_USER_LIST`; do cat $DDL_TARGET_DIR/par1.tmp | grep -v $a > $DDL_TARGET_DIR/par2.tmp; mv $DDL_TARGET_DIR/par2.tmp $DDL_TARGET_DIR/par1.tmp; done
    mv $DDL_TARGET_DIR/par1.tmp $DDL_TARGET_DIR/ddl_extract.par

    A parameter file is also needed for the tables expdp. The tables' export will go into a separate set of dump files:

    # Create parameter file for tables to include
    # Tables will be consistent at the same SCN
    # Export is done with parallel degree 4
    sqlplus -s / as sysdba <<EOF
     create or replace directory DDL_TARGET_DIR as '$DDL_TARGET_DIR';
     set pages 0
     set lines 200
     set feedback off
     spool $DDL_TARGET_DIR/tables_extract.par
     SELECT 'dumpfile=$DDL_TARGET_TABLES_DUMPFILE' FROM DUAL; 
     SELECT 'logfile=$DDL_TARGET_TABLES_LOGFILE_EXP' FROM DUAL;
     SELECT 'directory=DDL_TARGET_DIR' FROM DUAL; 
     SELECT 'parallel=4' FROM DUAL;
     SELECT 'cluster=N' FROM DUAL;
     SELECT 'flashback_scn='||current_scn FROM V\$DATABASE;
     spool off;
    exit;
    EOF

    In this parameter file, let’s include all the tables as described in the related txt file:

    # Include tables' list to parameter file
    for a in `cat $DDL_TARGET_DIR/$DDL_INCLUDE_TABLE_LIST`; do echo "tables="$a >> $DDL_TARGET_DIR/tables_extract.par; done
    

    Now the metadata export can start:

    # Export metadata to a dump file
    expdp \"/ as sysdba\" parfile=$DDL_TARGET_DIR/ddl_extract.par
    
    # Output example
    # ...
    # Dump file set for SYS.SYS_EXPORT_SCHEMA_02 is:
    #   /home/oracle/ddl_tmp/ddl_MARCP01_20220318_1351.dmp
    # Job "SYS"."SYS_EXPORT_SCHEMA_02" successfully completed at Fri Mar 18 14:21:47 2022 elapsed 0 00:24:13

    And the tables can also be exported now:

    # Export included tables in another set of dump files
    expdp \"/ as sysdba\" parfile=$DDL_TARGET_DIR/tables_extract.par
    
    # Output example
    # ...
    # Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
    #   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_01.dmp
    #   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_02.dmp
    #   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_03.dmp
    #   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_04.dmp
    # Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Fri Mar 18 14:22:08 2022 elapsed 0 00:00:14

    A script is needed for tablespace creation; let's create it:

    # Create tablespace script for tablespace creation on target database (10MB with autoextend)
    sqlplus -s / as sysdba <<EOF
     set pages 0
     set lines 200
     set feedback off
     spool $DDL_TARGET_DIR/$DDL_TBS_SCRIPT
      SELECT 'create tablespace '||tablespace_name||' datafile size 10M autoextend on;' FROM dba_data_files WHERE tablespace_name NOT IN ('SYSTEM','SYSAUX') and tablespace_name NOT LIKE 'UNDOTBS%' group by tablespace_name order by tablespace_name;
      spool off;
    exit;
    EOF

    Another parameter file is needed for the Data Pump import that will create the SQL file:

    # Create parameter file for metadata import as an SQL file
    echo "dumpfile=$DDL_TARGET_DUMPFILE" > $DDL_TARGET_DIR/ddl_generate.par
    echo "logfile=$DDL_TARGET_LOGFILE_IMP" >> $DDL_TARGET_DIR/ddl_generate.par
    echo "directory=DDL_TARGET_DIR" >> $DDL_TARGET_DIR/ddl_generate.par
    echo "sqlfile=$DDL_TARGET_SCRIPT" >> $DDL_TARGET_DIR/ddl_generate.par

    Now let’s start the impdp task to extract DDL from the metadata dumpfile:

    # Generate SQL script from previous dump (with impdp - it will not import anything)
    impdp \"/ as sysdba\" parfile=$DDL_TARGET_DIR/ddl_generate.par
    
    # Output example
    # ...
    # Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at Fri Mar 18 14:34:48 2022 elapsed 0 00:08:37

    Once the SQL script with all the DDL has been created, it's time to change the users' passwords, lock some specific users, and disable logon triggers. You will probably need different changes:

    # Define standard password for all internal users and generate DDL script (not definitive's one)
    cat $DDL_TARGET_DIR/$DDL_TARGET_SCRIPT | awk -F ' ' '{if ($1 == "CREATE" && $2 == "USER" && $6 == "VALUES")  print $1" "$2" "$3" "$4" "$5" "$3; else print $0}' > $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT
    
    # Lock *_MANAGER users (lock is added at the end of DDL script)
    cp $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT $DDL_TARGET_DIR/ddl.tmp
    cat $DDL_TARGET_DIR/ddl.tmp | grep "CREATE USER \"" | grep "_MANAGER\"" | awk -F ' ' '{print "ALTER USER "$3" ACCOUNT LOCK;"}' >> $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT
    rm $DDL_TARGET_DIR/ddl.tmp
    
    # Remove logon triggers (disabled at the end of DDL script)
    cp $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT $DDL_TARGET_DIR/ddl.tmp
    cat $DDL_TARGET_DIR/ddl.tmp | awk -F ' ' '{if ($1 == "CREATE" && $2 == "EDITIONABLE" && $3 == "TRIGGER")  {trig=1; trigname=$4;} else if (trig == 1 && $1  == "after" && $2 == "logon") {trig=0 ; print "ALTER TRIGGER "trigname" DISABLE;"}}' >> $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT
    rm $DDL_TARGET_DIR/ddl.tmp

    I'm still on the source server, but that does not prevent me from generating the parameter and command files for impdp:

    # Create parameter file for tables import (will be needed on target server)
    echo "dumpfile=$DDL_TARGET_TABLES_DUMPFILE" > $DDL_TARGET_DIR/tables_import.par
    echo "logfile=tables_import.log" >> $DDL_TARGET_DIR/tables_import.par
    echo "directory=DDL_TARGET_DIR" >> $DDL_TARGET_DIR/tables_import.par
    
    # Script for importing tables on the target database (on the target server)
    echo 'impdp \"/ as sysdba\" parfile=$DDL_TARGET_DIR/tables_import.par' > $DDL_TARGET_DIR/$DDL_IMPORT_TABLES_CMD

    The last operation done on this source server is displaying the files generated by the script:

    # Display files to transport to target server
    ls -lrth $DDL_TARGET_DIR/0*.* $DDL_TARGET_DIR/tables*.dmp $DDL_TARGET_DIR/tables_import.par | sort
    # Output example
    # -rw-r----- 1 oracle asmadmin  20K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_02.dmp
    # -rw-r----- 1 oracle asmadmin 472K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_01.dmp
    # -rw-r----- 1 oracle asmadmin 8.0K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_03.dmp
    # -rw-r----- 1 oracle asmadmin 8.0K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_04.dmp
    # -rw-rw-r-- 1 oracle oinstall  29K Mar 18 14:25 /home/oracle/ddl_tmp/01_MARCP01_create_tablespace.sql
    # -rw-rw-r-- 1 oracle oinstall  63M Mar 18 14:35 /home/oracle/ddl_tmp/02_MARCP01_create_ddl.sql
    # -rw-rw-r-- 1 oracle oinstall   64 Mar 18 14:35 /home/oracle/ddl_tmp/03_impdp_MARC01_tables.sh
    # -rw-rw-r-- 1 oracle oinstall   98 Mar 18 14:35 /home/oracle/ddl_tmp/tables_import.par
    Conclusion

    This is not high-level Oracle database stuff, but you can achieve nice automation simply using shell commands and dynamic SQL scripting. It does not require any extra tool, and in this example, it brought my customer exactly what he needed.


    Striim, real-time data integration

    Mon, 2022-04-11 08:12

    In my first blog post about Striim, I showed you how to create an Initial Load from an on-premises database to an Azure SQL Database.
    In this second blog post I will create an application which streams data changes from a CDC-enabled on-premises SQL Server database to my Azure SQL Database.

    The first step is to enable CDC (Change Data Capture) on the on-premises SQL Server database.
    CDC uses the SQL Server Agent to record insert, update, and delete activity that applies to a table.
    I will enable CDC on my Pokemon database and on my table named Pokemons by executing the following script:

    USE Pokemon
    GO
    EXEC sys.sp_cdc_enable_db
    GO
    select is_cdc_enabled, * from sys.databases where name = 'Pokemon'
    GO
    EXEC sys.sp_cdc_enable_table @source_schema = N'dbo', @source_name = N'Pokemons',@role_name = null
    GO
    select is_tracked_by_cdc, * from sys.tables where name = 'Pokemons'

    On Striim Platform, I will create a new app named StreamReplicationPok and select a Streaming Integration from SQL Server CDC to Azure SQL Database:

    I name my new application:

    I set my source database information:

    Striim validates the connectivity to my instance and database, checks the metadata and the CDC permissions:

    As in the last blog, I now select the schema to move (dbo) and then the table to load from that schema: my Pokemons table.
    After some validation I enter the information to connect to my Azure SQL Database:

    Once done, the application is created and we can review the populated information for the source and target:

    We can deploy the application and, once done, start the app.
    My application is now waiting for DML commands to run against my source so that they can be replicated to my target:

    I will generate some insert, update and delete statements on my source table and check what I can see on the monitoring progress dashboard:

    I can see the DML I executed against my source table and that it has been replicated to my Azure SQL Database target.
    More information is available by clicking "View Detailed Metrics":

    My Azure SQL Database is now ready to accept user connections for offloading operational reporting with real-time data, as it is continuously fed with the data modifications from my source, with no overhead on the primary.
    We could also use the same process to migrate a database to the cloud with no or very limited downtime.

    It was very interesting to test Striim, which I did not know before, and it gave me the opportunity to discover a tool very similar, in my opinion, to Qlik Replicate.
    I hope it can help you for a future migration or live data replication.


    Active Duplicate from standby database

    Thu, 2022-04-07 10:32

    Generally we do an active duplicate from the primary database, but I wanted to see what happens if we do an active duplicate of a database from a standby database. So I did some tests and wrote the results in this blog.

    Below is the current configuration of my Data Guard environment:

    DGMGRL> show configuration
    
    Configuration - db21
    
      Protection Mode: MaxPerformance
      Members:
      DB21_SITE1 - Primary database
        DB21_SITE2 - Physical standby database
    
    Fast-Start Failover:  Disabled
    
    Configuration Status:
    SUCCESS   (status updated 35 seconds ago)
    
    DGMGRL>
    

    The tests are done with Oracle 21c, but it should also work with 19c.

    Standby in Read Only Mode

    The first test is done with the standby opened in read only mode.

    If you do not have an Active Data Guard license, stop the apply process before opening the database in read only mode.
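
    A quick sketch of how to do that with the broker and SQL*Plus (assuming the configuration shown above):

    DGMGRL> edit database 'DB21_SITE2' set state='APPLY-OFF';
    SQL> alter database open read only;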

    oracle@oraadserver2:/home/oracle/ [DB21 (CDB$ROOT)] DB21
    ********* dbi services Ltd. *********
    STATUS                 : OPEN
    DB_UNIQUE_NAME         : DB21_SITE2
    OPEN_MODE              : READ ONLY
    LOG_MODE               : ARCHIVELOG
    DATABASE_ROLE          : PHYSICAL STANDBY
    FLASHBACK_ON           : YES
    FORCE_LOGGING          : YES
    VERSION                : 21.0.0.0.0
    CDB Enabled            : YES
    List PDB(s)    MOUNTED : PDB1, PDB2, PDB3
    List PDB(s)  READ ONLY : PDB$SEED
    *************************************
    oracle@oraadserver2:/home/oracle/ [DB21 (CDB$ROOT)]
    

    The goal is to duplicate the standby database DB21_SITE2 as DB21CLNE on a remote server. I assume that all network files are already configured. The password file and spfile are also already in place.

    From the source server

    oracle@oraadserver2:/home/oracle/ [DB21 (CDB$ROOT)] tnsping db21_site2
    
    TNS Ping Utility for Linux: Version 21.0.0.0.0 - Production on 06-APR-2022 13:29:59
    
    Copyright (c) 1997, 2020, Oracle.  All rights reserved.
    
    Used parameter files:
    /u01/app/oracle/network/admin/sqlnet.ora
    
    
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = oraadserver2)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = DB21_SITE2_DGMGRL)))
    OK (0 msec)
    oracle@oraadserver2:/home/oracle/ [DB21 (CDB$ROOT)] tnsping db21clne
    
    TNS Ping Utility for Linux: Version 21.0.0.0.0 - Production on 06-APR-2022 13:30:05
    
    Copyright (c) 1997, 2020, Oracle.  All rights reserved.
    
    Used parameter files:
    /u01/app/oracle/network/admin/sqlnet.ora
    
    
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = oraadserver4)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = DB21CLNE)))
    OK (0 msec)
    oracle@oraadserver2:/home/oracle/ [DB21 (CDB$ROOT)]
    

    From the target server

    oracle@oraadserver4:/home/oracle/ [DB21CLNE (CDB$ROOT)] tnsping db21_site2
    
    TNS Ping Utility for Linux: Version 21.0.0.0.0 - Production on 06-APR-2022 13:33:12
    
    Copyright (c) 1997, 2020, Oracle.  All rights reserved.
    
    Used parameter files:
    /u01/app/oracle/network/admin/sqlnet.ora
    
    
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = oraadserver2)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = DB21_SITE2_DGMGRL)))
    OK (0 msec)
    oracle@oraadserver4:/home/oracle/ [DB21CLNE (CDB$ROOT)] tnsping db21clne
    
    TNS Ping Utility for Linux: Version 21.0.0.0.0 - Production on 06-APR-2022 13:33:16
    
    Copyright (c) 1997, 2020, Oracle.  All rights reserved.
    
    Used parameter files:
    /u01/app/oracle/network/admin/sqlnet.ora
    
    
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = oraadserver4)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = DB21CLNE)))
    OK (10 msec)
    oracle@oraadserver4:/home/oracle/ [DB21CLNE (CDB$ROOT)]
    

    I then create a simple RMAN duplicate script:

    oracle@oraadserver4: [DB21CLNE (CDB$ROOT)] cat duplicate_from_standby.rcv
    connect target sys/******@db21_site2
    connect auxiliary  sys/****@db21clne
    run {
    duplicate target database to DB21CLNE from active database nofilenamecheck;
    }
    

    After starting the target database DB21CLNE in NOMOUNT state, I run the script to create DB21CLNE

    oracle@oraadserver4: [DB21CLNE (CDB$ROOT)] nohup rman cmdfile=duplicate_from_standby.rcv log=duplicate_standby_`date +"%Y-%m-%d_%H%M%S"`.log &
    

    A few minutes later the duplicate completed successfully (output truncated):

    Recovery Manager: Release 21.0.0.0.0 - Production on Wed Apr 6 14:51:56 2022
    Version 21.1.0.0.0
    
    Copyright (c) 1982, 2020, Oracle and/or its affiliates.  All rights reserved.
    
    RMAN> connect target *
    2> connect auxiliary *
    3> run {
    4> duplicate target database to DB21CLNE from active database nofilenamecheck;
    5> }
    6> 
    connected to target database: DB21 (DBID=1137202071)
    
    connected to auxiliary database: DB21CLNE (not mounted)
    
    Starting Duplicate Db at 06-APR-2022 14:51:57
    using target database control file instead of recovery catalog
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: SID=25 device type=DISK
    current log archived at primary database
    current log archived at primary database
    duplicating Online logs to Oracle Managed File (OMF) location
    duplicating Datafiles to Oracle Managed File (OMF) location
    
    contents of Memory Script:
    {
       sql clone "alter system set  control_files = 
      ''/u01/app/oracle/oradata/DB21CLNE/controlfile/o1_mf_k4v3hqr1_.ctl'', ''/u01/app/oracle/fast_recovery_area/DB21CLNE/controlfile/o1_mf_k4v3hqr7_.ctl'' comment=
     ''Set by RMAN'' scope=spfile";
       sql clone "alter system set  db_name = 
     ''DB21'' comment=
     ''Modified by RMAN duplicate'' scope=spfile";
       sql clone "alter system set  db_unique_name = 
     ''DB21CLNE'' comment=
     ''Modified by RMAN duplicate'' scope=spfile";
       shutdown clone immediate;
       startup clone force nomount
       restore clone from service  'db21_site2' primary controlfile;
       alter clone database mount;
    }
    executing Memory Script
    
    sql statement: alter system set  control_files =   ''/u01/app/oracle/oradata/DB21CLNE/controlfile/o1_mf_k4v3hqr1_.ctl'', ''/u01/app/oracle/fast_recovery_area/DB21CLNE/controlfile/o1_mf_k4v3hqr7_.ctl'' comment= ''Set by RMAN'' scope=spfile
    
    sql statement: alter system set  db_name =  ''DB21'' comment= ''Modified by RMAN duplicate'' scope=spfile
    
    sql statement: alter system set  db_unique_name =  ''DB21CLNE'' comment= ''Modified by RMAN duplicate'' scope=spfile
    
    Oracle instance shut down
    
    Oracle instance started
    
    Total System Global Area    1577057624 bytes
    
    Fixed Size                     9686360 bytes
    Variable Size                385875968 bytes
    Database Buffers             889192448 bytes
    Redo Buffers                 292302848 bytes
    
    Starting restore at 06-APR-2022 14:56:46
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: SID=333 device type=DISK
    
    channel ORA_AUX_DISK_1: starting datafile backup set restore
    channel ORA_AUX_DISK_1: using network backup set from service db21_site2
    channel ORA_AUX_DISK_1: restoring control file
    channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
    output file name=/u01/app/oracle/oradata/DB21CLNE/controlfile/o1_mf_k4v3hqr1_.ctl
    output file name=/u01/app/oracle/fast_recovery_area/DB21CLNE/controlfile/o1_mf_k4v3hqr7_.ctl
    Finished restore at 06-APR-2022 14:56:49
    
    database mounted
    …
    …
    
    Executing: alter database force logging
    
    contents of Memory Script:
    {
       Alter clone database open resetlogs;
    }
    executing Memory Script
    
    database opened
    Executing: alter database flashback on
    
    contents of Memory Script:
    {
       sql clone "alter pluggable database all open";
    }
    executing Memory Script
    
    sql statement: alter pluggable database all open
    Finished Duplicate Db at 06-APR-2022 15:02:41
    
    Recovery Manager complete.
    
    

    And DB21CLNE is created, opened in READ WRITE mode and ready to be used:

    oracle@oraadserver4:/u01/app/oracle/admin/DB21CLNE/create/ [DB21CLNE (CDB$ROOT)] DB21CLNE
    ********* dbi services Ltd. *********
    STATUS                 : OPEN
    DB_UNIQUE_NAME         : DB21CLNE
    OPEN_MODE              : READ WRITE
    LOG_MODE               : ARCHIVELOG
    DATABASE_ROLE          : PRIMARY
    FLASHBACK_ON           : YES
    FORCE_LOGGING          : YES
    VERSION                : 21.0.0.0.0
    CDB Enabled            : YES
    List PDB(s)  READ ONLY : PDB$SEED
    List PDB(s) READ WRITE : PDB1, PDB2, PDB3
    *************************************
    

    Standby in MOUNT Mode

    I decided to do the same test, but with the source standby database in MOUNT state.

    oracle@oraadserver2:/home/oracle/ [DB21 (CDB$ROOT)] DB21
    ********* dbi services Ltd. *********
    STATUS                 : MOUNTED
    DB_UNIQUE_NAME         : DB21_SITE2
    OPEN_MODE              : MOUNTED
    LOG_MODE               : ARCHIVELOG
    DATABASE_ROLE          : PHYSICAL STANDBY
    FLASHBACK_ON           : YES
    FORCE_LOGGING          : YES
    CDB Enabled            : YES
    List PDB(s)    MOUNTED : PDB$SEED, PDB1, PDB2, PDB3
    *************************************
    oracle@oraadserver2:/home/oracle/ [DB21 (CDB$ROOT)]
    

    In this case the duplicate starts fine, but the recovery process was asking for an archived log and could not finish. No error was returned, but the duplicate kept requesting the same missing archived log over and over:

    RMAN> connect target *
    2> connect auxiliary *
    3> run {
    4> duplicate target database to DB21CLNE from active database nofilenamecheck;
    5> }
    6>
    connected to target database: DB21 (DBID=1137202071, not open)
    
    connected to auxiliary database: DB21CLNE (not mounted)
    
    Starting Duplicate Db at 06-APR-2022 14:12:28
    using target database control file instead of recovery catalog
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: SID=23 device type=DISK
    current log archived at primary database
    current log archived at primary database
    duplicating Online logs to Oracle Managed File (OMF) location
    duplicating Datafiles to Oracle Managed File (OMF) location
    
    contents of Memory Script:
    {
       sql clone "alter system set  control_files =
      ''/u01/app/oracle/oradata/DB21CLNE/controlfile/o1_mf_k4v15kz7_.ctl'', ''/u01/app/oracle/fast_recovery_area/DB21CLNE/controlfile/o1_mf_k4v15l04_.ctl'' comment=
     ''Set by RMAN'' scope=spfile";
       sql clone "alter system set  db_name =
     ''DB21'' comment=
     ''Modified by RMAN duplicate'' scope=spfile";
       sql clone "alter system set  db_unique_name =
     ''DB21CLNE'' comment=
     ''Modified by RMAN duplicate'' scope=spfile";
       shutdown clone immediate;
       startup clone force nomount
       restore clone from service  'db21_site2' primary controlfile;
       alter clone database mount;
    }
    executing Memory Script
    
    …
    …
    
    starting media recovery
    
    archived log file name=/u01/app/oracle/fast_recovery_area/DB21_SITE2/archivelog/2022_04_06/o1_mf_1_35_k4v156k8_.arc thread=1 sequence=35
    archived log file name=/u01/app/oracle/fast_recovery_area/DB21_SITE2/archivelog/2022_04_06/o1_mf_1_35_k4v156k8_.arc thread=1 sequence=35
    archived log file name=/u01/app/oracle/fast_recovery_area/DB21_SITE2/archivelog/2022_04_06/o1_mf_1_35_k4v156k8_.arc thread=1 sequence=35
    ...
    ...
    archived log file name=/u01/app/oracle/fast_recovery_area/DB21_SITE2/archivelog/2022_04_06/o1_mf_1_35_k4v156k8_.arc thread=1 sequence=35
    archived log file name=/u01/app/oracle/fast_recovery_area/DB21_SITE2/archivelog/2022_04_06/o1_mf_1_35_k4v156k8_.arc thread=1 sequence=35
    archived log file name=/u01/app/oracle/fast_recovery_area/DB21_SITE2/archivelog/2022_04_06/o1_mf_1_35_k4v156k8_.arc thread=1 sequence=35
    archived log file name=/u01/app/oracle/fast_recovery_area/DB21_SITE2/archivelog/2022_04_06/o1_mf_1_35_k4v156k8_.arc thread=1 sequence=35
    
    ….
    

    As a workaround, from the standby server I manually copy the requested archived log to the remote target server:

    oracle@oraadserver2 ] scp o1_mf_1_35_k4v156k8_.arc oraadserver4:/u01/app/oracle/fast_recovery_area/DB21_SITE2/archivelog/2022_04_06
    

    And the duplicate process then completed successfully:

    …
    …
    contents of Memory Script:
    {
       Alter clone database open resetlogs;
    }
    executing Memory Script
    
    database opened
    Executing: alter database flashback on
    
    contents of Memory Script:
    {
       sql clone "alter pluggable database all open";
    }
    executing Memory Script
    
    sql statement: alter pluggable database all open
    Finished Duplicate Db at 06-APR-2022 14:34:39
    
    Recovery Manager complete.
    

    So we can see that a duplicate can be done from a standby database just like from a primary one. However, when the standby is only mounted, you may have to manually copy some archived logs.

    Note that I repeated the duplicate a second time with the database mounted and had the same behavior: I had to manually copy a missing archived log.

