Feed aggregator

AEM Forms – No SSLMutualAuthProvider available

Yann Neuhaus - Sat, 2019-10-26 15:00

In the process of setting up the AEM Workbench to use 2-way-SSL, you will at some point need to use a Hybrid Domain and a specific Authentication Provider. Depending on the version of AEM that you are using, this Authentication Provider might not be present, in which case you will never be able to complete the setup properly. In this blog, I will describe what was done in our case to solve this problem.

The first time we tried to set that up (WebLogic Server 12.2, AEM 6.4.0), it just wasn’t working, so we opened a case with Adobe Support. After quite some time, we found out that the documentation was incomplete (#CQDOC-13273): there were missing steps and missing configuration inside AEM needed to make 2-way-SSL work. In other words, everything said that 2-way-SSL was possible, but pieces were missing inside AEM to have it really working. After discussion and investigation with the Adobe Support Engineers (#NPR-26490), they provided us with the missing piece: adobe-usermanager-ssl-dsc.jar.

When you install AEM Forms, it will automatically deploy a bunch of DSCs (jar files) that provide the features of AEM Forms. Here are a few examples:

  • adobe-pdfservices-dsc.jar
  • adobe-usermanager-dsc.jar
  • adobe-jobmanager-dsc.jar
  • adobe-scheduler-weblogic-dsc.jar

Our AEM Forms version at that time (mid-2018, AEM 6.4.0) was missing one of these DSCs, and that was the root cause of our issue. So what can you do to fix that? Well, you just have to deploy it, and since we are in the middle of working with the AEM Workbench to set up 2-way-SSL anyway, that’s perfect. While the Workbench is still able to use 1-way-SSL (don’t set your Application Server to 2-way-SSL yet, or revert it to 1-way-SSL if you already did):

  • Download or request the file “adobe-usermanager-ssl-dsc.jar” for your AEM version from Adobe Support
  • Open the AEM Workbench (run the workbench.exe file)
  • Click on “File > Login”
  • Set the “Log on to” to: <AEM_HOST> – SimpleAuth (or whatever the name of your SimpleAuth is)
  • Set the Username to: administrator (or whatever other account you have)
  • Set the Password for this account
  • Click on “Login”
  • Click on “Window > Show View > Components”
  • The Components view should open (if it wasn’t open already) somewhere on the screen (most probably on the left side)
  • Inside the Components view, right click on the “Components” folder and select “Install Component…”
  • Find the file “adobe-usermanager-ssl-dsc.jar” that you downloaded earlier, select it and click on “Open”
  • Right click on the “Components” folder and select “Refresh”
  • Expand the “Components” folder (if not already done), and look for the component named “SSLAuthProvider”
  • If this component isn’t started yet (there is a red square on the package), then start it using the following steps:
    • Right click on “SSLAuthProvider”
    • Select “Start Component”

Note: If the “SSLAuthProvider” component already exists, then you will see an error. This is fine: it just needs to be there and to be started/running. If that is the case, then it’s all good.

Workbench - Open components

Workbench - Refresh components

Workbench - Start component

Once the SSLAuthProvider DSC has been installed and is running, you should be able to see the SSLMutualAuthProvider in the list of custom providers while creating the Hybrid Domain on the AdminUI. Adobe was supposed to fix this in later releases, but I haven’t had the opportunity to test an installation of AEM 6.5 from scratch yet. If you have this information, don’t hesitate to share!

The article AEM Forms – No SSLMutualAuthProvider available appeared first on Blog dbi services.

AEM Forms – “2-way-SSL” Setup and Workbench configuration

Yann Neuhaus - Sat, 2019-10-26 14:45

For almost two years now, I have been working with AEM (Adobe Experience Manager) Forms. The road taken by this project was full of problems because of security constraints that AEM has/had big trouble dealing with. In this blog, I will talk about one security aspect which brings some trouble: how to set up and use “2-way-SSL” (I will describe below why I put that in quotes) for the AEM Workbench.

Initially, I was using AEM Forms 6.4.0 (20180228) with its associated Workbench version. I will assume that AEM Forms has already been installed and is working properly; in this case, I used AEM Forms on a WebLogic Server (12.2) which I configured for HTTPS. So once you have that, what do you need to do to configure and use the AEM Workbench with “2-way-SSL”? First, let’s make sure that the AEM Workbench is working properly, and then start with the setup.

Open the AEM Workbench and configure a new “Server”:

  • Open the AEM Workbench (run the workbench.exe file)
  • Click on “File > Login”
  • Click on “Configure...”
  • Click on the “+” sign to add a new Server
    • Set the Server Title to: <AEM_HOST> – SimpleAuth
    • Set the Hostname to: <AEM_HOST>
    • Set the Protocol to: Simple Object Access Protocol (SOAP/HTTPs)
    • Set the Server Port Number to: <AEM_PORT>
    • Click on “OK”
  • Click on “OK”
  • Set the “Log on to” to the newly created Server (“<AEM_HOST> – SimpleAuth”)
  • Set the Username to: administrator (or whatever other account you have)
  • Set the Password for this account
  • Click on “Login”

Workbench login 1-way-SSL

If everything was done properly, the login should work. The next step is to configure AEM for “2-way-SSL” communications. As mentioned at the beginning of this blog, I put that in quotes because it is 2-way-SSL, but one security layer is bypassed in the process. With the AEM Workbench in 1-way-SSL, you need to enter a username and a password. Adding 2-way-SSL would normally just add another layer of security, where the server and client exchange their certificates and trust each other, but the user’s authentication would still be needed!

In the case of the AEM Workbench, the “2-way-SSL” setup actually completely bypasses the user’s authentication, and therefore I do not really consider it a real 2-way-SSL setup… It might even be considered a security issue (a shame for a feature that is supposed to increase security) because, as you will see below, as soon as you have the Client SSL Certificate (and its password, obviously), you will be able to access the AEM Workbench. So protect this certificate with great care.

To configure AEM, you will then need to create a Hybrid Domain:

  • Open the AEM AdminUI (https://<AEM_HOST>:<AEM_PORT>/adminui)
  • Login with the administrator account (or whatever other account you have)
  • Navigate to: Settings > User Management > Domain Management
  • Click on “New Hybrid Domain”
    • Set the ID to: SSLMutualAuthProvider
    • Set the Name to: SSLMutualAuthProvider
    • Check the “Enable Account Locking” checkbox
    • Uncheck the “Enable Just In Time Provisioning” checkbox
    • Click on “Add Authentication”
      • Set the “Authentication Provider” to: Custom
      • Check the “SSLMutualAuthProvider” checkbox
      • Click on “OK”
    • Click on “OK”

Note: If “SSLMutualAuthProvider” isn’t available on the Authentication page, then please check this blog.

Hybrid Domain 1

Hybrid Domain 2

Hybrid Domain 3

Then you will need to create a user. In this example, I will use a generic account, but it is also possible to have one account per developer, in which case each user must have their own SSL Certificate. The user’s Canonical Name and ID must absolutely match the CN used to generate the SSL Certificate that the Client will use. So if you generated an SSL Certificate for the Client with “/C=CH/ST=Jura/L=Delemont/O=dbi services/OU=IT/CN=aem-dev”, then the Canonical Name and ID to be used for the user in AEM should be “aem-dev” (a sketch of how such a certificate and KeyStore could be generated follows the steps below):

  • Navigate to: Settings > User Management > Users and Groups
  • Click on “New User”
  • On the New User (Step 1 of 3) screen:
    • Uncheck the “System Generated” checkbox
    • Set the Canonical Name to: <USER_CN>
    • Set the First Name to: 2-way-SSL
    • Set the Last Name to: User
    • Set the Domain to: SSLMutualAuthProvider
    • Set the User Id to: <USER_CN>
    • Click on “Next”
  • On the New User: 2-way-SSL (Step 2 of 3) screen:
    • Click on “Next”
  • On the New User: 2-way-SSL (Step 3 of 3) screen:
    • Click on “Find Roles”
      • Check the checkbox for the Role Name: Application Administrator (or any other valid role that you want this user to be able to use)
      • Click on “OK”
  • Click on “Finish”
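
As a side note, here is a minimal sketch of how such a client certificate and its JKS KeyStore could be generated with OpenSSL and the JDK keytool. The subject, file names, validity and passwords below are just examples, adapt them to your environment:

# Self-signed client SSL Certificate whose CN matches the AEM user (example values)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/C=CH/ST=Jura/L=Delemont/O=dbi services/OU=IT/CN=aem-dev" \
  -keyout aem-dev.key -out aem-dev.crt

# Package the key + certificate into a PKCS12 file and convert it to a JKS KeyStore
openssl pkcs12 -export -in aem-dev.crt -inkey aem-dev.key \
  -name aem-dev -out aem-dev.p12 -passout pass:<KEYSTORE_PWD>
keytool -importkeystore -srckeystore aem-dev.p12 -srcstoretype PKCS12 \
  -srcstorepass <KEYSTORE_PWD> -destkeystore aem-dev.jks -deststorepass <KEYSTORE_PWD>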

User 1

User 2

User 3

At this point, you can configure your Application Server to handle the 2-way-SSL communications. In WebLogic Server, this is done by setting the “Two Way Client Cert Behavior” to “Client Certs Requested and Enforced” in the SSL subtab of the Managed Server(s) hosting the AEM Forms applications.
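
If you prefer scripting that change, here is a minimal WLST sketch of the same setting. The Managed Server name (msAEM01), the admin URL and the credentials are placeholders, adapt them to your domain:

connect('weblogic', '<ADMIN_PWD>', 't3://<ADMIN_HOST>:<ADMIN_PORT>')
edit()
startEdit()
# SSL MBean of the Managed Server hosting the AEM Forms applications
cd('/Servers/msAEM01/SSL/msAEM01')
# Equivalent of "Two Way Client Cert Behavior = Client Certs Requested and Enforced"
cmo.setTwoWaySSLEnabled(true)
cmo.setClientCertificateEnforced(true)
save()
activate()
# A restart of the Managed Server is needed for the change to take effect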

Finally, the last step is to go back to the AEM Workbench and test the 2-way-SSL communications. If you try to use the SimpleAuth Server that we defined above again, it should fail because the Application Server will require the Client SSL Certificate, which isn’t provided in this case. So let’s create a new “Server”:

  • Click on “File > Login”
  • Click on “Configure...”
  • Click on the “+” sign to add a new Server
    • Set the Server Title to: <AEM_HOST> – MutualAuth
    • Set the Hostname to: <AEM_HOST>
    • Set the Protocol to: Simple Object Access Protocol (SOAP/HTTPs) Mutual Auth
    • Set the Server Port Number to: <AEM_PORT>
    • Click on “OK”
  • Click on “OK”
  • Set the “Log on to” to the newly created Server (“<AEM_HOST> – MutualAuth”)
  • Set the Key Store to: file:C:\Users\Morgan\Documents\AEM_Workbench\aem-dev.jks (adapt to wherever you put the keystore)
  • Set the Key Store Password to: <KEYSTORE_PWD>
  • Set the Trust Store to: file:C:\Users\Morgan\Documents\AEM_Workbench\trust.jks (adapt to wherever you put the truststore)
  • Set the Trust Store Password to: <TRUSTSTORE_PWD>
  • Click on “Login”

Workbench login 2-way-SSL

In the above login screen, the Key Store is the SSL Certificate that was created for the Client, and the Trust Store is used to validate/trust the SSL Certificate of the AEM Server; it can be the cacerts from the AEM Workbench, for example. If you are using a Self-Signed SSL Certificate, don’t forget to add the Trust Chain into the Trust Store.
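
For a Self-Signed setup, importing the Server SSL Certificate (or its Root/Intermediate CA) into the Trust Store could look like this (the alias and file name are examples):

keytool -importcert -noprompt -alias aem-server-ca \
  -file aem-server-ca.crt -keystore trust.jks -storepass <TRUSTSTORE_PWD>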

The article AEM Forms – “2-way-SSL” Setup and Workbench configuration appeared first on Blog dbi services.

match_recognize()

Jonathan Lewis - Fri, 2019-10-25 12:47

A couple of days ago I posted a note with some code (from Stew Ashton) that derived the clustering_factor you would get for an index when you had set a value for the table_cached_blocks preference but had not yet created the index. In the note I said that I had produced a simple and elegant (though massively CPU-intensive) query using match_recognize() that seemed to produce the right answer, only to be informed by Stew that my interpretation of how Oracle calculated the clustering_factor was wrong and that the query was irrelevant.  (The fact that it happened to produce the right answer in all my tests was an accidental side effect of the way I had been generating test data. With Stew’s explanation of what Oracle was doing it was easy to construct a data set that proved my query was doing the wrong thing.)

The first comment I got on the posting was a request to publish my match_recognize() query – even though it was irrelevant to the problem in hand – simply because it might be a useful lesson in what could be done with the feature; so here’s some code that doesn’t do anything useful but does demonstrate a couple of points about match_recognize(). I have to stress, though, that I’m not an expert on match_recognize() so there may be a more efficient way of using it to achieve the same result. There is certainly a simpler and more efficient way to get the same result using some fairly straightforward PL/SQL code.

Requirement

I had assumed that if you set the table_cached_blocks preference to N then Oracle would keep track of the previous N rowids as it walked an index to gather stats and only increment the “clustering factor counter” if it failed to find a match for the block address of the current rowid in the block addresses extracted from the previous N rowids. My target was to emulate a way of doing this counting.

Strategy

Rather than writing code that “looked backwards” as it walked the index, I decided to take advantage of the symmetry of the situation and write code that looked forwards (you could think of this as viewing the index in descending order and looking backwards along the descending index). I also decided that I could count the number of times I did find a match in the trail of rowids, and subtract that from the total number of index entries.

So my target was to look for patterns where I start with the block address from the current rowid, then find the same block address after zero to N-1 failures to find the block address. Since the index doesn’t exist I’ll need to emulate its existence by selecting the columns that I want in the index along with the rowid, ordering the data in index order. Then I can walk this “in memory” index looking for the desired pattern.

Here’s some code to create a table to test against:


rem
rem     Script:         clustering_factor_est_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2019
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem 

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        cast(rownum as varchar2(10))            v1,
        trunc(dbms_random.value(0,10000))       rand,
        rpad('x',100,'x')                       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
/

My table has 1M rows, and there’s a column called rand which has 10,000 distinct values. This is generated through Oracle’s dbms_random package and the procedure I’ve used will give me roughly 100 occurrences for each value scattered uniformly across the table. An index on this column might be quite useful but it’s probably going to have a very high clustering_factor because, on average, any two rows of a particular value are likely to be separated by 10,000 rows, and even if I look for rows with a pair of consecutive values any two rows are likely to be separated by a few hundred other rows.

Here’s the code that I was using to get an estimate of (my erroneous concept of) the clustering_factor with table_cached_blocks set to 32. For debugging purposes it reports the first row of the pattern each time my pattern is matched, so in this version of the code I’d have to check the number of rows returned and subtract that from the number of rows in the table (where rand is not null).


select
        first_rand, first_file_id, first_block_id, first_row_id, step_size
from    (
                select
                        rand, 
                        dbms_rowid.rowid_relative_fno(rowid) file_id,
                        dbms_rowid.rowid_block_number(rowid) block_id,
                        dbms_rowid.rowid_row_number(rowid)   row_id
                from
                        t1
                where
                        rand is not null
        )
match_recognize(
        order by rand, file_id, block_id, row_id
        measures
                count(*) as step_size,
                first(strt.rand)        as first_rand,
                first(strt.file_id)     as first_file_id,
                first(strt.block_id)    as first_block_id,
                first(strt.row_id)      as first_row_id
        one row per match
        after match skip to next row
        pattern (strt miss{0, 31} hit)
        define 
                miss as (block_id != strt.block_id or  file_id != strt.file_id),
                hit  as (block_id  = strt.block_id and file_id  = strt.file_id)
)
order by 
        first_rand, first_file_id, first_block_id, first_row_id
;

The first point to note is that the inline view is the thing I use to model an index on column rand. It simply selects the value and rowid from each row. However I’ve used the dbms_rowid package to break the rowid down into the file_id, block_id and row_id within block. Technically (to deal with partitioned tables and indexes) I also ought to be thinking about the object_id which is the first component of the full rowid. I haven’t actually ordered the rows in the in-line view as I’m going to let the match_recognize() operation handle that.

A warning as we start looking at the match_recognize() section of the query: I’ll be jumping around the clause a little bit, rather than working through it in order from start to finish.

First I need the raw data sorted in order so that any results I get will be deterministic. I have an order by clause that sorts my data by rand and the three components I’ve extracted from the rowid. (It is also possible to have a partition clause – similar to the partition clause of an analytic function – but I don’t need one for this model.)

Then I need to define the “columns” that I’m planning to output in the final query. This is the set of measures, and in the list of measures you can see a count(*) (which doesn’t quite mean what it usually does) and a number of calls to a function first() which takes an aliased column name as its input – and, although the column names look familiar, the alias (strt) seems to have sprung from nowhere.

At this point I want to jump down to the define clause because that’s where the meaning of strt could have been defined. The define clause is a list of “rules”, which are also treated as “variables”, that tell us how to classify a row. We are looking for patterns, and a pattern is a set of rows that follows a set of rules in a particular order, so one of the things I need to do is create a rule that tells Oracle how to identify a row that qualifies as the starting point for a pattern – so I’ve defined a rule called strt to do this. Except I haven’t defined it explicitly: it’s not visible in the define list, so Oracle has assumed that the rule is “1 = 1” – in other words, every row I’m going to look at could be classified as a strt row.

Now that I have a definition for what strt means I could go back to the measures – but I’ll postpone doing that for a moment and look at the other rules in my define list. I have a rule called miss which says “if either of these comparisons evaluates to true” then the row is a miss; but the predicate includes a reference to strt, which means we are comparing with the most recent row that was classified as a strt row. So a miss means we’ve found a starting row and we’re now looking at other rows, comparing the block_id and file_id of each row to check that they don’t match the block_id and file_id of the starting row.

Similarly we have a hit rule which says a hit means we’ve previously found a starting row and we’re now looking at other rows checking for rows where the current block_id and file_id match the starting block_id and file_id.

Once we’ve got a set of rules explaining how to classify rows we can specify a pattern (which means going back up the match_recognize() clause one section). Our pattern reads: “find a strt row, followed by between zero and 31 miss rows, followed by a hit row”. And that’s just a description of my back-to-front way of saying “remember the last 32 rowids and check whether the current block address matches the block address in one of those rowids”.

The last two clauses that need to be explained before I revisit the measures clause are the “one row per match” and “after match skip to next row”.

If I find a sequence of rows that matches my pattern there are several things I could do with that set of rows – for example I could report every row in that pattern along with the classification of strt/miss/hit (which would be useful if I’m looking for a few matches to a small pattern in a large data set), or (as I’ve done here) I could report just one row from the pattern and give Oracle some instruction about which one row I want reported.

Then, after I’ve found (and possibly reported) a pattern, what should I do next? Again there are several possibilities – the two most obvious ones, perhaps, are: “just keep going” i.e. look at the row after the end of the pattern to see if it’s another strt row, and “go to the row after the start of the pattern you’ve just reported”. These translate into: “after match skip past last row” (which is the default if you don’t specify an “after match” clause) and “after match skip to next row” (which is what I’ve specified).

Finally we get back to the measures clause – I’ve defined four “columns” with names like first_xxx, and a step_size. The step_size is defined as count(*) which – in this context – means “count the number of rows in the current matched pattern”. The other measures are defined using the first() function, referencing the strt alias, which tells Oracle I want to retain the value from the first row that met the strt rule in the current matched pattern.

In summary, then, my match_recognize() clause tells Oracle to:

  • Sort the data by rand, file_id, block_id, row_id
  • For each row in turn
    • extract the file_id and block_id
    • take up to 32 steps down the list looking for a matching file_id and block_id
    • If you find a match, pass a row to the parent operation that consists of: the number of rows from strt to hit inclusive, and the values of rand, file_id, block_id, and row_id of the strt row.

Before you try testing the code, you might like to know about some results.

As it stands my laptop with a virtual machine running 12.2.0.1 took 1 minute and 5 seconds to complete with “miss{0, 31}” in the pattern. In fact the first version of the code had the file_id and block_id tests in the define clause in the opposite order viz:

        define 
                miss as (file_id != strt.file_id or  block_id != strt.block_id),
                hit  as (file_id  = strt.file_id and block_id  = strt.block_id)

Since the file_id for my test is the same for every row in my table (it’s a single file tablespace), this was wasting a surprising amount of CPU, leading to a run time of 1 minute 17 seconds! Neither time looks particularly good when compared to the time required to create the index, set the table_cached_blocks to 32, and gather stats on the index – in a total time of less than 5 seconds. The larger the value I chose for the pattern match, the worse the workload became, until at “miss{0, 199}” – emulating a table_cached_blocks of 200 – the time to run was about 434 seconds of CPU!

A major portion of the problem is the way that Oracle is over-optimistic (or “greedy”, to use the technical term) with its pattern matching, combined with the nature of the data which (in this example) isn’t going to offer much opportunity for matching, combined with the fact that Oracle cannot determine that a row that is not a “miss” has to be a “hit”. In this context “greedy” means Oracle will try to find as many consecutive occurrences of the first rule in the pattern as possible before it tries to find an occurrence of the second rule – and when it fails to match a pattern it will “backtrack” one step and have another go, being slightly less greedy. So, for our example, the greedy algorithm will operate as follows:

  • find 31 rows that match miss, then discover the 32nd row does not match hit
  • go back to the strt and find 30 rows that match miss, then discover the 31st row does not match hit
  • go back to the strt and find 29 rows that match miss, then discover the 30th row does not match hit
  • … repeat until
  • go back to the strt and find 0 rows that match miss, then discover the 1st does not match hit
  • go to the next row, call it strt, and repeat the above … 1M times.

From a human perspective this is a pretty stupid strategy for this specific problem – but that’s because we happen to know that “hit” = “not(miss)” (ignoring nulls, of which there are none) while Oracle has to assume that there is no relationship between “hit” and “miss”.

There is hope, though, because you can tell Oracle that it should be “reluctant” rather than “greedy” which means Oracle will consume the smallest possible number of occurrences of the first rule before testing the second rule, and so on. All you have to do is append a “?” to the count qualifier:

        pattern (strt miss{0, 31 }? hit)

Unfortunately this seemed to have very little effect on execution time (and CPU usage) in our case. Again this may be due to the nature of the data etc., but it may also be a limitation in the way that the backtracking works. I had expected a response time that would more closely mirror the human expectation, but a few timed tests suggest the code follows exactly the same type of strategy for the reluctant match as it does for the greedy one, viz:

  • find 0 rows that match miss, then discover the next row does not match hit
  • go back to the strt and find 1 row that matches miss, then discover the 2nd row does not match hit
  • go back to the strt and find 2 rows that match miss, then discover the 3rd row does not match hit
  • … repeat until
  • go back to the strt and find 31 rows that match miss, then discover the 32nd does not match hit
  • go to the next row, call it strt, and repeat the above … 1M times.

Since there are relatively few cases in our data where a pattern match will occur, both the reluctant and the greedy strategies will usually end up doing all 32 steps. I was hoping for a more human-like algorithm in which Oracle would recognise that if it has just missed on the first X rows then it need only check the (X+1)th, rather than going back to the beginning (strt) – but it is only my specific requirement that makes it easy to see (from a human perspective) that this would make sense; in a generic case (with more complex patterns and without the benefit of having two mutually exclusive rules) the strategy of “start all over again” is probably a much safer option to code.
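
As an aside, the greedy/reluctant distinction here is exactly the one you find in ordinary regular expressions, and the behaviour above has a direct parallel there: the reluctant quantifier changes the order of the search but not, in the worst case, the amount of work. A trivial Python illustration (just an analogy, nothing to do with Oracle internals):

import re

s = 'a' * 31 + 'x'                 # 31 "misses" and no "hit"

# Greedy: consume as many a's as possible, then backtrack one at a time looking for a 'b'
print(re.match(r'a{0,31}b', s))    # None - fails after trying 31, 30, ... 0 a's

# Reluctant: start with 0 a's and extend one at a time looking for a 'b'
print(re.match(r'a{0,31}?b', s))   # None - fails after trying 0, 1, ... 31 a's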

Plan B

Plan B was to send the code to Stew Ashton and ask if he had any thoughts on making it more efficient – I wasn’t expecting him to send a PL/SQL solution by return of post, but that’s what I got and published in the previous post.

Plan C

It occurred to me that I don’t really mind if the predicted clustering_factor is a little inaccurate, and that the “backtracking” introduced by the variable quantifier {0,31} was the source of a huge amount of the work done, so I took a different approach which simply said: “check that the next 32 (or preferred value of N) rows are all misses”. This required two changes – eliminate one of the defines (the “hit”) and modify the pattern definition as follows:

         pattern (strt  miss{32} )
         define
                 miss as (block_id != strt.block_id or  file_id != strt.file_id)

The change in strategy means that the result is going to be (my version of) the clustering_factor rather than the number to subtract from num_rows to derive the clustering_factor. And I’ve introduced a small error which shows up towards the end of the data set – I’ve demanded that a pattern should include exactly 32 misses, but when you get to within 32 rows of the end of the data set there aren’t enough rows left to match the pattern. So the result produced by the modified code could be as much as 32 short of the expected result. However, when I showed the code to Stew Ashton he pointed out that I could include “alternatives” in the pattern, so all I had to do was add to the pattern something which said “and if there aren’t 32 rows left, getting to the end of the data set is good enough” (technically that should be the end of the current partition, but we have only one partition).

         pattern (strt  ( miss{32} | ( miss* $) ) )

The “miss” part of the pattern now reads:  “32 misses” or “zero or more misses and then the end of file/partition/dataset”.

It’s still not great – but the time to process the 1M rows with a table_cached_blocks of 32 came down to 31 seconds.

Finally

I’ll close with one important thought. There’s a significant difference in the execution plans for the two strategies – which I’m showing as outputs from the SQL Monitor report using a version of the code that does a simple count(*) rather than listing any rows:


pattern (strt miss{0, 31} hit)
==========================================================================================================================================
| Id |         Operation          | Name  |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   |  Mem  | Activity | Activity Detail |
|    |                            |       | (Estim) |       | Active(s) | Active |       | (Actual) | (Max) |   (%)    |   (# samples)   |
==========================================================================================================================================
|  0 | SELECT STATEMENT           |       |         |       |        54 |    +14 |     1 |        1 |     . |          |                 |
|  1 |   SORT AGGREGATE           |       |       1 |       |        54 |    +14 |     1 |        1 |     . |          |                 |
|  2 |    VIEW                    |       |      1M | 12147 |        54 |    +14 |     1 |     2782 |     . |          |                 |
|  3 |     MATCH RECOGNIZE SORT   |       |      1M | 12147 |        60 |     +8 |     1 |     2782 |  34MB |          |                 |
|  4 |      VIEW                  |       |      1M |   325 |        14 |     +1 |     1 |       1M |     . |          |                 |
|  5 |       INDEX FAST FULL SCAN | T1_I1 |      1M |   325 |         7 |     +8 |     1 |       1M |     . |          |                 |
==========================================================================================================================================

pattern (strt  miss{32} )
==================================================================================================================================================================
| Id |                     Operation                      | Name  |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   |  Mem  | Activity | Activity Detail |
|    |                                                    |       | (Estim) |       | Active(s) | Active |       | (Actual) | (Max) |   (%)    |   (# samples)   |
==================================================================================================================================================================
|  0 | SELECT STATEMENT                                   |       |         |       |        15 |    +16 |     1 |        1 |     . |          |                 |
|  1 |   SORT AGGREGATE                                   |       |       1 |       |        15 |    +16 |     1 |        1 |     . |          |                 |
|  2 |    VIEW                                            |       |      1M | 12147 |        15 |    +16 |     1 |     997K |     . |          |                 |
|  3 |     MATCH RECOGNIZE SORT DETERMINISTIC FINITE AUTO |       |      1M | 12147 |        29 |     +2 |     1 |     997K |  34MB |          |                 |
|  4 |      VIEW                                          |       |      1M |   325 |        16 |     +1 |     1 |       1M |     . |          |                 |
|  5 |       INDEX FAST FULL SCAN                         | T1_I1 |      1M |   325 |         9 |     +8 |     1 |       1M |     . |          |                 |
==================================================================================================================================================================

The significant line to note is operation 3 in both cases. The query with the pattern that’s going to induce backtracking reports only “MATCH RECOGNIZE SORT”. The query with the “fixed” pattern reports “MATCH RECOGNIZE SORT DETERMINISTIC FINITE AUTO”, which means Oracle can implement a finite state machine with a fixed worst-case run-time. When you write some code that uses match_recognize() the three magic words you want to see in the plan are “deterministic finite auto” – if you don’t see them then, in principle, your query might be one of those that could (theoretically) run forever.

Addendum

Congratulations if you’ve got this far – but please remember that I’ve had very little practice using match_recognize(); this was a fun little learning experience for me and there may be many more things you could usefully know about the technology before you use it in production; there may also be details in what I’ve written which are best forgotten. That being the case, I’d be more than happy for anyone who wants to enhance or correct my code, descriptions and observations to comment below.


ServiceManager … as a daemon

DBASolved - Thu, 2019-10-24 15:34

In an earlier post on ServiceManager, I took a look at how you could start/stop the ServiceManager manually.  A lot of what was said in that post still applies to this post; however, in this one I’m going to take a look at how to review the ServiceManager when it is configured as a daemon […]

The post ServiceManager … as a daemon appeared first on DBASolved.

Categories: DBA Blogs

What is an LP? (EP vs LP)

VitalSoftTech - Thu, 2019-10-24 09:45

If you’re an aspiring musician or independent artist looking to promote your work in the music industry, you’ve probably wondered at one point or another; what is an LP? You might have heard of EPs as well, bringing up an even bigger question in your head; EP vs. LP? Which format for music promotion is […]

The post What is an LP? (EP vs LP) appeared first on VitalSoftTech.

Categories: DBA Blogs

Oracle Autonomous Data Warehouse Speeds Up Decision Making at Telecom Fiji

Oracle Press Releases - Thu, 2019-10-24 09:00
Blog
Oracle Autonomous Data Warehouse Speeds Up Decision Making at Telecom Fiji

By Peter Schutt, Senior Director, Oracle—Oct 24, 2019

Telecom Fiji, one of the largest communications providers in the remote South Pacific Fiji archipelago, needed to accelerate its digital transformation to meet the increasing demand for connectivity from local residents, businesses and tourists.

To speed up decision making to enhance customer service, Telecom Fiji is leveraging AI-powered algorithms from Oracle’s cloud-based Autonomous Data Warehouse and Analytics to integrate and sift through increasing amounts of data from multiple sources to produce actionable insights in minutes.

In the past, the IT staff spent weeks on manually intensive processes of aggregating and correlating the data, which raised the risk of errors being introduced into the results, and reports were often late. The database administrators wrote SQL queries to generate raw data and then fed those results back to an analytics team for correlation in spreadsheets.

The autonomous database provisions easily and quickly, in minutes, to improve time to market; self-patches to eliminate downtime for maintenance; and auto-scales capacity on demand for the flexibility to maximize performance and minimize costs. Telecom Fiji would have had to spend hundreds of thousands of dollars to get equally powerful, or less powerful, equipment on-premises in its data center to power the same kind of data warehouse.

The front-end Oracle Analytics platform is used to create dashboards to visualize and detail sales and marketing performance, product performance, service usage trends, service-delivery performance, and other key indicators.

“We can ask our IT staff to come up with ideas or innovative solutions on how they can use the analytics. Then they will be performing more of an advisory role to management,” said Shalvin Narayan, Head of IT, Telecom Fiji. “It did surpass my expectations because the amount of functionality and performance it was providing, with respect to the amount of investment was marvelous.”

Watch the Telecom Fiji Video

In this video, Shalvin Narayan, Head of IT for Telecom Fiji, shares how the company is using Autonomous Database to produce real-time actionable insights in minutes rather than weeks.


Read More Stories from Oracle Cloud

Telecom Fiji is one of thousands of customers on the journey to the cloud. Read about others in Stories from Oracle Cloud: Business Successes.

Subquery

Dominic Brooks - Thu, 2019-10-24 07:11


with x as (select sysdate from dual)
select * from utter_bollocks.x;

Oracle Named a Leader in Translytical Data Platforms by Major Research Firm

Oracle Press Releases - Thu, 2019-10-24 07:00
Press Release
Oracle Named a Leader in Translytical Data Platforms by Major Research Firm According to report, customers like Oracle’s general data security capabilities, technical support and capability to support many workloads

Redwood Shores, Calif.—Oct 24, 2019

Today, Forrester Research named Oracle a leader in translytical data platforms, a database category which can handle a wide range of transactional and analytic workloads. “The Forrester Wave™: Translytical Data Platforms, Q4 2019” report1 cites that, “unlike other vendors, Oracle uses a dual-format database (row and columns for the same table) to deliver optimal translytical performance,” and that “customers like Oracle’s capability to support many workloads including OLTP, IoT, microservices, multimodel, data science, AI/ML, spatial, graph, and analytics.”

“Oracle is pleased to be acknowledged as a leader by Forrester for its translytical database capabilities,” said Juan Loaiza, Executive Vice President, Mission-Critical Database Technology, Oracle. “Oracle is setting the bar for the industry by offering a single converged database that brings together both transaction processing and analytic capabilities to provide our customers a mission-critical translytical data platform. Our converged database also provides built-in machine learning, spatial, graph, and JSON capabilities enabling our customers to easily run many kinds of workloads against the same data.”

The report states that Translytical is a hot, emerging market that delivers a unified data platform to support all kinds of workloads. The sweet spot is the ability to perform all of these workloads within a single database, leveraging innovation in in-memory, multimodel, distributed, and cloud architectures. Translytical databases can support various use cases including real-time insights, machine learning, streaming analytics, extreme transactional processing, and operational reporting.

Forrester reports on Oracle’s Translytical platform for Oracle environments. The report cites Oracle Database in-memory, an option that extends Oracle Database to support analytics in the same database as the one running transactions. Existing Oracle applications do not require any changes to the application in order to leverage Oracle Database in-memory. Unlike other vendors, Oracle uses a dual-format database (row and columnar representations for the same table) to deliver optimal translytical performance. In addition, Oracle leverages the Oracle Exadata appliance that supports a large-scale flash cache to perform fast in-memory and in-flash columnar operations. Customers like Oracle’s capability to support many workloads including OLTP, IoT, microservices, multimodel, data science, AI/ML, spatial, graph, and analytics.

According to the report, customers like the platform’s ease of use, ease of expanding existing Oracle applications to take advantage of translytics, general data security capabilities, and technical support, as well as the Cloud at Customer offering.

Download a complimentary copy of “The Forrester Wave™: Translytical Data Platforms, Q4 2019.”

[1] Source: “The Forrester Wave™: Translytical Data Platforms Q4 2019,” 23 October 2019.

Contact Info
Victoria Brown
Oracle
+1.650.850.2009
victoria.brown@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle's products may change and remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Victoria Brown

  • +1.650.850.2009

SEO for Single Page Oracle (SPA) JET Application

Andrejus Baranovski - Thu, 2019-10-24 02:49
This is a quick tip post related to Oracle JET.

SEO (search engine optimization) is a must for a public website or public-facing app. Search engines index static HTML pages well. Oracle JET, on the contrary, follows the SPA (single page application) approach with a dynamic UI. This makes it harder for search engines to build an index for such an app. The same applies to any other SPA implementation with Angular, React, Vue.js, etc.

If you want to build SEO support for Oracle JET, solutions similar to those for Angular, React or Vue.js SPA apps can be applied.

One of the simpler and more reliable solutions is to generate static HTML pages for the Oracle JET SPA app. You can follow this article; it explains how the same can be done for Angular - SEO for Single Page Applications.
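
For a quick do-it-yourself starting point, one common approach is to prerender every route with a headless browser and save the rendered HTML as static pages that search engines can index. A minimal sketch (assuming selenium with a local chromedriver, an app served at http://localhost:8000, and hypothetical route names - adapt all of these to your app):

from selenium import webdriver

routes = ['/', '/about', '/contact']          # hypothetical JET router entries
driver = webdriver.Chrome()                   # assumes chromedriver is on the PATH
for route in routes:
    driver.get('http://localhost:8000' + route)
    # a real script should wait/poll here until the app has finished rendering
    html = driver.page_source                 # the fully rendered DOM
    name = 'index.html' if route == '/' else route.strip('/') + '.html'
    with open(name, 'w', encoding='utf-8') as f:
        f.write(html)
driver.quit()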

Clustering_Factor

Jonathan Lewis - Wed, 2019-10-23 15:56

A few days ago I published a little note about a script I wrote some time ago to estimate the clustering_factor of an index before it had been built. At the time I pointed out that one of its limitations was that it would not handle cases where you were planning to set the table_cached_blocks preference, but a couple of days later I decided that I’d write another version of the code that would cater for the new feature – and that’s how I made an embarrassing discovery.

Having reviewed a number of notes I’ve published about the table_cached_blocks preference and its impact on the clustering_factor, I’ve realised that what I’ve written has always been open to two interpretations – the one that I had in mind as I was writing, and the correct one. I made this discovery because I had written a simple SQL statement – using the match_recognize() mechanism – to do what I considered to be the appropriate calculation. After testing the query with a few sets of sample data that produced the correct results, I emailed Stew Ashton (my “go-to” person for match_recognize() questions) asking if he would do a sanity check on the code because it was rather slow and I wondered if there was a better way of writing it.

His reply was roughly:

“I’ve read the notes you and Richard Foote have written about the clustering_factor and table_cached_blocks, and this isn’t doing what your description says it should.”

Then he explained what he had inferred from what I had written … and it made more sense than what I had been thinking when I wrote it. He also supplied some code to implement his interpretation – so I designed a couple of data models that would produce the wrong prediction for whichever piece of code implemented the wrong interpretation. His code gave the right answers, mine didn’t.

So here’s the difference in interpretation – the wrong one first – using 16 as a discussion value for the table_cached_blocks:

  • WRONG interpretation: as you walk through the index entries in order, remember the last 16 rowids (that’s the rowids of the rows in the table that the index entries point to). If the current rowid has a block id component that doesn’t match the block id from one of the 16 remembered rowids, then increment the counter for the clustering_factor.
    • The simplicity of this algorithm means you can fix a “circular” array of 16 entries and keep walking around the circle, overwriting the oldest entry each time you read a new one. It’s a pity that it’s the wrong idea because there’s a simple (though massively CPU-intensive) match_recognize() strategy for implementing it – and if you were using an internal library mechanism during a proper gather_index_stats() it could be incredibly efficient.
  • RIGHT interpretation: set up an array of 16 block ids, each with an associated “row-number”. Walk through the index in order, giving each entry a row-number as you go. Extract the block id from the current entry and search through the array for a matching block id. If you find a match then update its entry with the current row-number (so you can remember how recently you saw the block id); if you don’t find a match then replace the entry that has the smallest (i.e. greatest distance into the past) row-number with the current block id and row-number, and increment the counter for the clustering_factor. (A toy simulation contrasting the two interpretations follows this list.)
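
To see that the two interpretations really can give different answers, here’s a toy simulation (plain Python, purely illustrative – nothing to do with Oracle internals). With the block id sequence [1, 2, 2, 2, 1] and a limit of 2, the wrong algorithm counts 3 (the repeated 2s push block 1 out of the two-entry window) while the right one counts 2 (block 1 stays cached because only two distinct block ids are involved):

def cf_wrong(block_ids, n):
    # WRONG: sliding window of the last n rowids seen (duplicates included)
    window, cf = [], 0
    for b in block_ids:
        if b not in window:
            cf += 1
        window.append(b)
        if len(window) > n:
            window.pop(0)                      # overwrite the oldest entry
    return cf

def cf_right(block_ids, n):
    # RIGHT: up to n distinct block ids, evicting the least recently hit one
    cache, cf = {}, 0                          # block id -> row number of last hit
    for rn, b in enumerate(block_ids):
        if b in cache:
            cache[b] = rn                      # refresh the "last hit" row number
        else:
            cf += 1
            if len(cache) >= n:
                del cache[min(cache, key=cache.get)]   # evict the LRU block id
            cache[b] = rn
    return cf

seq = [1, 2, 2, 2, 1]
print(cf_wrong(seq, 2), cf_right(seq, 2))      # prints: 3 2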

The first piece of code that Stew Ashton sent me was an anonymous PL/SQL block that included some hard-coded fragments and embedded SQL to use a test table and index that I had defined, but he then sent a second piece of code that creates a generic function that uses dynamic SQL to construct a query against a table and an index definition that you want to test. The latter is the code I’ve published (with permission) below:


create or replace function predict_clustering_factor(
/*
Function to predict the clustering factor of an index,
taking into account the intended value of
the TABLE_CACHED_BLOCKS parameter of DBMS_STATS.SET_TABLE_PREFS.

Input is the table name, the list of column names
and the intended value of TABLE_CACHED_BLOCKS.

The function collects the last N block ids (not the last N entries).
When there is no more room, it increments the clustering factor
and replaces the least recently used block id with the current one.

Note: here a "block id" is a rowid with the row_number portion set to 0.
It is effectively a "truncated" rowid.
*/
  p_table_name in varchar2,
  p_column_list in varchar2,
  p_table_cached_blocks in number
) return number authid current_user is

  rc sys_refcursor;
  type tt_rids is table of rowid;
  lt_rids tt_rids;
  
  type t_block_list is record(
    rid rowid,
    last_hit number
  );

  type tt_block_list is table of t_block_list;
  lt_block_list tt_block_list := new tt_block_list();

  l_rid rowid;
  l_clustering_factor number := 0;
  b_block_found boolean;
  l_rn number := 0;
  l_oldest_hit number;
  i_oldest_hit binary_integer := 0;
  
  function truncated_rid(p_rid in rowid) return rowid is
    rowid_type number;
    object_number NUMBER;
    relative_fno NUMBER;
    block_number NUMBER;
    row_number NUMBER;
    rid rowid;

  begin

    DBMS_ROWID.ROWID_INFO (
      p_rid,
      rowid_type,
      object_number,
      relative_fno,
      block_number,
      row_number
    );

    rid := DBMS_ROWID.ROWID_CREATE (
      rowid_type,
      object_number,
      relative_fno,
      block_number,
      0
    );

    return rid;

  end truncated_rid;
  
begin
  if p_table_cached_blocks != trunc(p_table_cached_blocks)
  or p_table_cached_blocks not between 1 and 255 then
    raise_application_error(
      -20001, 
      'input parameter p_table_cached_blocks must be an integer between 1 and 255'
    );
  end if;

  open rc for 'select rowid from '||p_table_name||' order by '||p_column_list||', rowid';
  loop
    fetch rc bulk collect into lt_rids limit 1000;

    for irid in 1..lt_rids.count loop
      l_rn := l_rn + 1;
      l_rid := truncated_rid(lt_rids(irid));
      b_block_found := false;
      l_oldest_hit := l_rn;

      if l_rn = 1 then
        l_clustering_factor := l_clustering_factor + 1;
        lt_block_list.extend;
        lt_block_list(1).rid := l_rid;
        lt_block_list(1).last_hit := l_rn;

      else

        for i in 1..lt_block_list.count loop
          if l_oldest_hit > lt_block_list(i).last_hit then
            l_oldest_hit := lt_block_list(i).last_hit;
            i_oldest_hit := i;
          end if;
          if lt_block_list(i).rid = l_rid then
            b_block_found := true;
            lt_block_list(i).last_hit := l_rn;
            exit;
          end if;
        end loop;

        if not b_block_found then
          l_clustering_factor := l_clustering_factor + 1;
          if lt_block_list.count < p_table_cached_blocks then
            lt_block_list.extend;
            lt_block_list(lt_block_list.count).rid := l_rid;
            lt_block_list(lt_block_list.count).last_hit := l_rn; 
          else         
            lt_block_list(i_oldest_hit).rid := l_rid;
            lt_block_list(i_oldest_hit).last_hit := l_rn;
          end if;
        end if;

      end if;

    end loop;
    exit when rc%notfound;
  end loop;

  close rc;
  return l_clustering_factor;

exception when others then
  if rc%isopen then
    close rc;
  end if;
  raise;

end predict_clustering_factor;
/

After executing the above to create the function, here’s an example of usage:

rem
rem     Script:         clustering_factor_est_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2019
rem
rem     Last tested
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        cast(rownum as varchar2(10))            v1,
        trunc(dbms_random.value(0,10000))       rand,
        rpad('x',100,'x')                       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
/

-- -------------------------------------------------------------------

SQL> execute dbms_output.put_line('Predicted cf for t1(rand, id): ' || predict_clustering_factor('t1','rand, id',16))
Predicted cf for t1(rand, id): 997218
Elapsed: 00:00:07.54

SQL> execute dbms_output.put_line('Predicted cf for t1(rand, id): ' || predict_clustering_factor('t1','rand, id',255))
Predicted cf for t1(rand, id): 985607
Elapsed: 00:00:50.61

You’ll notice that the larger the setting for the “table_cached_blocks” parameter, the more time it takes to predict the clustering_factor – and it was all CPU time in my example. This isn’t surprising given the need to search through an array holding the previous history. In this example the table t1 holds 1,000,000 rows, and the number and scatter of distinct values is so arranged that the code will hardly ever find a cached block id – essentially it’s the sort of index that isn’t going to cause much confusion for the optimizer and isn’t likely to need special attention to make the optimizer use it when it should and ignore it when it’s inappropriate.

Finally a cut-n-paste to show the accuracy of the two predictions:

SQL> create index t1_i on t1(rand, id);
Elapsed: 00:00:02.96

SQL> execute dbms_stats.set_table_prefs(null,'t1','table_cached_blocks',16)
Elapsed: 00:00:00.01

SQL> execute dbms_stats.gather_index_stats(null,'t1_i')
Elapsed: 00:00:09.55

SQL> select clustering_factor from user_indexes where index_name = 'T1_I';

CLUSTERING_FACTOR
-----------------
           997218

Elapsed: 00:00:00.11

SQL> execute dbms_stats.set_table_prefs(null,'t1','table_cached_blocks',255)
Elapsed: 00:00:00.01

SQL> execute dbms_stats.gather_index_stats(null,'t1_i')
Elapsed: 00:00:07.80

SQL> select clustering_factor from user_indexes where index_name = 'T1_I';

CLUSTERING_FACTOR
-----------------
           985607

Elapsed: 00:00:00.00

Both match perfectly – but you might notice that creating the index and gathering the stats was much faster than predicting the clustering factor for the case where we set table_cached_blocks = 255.

(If you’re wondering, my “simple but irrelevant” match_recognize() query took 370 CPU seconds to complete for table_cached_blocks = 200 – and a limit in match_recognize() meant that 200 was the maximum value I was allowed to use – so now you know why I emailed Stew Ashton. And, just for lagniappe, he also told me about a simple workaround for the 200 limit.)


Multiple SQL statements in a single Execute Immediate

Tom Kyte - Wed, 2019-10-23 15:46
What is the exact syntax to be able to execute multiple sql statements from within a single execute immediate statement.
Categories: DBA Blogs

Can I get a single table where each column consists of a query (each query returning three rows with ids)?

Tom Kyte - Wed, 2019-10-23 15:46
Hi, I have 27 different queries, all of which yield 3 rows max. Something like this: <code> SELECT CUSTOMER_NUMBER FROM customer WHERE name = 'SomeName' order by CUSTOMER_NUMBER fetch first 3 rows only; </code> and <code> SELECT CUSTOM...
Categories: DBA Blogs

Database Event Error Tracking

Tom Kyte - Wed, 2019-10-23 15:46
Hi Guys, I have a schema with multiple Schema objects like Procedures,Functions,Triggers and Packages. While testing via application if any DB error occurs we need to check the log to identify errors. Is there any way to create a single trig...
Categories: DBA Blogs

XML Aggregation

Tom Kyte - Wed, 2019-10-23 15:46
Consider: <code>with data as (select 'MH' initials, to_date('01092019','ddmmyyyy') cal_date, 23 quantity from dual union all select 'MH' initials, to_date('02092019','ddmmyyyy') cal_date, 18 quantity fro...
Categories: DBA Blogs

Remove a hint with DBMS_ADVANCED_REWRITE and DBMS_SQL_TRANSLATOR failed

Tom Kyte - Wed, 2019-10-23 15:46
Hello Masters, I am testing the two packages DBMS_ADVANCED_REWRITE and DBMS_SQL_TRANSLATOR and they work fine : for exemple I can remove an ORDER BY from a SELECT. But, there is always an exception, I have problems with Hints : I want to remove...
Categories: DBA Blogs

Constraining JSON Data

Tom Kyte - Wed, 2019-10-23 15:46
Hello AskTom-Team, is there a possibility to contrain and check JSON data? For example: <code>insert into test(questionnaire_id, var_val, year) '{ "f1": "2571", "f11": "38124", "f31": "332.64", "f4...
Categories: DBA Blogs

Oracle Health Sciences and Phlexglobal Collaborate to Reduce Operational Bottlenecks in Clinical Research

Oracle Press Releases - Wed, 2019-10-23 07:00
Press Release
Oracle Health Sciences and Phlexglobal Collaborate to Reduce Operational Bottlenecks in Clinical Research Integration aids timeliness and quality in regulatory compliance, enhancing audit readiness

Redwood Shores, Calif.—Oct 23, 2019

Oracle Health Sciences, a leader in eClinical technology, and Phlexglobal, a pioneer in the provision of Trial Master File (TMF) technology and services for the global life sciences industry, have announced enhanced integrations to accelerate the speed and accuracy of regulatory compliance and inspection readiness in clinical trials.

A TMF is a requirement that all sponsors and contract research organizations (CROs) must meet to assure that the rights, safety and well-being of trial subjects are protected, that their trials are being conducted in accordance with Good Clinical Practice (GCP) principles, and that the clinical trial data is credible to ensure audit readiness.

Phlexglobal’s PhlexEview TMF management system uses automation to streamline processes, has an open architecture to ensure easy data integration, and provides advanced functionality designed to better facilitate TMF Management.

The Oracle Health Sciences eClinical platform, integrated with the TMF, provides a comprehensive, proven and best-in-class clinical trial ecosystem for life science organizations. Oracle Health Sciences provides key integration components (i.e., data staging, business rule compliance, target data staging, and API processing) to ensure seamless and successful integrations with its applications.

Standards are Critical

Other eClinical solutions only support clinical trial functions in silos. The ability to bring together the collective institutional memory and the facilitation between systems and team members is critical, and addresses an unmet need. The challenges in integration of eClinical systems stem from the lack of standards.

Phlexglobal’s Chief Strategy Officer Karen Roy is highly active in the TMF Reference Model, an industry-led initiative to define these standards—the TMF Reference Model is now widely adopted as the TMF structure of choice. In addition, the group has developed an exchange mechanism for vendor interoperability. This critical initiative establishes a common language (both business and technical exchange mechanism) enabling different organizational departments and companies to automate the exchange of information, which satisfies end-to-end requirements and validation.

Eliminating inefficiencies and bottlenecks associated with disparate data silos is critical to the goal of reducing cycle times in clinical trials. Today, regulatory authorities are placing more emphasis on the currency and timeliness of the TMF content they receive from sponsors and CROs. To achieve this, it is critical to ensure and document content quality, as specified in the ALCOA-C guidance, before the content is transferred to the TMF. Together, Oracle Health Sciences and Phlexglobal address this challenge.

“Our integration with Oracle Health Sciences’ industry-leading applications will save organizations significant resources, as documents only have to be checked once early in the process, which will ensure that any issues can be found and corrected in a timely manner enhancing overall quality,” said John McNeill, CEO of Phlexglobal. “Our collaboration represents a leap forward in improving clinical trials and controlling runaway costs and timelines. Our life science customers will be able to leverage industry KPIs to support business process optimization and business intelligence in clinical study startup, strengthening our regulatory compliance and global trial oversight capabilities.”

“Adoption of industry standards is critical to plug-and-play coexistence in the eClinical ecosystem,” said James Streeter, global vice president life sciences product strategy, Oracle Health Sciences. “What’s required is a deep understanding of the right type of integration at the right time for the different integration points. The TMF Reference Model Exchange Mechanism facilitates these integrations by providing a common language amongst all eClinical vendors. We strongly support this initiative.”

Phlexglobal is a Silver level member of Oracle PartnerNetwork (OPN).

Phlexglobal and Oracle representatives are available to discuss this collaboration via the contacts below.

Contact Info
Chris Englerth
Phlexglobal
+1 215.622.8798
cenglerth@phlexglobal.com
Judi Palmer
Oracle
+1 650.506.0266
judi.palmer@oracle.com
About Phlexglobal

Phlexglobal are world-renowned thought leaders and specialists in the provision of electronic Trial Master File (eTMF) systems and services. They offer a unique combination of eTMF technology with PhlexEview – their market-leading eTMF system – quality services and specialist resources that deliver a range of flexible, targeted solutions to meet business needs. Phlexglobal is headquartered in the UK, with offices in Malvern, PA, Durham, NC, and Lublin, Poland.

About Oracle Health Sciences

Oracle Health Sciences breaks down barriers and opens new pathways to unify people and processes to bring new drugs to market faster. As a leader in Life Sciences technology, Oracle Health Sciences is trusted by 30 of the top 30 pharma, 10 of the top 10 biotech and 10 of the top 10 CROs for managing clinical trials and pharmacovigilance around the globe.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Adding a dbvisit standby database on the ODA in a non-OMF environment

Yann Neuhaus - Wed, 2019-10-23 06:04

I have recently been working on a customer project where I was challenged with adding a dbvisit standby database on an ODA X7-2M named ODA03. The existing customer environment was composed of Oracle Standard Edition 12.2 databases. The primary database, myDB, is running on a server named srv02 and uses a non-OMF configuration. On the ODA side we are working with an OMF configuration. The dbvisit version available at that time was version 8; note that version 9 is now the latest release and brings some cool new features. Through this blog I would like to share with you my experience, the problem I faced and the solution I could put in place.

Preparing the instance on the ODA

First of all, I created an instance-only database on the ODA.

[root@ODA03 ~]# odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status   
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
ec33e32a-37d1-4d0d-8c40-b358dcf5660c     OraDB12201_home1     12.2.0.1.180717                          /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured

[root@ODA03 ~]# odacli create-database -m -u myDB_03 -dn domain.name -n myDB -r ACFS -io -dh ec33e32a-37d1-4d0d-8c40-b358dcf5660c
Password for SYS,SYSTEM and PDB Admin:

Job details
----------------------------------------------------------------
                     ID:  96fd4d07-4604-4158-9c25-702c01f4493e
            Description:  Database service creation with db name: myDB
                 Status:  Created
                Created:  May 15, 2019 4:29:15 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@ODA03 ~]# odacli describe-job -i 96fd4d07-4604-4158-9c25-702c01f4493e

Job details
----------------------------------------------------------------
                     ID:  96fd4d07-4604-4158-9c25-702c01f4493e
            Description:  Database service creation with db name: myDB
                 Status:  Success
                Created:  May 15, 2019 4:29:15 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance               May 15, 2019 4:29:16 PM CEST        May 15, 2019 4:29:16 PM CEST        Success
Creating volume datmyDB                    May 15, 2019 4:29:16 PM CEST        May 15, 2019 4:29:38 PM CEST        Success
Creating volume reco                     May 15, 2019 4:29:38 PM CEST        May 15, 2019 4:30:00 PM CEST        Success
Creating ACFS filesystem for DATA        May 15, 2019 4:30:00 PM CEST        May 15, 2019 4:30:17 PM CEST        Success
Creating ACFS filesystem for RECO        May 15, 2019 4:30:17 PM CEST        May 15, 2019 4:30:35 PM CEST        Success
Database Service creation                May 15, 2019 4:30:35 PM CEST        May 15, 2019 4:30:51 PM CEST        Success
Auxiliary Instance Creation              May 15, 2019 4:30:35 PM CEST        May 15, 2019 4:30:47 PM CEST        Success
password file creation                   May 15, 2019 4:30:47 PM CEST        May 15, 2019 4:30:49 PM CEST        Success
archive and redo log location creation   May 15, 2019 4:30:49 PM CEST        May 15, 2019 4:30:49 PM CEST        Success
updating the Database version            May 15, 2019 4:30:49 PM CEST        May 15, 2019 4:30:51 PM CEST        Success

The next steps are really common DBA operations (a minimal sketch follows the list):

  • Create a pfile from the current primary database
  • Transfer the pfile to the ODA
  • Update the pfile as needed (path, db_unique_name, …)
  • Create a spfile from the pfile on the new ODA database
  • Apply ODA specific instance parameters
  • Copy or create the password file with same password
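A minimal sketch of the pfile/spfile round trip, with file names and paths chosen here purely for illustration:

-- On the primary (srv02): dump the current parameters to a pfile
SQL> CREATE PFILE='/tmp/initmyDB.ora' FROM SPFILE;

# Transfer it to the ODA
oracle@srv02:~ $ scp /tmp/initmyDB.ora oracle@ODA03:/tmp/

-- On the ODA: once the pfile has been edited (paths, db_unique_name, ...),
-- create the spfile for the new instance. Note that the odacli-created
-- database keeps its spfile on ACFS, as registered later with srvctl.
SQL> CREATE SPFILE FROM PFILE='/tmp/initmyDB.ora';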

The following parameters must be set on the ODA instance:
*.db_create_file_dest='/u02/app/oracle/oradata/myDB_03'
*.db_create_online_log_dest_1='/u03/app/oracle/redo'
*.db_recovery_file_dest='/u03/app/oracle/fast_recovery_area'

All the convert parameters (db_file_name_convert, log_file_name_convert) should also be removed: using convert parameters is incompatible with OMF.
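A minimal sketch of resetting them in the spfile, assuming they were set there:

SQL> ALTER SYSTEM RESET db_file_name_convert SCOPE=SPFILE;
SQL> ALTER SYSTEM RESET log_file_name_convert SCOPE=SPFILE;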

Creating the standby database

Using dbvisit

I first tried to use dbvisit to create the standby database.

As for any common dbvisit operation, I first created the DDC configuration file from the primary server:

oracle@srv02:/u01/app/dbvisit/standby/ [myDB] ./dbvctl -o setup
...
...
...
Below are the list of configuration variables provided during the setup process:

Configuration Variable             Value Provided
======================             ==============
ORACLE_SID                         myDB
ORACLE_HOME                        /opt/oracle/product/12.2.0

SOURCE                             srv02
ARCHSOURCE                         /u03/app/oracle/dbvisit_arch/myDB
RAC_DR                             N
USE_SSH                            N
DESTINATION                        ODA03
NETPORT                            7890
DBVISIT_BASE_DR                    /u01/app/dbvisit
ORACLE_HOME_DR                     /u01/app/oracle/product/12.2.0.1/dbhome_1
DB_UNIQUE_NAME_DR                  myDB_03
ARCHDEST                           /u03/app/oracle/dbvisit_arch/myDB
ORACLE_SID_DR                      myDB
ENV_FILE                           myDBSTD1

Are these variables correct?  [Yes]:
...
...
...

I then used this DDC configuration file to create the standby database:

oracle@srv02:/u01/app/dbvisit/standby/ [myDB] ./dbvctl -d myDBSTD1 --csd


-------------------------------------------------------------------------------

INIT ORA PARAMETERS
-------------------------------------------------------------------------------
*              audit_file_dest                         /u01/app/oracle/admin/myDB/adump
*              compatible                              12.2.0
*              control_management_pack_access          NONE
*              db_block_size                           8192
*              db_create_file_dest                     /u02/app/oracle/oradata/myDB_03
*              db_create_online_log_dest_1             /u03/app/oracle/redo
*              db_domain
*              db_name                                 myDB
*              db_recovery_file_dest                   /u03/app/oracle/fast_recovery_area
*              db_recovery_file_dest_size              240G
*              db_unique_name                          myDB_03
*              diagnostic_dest                         /u01/app/oracle
*              dispatchers                             (PROTOCOL=TCP) (SERVICE=myDBXDB)
*              instance_mode                           READ-WRITE
*              java_pool_size                          268435456
*              log_archive_dest_1                      LOCATION=USE_DB_RECOVERY_FILE_DEST
*              open_cursors                            3000
*              optimizer_features_enable               12.2.0.1
*              pga_aggregate_target                    4194304000
*              processes                               8000
*              remote_login_passwordfile               EXCLUSIVE
*              resource_limit                          TRUE
*              sessions                                7552
*              sga_max_size                            53687091200
*              sga_target                              26843545600
*              shared_pool_reserved_size               117440512
*              spfile                                  OS default
*              statistics_level                        TYPICAL
*              undo_retention                          300
*              undo_tablespace                         UNDOTBS1

-------------------------------------------------------------------------------

Status: VALID

What would you like to do:
   1 - Create standby database using existing saved template
   2 - View content of existing saved template
   3 - Return to the previous menu
   Please enter your choice [1]:

This operation failed with the following errors:

Cannot create standby data or temp file /usr/oracle/oradata/myDB/myDB_bi_temp01.dbf for
primary file /usr/oracle/oradata/myDB/myDB_bi_temp01.dbf as location /usr/oracle/oradata/myDB
does not exist on the standby.

As per the dbvisit documentation, dbvisit standby is certified on the ODA and fully compatible with both non-OMF and OMF databases. This is correct; the only restriction is that the whole environment needs to use the same configuration. That is to say, if the primary is OMF, the standby is expected to be OMF; if the primary is running a non-OMF configuration, the standby should use non-OMF as well.

Using RMAN

I decided to duplicate the database using RMAN and a backup that I transferred locally onto the ODA. The backup was the previous nightly inc0 backup. Before running the RMAN duplication, I executed a last archive log backup to make sure the most recent archives would be used in the duplication.
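That last archive log backup can be as simple as the following, run on the primary (the format string and destination here are assumptions for illustration):

RMAN> BACKUP ARCHIVELOG ALL FORMAT '/opt/oracle/backup/myDB/arch_%U';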

I'm taking this opportunity to highlight that, thanks to the ODA NVMe technology, the duplication of the 3 TB database, without multiple channels (Standard Edition), took only a bit more than 2 hours (roughly 400 MB/s sustained on a single channel). On the existing servers this took about 10 hours.

I added the following TNS entry to the tnsnames.ora file.

myDBSRV3 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ODA03.domain.name)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = myDB)
      (UR = A)
    )
  )

Of course, I could also have used a local connection.
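Getting the auxiliary instance into NOMOUNT is as simple as the following (a sketch, run against the instance created above):

oracle@ODA03:~ [myDB] sqlplus / as sysdba
SQL> STARTUP NOMOUNT;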

I made sure the database was in NOMOUNT status and ran the RMAN duplication:

oracle@ODA03:/opt/oracle/backup/ [myDB] rmanh

Recovery Manager: Release 12.2.0.1.0 - Production on Mon May 20 13:24:29 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect auxiliary sys@myDBSRV3

auxiliary database Password:
connected to auxiliary database: myDB (not mounted)

RMAN> run {
2> duplicate target database for standby dorecover backup location '/opt/oracle/backup/myDB';
3> }

Starting Duplicate Db at 20-MAY-2019 13:25:51

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''/u03/app/oracle/redo/myDB_03/controlfile/o1_mf_gg4qvpnn_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   restore clone standby controlfile from  '/opt/oracle/backup/myDB/ctl_myDB_myDB_s108013_p1_newbak.ctl';
}
executing Memory Script

sql statement: alter system set  control_files =   ''/u03/app/oracle/redo/myDB_03/controlfile/o1_mf_gg4qvpnn_.ctl'' comment= ''Set by RMAN'' scope=spfile

Starting restore at 20-MAY-2019 13:25:51
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=9186 device type=DISK

channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u03/app/oracle/redo/myDB_03/controlfile/o1_mf_gg4qvpnn_.ctl
Finished restore at 20-MAY-2019 13:25:52

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=9186 device type=DISK

contents of Memory Script:
{
   set until scn  49713361973;
   set newname for clone tempfile  1 to new;
   set newname for clone tempfile  2 to new;
   switch clone tempfile all;
   set newname for clone datafile  1 to new;
   set newname for clone datafile  2 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  5 to new;
   set newname for clone datafile  6 to new;
   set newname for clone datafile  7 to new;
   set newname for clone datafile  8 to new;
   set newname for clone datafile  10 to new;
   set newname for clone datafile  11 to new;
   set newname for clone datafile  12 to new;
   set newname for clone datafile  13 to new;
   set newname for clone datafile  14 to new;
   set newname for clone datafile  15 to new;
   set newname for clone datafile  16 to new;
   set newname for clone datafile  17 to new;
   set newname for clone datafile  18 to new;
   restore
   clone database
   ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 2 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_lx_bi_te_%u_.tmp in control file

executing command: SET NEWNAME

...
...
...

executing command: SET NEWNAME

Starting restore at 20-MAY-2019 13:25:57
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00005 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_lxdataid_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00006 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_renderz2_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00007 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_lx_ods_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00008 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00013 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_renderzs_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00015 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_lx_stagi_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /opt/oracle/backup/myDB/inc0_myDB_s107963_p1
...
...
...
archived log file name=/opt/oracle/backup/myDB/1_58043_987102791.dbf thread=1 sequence=58043
archived log file name=/opt/oracle/backup/myDB/1_58044_987102791.dbf thread=1 sequence=58044
archived log file name=/opt/oracle/backup/myDB/1_58045_987102791.dbf thread=1 sequence=58045
archived log file name=/opt/oracle/backup/myDB/1_58046_987102791.dbf thread=1 sequence=58046
archived log file name=/opt/oracle/backup/myDB/1_58047_987102791.dbf thread=1 sequence=58047
archived log file name=/opt/oracle/backup/myDB/1_58048_987102791.dbf thread=1 sequence=58048
archived log file name=/opt/oracle/backup/myDB/1_58049_987102791.dbf thread=1 sequence=58049
archived log file name=/opt/oracle/backup/myDB/1_58050_987102791.dbf thread=1 sequence=58050
media recovery complete, elapsed time: 00:12:40
Finished recover at 20-MAY-2019 16:06:22
Finished Duplicate Db at 20-MAY-2019 16:06:39

I could then check that my standby database had been successfully created on the ODA:

oracle@ODA03:/u01/app/oracle/local/dmk/etc/ [myDB] myDB
********* dbi services Ltd. *********
STATUS                 : MOUNTED
DB_UNIQUE_NAME         : myDB_03
OPEN_MODE              : MOUNTED
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PHYSICAL STANDBY
FLASHBACK_ON           : NO
FORCE_LOGGING          : YES
CDB Enabled            : NO
*************************************
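If you do not use the dbi DMK helper shown above, a plain SQL*Plus query against v$database gives the same information (a sketch):

SQL> select db_unique_name, database_role, open_mode, log_mode, force_logging from v$database;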

As a personal note, I found using Oracle RMAN more convenient for duplicating a database. Although the dbvisit script and tooling are really stable, I think RMAN gives you more flexibility.

Registering the database in the grid cluster

As a next step, I registered the database in the Grid Infrastructure.

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [LX] srvctl add database -db MyDB_03 -oraclehome /u01/app/oracle/product/12.2.0.1/dbhome_1 -dbtype SINGLE -instance MyDB -domain team-w.local -spfile /u02/app/oracle/oradata/MyDB_03/dbs/spfileMyDB.ora -pwfile /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/orapwMyDB -role PHYSICAL_STANDBY -startoption MOUNT -stopoption IMMEDIATE -dbname MyDB -node ODA03 -acfspath "/u02/app/oracle/oradata/MyDB_03,/u03/app/oracle"

I stopped the database:

SQL> shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

And started it again with the Grid Infrastructure:

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [MyDB] MyDB
********* dbi services Ltd. *********
STATUS          : STOPPED
*************************************

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [MyDB] srvctl status database -d MyDB_03
Instance MyDB is not running on node ODA03

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [MyDB] srvctl start database -d MyDB_03

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [MyDB] srvctl status database -d MyDB_03
Instance MyDB is running on node ODA03

dbvisit synchronization

We now have our standby database created on the ODA. We just need to synchronize it with the primary.

Run a gap report

Executing a gap report, we can see that the newly created database is running almost 4 hours behind.

oracle@srv02:/u01/app/dbvisit/standby/ [rdbms12201] ./dbvctl -d myDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 321953)
dbvctl started on srv02: Mon May 20 16:24:35 2019
=============================================================


Dbvisit Standby log gap report for myDB thread 1 at 201905201624:
-------------------------------------------------------------
Destination database on ODA03 is at sequence: 58050.
Source database on srv02 is at log sequence: 58080.
Source database on srv02 is at archived log sequence: 58079.
Dbvisit Standby last transfer log sequence: .
Dbvisit Standby last transfer at: .

Archive log gap for thread 1:  29.
Transfer log gap for thread 1: 58079.
Standby database time lag (DAYS-HH:MI:SS): +03:39:01.


=============================================================
dbvctl ended on srv02: Mon May 20 16:24:40 2019
=============================================================

Send the archive logs from primary to the standby database

I shipped the last archive logs from the primary database to the newly created standby.

oracle@srv02:/u01/app/dbvisit/standby/ [rdbms12201] ./dbvctl -d myDBSTD1
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 326409)
dbvctl started on srv02: Mon May 20 16:29:14 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 30. Transfer log gap: 58080
>>> Sending heartbeat message... skipped
>>> First time Dbvisit Standby runs, Dbvisit Standby configuration will be copied to
    ODA03...
>>> Transferring Log file(s) from myDB on srv02 to ODA03 for thread 1:

    thread 1 sequence 58051 (1_58051_987102791.dbf)
    thread 1 sequence 58052 (1_58052_987102791.dbf)
...
...
...
    thread 1 sequence 58079 (1_58079_987102791.dbf)
    thread 1 sequence 58080 (1_58080_987102791.dbf)

=============================================================
dbvctl ended on srv02: Mon May 20 16:30:50 2019
=============================================================

Apply archive logs on the standby database

Then I could finally apply the archive logs on the standby database.

oracle@ODA03:/u01/app/dbvisit/standby/ [myDB] ./dbvctl -d myDBSTD1
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 21504)
dbvctl started on ODA03: Mon May 20 16:33:42 2019
=============================================================

>>> Sending heartbeat message... skipped

>>> Applying Log file(s) from srv02 to myDB on ODA03:

    thread 1 sequence 58051 (1_58051_987102791.arc)
    thread 1 sequence 58052 (1_58052_987102791.arc)
...
...
...
    thread 1 sequence 58079 (1_58079_987102791.arc)
    thread 1 sequence 58080 (1_58080_987102791.arc)
    Last applied log(s):
    thread 1 sequence 58080

    Next SCN required for recovery 49719323442 generated at 2019-05-20:16:27:09 +02:00.
    Next required log thread 1 sequence 58081

=============================================================
dbvctl ended on ODA03: Mon May 20 16:36:52 2019
=============================================================

Run a gap report

Running a new gap report, we can see that there is no delta between the primary and the standby database.

oracle@srv02:/u01/app/dbvisit/standby/ [rdbms12201] ./dbvctl -d myDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 335068)
dbvctl started on srv02: Mon May 20 16:37:53 2019
=============================================================


Dbvisit Standby log gap report for myDB thread 1 at 201905201637:
-------------------------------------------------------------
Destination database on ODA03 is at sequence: 58081.
Source database on srv02 is at log sequence: 58082.
Source database on srv02 is at archived log sequence: 58081.
Dbvisit Standby last transfer log sequence: 58081.
Dbvisit Standby last transfer at: 2019-05-20 16:37:36.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:01.


=============================================================
dbvctl ended on srv02: Mon May 20 16:37:57 2019
=============================================================

Preparing the database for switchover

Are we done? Absolutely not. In order to be able to successfully perform a switchover, 3 main modifications are mandatory on the non-ODA server (running the non-OMF database):

  • The future database files should be OMF
  • The online redo log should be newly created
  • The temporary file should be newly created

Otherwise you might end up with an unsuccessful switchover showing errors like the ones below:

>>> Starting Switchover between srv02 and ODA03

Running pre-checks       ... failed
No rollback action required

>>> Database on server srv02 is still a Primary Database
>>> Database on server ODA03 is still a Standby Database


<<<>>>
PID:40386
TRACEFILE:40386_dbvctl_switchover_myDBSTD1_201905272153.trc
SERVER:srv02
ERROR_CODE:1
Remote execution error on ODA03.

====================Remote Output start: ODA03=====================
<<<>>>
PID:92292
TRACEFILE:92292_dbvctl_f_gs_get_info_standby_myDBSTD1_201905272153.trc
SERVER:ODA03
ERROR_CODE:2146
Dbvisit Standby cannot proceed:
Cannot create standby data or temp file /usr/oracle/oradata/myDB/temp01.dbf for primary
file /usr/oracle/oradata/myDB/temp01.dbf as location /usr/oracle/oradata/myDB does not
exist on the standby.
Cannot create standby data or temp file /usr/oracle/oradata/myDB/lx_bi_temp01.dbf for
primary file /usr/oracle/oradata/myDB/lx_bi_temp01.dbf as location /usr/oracle/oradata/myDB
does not exist on the standby.
Review the following standby database parameters:
        db_create_file_dest = /u02/app/oracle/oradata/myDB_03
        db_file_name_convert =
>>>> Dbvisit Standby terminated <<<<
>>>> Dbvisit Standby terminated <<<<

Having new OMF configuration

There is no need to convert the full database into OMF. A database can run with both file naming configurations, non-OMF and OMF, at the same time. We just need the database to work with an OMF configuration from now on. For this we simply set the appropriate init parameters. In my case the existing primary database was storing all data and redo files in the /opt/oracle/oradata directory.

SQL> alter system set DB_CREATE_FILE_DEST='/opt/oracle/oradata' scope=both;

System altered.

SQL> alter system set DB_CREATE_ONLINE_LOG_DEST_1='/opt/oracle/oradata' scope=both;

System altered.

Refresh the online log

We will create new OMF redo log files as described below.

The current redo log configuration:

SQL> select v$log.group#, member, v$log.status from v$logfile, v$log where v$logfile.group#=v$log.group#;

    GROUP# MEMBER                                             STATUS
---------- -------------------------------------------------- ----------
        12 /opt/oracle/oradata/myDB/redo12.log                  ACTIVE
        13 /opt/oracle/oradata/myDB/redo13.log                  CURRENT
        15 /opt/oracle/oradata/myDB/redo15.log                  INACTIVE
        16 /opt/oracle/oradata/myDB/redo16.log                  INACTIVE
         1 /opt/oracle/oradata/myDB/redo1.log                   INACTIVE
         2 /opt/oracle/oradata/myDB/redo2.log                   INACTIVE
        17 /opt/oracle/oradata/myDB/redo17.log                  INACTIVE
        18 /opt/oracle/oradata/myDB/redo18.log                  INACTIVE
        19 /opt/oracle/oradata/myDB/redo19.log                  INACTIVE
        20 /opt/oracle/oradata/myDB/redo20.log                  INACTIVE
         3 /opt/oracle/oradata/myDB/redo3.log                   INACTIVE
         4 /opt/oracle/oradata/myDB/redo4.log                   INACTIVE
         5 /opt/oracle/oradata/myDB/redo5.log                   INACTIVE
         6 /opt/oracle/oradata/myDB/redo6.log                   INACTIVE
         7 /opt/oracle/oradata/myDB/redo7.log                   INACTIVE
         8 /opt/oracle/oradata/myDB/redo8.log                   ACTIVE
         9 /opt/oracle/oradata/myDB/redo9.log                   ACTIVE
        10 /opt/oracle/oradata/myDB/redo10.log                  ACTIVE
        11 /opt/oracle/oradata/myDB/redo11.log                  ACTIVE
        14 /opt/oracle/oradata/myDB/redo14.log                  INACTIVE

For all INACTIVE redo log groups, we can drop the group and create it again with the OMF naming convention:

SQL> alter database drop logfile group 1;

Database altered.

SQL> alter database add logfile group 1;

Database altered.

In order to move to the next redo group and release the current one, we run a log file switch:

SQL> alter system switch logfile;

System altered.

To move the ACTIVE redo logs to INACTIVE, we run a checkpoint:

SQL> alter system checkpoint;

System altered.

And then drop and recreate the remaining INACTIVE redo groups:

SQL> alter database drop logfile group 10;

Database altered.

SQL> alter database add logfile group 10;

Database altered.

Finally, all our online logs are in OMF format:

SQL> select v$log.group#, member, v$log.status from v$logfile, v$log where v$logfile.group#=v$log.group# order by group#;

    GROUP# MEMBER                                                       STATUS
---------- ------------------------------------------------------------ ----------
         1 /opt/oracle/oradata/myDB/onlinelog/o1_mf_1_ggqx5zon_.log       INACTIVE
         2 /opt/oracle/oradata/myDB/onlinelog/o1_mf_2_ggqxjky2_.log       INACTIVE
         3 /opt/oracle/oradata/myDB/onlinelog/o1_mf_3_ggqxjodl_.log       INACTIVE
         4 /opt/oracle/oradata/myDB/onlinelog/o1_mf_4_ggqxkddc_.log       INACTIVE
         5 /opt/oracle/oradata/myDB/onlinelog/o1_mf_5_ggqxkj1t_.log       INACTIVE
         6 /opt/oracle/oradata/myDB/onlinelog/o1_mf_6_ggqxkmnm_.log       CURRENT
         7 /opt/oracle/oradata/myDB/onlinelog/o1_mf_7_ggqxn373_.log       UNUSED
         8 /opt/oracle/oradata/myDB/onlinelog/o1_mf_8_ggqxn7b3_.log       UNUSED
         9 /opt/oracle/oradata/myDB/onlinelog/o1_mf_9_ggqxnbxd_.log       UNUSED
        10 /opt/oracle/oradata/myDB/onlinelog/o1_mf_10_ggqxvlbf_.log      UNUSED
        11 /opt/oracle/oradata/myDB/onlinelog/o1_mf_11_ggqxvnyg_.log      UNUSED
        12 /opt/oracle/oradata/myDB/onlinelog/o1_mf_12_ggqxvqyp_.log      UNUSED
        13 /opt/oracle/oradata/myDB/onlinelog/o1_mf_13_ggqxvv2o_.log      UNUSED
        14 /opt/oracle/oradata/myDB/onlinelog/o1_mf_14_ggqxxcq7_.log      UNUSED
        15 /opt/oracle/oradata/myDB/onlinelog/o1_mf_15_ggqxxgfg_.log      UNUSED
        16 /opt/oracle/oradata/myDB/onlinelog/o1_mf_16_ggqxxk67_.log      UNUSED
        17 /opt/oracle/oradata/myDB/onlinelog/o1_mf_17_ggqxypwg_.log      UNUSED
        18 /opt/oracle/oradata/myDB/onlinelog/o1_mf_18_ggqy1z78_.log      UNUSED
        19 /opt/oracle/oradata/myDB/onlinelog/o1_mf_19_ggqy2270_.log      UNUSED
        20 /opt/oracle/oradata/myDB/onlinelog/o1_mf_20_ggqy26bj_.log      UNUSED

20 rows selected.

Refresh temporary file

The database was using 2 temp tablespaces: TEMP and myDB_BI_TEMP.

We first need to add new temp files in OMF format for both tablespaces.

SQL> alter tablespace TEMP add tempfile size 20G;

Tablespace altered.

SQL> alter tablespace myDB_BI_TEMP add tempfile size 20G;

Tablespace altered.

Both tablespaces will now include 2 files: the previous non-OMF one and a new OMF one:

SQL> @qdbstbsinf.sql
Enter a tablespace name filter (US%): TEMP

TABLESPACE_NAME      FILE_NAME                                                    STATUS             SIZE_MB AUTOEXTENSIB MAXSIZE_MB
-------------------- ------------------------------------------------------------ --------------- ---------- ------------ ----------
TEMP                 /opt/oracle/oradata/myDB/datafile/o1_mf_temp_ggrjzm9o_.tmp     ONLINE               20480 NO                    0
TEMP                 /usr/oracle/oradata/myDB/temp01.dbf                            ONLINE               20480 NO                    0

SQL> @qdbstbsinf.sql
Enter a tablespace name filter (US%): myDB_BI_TEMP

TABLESPACE_NAME      FILE_NAME                                                    STATUS             SIZE_MB AUTOEXTENSIB MAXSIZE_MB
-------------------- ------------------------------------------------------------ --------------- ---------- ------------ ----------
myDB_BI_TEMP           /opt/oracle/oradata/myDB/datafile/o1_mf_lx_bi_te_ggrk0wxz_.tmp ONLINE               20480 NO                    0
myDB_BI_TEMP           /usr/oracle/oradata/myDB/lx_bi_temp01.dbf                      ONLINE               20480 YES                5120

Dropping the temporary file at this point ends in an error:

SQL> alter database tempfile '/usr/oracle/oradata/myDB/temp01.dbf' drop including datafiles;
alter database tempfile '/usr/oracle/oradata/myDB/temp01.dbf' drop including datafiles
*
ERROR at line 1:
ORA-25152: TEMPFILE cannot be dropped at this time

We need to restart the database. This will only be possible during the maintenance window scheduled for the switchover.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 5.3687E+10 bytes
Fixed Size                 26330584 bytes
Variable Size            3.3152E+10 bytes
Database Buffers         2.0401E+10 bytes
Redo Buffers              107884544 bytes
Database mounted.
Database opened.

The previous non-OMF temporary files can now be dropped:

SQL> alter database tempfile '/usr/oracle/oradata/myDB/temp01.dbf' drop including datafiles;

Database altered.

SQL> alter database tempfile '/usr/oracle/oradata/myDB/lx_bi_temp01.dbf' drop including datafiles;

Database altered.

And we only have OMF temporary files now:

SQL>  @qdbstbsinf.sql
Enter a tablespace name filter (US%): TEMP

TABLESPACE_NAME      FILE_NAME                                                    STATUS             SIZE_MB AUTOEXTENSIB MAXSIZE_MB
-------------------- ------------------------------------------------------------ --------------- ---------- ------------ ----------
TEMP                 /opt/oracle/oradata/myDB/datafile/o1_mf_temp_ggrjzm9o_.tmp     ONLINE               20480 NO                    0

SQL>  @qdbstbsinf.sql
Enter a tablespace name filter (US%): myDB_BI_TEMP

TABLESPACE_NAME      FILE_NAME                                                    STATUS             SIZE_MB AUTOEXTENSIB MAXSIZE_MB
-------------------- ------------------------------------------------------------ --------------- ---------- ------------ ----------
myDB_BI_TEMP           /opt/oracle/oradata/myDB/datafile/o1_mf_lx_bi_te_ggrk0wxz_.tmp ONLINE               20480 NO                    0

Testing switchover

We are now ready to test the switchover from the current primary on srv02 to the ODA03 server, after making sure both databases are synchronized.

oracle@srv02:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -o switchover
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 12196)
dbvctl started on srv02: Tue May 28 00:07:34 2019
=============================================================

>>> Starting Switchover between srv02 and ODA03

Running pre-checks       ... done
Pre processing           ... done
Processing primary       ... done
Processing standby       ... done
Converting standby       ... done
Converting primary       ... done
Completing               ... done
Synchronizing            ... done
Post processing          ... done

>>> Graceful switchover completed.
    Primary Database Server: ODA03
    Standby Database Server: srv02

>>> Dbvisit Standby can be run as per normal:
    dbvctl -d MyDBSTD1


PID:12196
TRACE:12196_dbvctl_switchover_MyDBSTD1_201905280007.trc

=============================================================
dbvctl ended on srv02: Tue May 28 00:13:31 2019
=============================================================

Conclusion

With dbvisit standby it is possible to mix non-OMF and OMF databases after completing several manual steps. The final recommendation, however, is to run a single, uniform configuration. This is why, after running a switchover to the new ODA03 database and making sure it was stable, we recreated the old primary database on srv02 from scratch with an OMF configuration. Converting a database to OMF using the datafile move option is not possible with Standard Edition.

The post Adding a dbvisit standby database on the ODA in a non-OMF environment appeared first on Blog dbi services.

Having multiple standby databases and cascading with dbvisit

Yann Neuhaus - Wed, 2019-10-23 05:47

Dbvisit standby is a disaster recovery solution that you can use with Oracle Standard Edition. I have been working on a customer project where I had to set up a system with one primary and two standby databases. One of the standby databases had to run with a lag of 24 hours. Knowing that flashback possibilities are very limited on Standard Edition, this gives the customer the ability to extract and restore data wrongly lost following human error.

The initial configuration was the following:

Database instance, db_name: MyDB
MyDB_02 (db_unique_name): primary database running on the srv02 server.
MyDB_01 (db_unique_name): expected standby database running on the srv01 server.
MyDB_03 (db_unique_name): expected standby database running on the srv03 server.

The following DDC configuration files will be used:
MyDBSTD1: configuration file for the first standby, synchronized every 10 minutes.
MyDBSTD2: configuration file for the second standby, synchronized every 24 hours.

Let me walk you through the steps to set up such a configuration. This article is not intended to show the whole process of implementing a dbvisit solution, but only the steps required to work with multiple standbys. We will also see how to implement cascaded standbys and an apply lag delay within dbvisit.

Recommendations

In order to limit the manual configuration changes in the DDC files after a switchover, it is recommended to use, as far as possible, the same ORACLE_HOME, archive destination and dbvisit home directory on all servers.

Creating MyDBSTD1 DDC configuration file

The first standby configuration file will be created and used between MyDB_03 (srv03) and MyDB_02 (srv02).

oracle@srv02:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -o setup


=========================================================

     Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd)
           http://www.dbvisit.com

=========================================================

=>dbvctl only needs to be run on the primary server.

Is this the primary server?  [Yes]:
The following Dbvisit Database configuration (DDC) file(s) found on this
server:

     DDC
     ===
1)   Create New DDC
2)   Cancel

Please enter choice [] : 1

Is this correct?  [Yes]:

...
...
...

Below are the list of configuration variables provided during the setup process:

Configuration Variable             Value Provided
======================             ==============
ORACLE_SID                         MyDB
ORACLE_HOME                        /opt/oracle/product/12.2.0

SOURCE                             srv02
ARCHSOURCE                         /u03/app/oracle/dbvisit_arch/MyDB
RAC_DR                             N
USE_SSH                            N
DESTINATION                        srv03
NETPORT                            7890
DBVISIT_BASE_DR                    /u01/app/dbvisit
ORACLE_HOME_DR                     /u01/app/oracle/product/12.2.0.1/dbhome_1
DB_UNIQUE_NAME_DR                  MyDB_03
ARCHDEST                           /u03/app/oracle/dbvisit_arch/MyDB
ORACLE_SID_DR                      MyDB
ENV_FILE                           MyDBSTD1

Are these variables correct?  [Yes]:

>>> Dbvisit Database configuration (DDC) file MyDBSTD1 created.

>>> Dbvisit Database repository (DDR) MyDB created.
   Repository Version          8.4
   Software Version            8.4
   Repository Status           VALID


Do you want to enter license key for the newly created Dbvisit Database configuration (DDC) file?  [Yes]:

Enter license key and press Enter: []: XXXXXXXXXXXXXXXXXXXXXXXXXXX
>>> Dbvisit Standby License
License Key     : XXXXXXXXXXXXXXXXXXXXXXXXXXX
customer_number : XXXXXX
dbname          : MyDB
expiry_date     : 2099-05-06
product_id      : 8
sequence        : 1
status          : VALID
updated         : YES

PID:423545
TRACE:dbvisit_install.log

Synchronizing both MyDB_02 and MyDB_03

Shipping logs from primary to standby

oracle@srv02:/u01/app/dbvisit/standby/ [rdbms12201] ./dbvctl -d MyDBSTD1
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 326409)
dbvctl started on srv02: Mon May 20 16:29:14 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 30. Transfer log gap: 58080
>>> Sending heartbeat message... skipped
>>> First time Dbvisit Standby runs, Dbvisit Standby configuration will be copied to
    srv03...
>>> Transferring Log file(s) from MyDB on srv02 to srv03 for thread 1:

    thread 1 sequence 58051 (1_58051_987102791.dbf)
    thread 1 sequence 58052 (1_58052_987102791.dbf)
...
...
...
    thread 1 sequence 58079 (1_58079_987102791.dbf)
    thread 1 sequence 58080 (1_58080_987102791.dbf)

=============================================================
dbvctl ended on srv02: Mon May 20 16:30:50 2019
=============================================================

Applying logs on the standby database

oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 21504)
dbvctl started on srv03: Mon May 20 16:33:42 2019
=============================================================

>>> Sending heartbeat message... skipped

>>> Applying Log file(s) from srv02 to MyDB on srv03:

    thread 1 sequence 58051 (1_58051_987102791.arc)
    thread 1 sequence 58052 (1_58052_987102791.arc)
...
...
...
    thread 1 sequence 58079 (1_58079_987102791.arc)
    thread 1 sequence 58080 (1_58080_987102791.arc)
    Last applied log(s):
    thread 1 sequence 58080

    Next SCN required for recovery 49719323442 generated at 2019-05-20:16:27:09 +02:00.
    Next required log thread 1 sequence 58081

=============================================================
dbvctl ended on srv03: Mon May 20 16:36:52 2019
=============================================================

Running a gap report

oracle@srv02:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 335068)
dbvctl started on srv02: Mon May 20 16:37:53 2019
=============================================================


Dbvisit Standby log gap report for MyDB thread 1 at 201905201637:
-------------------------------------------------------------
Destination database on srv03 is at sequence: 58081.
Source database on srv02 is at log sequence: 58082.
Source database on srv02 is at archived log sequence: 58081.
Dbvisit Standby last transfer log sequence: 58081.
Dbvisit Standby last transfer at: 2019-05-20 16:37:36.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:01.


=============================================================
dbvctl ended on srv02: Mon May 20 16:37:57 2019
=============================================================

Switchover to srv03

At that point in the project we performed a switchover to the newly created srv03 standby in order to test its stability. The switchover was performed as described below; this step is not mandatory when implementing several standby databases, but as a best practice we always test the first configuration by running a switchover before moving forward.

oracle@srv02:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -o switchover
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 12196)
dbvctl started on srv02: Tue May 28 00:07:34 2019
=============================================================

>>> Starting Switchover between srv02 and srv03

Running pre-checks       ... done
Pre processing           ... done
Processing primary       ... done
Processing standby       ... done
Converting standby       ... done
Converting primary       ... done
Completing               ... done
Synchronizing            ... done
Post processing          ... done

>>> Graceful switchover completed.
    Primary Database Server: srv03
    Standby Database Server: srv02

>>> Dbvisit Standby can be run as per normal:
    dbvctl -d MyDBSTD1


PID:12196
TRACE:12196_dbvctl_switchover_MyDBSTD1_201905280007.trc

=============================================================
dbvctl ended on srv02: Tue May 28 00:13:31 2019
=============================================================

srv03 is now the new primary and srv02 a new standby database.

Creating MyDBSTD2 DDC configuration file

Once the MyDB_01 standby database is up and running, we can create its related DDC configuration file. To do so, we simply copy the previous DDC configuration file, MyDBSTD1, and update it as needed.

I first transferred the file from the current primary srv03 to the new standby server srv01:

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] scp dbv_MyDBSTD1.env oracle@srv01:$PWD
dbv_MyDBSTD1.env		100% 	23KB 	22.7KB/s 		00:00

I copied it to the new DDC configuration file name:

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] cp dbv_MyDBSTD1.env dbv_MyDBSTD2.env

I updated the new DDC configuration accordingly to have:

  • DESTINATION as srv01 instead of srv02
  • DB_UNIQUE_NAME_DR as MyDB_01 instead of MyDB_02
  • MAILCFG updated so the alerts can be identified as coming from the STD2 configuration.

The primary will remain the same: srv03.

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] vi dbv_MyDBSTD2.env

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] diff dbv_MyDBSTD1.env dbv_MyDBSTD2.env
86c86
< DESTINATION = srv02
---
> DESTINATION = srv01
93c93
< DB_UNIQUE_NAME_DR = MyDB
---
> DB_UNIQUE_NAME_DR = MyDB_01
135,136c135,136
< MAILCFG_FROM = dbvisit_conf_1@domain.name
< MAILCFG_FROM_DR = dbvisit_conf_1@domain.name
---
> MAILCFG_FROM = dbvisit_conf_2@domain.name
> MAILCFG_FROM_DR = dbvisit_conf_2@domain.name

In case the ORACLE_HOME and the archive destination are not the same, the corresponding parameters will have to be updated as well; see the variables below.
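As a pointer, these are the DDC variables to review in that case (names taken from the setup output above; the values are simply the ones used in this environment):

ORACLE_HOME_DR  = /u01/app/oracle/product/12.2.0.1/dbhome_1
ARCHSOURCE      = /u03/app/oracle/dbvisit_arch/MyDB
ARCHDEST        = /u03/app/oracle/dbvisit_arch/MyDB
DBVISIT_BASE_DR = /u01/app/dbvisit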


Synchronizing both MyDB_03 and MyDB_01

Shipping logs from primary to standby

oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 25914)
dbvctl started on srv03: Wed Jun  5 20:32:09 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 383. Transfer log gap: 67385
>>> Sending heartbeat message... done
>>> First time Dbvisit Standby runs, Dbvisit Standby configuration will be copied to
    srv01...
>>> Transferring Log file(s) from MyDB on srv03 to srv01 for thread 1:

    thread 1 sequence 67003 (o1_mf_1_67003_ghgwj0z2_.arc)
    thread 1 sequence 67004 (o1_mf_1_67004_ghgwmj1w_.arc)
...
...
...
    thread 1 sequence 67384 (o1_mf_1_67384_ghj2fbgj_.arc)
    thread 1 sequence 67385 (o1_mf_1_67385_ghj2g883_.arc)

=============================================================
dbvctl ended on srv03: Wed Jun  5 20:42:05 2019
=============================================================

Applying logs on the standby database

oracle@srv01:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 69764)
dbvctl started on srv01: Wed Jun  5 20:42:45 2019
=============================================================

>>> Sending heartbeat message... done

>>> Applying Log file(s) from srv03 to MyDB on srv01:

    thread 1 sequence 67003 (1_67003_987102791.arc)
    thread 1 sequence 67004 (1_67004_987102791.arc)
...
...
...
    thread 1 sequence 67384 (1_67384_987102791.arc)
    thread 1 sequence 67385 (1_67385_987102791.arc)
    Last applied log(s):
    thread 1 sequence 67385

    Next SCN required for recovery 50112484332 generated at 2019-06-05:20:28:24 +02:00.
    Next required log thread 1 sequence 67386

>>> Dbvisit Archive Management Module (AMM)

    Config: number of archives to keep      = 0
    Config: number of days to keep archives = 3
    Config: diskspace full threshold        = 80%
==========

Processing /u03/app/oracle/dbvisit_arch/MyDB...
    Archive log dir: /u03/app/oracle/dbvisit_arch/MyDB
    Total number of archive files   : 383
    Number of archive logs deleted = 0
    Current Disk percent full       : 8%

=============================================================
dbvctl ended on srv01: Wed Jun  5 21:16:30 2019
=============================================================

Running a gap report

oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 44143)
dbvctl started on srv03: Wed Jun  5 21:17:03 2019
=============================================================


Dbvisit Standby log gap report for MyDB_03 thread 1 at 201906052117:
-------------------------------------------------------------
Destination database on srv01 is at sequence: 67385.
Source database on srv03 is at log sequence: 67387.
Source database on srv03 is at archived log sequence: 67386.
Dbvisit Standby last transfer log sequence: 67385.
Dbvisit Standby last transfer at: 2019-06-05 20:42:05.

Archive log gap for thread 1:  1.
Transfer log gap for thread 1: 1.
Standby database time lag (DAYS-HH:MI:SS): +00:48:41.

Switchover to srv01

We now have both the srv01 and srv02 standby databases up and running and connected to the current srv03 primary database. Let's switch over to srv01 and see what the required steps are. After each switchover, the other standby's DDC configuration file has to be updated manually.

Checking srv03 and srv02 are synchronized

Both the srv03 and srv02 databases should be in sync; otherwise, ship and apply the archive logs.

oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 93307)
dbvctl started on srv03: Wed Jun  5 21:27:02 2019
=============================================================


Dbvisit Standby log gap report for MyDB_03 thread 1 at 201906052127:
-------------------------------------------------------------
Destination database on srv02 is at sequence: 67386.
Source database on srv03 is at log sequence: 67387.
Source database on srv03 is at archived log sequence: 67386.
Dbvisit Standby last transfer log sequence: 67386.
Dbvisit Standby last transfer at: 2019-06-05 21:24:47.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:27:02.


=============================================================
dbvctl ended on srv03: Wed Jun  5 21:27:08 2019
=============================================================

Checking srv03 and srv01 are synchronized

Both the srv03 and srv01 databases should be in sync; otherwise, ship and apply the archive logs.

oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 90871)
dbvctl started on srv03: Wed Jun  5 21:26:31 2019
=============================================================


Dbvisit Standby log gap report for MyDB_03 thread 1 at 201906052126:
-------------------------------------------------------------
Destination database on srv01 is at sequence: 67386.
Source database on srv03 is at log sequence: 67387.
Source database on srv03 is at archived log sequence: 67386.
Dbvisit Standby last transfer log sequence: 67386.
Dbvisit Standby last transfer at: 2019-06-05 21:26:02.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:26:02.

Switchover to srv01

oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2 -o switchover
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 20334)
dbvctl started on srv03: Wed Jun  5 21:31:56 2019
=============================================================

>>> Starting Switchover between srv03 and srv01

Running pre-checks       ... done
Pre processing           ... done
Processing primary       ... done
Processing standby       ... done
Converting standby       ... done
Converting primary       ... done
Completing               ... done
Synchronizing            ... done
Post processing          ... done

>>> Graceful switchover completed.
    Primary Database Server: srv01
    Standby Database Server: srv03

>>> Dbvisit Standby can be run as per normal:
    dbvctl -d MyDBSTD2


PID:20334
TRACE:20334_dbvctl_switchover_MyDBSTD2_201906052131.trc

=============================================================
dbvctl ended on srv03: Wed Jun  5 21:37:40 2019
=============================================================

Attach srv02 to srv01 (new primary)

Prior to the switchover:

  • srv03 and srv01 were using the MyDBSTD2 DDC configuration file
  • srv03 and srv02 were using the MyDBSTD1 DDC configuration file

The srv02 standby database now needs to be attached to the new primary srv01. For this we will copy the MyDBSTD1 DDC configuration file from srv02 to srv01, as it is the first time srv01 is primary. Otherwise, we would only need to update the already existing file accordingly.

I transferred the DDC file:

oracle@srv02:/u01/app/dbvisit/standby/conf/ [MyDB] scp dbv_MyDBSTD1.env oracle@srv01:$PWD
dbv_MyDBSTD1.env    100%   23KB  14.8MB/s   00:00

The MyDBSTD1 configuration file has been updated accordingly to reflect the new configuration:

  • SOURCE needs to be replaced from srv03 to srv01
  • DESTINATION will remain srv02
  • DB_UNIQUE_NAME needs to be replaced from MyDB_03 to MyDB_01
  • DB_UNIQUE_NAME_DR will remain MyDB_02

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] vi dbv_MyDBSTD1.env

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] grep ^SOURCE dbv_MyDBSTD1.env
SOURCE = srv01

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] grep DB_UNIQUE_NAME dbv_MyDBSTD1.env
# DB_UNIQUE_NAME      - Primary database db_unique_name
DB_UNIQUE_NAME = MyDB_01
# DB_UNIQUE_NAME_DR   - Standby database db_unique_name
DB_UNIQUE_NAME_DR = MyDB_02
Checking that databases are all synchronized

After performing several log switches on the primary to generate archive logs, I transferred and applied the needed archive log files on both the srv02 and srv03 standby databases and made sure both were synchronized.
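
For reference, a log switch can be forced on the primary with a standard SQL*Plus command (a quick sketch, executed as SYSDBA on the new primary):

oracle@srv01:/u01/app/dbvisit/standby/ [MyDB] sqlplus / as sysdba
SQL> alter system switch logfile;

System altered.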

srv01 and srv03 databases:

oracle@srv01:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 98156)
dbvctl started on srv01: Wed Jun  5 21:52:08 2019
=============================================================


Dbvisit Standby log gap report for MyDB_01 thread 1 at 201906052152:
-------------------------------------------------------------
Destination database on srv03 is at sequence: 67413.
Source database on srv01 is at log sequence: 67414.
Source database on srv01 is at archived log sequence: 67413.
Dbvisit Standby last transfer log sequence: 67413.
Dbvisit Standby last transfer at: 2019-06-05 21:51:13.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:00.


=============================================================
dbvctl ended on srv01: Wed Jun  5 21:52:18 2019
=============================================================

srv01 and srv02 databases:

oracle@srv01:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 100393)
dbvctl started on srv01: Wed Jun  5 21:56:06 2019
=============================================================


Dbvisit Standby log gap report for MyDB_01 thread 1 at 201906052156:
-------------------------------------------------------------
Destination database on srv02 is at sequence: 67413.
Source database on srv01 is at log sequence: 67414.
Source database on srv01 is at archived log sequence: 67413.
Dbvisit Standby last transfer log sequence: 67413.
Dbvisit Standby last transfer at: 2019-06-05 21:55:22.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:05:13.


=============================================================
dbvctl ended on srv01: Wed Jun  5 21:56:07 2019
=============================================================
Apply delay lag

The MyDBSTD2 configuration should in the end have an apply lag of 24 hours. This can be achieved using APPLY_DELAY_LAG_MINUTES in the configuration. To test it, I agreed with the customer to use a 60-minute delay.

Update MyDBSTD2 DDC configuration file

The following parameters have been updated in the configuration:
APPLY_DELAY_LAG_MINUTES = 60
DMN_MONITOR_INTERVAL_DR = 0
TRANSFER_LOG_GAP_THRESHOLD = 0
ARCHIVE_LOG_GAP_THRESHOLD = 60

APPLY_DELAY_LAG_MINUTES is the delay in minutes to take into account before applying the change vectors.
DMN_MONITOR_INTERVAL_DR is the interval in seconds for the log monitor schedule on the destination; 0 means deactivated.
TRANSFER_LOG_GAP_THRESHOLD is the difference allowed between the last archived sequence on the primary and the last sequence transferred to the standby server.
ARCHIVE_LOG_GAP_THRESHOLD is the difference allowed between the last archived sequence on the primary and the last applied sequence on the standby database before an alert is sent.

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] cp dbv_MyDBSTD2.env dbv_MyDBSTD2.env.201906131343

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] vi dbv_MyDBSTD2.env

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] diff dbv_MyDBSTD2.env dbv_MyDBSTD2.env.201906131343
281c281
< DMN_MONITOR_INTERVAL_DR = 0
---
> DMN_MONITOR_INTERVAL_DR = 5
331c331
< APPLY_DELAY_LAG_MINUTES = 60
---
> APPLY_DELAY_LAG_MINUTES = 0
374c374
< ARCHIVE_LOG_GAP_THRESHOLD = 60
---
> ARCHIVE_LOG_GAP_THRESHOLD = 0

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] grep ^TRANSFER_LOG_GAP_THRESHOLD dbv_MyDBSTD2.env
TRANSFER_LOG_GAP_THRESHOLD = 0
Report displayed with an apply delay lag configured

When generating a report, we can see that there is no gap in the log transfer, as the archive logs are transferred through the crontab every 10 minutes. On the other hand, we can see the expected delay of 60 minutes in applying the logs.
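
The transfer scheduling itself is plain crontab. A minimal sketch of what such an entry could look like on the source server, assuming the dbvctl location used throughout this post (the exact schedule is site-specific):

# ship/apply archive logs for MyDBSTD2 every 10 minutes
*/10 * * * * /u01/app/dbvisit/standby/dbvctl -d MyDBSTD2 >/dev/null 2>&1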

oracle@srv03:/u01/app/dbvisit/standby/ [MyDBTEST] ./dbvctl -d MyDBSTD2 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 66003)
dbvctl started on srv03: Thu Jun 13 15:21:29 2019
=============================================================


Dbvisit Standby log gap report for MyDB_03 thread 1 at 201906131521:
-------------------------------------------------------------
Destination database on srv01 is at sequence: 73856.
Source database on srv03 is at log sequence: 73890.
Source database on srv03 is at archived log sequence: 73889.
Dbvisit Standby last transfer log sequence: 73889.
Dbvisit Standby last transfer at: 2019-06-13 15:20:15.

Archive log gap for thread 1:  33 (apply_delay_lag_minutes=60).
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +01:00:00.


=============================================================
dbvctl ended on srv03: Thu Jun 13 15:21:35 2019
=============================================================
Cascading standby database

What about cascading standby databases? Cascading is possible with Dbvisit since version 8. We would use a cascaded standby for a reporting server that needs to be updated less frequently, or to offload the primary database from shipping archive logs to multiple standby databases. The cascaded standby database is kept updated through the first standby.

The following needs to be known:

  • Switchover will not be possible between the primary and the cascaded standby database.
  • The DDC configuration file between the first standby and the cascaded standby needs to have:
    • As SOURCE the first standby database
    • The CASCADE parameter set to Y. This is done automatically when creating the DDC configuration with dbvctl -o setup; in the traces you will see: >>> Source database is a standby database. CASCADE flag will be turned on.
    • The same ARCHDEST and ARCHSOURCE values on the first standby (see the sketch after this list).

The principle is then exactly the same: running dbvctl -d from the first standby will ship the archive logs to the second standby.
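
To make the ARCHDEST/ARCHSOURCE requirement concrete, here is a hedged sketch of the corresponding entries in the two DDC configuration files, using the values from the lab setup below; the key point is that the cascaded configuration's ARCHSOURCE on the first standby must match the ARCHDEST under which the first standby receives its logs:

# dbv_DBVPDB.env (primary -> first standby)
ARCHDEST = /u90/dbvisit_arch/DBVPDB_SITE2

# dbv_DBVPDB_CASCADED.env (first standby -> cascaded standby)
CASCADE = Y
ARCHSOURCE = /u90/dbvisit_arch/DBVPDB_SITE2
ARCHDEST = /u90/dbvisit_arch/DBVPDB_SITE3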

I ran some tests in my lab.

Environment

DBVP is the primary server.
DBVS is the first standby server.
DBVS2 is the second cascaded server.

oracle@DBVP:/u01/app/dbvisit/standby/ [DBVPDB] DBVPDB
********* dbi services Ltd. *********
STATUS                 : OPEN
DB_UNIQUE_NAME         : DBVPDB_SITE1
OPEN_MODE              : READ WRITE
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PRIMARY
FLASHBACK_ON           : NO
FORCE_LOGGING          : YES
VERSION                : 12.2.0.1.0
CDB Enabled            : NO
*************************************

oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] DBVPDB
********* dbi services Ltd. *********
STATUS                 : MOUNTED
DB_UNIQUE_NAME         : DBVPDB_SITE2
OPEN_MODE              : MOUNTED
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PHYSICAL STANDBY
FLASHBACK_ON           : NO
FORCE_LOGGING          : YES
CDB Enabled            : NO
*************************************


oracle@DBVS2:/u01/app/dbvisit/standby/ [DBVPDB] DBVPDB
********* dbi services Ltd. *********
STATUS                 : MOUNTED
DB_UNIQUE_NAME         : DBVPDB_SITE3
OPEN_MODE              : MOUNTED
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PHYSICAL STANDBY
FLASHBACK_ON           : NO
FORCE_LOGGING          : YES
CDB Enabled            : NO
*************************************
Create cascaded DDC configuration file

The DDC configuration file will be created from the first standby node.
DBVS (first standby server) will be the SOURCE.
DBVS2 (cascaded standby server) will be the DESTINATION.

oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -o setup


=========================================================

     Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b)
           http://www.dbvisit.com

=========================================================

=>dbvctl only needs to be run on the primary server.

Is this the primary server?  [Yes]:
The following Dbvisit Database configuration (DDC) file(s) found on this
server:

     DDC
     ===
1)   Create New DDC
2)   DBVPDB
3)   DBVPDB_SITE1
4)   DBVPOMF_SITE1
5)   Cancel

Please enter choice [] : 1

Is this correct?  [Yes]:

...


Continue ?  [No]: yes

=========================================================
Dbvisit Standby setup begins.
=========================================================
The following Oracle instance(s) have been found on this server:

     SID            ORACLE_HOME
     ===            ===========
1)   rdbms12201     /u01/app/oracle/product/12.2.0/dbhome_1
2)   DBVPDB         /u01/app/oracle/product/12.2.0/dbhome_1
3)   DBVPOMF        /u01/app/oracle/product/12.2.0/dbhome_1
4)   DUP            /u01/app/oracle/product/12.2.0/dbhome_1
5)   Enter own ORACLE_SID and ORACLE_HOME
Please enter choice [] : 2

Is this correct?  [Yes]:
=>ORACLE_SID will be: DBVPDB
=>ORACLE_HOME will be: /u01/app/oracle/product/12.2.0/dbhome_1

>>> Source database is a standby database. CASCADE flag will be turned on.

Yes to continue or No to cancel setup?  [Yes]:

...
...
...

Below are the list of configuration variables provided during the setup process:

Configuration Variable             Value Provided
======================             ==============
ORACLE_SID                         DBVPDB
ORACLE_HOME                        /u01/app/oracle/product/12.2.0/dbhome_1

SOURCE                             DBVS
ARCHSOURCE                         /u90/dbvisit_arch/DBVPDB_SITE2
RAC_DR                             N
USE_SSH                            Y
DESTINATION                        DBVS2
NETPORT                            22
DBVISIT_BASE_DR                    /oracle/u01/app/dbvisit
ORACLE_HOME_DR                     /u01/app/oracle/product/12.2.0/dbhome_1
DB_UNIQUE_NAME_DR                  DBVPDB_SITE3
ARCHDEST                           /u90/dbvisit_arch/DBVPDB_SITE3
ORACLE_SID_DR                      DBVPDB
ENV_FILE                           DBVPDB_CASCADED

Are these variables correct?  [Yes]:

>>> Dbvisit Database configuration (DDC) file DBVPDB_CASCADED created.

>>> Dbvisit Database repository (DDR) already installed.
   Repository Version          8.3
   Software Version            8.3
   Repository Status           VALID


Do you want to enter license key for the newly created Dbvisit Database configuration (DDC) file?  [Yes]:

Enter license key and press Enter: []: 4jo6z-8aaai-u09b6-ijjxe-cxks5-1114a-ozfvp
oracle@dbvs2's password:
>>> Dbvisit Standby License
License Key     : 4jo6z-8aaai-u09b6-ijjxe-cxks5-1114a-ozfvp
customer_number : 1
dbname          :
expiry_date     : 2019-05-29
product_id      : 8
sequence        : 1
status          : VALID
updated         : YES

PID:25571
TRACE:dbvisit_install.log

The Dbvisit software detected that the SOURCE is already a standby database and automatically configured the CASCADE flag to Y.

>>> Source database is a standby database. CASCADE flag will be turned on.
oracle@DBVS:/u01/app/dbvisit/standby/conf/ [DBVPDB] grep CASCADE dbv_DBVPDB_CASCADED.env
# Variable: CASCADE
#      CASCADE = Y
CASCADE = Y
Synchronize first standby with primary

Ship archive log from primary to first standby
oracle@DBVP:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 23506)
dbvctl started on DBVP: Wed May 15 01:24:55 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 3. Transfer log gap: 3
>>> Transferring Log file(s) from DBVPDB on DBVP to DBVS for thread 1:

    thread 1 sequence 50 (o1_mf_1_50_gfpmk7sg_.arc)
    thread 1 sequence 51 (o1_mf_1_51_gfpmkc7p_.arc)
    thread 1 sequence 52 (o1_mf_1_52_gfpmkf7w_.arc)

=============================================================
dbvctl ended on DBVP: Wed May 15 01:25:06 2019
=============================================================
Apply archive log on first standby
oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 27769)
dbvctl started on DBVS: Wed May 15 01:25:25 2019
=============================================================


>>> Applying Log file(s) from DBVP to DBVPDB on DBVS:

>>> No new logs to apply.
    Last applied log(s):
    thread 1 sequence 52

    Next SCN required for recovery 885547 generated at 2019-05-15:01:24:29 +02:00.
    Next required log thread 1 sequence 53

=============================================================
dbvctl ended on DBVS: Wed May 15 01:25:27 2019
=============================================================
Run a gap report
oracle@DBVP:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB -i
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 23625)
dbvctl started on DBVP: Wed May 15 01:25:55 2019
=============================================================


Dbvisit Standby log gap report for DBVPDB_SITE1 thread 1 at 201905150125:
-------------------------------------------------------------
Destination database on DBVS is at sequence: 52.
Source database on DBVP is at log sequence: 53.
Source database on DBVP is at archived log sequence: 52.
Dbvisit Standby last transfer log sequence: 52.
Dbvisit Standby last transfer at: 2019-05-15 01:25:06.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:33.


=============================================================
dbvctl ended on DBVP: Wed May 15 01:25:58 2019
=============================================================
Synchronize cascaded standby with first standby

Ship archive log from first standby to cascaded standby
oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB_CASCADED
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 27965)
dbvctl started on DBVS: Wed May 15 01:26:41 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 3. Transfer log gap: 3
>>> Transferring Log file(s) from DBVPDB on DBVS to DBVS2 for thread 1:

    thread 1 sequence 50 (1_50_979494498.arc)
    thread 1 sequence 51 (1_51_979494498.arc)
    thread 1 sequence 52 (1_52_979494498.arc)

=============================================================
dbvctl ended on DBVS: Wed May 15 01:26:49 2019
=============================================================
Apply archive log on cascaded standby
oracle@DBVS2:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB_CASCADED
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 21118)
dbvctl started on DBVS2: Wed May 15 01:27:21 2019
=============================================================


>>> Applying Log file(s) from DBVS to DBVPDB on DBVS2:

    thread 1 sequence 50 (1_50_979494498.arc)
    thread 1 sequence 51 (1_51_979494498.arc)
    thread 1 sequence 52 (1_52_979494498.arc)
    Last applied log(s):
    thread 1 sequence 52

    Next SCN required for recovery 885547 generated at 2019-05-15:01:24:29 +02:00.
    Next required log thread 1 sequence 53

=============================================================
dbvctl ended on DBVS2: Wed May 15 01:27:33 2019
=============================================================
Run a gap report
oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB_CASCADED -i
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 28084)
dbvctl started on DBVS: Wed May 15 01:28:07 2019
=============================================================


Dbvisit Standby log gap report for DBVPDB_SITE2 thread 1 at 201905150128:
-------------------------------------------------------------
Destination database on DBVS2 is at sequence: 52.
Source database on DBVS is at applied log sequence: 52.
Dbvisit Standby last transfer log sequence: 52.
Dbvisit Standby last transfer at: 2019-05-15 01:26:49.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:00.


=============================================================
dbvctl ended on DBVS: Wed May 15 01:28:11 2019
=============================================================
Conclusion

With Dbvisit we are able to configure several standby databases, choose an apply delay lag, and configure cascaded standbys. The downside is that the DDC configuration file needs to be manually adapted after each switchover.

The article Having multiple standby databases and cascading with dbvisit appeared first on Blog dbi services.

Why you really should use peer authentication in PostgreSQL

Yann Neuhaus - Tue, 2019-10-22 14:36

It is always a bit of a surprise that many people do not know about peer authentication in PostgreSQL. You might ask why that is important, as initdb creates a default pg_hba.conf which does not allow any connections from outside the PostgreSQL server. While that is true, there is another important point to consider.

Let’s assume you executed initdb without any options like this:

postgres@centos8pg:/home/postgres/ [pgdev] mkdir /var/tmp/test
postgres@centos8pg:/home/postgres/ [pgdev] initdb -D /var/tmp/test
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/tmp/test ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Europe/Zurich
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

pg_ctl -D /var/tmp/test -l logfile start

Did you ever notice the warning at the end of the output?

initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
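
For reference, a default pg_hba.conf created with trust authentication contains entries like these (a sketch of the usual layout, replication entries omitted; the local line is exactly the one the sed command further down replaces):

local   all             all                                     trust
host    all             all             127.0.0.1/32            trust
host    all             all             ::1/128                 trust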

You might think that this is not important, as only the DBAs will have access to the operating system user postgres (or whatever user you used when you executed initdb). Although this might be true in your case, the server might eventually have other local users. Before creating a new user, let's start the instance:

postgres@centos8pg:/home/postgres/ [pgdev] export PGPORT=9999
postgres@centos8pg:/home/postgres/ [pgdev] pg_ctl -D /var/tmp/test/ start -l /dev/null
waiting for server to start.... done
server started

What you really need to be aware of is this:

postgres@centos8pg:/home/postgres/ [pgdev] sudo useradd test
postgres@centos8pg:/home/postgres/ [pgdev] sudo su - test
[test@centos8pg ~]$ /u01/app/postgres/product/DEV/db_1/bin/psql -p 9999 -U postgres postgres
psql (13devel)
Type "help" for help.

postgres=#

… and you are in as the superuser! So any local user can connect as the superuser by default. What you might want to do is this:

postgres@centos8pg:/home/postgres/ [pgdev] sudo chmod o-rwx /u01/app/postgres/product
postgres@centos8pg:/home/postgres/ [pgdev] sudo su - test
Last login: Tue Oct 22 21:19:58 CEST 2019 on pts/0
[test@centos8pg ~]$ /u01/app/postgres/product/DEV/db_1/bin/psql -p 9999 -U postgres postgres
-bash: /u01/app/postgres/product/DEV/db_1/bin/psql: Permission denied

This prevents all other users on the system from executing the psql binary. If you can guarantee that nobody installs psql in another way on the system, that might be sufficient. As soon as psql is available somewhere on the system, you're lost again:

postgres@centos8pg:/home/postgres/ [pgdev] sudo dnf provides psql
Last metadata expiration check: 0:14:53 ago on Tue 22 Oct 2019 09:09:23 PM CEST.
postgresql-10.6-1.module_el8.0.0+15+f57f353b.x86_64 : PostgreSQL client programs
Repo        : AppStream
Matched from:
Filename    : /usr/bin/psql

postgres@centos8pg:/home/postgres/ [pgdev] sudo dnf install -y postgresql-10.6-1.module_el8.0.0+15+f57f353b.x86_64
[test@centos8pg ~]$ /usr/bin/psql -p 9999 -U postgres -h /tmp postgres
psql (10.6, server 13devel)
WARNING: psql major version 10, server major version 13.
Some psql features might not work.
Type "help" for help.

postgres=#

Not really an option. This is where peer authentication becomes very handy.

postgres@centos8pg:/home/postgres/ [pgdev] sed -i 's/local   all             all                                     trust/local   all             all                                     peer/g' /var/tmp/test/pg_hba.conf
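
After the edit, the local connection line in /var/tmp/test/pg_hba.conf should read as follows (the replication entry, if present, is not touched by this pattern):

local   all             all                                     peer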

Once you switch from trust to peer for local connections, only the operating system user that created the instance will be able to connect locally without providing a password:

postgres@centos8pg:/home/postgres/ [pgdev] pg_ctl -D /var/tmp/test/ reload
server signaled
postgres@centos8pg:/home/postgres/ [pgdev] psql postgres
psql (13devel)
Type "help" for help.

[local]:9999 postgres@postgres=#

Other local users will not be able to connect anymore:

postgres@centos8pg:/home/postgres/ [pgdev] sudo su - test
Last login: Tue Oct 22 21:25:36 CEST 2019 on pts/0
[test@centos8pg ~]$ /usr/bin/psql -p 9999 -U postgres -h /tmp postgres
psql: FATAL:  Peer authentication failed for user "postgres"
[test@centos8pg ~]$

So, please, consider enabling peer authentication or at least go for md5 for local connections as well.
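
For the md5 alternative, the same line would simply carry the md5 method instead (same default layout assumed):

local   all             all                                     md5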

The article Why you really should use peer authentication in PostgreSQL appeared first on Blog dbi services.
