Syed Jaffar

Whatever topics are discussed on this blog are my own findings and views, and do not necessarily match those of others. I strongly recommend you test any piece of advice from this blog before implementing it. - The Human Fly

DBMCLI Utility

Wed, 2018-01-03 03:45
Exadata comes with two types of servers: cell (storage) and compute (DB). It's important for Database Machine Administrators (DMAs) to manage and configure both types of servers. For cell server administration and settings we use the cellcli utility; with X6 and higher, the DBMCLI utility can be used to administer and configure server settings on the compute nodes.

The DBMCLI utility is the command-line administration tool for configuring database servers and managing the server environment. Like cellcli, DBMCLI runs on each compute server, enabling you to configure an individual database server. It can be used to start and stop the server, manage server configuration information, and enable or disable the server. The utility comes preconfigured when Oracle Exadata Database Machine is shipped.

Note: If you have enabled Capacity on Demand (CoD) during the Exadata implementation, you can use this utility to increase the CPU count on demand.

To start the utility, run the dbmcli command at the $ prompt:

$ dbmcli [port_number] [-n] [-m] [-xml] [-v | -vv | -vvv] [-x] [-e command]

With the dbmcli utility, you can retrieve server configuration details and metric information, configure SMTP server details, start and stop the server, and so on.
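As an illustration, here is a dry-run wrapper (a hypothetical helper: it only prints the dbmcli invocations instead of executing them, and the object commands shown are typical DBMCLI commands that should be verified against the documentation for your image):

```shell
#!/bin/sh
# Dry-run wrapper: print each dbmcli invocation instead of executing it,
# so the commands can be reviewed before use on a real compute node.
run_dbmcli() {
    echo "dbmcli -e $*"
}

run_dbmcli "list dbserver detail"     # full server configuration
run_dbmcli "list metriccurrent"       # current metric readings
run_dbmcli "list alerthistory"        # past alerts on this node
```

Removing the echo from the wrapper turns the dry run into the real invocations.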

For more details, read the Oracle documentation.


Wrong Cell Server name on X6-2 Elastic Rack - Bug 25317550

Tue, 2018-01-02 05:55
Two X6-2 elastic full-capacity Exadata systems were deployed recently. Due to the following bug, the cell names were not properly updated with the client-provided names after executing the


Though this doesn't impact operations, it will certainly create confusion when multiple Exadata systems are deployed in the same data center, because the cell, cell disk, and grid disk names are identical.

Note: It's highly recommended to validate the cell names after executing the, before running onecommand. If you encounter a similar problem, simply change the cell name with alter cell name=[correctname] and proceed with the onecommand execution to avoid the bug.

The default names look like the below:

# dcli -g cell_group -l root 'cellcli -e list cell attributes name'
                        celadm1: ru02
                        celadm2: ru04
                        celadm3: ru06
                        celadm4: ru08
                        celadm5: ru10
                        celadm6: ru12

To change the cell name and update the cell disk and grid disk names to match, follow the procedure below.

The procedure below must be performed on each cell separately and sequentially (to avoid a full outage):

1) Change the cell name:
    cellcli> alter cell name=celadm5

2) Confirm griddisks can be taken offline.

    cellcli> list griddisk attributes name, ASMDeactivationOutcome, ASMModeStatus
            ASMDeactivationOutcome - Should be YES for all griddisks

3) Inactivate griddisk on that cell
    cellcli> alter griddisk all inactive

            Observation: if any voting disks reside on this storage server, they will relocate to the surviving storage servers.

4) Change cell disk name
            alter celldisk CD_00_ru10 name=CD_00_celadm5;    
            alter celldisk CD_01_ru10 name=CD_01_celadm5;
            alter celldisk CD_02_ru10 name=CD_02_celadm5;
            alter celldisk CD_03_ru10 name=CD_03_celadm5;
            alter celldisk CD_04_ru10 name=CD_04_celadm5;
            alter celldisk CD_05_ru10 name=CD_05_celadm5;
            alter celldisk CD_06_ru10 name=CD_06_celadm5;
            alter celldisk CD_07_ru10 name=CD_07_celadm5;
            alter celldisk CD_08_ru10 name=CD_08_celadm5;
            alter celldisk CD_09_ru10 name=CD_09_celadm5;
            alter celldisk CD_10_ru10 name=CD_10_celadm5;
            alter celldisk CD_11_ru10 name=CD_11_celadm5;

5) Change the grid disk names using the examples below (do it for all grid disks: DATAC1, DBFS_DG and RECOC1)
            alter GRIDDISK DATAC1_CD_00_ru10  name=DATAC1_CD_00_celadm5
            alter GRIDDISK DBFS_DG_CD_02_ru10 name=DBFS_DG_CD_02_celadm5
            alter GRIDDISK RECOC1_CD_11_ru10  name=RECOC1_CD_11_celadm5
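Since steps 4 and 5 are repetitive, the commands can be generated with a small dry-run sketch (it only prints the CellCLI statements; the OLD/NEW values are taken from the example above, and on some racks DBFS_DG grid disks exist only on a subset of the cell disks, so trim the output to match your own "list griddisk" result):

```shell
#!/bin/sh
# Dry run: print the CellCLI rename commands for one cell instead of
# executing them. OLD is the factory suffix, NEW the desired cell name.
gen_renames() {
    OLD=$1; NEW=$2
    # Cell disks CD_00 .. CD_11
    for i in $(seq -w 0 11); do
        echo "alter celldisk CD_${i}_${OLD} name=CD_${i}_${NEW}"
    done
    # Grid disks for each disk group prefix (adjust the prefixes and the
    # disk range to what "list griddisk" actually shows on your cell)
    for dg in DATAC1 DBFS_DG RECOC1; do
        for i in $(seq -w 0 11); do
            echo "alter griddisk ${dg}_CD_${i}_${OLD} name=${dg}_CD_${i}_${NEW}"
        done
    done
}

gen_renames ru10 celadm5
```

Paste the reviewed output into cellcli on the cell being renamed.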

6) Activate griddisk on that cell
            cellcli> ALTER GRIDDISK ALL ACTIVE;
        There are some important points to note after activating the grid disks.

      a) ASM disk path and name
       * The grid disk name change is automatically reflected in the ASM disk path.
       * The ASM logical (disk) name still refers to the old name.
      b) Failgroup
       * The failgroup name is not changed and still uses the old name.

7) Change the ASM logical name and failgroup name.

    * This can be achieved by dropping the ASM disk and adding it back with the correct name. Our observation is that the failgroup name is changed automatically when we add
      the ASM disk back with the correct name.
    * ASMCA is a convenient tool to drop and add back ASM disks with a rebalance power of 250+.
    a) Drop the ASM disks and observations.
        * We need to make sure the ASM disks can be dropped:
             cellcli> list griddisk attributes name, ASMDeactivationOutcome, ASMModeStatus
                ASMDeactivationOutcome - should be YES for all griddisks
        * Drop the ASM disks using ASMCA or ALTER DISKGROUP ... DROP DISK.

        We can see the ASM disk state change to DROPPING, and there will be an ongoing rebalance operation.
        * ASM rebalance operation.
            We can see the ongoing ASM rebalance operation using the command below, and increase the power to finish it faster.

                sqlplus / as sysasm

                sql> select * from v$asm_operation;
                sql> alter diskgroup DATAC1 rebalance power 256;

        * Once the rebalance operation completes, we can see the ASM disk state change to NORMAL, the name become empty, and the failgroup updated with the correct name.
b) Add back the ASM disks and observations.

Adding the disks back can likewise be done using ASMCA or ALTER DISKGROUP commands.
We need to make sure we add each disk back with the correct name; in this case DATAC1_CD_00_RU10 should be added back as DATAC1_CD_00_arb02celadm19.

We can see the ongoing ASM rebalance operation using the command below, and change the power to finish it faster.

        sqlplus / as sysasm
        sql> select * from v$asm_operation;
8) Remaining cells

We can continue the same operation for the remaining cells; the entire operation can be completed without any downtime at the database level.
Once complete, we can see that all voting disks have also been relocated or renamed with the new name.


I appreciate and thank my team member Khalid Kizhakkethil for doing this wonderful job and preparing the documentation.


OBIEE 12c post-upgrade challenges - unable to save HTML-based dashboard reports

Mon, 2017-12-11 03:18
Very recently, an OBIEE v12.2.1.1 installation was upgraded to v12.2.1.3 on an Exalytics machine. Though it wasn't a major upgrade, we had to do it to fix some bugs encountered in v12.2.1.1. For sure, it wasn't an easy walk in the park, during the upgrade or after it.

I am here to discuss a particular issue we faced post upgrade. This should be interesting.

Post upgrade to v12.2.1.3, we heard from the application team that all reports were working perfectly fine, except that dashboard reports containing HTML markup could not be saved. The error below was thrown when trying to save an existing HTML report:

"You do not currently have sufficient privileges to save a report or dashboard page that contains HTML markup"

It appeared to be a very generic error, and we tried all the workarounds we found on various websites and in Oracle docs. Unfortunately, none of the solutions fixed the issue. One popular piece of advice is the following:

You need to check the HTML markup privileges in Administration:

1. Log in with the OBIEE Admin user,
2. Go to the Administration link - Manage Privileges,
3. Search for "Save Content with HTML Markup" and check whether it has the necessary privileges; otherwise assign the necessary roles and re-check the issue.

After granting an additional privilege, we could save the existing HTML reports. However, the issue persisted when creating a new dashboard report with HTML markup. This time the error message below appeared:

While doing the research, we also opened an SR with Oracle Support. Luckily (I mean it), the engineer who was assigned this SR was good enough to catch the issue and provide the solution.

The engineer tried to simulate the issue in his own environment and, surprisingly, faced the same problem. He then contacted the OBIEE development team and raised the above concerns.

A few hours later, he provided us the workaround below, which appears to be a generic solution to both the first and the second problem we faced. I believe this is a common issue on

Please follow the steps below:

1. Take a backup of the instanceconfig.xml file, which is located in the following location:


2. Add the following entry under the security section.




3. Restart the presentation services with the commands below:

cd /refresh/home/oracle/12c/Oracle_Home/Oracle_Config/bi/bitools/bin

./ -i obips1 

./ -i obips1 

4. Now you will see the "Contains HTML Markup" option in Answers; check it and try to save the report.

The solution worked perfectly, and all HTML dashboard reports could be saved.

If you are on OBIEE, I strongly recommend testing the HTML reports; if you encounter an issue similar to ours, apply the workaround.

Database direct upgrade from 11g to 12c with migration from X5-2 to Exadata X6-2 - RMAN DUPLICATE DATABASE WITH NOOPEN

Sun, 2017-12-03 08:45
At one of our customers recently, the migration of a few production databases from X5-2 to X6-2, together with an upgrade from Oracle 11g to Oracle 12cR1, was successfully delivered.

As we all know, there are a handful of methods available to migrate and upgrade Oracle databases. Our objective was simple: migrate the Oracle 11g ( databases from X5-2 to Oracle 12c ( on X6-2.

Since the OS on the source and target remains the same, no conversion was required.
Downtime was not a concern either, so we didn't need to worry about the various minimal-downtime options.

Considering the above, we decided to go for a direct RMAN DUPLICATE with the NOOPEN option to upgrade the database. You can also use the same procedure with the RMAN restore/recover method, but DUPLICATE DATABASE with NOOPEN simplifies the procedure.

Below is the high-level procedure to upgrade an Oracle 11g database to 12cR1 using the RMAN DUPLICATE command:

  • Copy the preupgrd.sql and utluppkg.sql scripts from the Oracle 12c ?/rdbms/admin directory to /tmp (or another location) on the 11g host;
  • Run preupgrd.sql script on the source database (which is Oracle 11gR2 in our case);
  • Review preupgrade.log and apply any recommendations; You can also execute preupgrade_fixups script to fix the issues raised in the preupgrade.log;
  • Execute RMAN backup (database and archivelog) on source database;
  • scp the backup sets to the target host;
  • Create a simple init file on the target with just db_name parameter;
  • Create a password file on the target;
  • Startup nomount the database on the target;
  • Create TNS entries for auxiliary (target) and primary database (source) on the target host;
  • Run the DUPLICATE DATABASE with NOOPEN , with all adjusted / required parameters;
  • After the restore and recovery, open the database with resetlogs upgrade option;
  • Exit from the SQL prompt and run the catupgrade (12c new command) with parallel option;
on target host:

nohup $ORACLE_HOME/perl/bin/perl catctl.pl -n 8 catupgrd.sql &
After completing the catupgrade, run the postupgrade_fixups script on the target database;
Perform the timezone upgrade;

Gather dictionary, fixed-objects, database and system stats accordingly;
Run utlrp.sql to ensure all invalid objects are compiled;
Review dba_registry to ensure no INVALID components remain;
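The DUPLICATE step in the list above can be sketched roughly as follows. This is a hedged sketch assuming a targetless (BACKUP LOCATION-based) duplication; all database names, paths and SET parameters are placeholders, not the customer's values, and the script is only printed here for review:

```shell
#!/bin/sh
# Print (not execute) an illustrative RMAN script for a NOOPEN duplication.
# Placeholders: proddb, /backup/proddb, +DATAC1. Adjust to your environment
# and run it from an "rman auxiliary /" session on the target host.
RMAN_SCRIPT=$(cat <<'EOF'
DUPLICATE DATABASE TO proddb
  NOOPEN
  BACKUP LOCATION '/backup/proddb'
  SPFILE
    SET control_files '+DATAC1'
    SET db_create_file_dest '+DATAC1';
EOF
)
echo "$RMAN_SCRIPT"
# After the restore and recovery complete, open the database with:
#   SQL> ALTER DATABASE OPEN RESETLOGS UPGRADE;
```

NOOPEN leaves the duplicated database mounted but not open, which is exactly what the subsequent OPEN RESETLOGS UPGRADE step needs.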

Above are the high-level steps. If you are looking for a step-by-step procedure, review the URL below; we must appreciate the work done on that blog for a very comprehensive procedure and demonstration.


Oracle EBS Suite blank Login page - Post Exadata migration

Tue, 2017-11-07 08:35
As part of EBS database migration on Exadata, we recently deployed a brand new Exadata X5-2L (eighth rack), and migrated an Oracle EBS Database from the typical storage/server technologies.

Below are the environment details:

EBS Suite 12.1.3 (running on two application servers with hardware LOAD BALANCER)
Database:, a RAC database with 2 instances

After the database migration, the buildxml and autoconfig procedures went well on both the application and database tiers. However, when the EBS login page was launched, it came up as just a blank page, and the apps passwords could not be changed through the typical procedure. We wondered what went wrong, as none of the procedures gave any significant failure indication; everything ran fine and we could see the successful-completion messages.

After a quick initial investigation, we found there was an issue with the GUEST user, and also that the profile was not loaded when autoconfig was run on the application server. In the autoconfig log file, we could see the process had failed to update the password (ORACLE). We then tried all the workarounds recommended on Oracle Support and other websites. Unfortunately, none of them helped.

After spending almost a whole day investigating and analyzing the root cause, we looked at the DB components and their status through dba_registry, and found the JServer Java Virtual Machine component INVALID. I then realized the issue had happened during the Exadata software deployment: there was an error during the DB software installation while applying a patch, due to a conflict between patches, because of which catbundle was never executed.

Without wasting a single second, we ran the @catbundle script, followed by utlrp.sql.

Guess what: all the issues disappeared. We ran autoconfig on the application servers, after which we could change the apps user password and see the login page too.

It was quite a nice experience.


Exadata X5-2L deployment & Challenges

Sun, 2017-11-05 07:01
A brand new Exadata X5-2L eighth rack (I know the latest is X7 now, but this was for a POC, so no worries) was recently deployed at a customer site for an Oracle EBS Exadata migration POC. It wasn't the easy walk in the park I initially presumed: some challenges (network, configuration) came up during the deployment, but they were happily overcome, the machine was installed, and the EBS database migration was completed.

So, I am going to share yet another Exadata bare-metal deployment story, explaining the challenges I faced and how they were fixed.

Issue 1) DB network cable issues:
After successful execution of elasticConfig, all the Exadata factory IP addresses were re-set to client IPs. Though the management network was accessible from outside, the client network was not. When we checked with the network team about enabling the ports on the corporate switch, they confirmed the ports were enabled, but the connection showed as not active, and they asked us to investigate the network cables connected to the DB nodes. When we inspected the cable ports we didn't find any link lights, and after an extensive investigation (switch ports, SFPs on the Exadata and corporate switches, cable status) it was found that the cable pins were not properly connected. We also found that the network bonding interfaces (eth1 and eth2) were down, confirmed with the ethtool eth1 command. After fixing the cables and bringing up the interfaces (ifup eth1 and eth2), the cables showed as connected and the port lights came on.

$ethtool eth1 (shows the interfaces were not connected)
Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes:   100baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  100baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: Unknown!
        Duplex: Unknown! (255)

        Port: Twisted Pair
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: no

Issue 2) Wrong Netmask selection for client network:
After fixing the cable issues, we continued with the onecommand execution. During validation it failed because of a different netmask for the client network (under the cluster-information section). The customer had unfortunately made a mistake in the client network netmask selection for the cluster settings, so the client netmask value differed between the client and cluster definitions. This was fixed by modifying the NETMASK value in the ifcfg-bondeth0 file (/etc/sysconfig/network-scripts) and restarting the network services.
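The fix can be sketched as below. This operates on a scratch copy only; on a real system the file is /etc/sysconfig/network-scripts/ifcfg-bondeth0 (edited as root, followed by a network services restart), and all the values here are placeholders:

```shell
#!/bin/sh
# Rehearse the netmask correction on a throwaway copy of the interface file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
DEVICE=bondeth0
BOOTPROTO=static
NETMASK=255.255.254.0
ONBOOT=yes
EOF

cp "$cfg" "$cfg.bak"                                  # always keep a backup first
sed -i 's/^NETMASK=.*/NETMASK=255.255.255.0/' "$cfg"  # set the client netmask
result=$(grep '^NETMASK' "$cfg")
echo "$result"
rm -f "$cfg" "$cfg.bak"
```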

Issue 3) Failed eighth rack configuration (rack type and disk size):
Since the system was delivered around the end of August 2015, no one knew the exact disk size and rack model; the BOQ (bill of quantity) for the order only showed X5-2 HC storage. As a result, the wrong Exadata rack and disk size were selected in OEDA: 8TB disks instead of 4TB, and a fixed eighth rack instead of an elastic configuration. This was fixed by rerunning OEDA with the correct options.

Issue 4) Cell server IP issues:
There was another obstacle while doing the cell connectivity step (part of onecommand): the cell server IPs were not modified by elasticConfig. Fortunately, I found my friend's blog on this topic and quickly fixed the issue. This is why I like to blog about technical issues; who knows, it could solve someone's pain.

Issue 5) SCAN Listener configuration:
Cluster validation failed due to inconsistent values for the SCAN name. During the investigation of the various issues, the private, public and SCAN IPs had been put in the /etc/hosts file, and the three SCAN entries in /etc/hosts caused the failure while configuring LISTENER_SCAN2 and SCAN3. This was fairly understandable, and after a quick Google search the following blog helped me fix the issue.

Finally, I managed to deploy the Exadata successfully and perform the Oracle EBS database migration. No doubt this experience really strengthened my networking and other skills. Every challenge comes with an opportunity to learn.

I thank those individuals who really write blogs and share their experience to help Oracle community.

There is still one open issue yet to be resolved: slow sqlplus and DB startup. I presume this is due to heavy resource utilization on the server. Yet to resolve the mystery; stay tuned for more updates.

Oracle R Enterprise Server Configuration

Fri, 2017-10-27 11:17

Oracle R Enterprise Server Overview

Oracle R Enterprise includes several components on the server. Together these components enable an Oracle R Enterprise client to interact with Oracle R Enterprise Server.
The server-side components of Oracle R Enterprise are: 
  • Oracle Database Enterprise Edition (64bit)
  • Oracle R Distribution or open source R
  • Oracle R Enterprise Server
Oracle R Enterprise Server consists of the following: 
    • The rqsys schema
    • Metadata and executable code in sys
    • Oracle R Enterprise Server libraries in $ORACLE_HOME/lib (Linux and UNIX) or %ORACLE_HOME%\bin (Windows)
    • Oracle R Enterprise R packages in $ORACLE_HOME/R/library (%ORACLE_HOME%\R\library on Windows)
Oracle R Enterprise Server Requirements

Before installing Oracle R Enterprise Server, verify your system environment, and ensure that your user ID has the proper permissions.

Oracle R Enterprise runs on 64-bit platforms only.
·        Linux x86-64
·        Oracle Solaris
·        IBM AIX

Oracle Database must be installed and configured as described
·        Oracle R Enterprise requires the 64-bit version of Oracle Database Enterprise Edition.

Installing Oracle R Enterprise Server

Verify that the ORACLE_HOME, ORACLE_SID, R_HOME, PATH, and LD_LIBRARY_PATH environment variables are properly set. For example, you could specify values like the following in a .bashrc file:

export ORACLE_HOME=/u01/app/oracle/product/
export R_HOME=/usr/lib64/R
export PATH=$PATH:$R_HOME/bin:$ORACLE_HOME/bin

Download Oracle R Enterprise Server

Go to the Oracle R Enterprise home page on the Oracle Technology Network and select Oracle R Enterprise Downloads. On the Downloads page, select Oracle R Enterprise Server and the Supporting Packages for Linux. The following files are downloaded for Oracle R Enterprise 1.4.1.

Log in as root, and copy the installers for Oracle R Enterprise Server and the supporting packages across the nodes. For example:

$ dcli -g nodes -l oracle mkdir -p /home/oracle/ORE
$ dcli -g nodes -l oracle -f -d 
$ dcli -g nodes -l oracle -f -d 

Unzip the supporting packages on each node:
$ dcli -t -g nodes -l oracle unzip   
     /home/oracle/ORE/ -d 

 Install Oracle R Enterprise server components:
$ dcli -t -g nodes -l oracle "cd /my_destination_directory; ./ -y
      --admin --sys syspassword --perm permtablespace
      --temp temptablespace --rqsys rqsyspassword
      --user-perm usertablespace --user-temp usertemptablespace
      --pass rquserpassword --user RQUSER"

Installing Oracle R Enterprise Server interactively:
$ ./ -i

Oracle R Enterprise 1.4.1 Server.

Copyright (c) 2012, 2014 Oracle and/or its affiliates. All rights reserved.

Checking platform .................. Pass
Checking R ......................... Pass
Checking R libraries ............... Pass
Checking ORACLE_HOME ............... Pass
Checking ORACLE_SID ................ Pass
Checking sqlplus ................... Pass
Checking ORACLE instance ........... Pass
Checking CDB/PDB ................... Pass
Checking ORE ....................... Pass

Choosing RQSYS tablespaces
  PERMANENT tablespace to use for RQSYS [list]: DATA
  TEMPORARY tablespace to use for RQSYS [list]: TEMP1
Choosing RQSYS password
  Password to use for RQSYS:

Choosing ORE user
  ORE user to use [list]:

ORE user to use [list]: OREUSER1

Current configuration
  R Version ........................ Oracle Distribution of R version 3.1.1  (--)
  R_HOME ........................... /usr/lib64/R
  R_LIBS_USER ...................... /u01/app/oracle/product/
  ORACLE_HOME ...................... /u01/app/oracle/product/
  ORACLE_SID ....................... OFSAPRD1

  Existing R Version ............... None
  Existing R_HOME .................. None
  Existing ORE data ................ None
  Existing ORE code ................ None
  Existing ORE libraries ........... None

  RQSYS PERMANENT tablespace ....... DATA
  RQSYS TEMPORARY tablespace ....... TEMP1

  ORE user type .................... Existing
  ORE user name .................... OREUSER1
  ORE user PERMANENT tablespace .... DATA
  Grant RQADMIN role ............... No

  Operation ........................ Install/Upgrade/Setup

Proceed? [yes] y

Removing R libraries ............... Pass
Installing R libraries ............. Pass
Installing ORE libraries ........... Pass
Installing RQSYS data .............. Pass
Configuring ORE .................... Pass
Installing RQSYS code .............. Pass
Installing ORE packages ............ Pass
Creating ORE script ................ Pass
Installing migration scripts ....... Pass
Installing supporting packages ..... Pass
Granting ORE privileges ............ Pass




How to install Oracle R Enterprise (ORE) on Exadata

Sun, 2017-10-22 08:38
Very recently, we deployed the ORE (R Distribution and R Enterprise) 3.1.1 packages on a 4-node Exadata environment. This post discusses the prerequisites and procedure to deploy Oracle R Distribution v3.1.1.

Note: Ensure you have a recent system (root and /u01) backup before you deploy the packages on the DB server.

What are R and Oracle R Enterprise?

R is third-party, open source software. Open source R is governed by GNU General Public License (GPL) and not by Oracle licensing. Oracle R Enterprise requires an installation of R on the server computer and on each client computer that interacts with the server.

Why Oracle R Distribution? 
  • Oracle R Distribution simplifies the installation of R for Oracle R Enterprise.
  • Oracle R Distribution is supported by Oracle for customers of Oracle Advanced Analytics, Oracle Linux, and Oracle Big Data Appliance.

What is needed for R Distribution deployment for Oracle Linux 6?
The Oracle R Distribution RPMs for Oracle Linux 6 are listed as follows:
If the following dependent RPM is not automatically included, then download and install it explicitly:

The picture below depicts the ORE client/server installation steps:


Oracle R Distribution on Oracle Linux Using RPMs

Oracle recommends that you use yum to install Oracle R Distribution, because yum automatically resolves RPM dependencies. However, if yum is not available, you can install the RPMs directly and resolve the dependencies manually. Download the required RPMs and their dependent RPMs from the link below:

To learn more about the RPMs and their dependencies, visit the following Oracle website:

You can install the rpms in the following order:

yum localinstall libRmath-3.1.1-2.el6.x86_64.rpm
yum localinstall libRmath-devel-3.1.1-2.el6.x86_64.rpm
yum localinstall libRmath-static-3.1.1-2.el6.x86_64.rpm
yum localinstall R-core-3.1.1-2.el6.x86_64.rpm
yum localinstall R-devel-3.1.1-2.el6.x86_64.rpm
yum localinstall R-3.1.1-2.el6.x86_64.rpm
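Since the order matters, a small dry-run loop (it only prints the commands, exactly in the dependency order shown above: libRmath first, R itself last) makes the sequence explicit before running anything as root:

```shell
#!/bin/sh
# Dry run: print the yum commands in dependency order for review.
order="libRmath libRmath-devel libRmath-static R-core R-devel R"
for pkg in $order; do
    echo "yum localinstall ${pkg}-3.1.1-2.el6.x86_64.rpm"
done
```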

Once the RPMs are installed, you can validate the installation using the procedure below:

Go to the /usr/lib64/R directory on the database server and, as the oracle user, type R.

You should see the output below:

type q() to exit from the R interface.

And repeat on the rest of the db nodes, if you are on RAC.

To uninstall the R Distribution, use the procedure below:

rpm -e R-Rversion
rpm -e R-devel
rpm -e R-core
rpm -e libRmath-devel
rpm -e libRmath
In the next blog post, I will demonstrate how to configure Oracle R Enterprise.

How to expand Exadata Database Storage capacity on demand

Thu, 2017-10-19 04:27

Exadata Storage expansion

Most of us know the capabilities that Exadata Database Machine delivers. It's a known fact that Exadata comes in fixed rack sizes: eighth rack (2 DB nodes, 3 cells), quarter rack (2 DB nodes, 3 cells), half rack (4 DB nodes, 7 cells) and full rack (8 DB nodes, 14 cells). Traditionally, expansion had to be in fixed increments as well: eighth to quarter, quarter to half, and half to full.

With Exadata X5 Elastic configuration, one can also have customized sizing by extending capacity of the rack by adding any number of DB servers or storage servers or combination of both, up to the maximum allowed capacity in the rack.

In this blog post, I will summarize and walk through a procedure about extending Exadata storage capacity, i.e, adding a new cell to an existing Exadata Database Machine.

Preparing to Extend Exadata Database Machine

·        Ensure the hardware is placed in the rack and all necessary network and cabling requirements are completed (2 IPs from the management network are required for the new cell).
·        Re-image or upgrade of image:
o   Extract the imageinfo from one of the existing cell servers.
o   Log in to the new cell through ILOM, connect to the console as the root user, and get the imageinfo.
o   If the image version on the new cell doesn't match the existing image version, either download the exact image version and re-image the new cell, or upgrade the image on the existing servers.

Review "Reimaging Exadata Cell Node Guidance (Doc ID 2151671.1)" if you want to reimage the new cell.
  • Add the IP addresses acquired for the new cell to the /etc/oracle/cell/network-config/cellip.ora file on each DB node. To do this, perform the steps below from the first DB server in the cluster:
    • cd /etc/oracle/cell/network-config
    • cp cellip.ora cellip.ora.orig
    • cp cellip.ora cellip.ora-bak
    • Add the new entries to /etc/oracle/cell/network-config/cellip.ora-bak.
    • /usr/local/bin/dcli -g database_nodes -l root -f cellip.ora-bak -d /etc/oracle/cell/network-config/cellip.ora
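The cellip.ora edit can be rehearsed on a scratch copy first. The IP addresses below are placeholders, and the dcli push from the steps above is shown commented out so nothing touches a real system:

```shell
#!/bin/sh
# Rehearse the cellip.ora update in a throwaway directory.
work=$(mktemp -d)
printf 'cell="192.168.10.3"\ncell="192.168.10.4"\n' > "$work/cellip.ora"
cp "$work/cellip.ora" "$work/cellip.ora-bak"

# Append the new cell's IP to the -bak copy, as in the procedure above.
echo 'cell="192.168.10.5"' >> "$work/cellip.ora-bak"

# On a real system, distribute it to every DB node:
# /usr/local/bin/dcli -g database_nodes -l root -f cellip.ora-bak \
#     -d /etc/oracle/cell/network-config/cellip.ora

lines=$(wc -l < "$work/cellip.ora-bak")
rm -rf "$work"
```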

  • If ASR alerting was set up on the existing storage cells, configure cell ASR alerting for the cell being added.
    • List the cell attributes required for configuring cell ASR alerting. Run the following command from any existing storage grid cell:
o   CellCLI> list cell attributes snmpsubscriber
    • Apply the same SNMP values to the new cell by running the command below as the celladmin user, as shown in the below example:
o   CellCLI> alter cell snmpSubscriber=((host='',port=162,community=public))
  • Configure cell alerting for the cell being added.
    • List the cell attributes required for configuring cell alerting. Run the following command from any existing storage grid cell:
o   CellCLI> list cell attributes
o    notificationMethod,notificationPolicy,smtpToAddr,smtpFrom,
o    smtpFromAddr,smtpServer,smtpUseSSL,smtpPort
    • Apply the same values to the new cell by running the command below as the celladmin user, as shown in the example below:
o   CellCLI> alter cell notificationmethod='mail,snmp',notificationpolicy='critical,warning,clear',smtptoaddr= '',smtpfrom='Exadata',smtpfromaddr='',smtpserver='',smtpusessl=FALSE,smtpport=25
  • Create cell disks on the cell being added.
    • Log in to the cell as celladmin and run the following command:
o   CellCLI> create celldisk all
    • Check that the flash log was created by default:
o   CellCLI> list flashlog
You should see the name of the flash log. It should look like cellnodename_FLASHLOG, and its status should be "normal".
If the flash log does not exist, create it using:
CellCLI> create flashlog all
    • Check the current flash cache mode and compare it to the flash cache mode on existing cells:
o   CellCLI> list cell attributes flashcachemode
To change the flash cache mode to match the flash cache mode of existing cells, do the following:
i. If the flash cache exists and the cell is in WriteBack flash cache mode, you must first flush the flash cache:
CellCLI> alter flashcache all flush
Wait for the command to return.
ii. Drop the flash cache:
CellCLI> "drop flashcache all"
iii. Change the flash cache mode:
CellCLI> "alter cell flashCacheMode=writeback_or_writethrough"
The value of the flashCacheMode attribute is either writeback or writethrough. The value must match the flash cache mode of the other storage cells in the cluster.
iv. Create the flash cache:
cellcli -e create flashcache all
  • Create grid disks on the cell being added.
    • Query the size and cachingpolicy of the existing grid disks from an existing cell.
o   CellCLI> list griddisk attributes name,asmDiskGroupName,cachingpolicy,size,offset
    • For each disk group found by the above command, create grid disks on the new cell that is being added to the cluster. Match the size and the cachingpolicy of the existing grid disks for the disk group reported by the command above. Grid disks should be created in the order of increasing offset to ensure similar layout and performance characteristics as the existing cells. For example, the "list griddisk" command could return something like this:
o   DATAC1          default         5.6953125T         32M
o   DBFS_DG         default         33.796875G         7.1192474365234375T
o   RECOC1          none            1.42388916015625T  5.6953582763671875T
When creating grid disks, begin with DATAC1, then RECOC1, and finally DBFS_DG using the following command:
CellCLI> create griddisk ALL HARDDISK PREFIX=DATAC1, size=5.6953125T, cachingpolicy='default', comment="Cluster cluster-clux6 DR diskgroup DATAC1"

CellCLI> create griddisk ALL HARDDISK PREFIX=RECOC1,size=1.42388916015625T, cachingpolicy='none', comment="Cluster cluster-clux6 DR diskgroup RECOC1"

CellCLI> create griddisk ALL HARDDISK PREFIX=DBFS_DG,size=33.796875G, cachingpolicy='default', comment="Cluster cluster-clux6 DR diskgroup DBFS_DG"
CAUTION: Be sure to specify the EXACT size shown along with the unit (either T or G).
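The increasing-offset ordering above can be sketched with a small helper script. This is illustrative only, not an Oracle tool: the rows are the sample `list griddisk` output shown earlier, and the generated commands omit the comment clause for brevity.

```python
def size_bytes(s):
    """Convert a CellCLI size string like '5.6953125T' or '32M' to bytes."""
    units = {"M": 1024**2, "G": 1024**3, "T": 1024**4}
    return float(s[:-1]) * units[s[-1]]

# (prefix, cachingpolicy, size, offset) from the sample output above
rows = [
    ("DATAC1",  "default", "5.6953125T",        "32M"),
    ("DBFS_DG", "default", "33.796875G",        "7.1192474365234375T"),
    ("RECOC1",  "none",    "1.42388916015625T", "5.6953582763671875T"),
]

# Sort by offset so grid disks are created outermost-first,
# matching the layout of the existing cells.
commands = [
    f"create griddisk ALL HARDDISK PREFIX={p}, size={s}, cachingpolicy='{c}'"
    for p, c, s, _ in sorted(rows, key=lambda r: size_bytes(r[3]))
]
for cmd in commands:
    print(cmd)
```

Run against the sample rows, this emits the commands in the DATAC1, RECOC1, DBFS_DG order used above.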
  • Verify the newly created grid disks are visible from the Oracle RAC nodes. Log in to each Oracle RAC node and run the following command:
·        $GI_HOME/bin/kfod op=disks disks=all | grep cellName_being_added
This should list all the grid disks created in step 7 above.
  • Add the newly created grid disks to the respective existing ASM disk groups.
·        alter diskgroup disk_group_name add disk 'comma_separated_disk_names';
The command above kicks off an ASM rebalance at the default power level. Monitor the progress of the rebalance by querying gv$asm_operation:
SQL> select * from gv$asm_operation;
Once the rebalance completes, the addition of the cell to the Oracle RAC is complete.
  • Download and run the latest exachk to ensure that the resulting configuration implements the latest best practices for Oracle Exadata.
Reimaging Exadata Cell Node Guidance (Doc ID 2151671.1)


K21 Technologies - Upcoming Oracle trainings

Tue, 2017-10-17 08:46
Atul's K21 Technologies is offering some quality Oracle trainings. Below are the upcoming Oracle trainings:

Apps DBA : Install | Patch | Clone | Maintain :

Apps DBA Webinar:…/appsdbawebinar/062687

Oracle GoldenGate Training

Whats new in Exadata X7

Tue, 2017-10-17 07:58
Oracle announced its new Exadata Database Machine X7 during OOW 2017. Let's quickly walk through the key features of X7.

Key features
  • Up to 912 CPU cores and 28.5TB memory per rack
  • 2 to 19 DB servers per rack
  • 3 to 18 Storage servers per rack
  • Maximum of 920TB flash capacity
  • 2.1PB of disk capacity
  • Delivers 20% faster throughput than earlier models
  • 50% more memory capacity than earlier models
  • 10TB disks (10TB x 12 = 120TB RAW per storage server). The only system in the market today with 10TB disk capacity
  • Increased OLTP performance : about 4.8 million reads and about 4.3 million writes per second
  • Featuring an Intel Skylake processor with 24 cores 
  • Enhanced Ethernet connectivity: supports 25GbE
  • Delivers in-memory performance from Shared Storage 
  • OEDA CLI interface
  • New Exadata Smart Software : Exadata 18c

Switchover and Switchback simplified in Oracle 12c

Fri, 2017-10-13 07:51

Business continuity (Disaster Recovery) has become a very critical factor for every business, especially in the financial sector. Most banks tend to run regular DR tests to meet central bank regulations on DR testing capabilities.

Very recently, there was a request from one of the clients to perform a reverse replication and rollback (i.e., switchover and switchback) between the HO and DR sites for one of the business-critical databases. Similar activities were performed with ease on pre-12c databases; however, this was my first experience with Oracle 12c. After spending a bit of time exploring what's new in 12c switchover, it was amazing to learn how 12c simplified the procedure. So, I decided to write a post on my experience.

This post demonstrates how Switchover and Switchback procedure is simplified in Oracle 12c.

The following is used in the scenario:

·        2-instance Oracle 12c RAC primary database (IMMPRD)
·        Single-instance Oracle 12c standby database (IMMSDB)

Look at the current status of the both databases:
-- Primary
IMMPRD> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
OPEN         IMMPRD1          PRIMARY

-- Standby
IMMSDB> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------

Before getting into the real action, validate the following to avoid any failures during the course of role transition:

·        Ensure log_archive_dest_2 is configured on PRIMARY and STANDBY databases
·        Media Recovery Process (MRP) is active on STANDBY and in sync with PRIMARY database
·        Create STANDBY REDO logs on PRIMARY, if not exists
·        FAL_CLIENT & FAL_SERVER parameters set on both databases
·        Verify TEMP tablespaces on STANDBY, and add tempfiles if required, as TEMPFILES created after the STANDBY creation won't be propagated to the STANDBY site.
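The checklist above can be run through with queries like the following. The views are standard Data Guard dictionary views; adapt destinations and names to your environment.

```sql
-- On both databases: archive destination and FAL settings
show parameter log_archive_dest_2
show parameter fal

-- On the STANDBY: confirm MRP is running and applying redo
select process, status, thread#, sequence#
  from gv$managed_standby
 where process like 'MRP%';

-- On the PRIMARY: confirm standby redo logs exist
select group#, thread#, bytes/1024/1024 mb, status from v$standby_log;

-- On the STANDBY: verify tempfiles
select file_name, bytes/1024/1024 mb from dba_temp_files;
```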

Pre-Switchover in 12c

For a smooth role transition, it is important to have everything in place and in sync. Before Oracle 12c, a set of commands was used on the PRIMARY and STANDBY to validate the readiness of the systems. With Oracle 12c, this is simplified with the ALTER DATABASE SWITCHOVER VERIFY command. The command performs the following set of actions:

·        Verifies the minimum Oracle version, i.e., Oracle 12.1
·        Verifies the MRP status on the standby database

Let’s run the command on the primary database to validate if the environments are ready for the role transition.

IMMPRD>  alter database switchover to IMMSDB verify;
alter database switchover to IMMSDB verify
ERROR at line 1:
ORA-16475: succeeded with warnings, check alert log for more details

When the command was executed, an ORA-16475 error was encountered. For more details, let's walk through the PRIMARY and STANDBY database alert.log files, paying attention to the SWITCHOVER VERIFY WARNING.

--primary database alert.log

Fri Oct 13 11:16:00 2017
SWITCHOVER VERIFY: Send VERIFY request to switchover target IMMSDB
SWITCHOVER VERIFY WARNING: switchover target has no standby database defined in LOG_ARCHIVE_DEST_n parameter. If the switchover target is converted to a primary database, the new primary database will not be protected.

ORA-16475 signalled during:  alter database switchover to IMMSDB verify...

The LOG_ARCHIVE_DEST_2 parameter was not set on the STANDBY database, and the VERIFY command produced the warning. After setting the parameter on the STANDBY, the verify command was re-run, and it went well this time.

IMMPRD> alter database switchover to IMMSDB verify;

Database altered.

The PRIMARY database alert.log confirms no WARNINGS:

alter database switchover to IMMSDB verify
Fri Oct 13 08:49:20 2017
SWITCHOVER VERIFY: Send VERIFY request to switchover target IMMSDB
Completed: alter database switchover to IMMSDB verify

Switchover in 12c 

After successful validation and confirmation of the DBs' readiness for the role transition, execute the actual switchover command on the primary database (it is advised to keep an eye on the alert.log files of the PRIMARY and STANDBY instances).

IMMPRD> alter database switchover to IMMSDB;

Database altered.

Let’s walk through the PRIMARY and STANDBY database alert.log files to review what Oracle has internally done.

--primary database alert.log

alter database switchover to IMMSDB
Fri Oct 13 08:50:21 2017
Starting switchover [Process ID: 302592]
Fri Oct 13 08:50:21 2017
Waiting for target standby to receive all redo
Fri Oct 13 08:50:21 2017
Waiting for all non-current ORLs to be archived...
Fri Oct 13 08:50:21 2017
All non-current ORLs have been archived.
Fri Oct 13 08:50:21 2017
Waiting for all FAL entries to be archived...
Fri Oct 13 08:50:21 2017
All FAL entries have been archived.
Fri Oct 13 08:50:21 2017
Waiting for dest_id 2 to become synchronized...
Fri Oct 13 08:50:22 2017
Active, synchronized Physical Standby switchover target has been identified
Preventing updates and queries at the Primary
Generating and shipping final logs to target standby
Switchover End-Of-Redo Log thread 1 sequence 24469 has been fixed
Switchover End-Of-Redo Log thread 2 sequence 23801 has been fixed
Switchover: Primary highest seen SCN set to 0x960.0x8bcd0f48
ARCH: Noswitch archival of thread 2, sequence 23801
ARCH: End-Of-Redo Branch archival of thread 2 sequence 23801
ARCH: LGWR is scheduled to archive destination LOG_ARCHIVE_DEST_2 after log switch
ARCH: Standby redo logfile selected for thread 2 sequence 23801 for destination LOG_ARCHIVE_DEST_2
ARCH: Noswitch archival of thread 1, sequence 24469
ARCH: End-Of-Redo Branch archival of thread 1 sequence 24469
ARCH: LGWR is scheduled to archive destination LOG_ARCHIVE_DEST_2 after log switch
ARCH: Standby redo logfile selected for thread 1 sequence 24469 for destination LOG_ARCHIVE_DEST_2
ARCH: Archiving is disabled due to current logfile archival
Primary will check for some target standby to have received all redo
Waiting for target standby to apply all redo
Backup controlfile written to trace file /u01/app/oracle/diag/rdbms/imprd/IMPRD1/trace/IMPRD1_ora_302592.trc
Converting the primary database to a new standby database
Clearing standby activation ID 627850507 (0x256c3d0b)
The primary database controlfile was created using the
'MAXLOGFILES 192' clause.
There is space for up to 186 standby redo logfiles
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
Archivelog for thread 1 sequence 24469 required for standby recovery
Archivelog for thread 2 sequence 23801 required for standby recovery
Switchover: Primary controlfile converted to standby controlfile succesfully.
Switchover complete. Database shutdown required
USER (ospid: 302592): terminating the instance
Fri Oct 13 08:50:44 2017
Instance terminated by USER, pid = 302592
Completed: alter database switchover to IMMSDB
Shutting down instance (abort)

--standby database alert.log

SWITCHOVER: received request 'ALTER DTABASE COMMIT TO SWITCHOVER  TO PRIMARY' from primary database.
Fri Oct 13 08:50:32 2017
Maximum wait for role transition is 15 minutes.
Switchover: Media recovery is still active
Role Change: Canceling MRP - no more redo to apply

SMON: disabling cache recovery
Fri Oct 13 08:50:41 2017
Backup controlfile written to trace file /u01/app/oracle/diag/rdbms/imsdb/IMMSDB1/trace/IMMSDB1_rmi_120912.trc
SwitchOver after complete recovery through change 10310266982216
Online logfile pre-clearing operation disabled by switchover
Online log +DATAC1/IMMSDB/ONLINELOG/group_1.3018.922980623: Thread 1 Group 1 was previously cleared
Standby became primary SCN: 10310266982214
Switchover: Complete - Database mounted as primary
SWITCHOVER: completed request from primary database.
Fri Oct 13 08:51:11 2017

At this point in time, the new PRIMARY database is in MOUNT state, so you need to OPEN the database.

IMMSDB> alter database open;

Then start up the new STANDBY database and enable MRP (the command below enables real-time apply on the standby):

IMMPRD> startup
IMMPRD> recover managed standby database using current logfile disconnect from session;

Post-switchover, run through the following checks:

IMMSDB> alter system switch logfile;

IMMSDB> select dest_id,error,status from v$archive_dest where dest_id=2;

IMMSDB> select max(sequence#),thread# from v$log_history group by thread#;
IMMSDB> select max(sequence#)  from v$archived_log where applied='YES' and

On Standby database

IMMPRD> select thread#,sequence#,process,status from gv$managed_standby;
-- in 12.2, use the gv$dataguard_process view instead of gv$managed_standby

IMMPRD> select max(sequence#),thread# from v$archived_log group by thread#;

You can also enable tracing on the primary and standby before performing the role transition to analyze any failures during the procedure. Use the commands below on the PRIMARY database to enable/disable tracing:

SQL> alter system set log_archive_trace=8191;  -- enabling trace

SQL> alter system set log_archive_trace=0;      -- disabling trace
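As a side note, LOG_ARCHIVE_TRACE is a bitmask: each power of two enables one tracing level, and values are additive, so 8191 (= 1 + 2 + ... + 4096) enables every level up to 4096 at once. A quick illustrative check:

```python
def enabled_levels(trace_value):
    """Return the individual power-of-two trace levels packed into a LOG_ARCHIVE_TRACE value."""
    return [1 << bit for bit in range(16) if trace_value & (1 << bit)]

# 8191 = 0b1111111111111 -> levels 1, 2, 4, ..., 4096
print(enabled_levels(8191))
```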


To revert (switch back) to the previous setup, perform the same set of actions. Remember that your primary is now your previous STANDBY, and your standby is the previous PRIMARY.


12c Data guard Switchover Best Practices using SQLPLUS (Doc ID 1578787.1)

A few useful Oracle 12cR2 MOS Docs

Thu, 2017-07-06 07:33
A few useful MOS Docs are listed below, in case a 12cR2 upgrade is around the corner.

  • How to Upgrade to/Downgrade from Grid Infrastructure 12.2 and Known Issues (Doc ID 2240959.1)
  • Complete Checklist for Upgrading to Oracle Database 12c Release 2 (12.2) using DBUA (Doc ID 2189854.1)
  • 12.2 Grid Infrastructure Installation: What's New (Doc ID 2024946.1)
  • Patches to apply before upgrading Oracle GI and DB to (Doc ID 2180188.1)
  • Differences Between Enterprise, Standard Edition 2 on Oracle 12.2 (Doc ID 2243031.1)
  • 12.2 Does Not List Disks Unless the Discovery String is Provided (Doc ID 2244960.1)

Oracle Clusterware 12cR2 - deprecated and desupported features

Thu, 2017-07-06 04:27

Having a clear understanding of the deprecated and desupported features in a new release is just as important as knowing the new features of the release. In this short blog post, I would like to highlight the following features that are either deprecated or desupported in 12cR2.

·        config.sh will no longer be used for the Grid configuration wizard; instead, gridSetup.sh is used in 12cR2;
·        Placement of OCR and Voting files directly on a shared filesystem is now deprecated;
·        The diagcollection.pl utility is deprecated in favor of Oracle Trace File Analyzer;

·        You are no longer able to use Oracle Clusterware commands that are prefixed with crs_.

In my next blog post, I will go over some of the important features of Oracle Clusterware in 12cR2. Stay tuned.


SQL Tuning Advisor against sql_id's in AWR

Tue, 2017-05-23 04:23
We were in a situation very recently where we had to run SQL Tuning Advisor against a bunch of SQL statements that appeared in the AWR's ADDM recommendations report. The initial attempt to launch SQL Tuning Advisor against the SQL_ID couldn't go through, as the SQL no longer existed in the shared pool.

Since the sql_id was present in the AWR report, I thought of running the advisor against the AWR data, and found it very nicely and precisely explained at the following blog:

---- Example of how to run SQL Tuning Advisor against a sql_id in AWR

variable stmt_task VARCHAR2(64);
SQL> exec :stmt_task := DBMS_SQLTUNE.CREATE_TUNING_TASK (begin_snap => 4118, end_snap => 4119, sql_id => 'caxcavmq6zkv9' , scope => 'COMPREHENSIVE', time_limit => 60, task_name => 'sql_tuning_task01' );

SQL> exec DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'sql_tuning_task01');

SQL> SELECT status FROM USER_ADVISOR_TASKS WHERE task_name = 'sql_tuning_task01';

set long 50000
set longchunksize 500000
set pagesize 5000

SQL> select DBMS_SQLTUNE.REPORT_TUNING_TASK('sql_tuning_task01') from dual;

SQL> exec DBMS_SQLTUNE.drop_tuning_task(task_name =>'sql_tuning_task01');


Happy reading/learning.

Migrating data from on-premises to cloud

Mon, 2017-05-15 14:37
No doubt everyone talks about cloud technologies, and certainly cloud holds the future for various reasons. Oracle doesn't want to be left behind in the competition and has put in top gear towards its cloud offerings.

This blog explores various Oracle options to migrate on-premises data to the cloud. Typically, when a database is created on the cloud, the next challenge is loading the data into it. The good thing about data migration is that the methods and procedures remain the same as before. All the usual data migration constraints still apply, such as the following:
  • OS versions of on-premises and cloud machine
  • DB versions
  • Character set
  • DB Size
  • data types
  • Network bandwidth
The well-known, DBA-friendly Oracle methods are still valid for cloud data migration too:
  • Logical method (conventional data pumps)
  • TTS
  • Cross platform TTS
  • Unplugging/Plugging/Cloning/Remote Cloning of PDBs
  • SQL Developer and SQL Loader
  • Golden Gate
Usually, you take the data backup using the method which suits your requirements, and upload the backup files to the cloud machine where the database is hosted. Ensure a good network and internet speed to expedite the data migration process.

In the example below, data pump (dumpfile) is copied from the on-premises machine to the cloud host machine:
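A minimal sketch of that flow, with placeholder hostnames, paths and file names (not from the original post):

```shell
# Copy the Data Pump export file from on-premises to the cloud host
scp /u01/exports/schema_full.dmp oracle@cloud-db-host:/u01/app/oracle/dpdump/

# On the cloud host: import it, assuming the DATA_PUMP_DIR directory
# object already points at /u01/app/oracle/dpdump
impdp system@PDB1 directory=DATA_PUMP_DIR dumpfile=schema_full.dmp \
      logfile=schema_full_imp.log
```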

Once the backup files are transferred to the cloud host, you use the typical method to restore the data.

For more options, read the URL below:

For examples using SQL Developer and SQL*Loader, read the URLs below:

Golden Gate

Transforming a heap table to a partitioned table - how and what's new in 12c R1 & R2

Sun, 2017-05-14 04:13
As part of the daily operational job, one of the typical requests we DBAs get is to convert a regular (heap) table into a partitioned table. This can be achieved either offline or online. This blog will demonstrate some of the pre-12c methods and enhancements in Oracle 12c R1 and R2.

I remember when I had such requests in the past, I used the following offline/online methods, whichever best fit my application needs.

The offline method involves the following action sequence:
  1. Create empty interim partitioned table, indexes and etc
  2. Stop the application services if the non-partitioned table involved in any operations
  3. Migrate the data from the non-partitioned table to partitioned table
  4. Swap the table names
  5. Drop the non-partitioned table
  6. Compile any invalid package/procedure/functions/triggers
  7. Gather table stats
Note: If any integrity references or dependencies exist, the above procedure differs slightly, with a couple of additional actions. The downside of this workaround is the service interruption during the course of the transformation.

To avoid any service interruption, Oracle provides redefinition feature to perform the action online, without actually impacting the going DML operations on the table. The redefinition option involves the following action sequence:
  1.  Validate if the table can use redefinition feature or not (DBMS_REDEFINITION.CAN_REDEF_TABLE procedure)
  2.  Create interim partition table and all indexes
  3. Start the online redefinition process (DBMS_REDEFINITION.START_REDEF_TABLE procedure)
  4. Copy dependent objects (DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS procedure)
  5. Perform data synchronization (DBMS_REDEFINITION.SYNC_INTERIM_TABLE procedure)
  6.  Stop the online redefinition process (DBMS_REDEFINITION.FINISH_REDEF_TABLE procedure)
  7. Swap the table names
  8. Compile any invalid package/procedure/functions/triggers
  9. Gather table stats
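The redefinition calls in the list above can be sketched as follows. The names are placeholders (HR schema, EMP as the heap table, EMP_PART as the pre-created interim partitioned table); adapt them to your environment.

```sql
-- Validate the table is a redefinition candidate
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('HR', 'EMP');

-- Start redefinition into the pre-created interim table
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('HR', 'EMP', 'EMP_PART');

-- Copy dependents, sync, then finish
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('HR', 'EMP', 'EMP_PART',
      DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, TRUE, num_errors);
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('HR', 'EMP', 'EMP_PART');
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('HR', 'EMP', 'EMP_PART');
END;
/
```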
However, this sort of action is simplified in Oracle 12c R1 and made even easier in R2. The following demonstrates the 12c R1 and R2 methods.

With EXCHANGE PARTITION feature, the data can be quickly loaded from a non-partitioned table to a partitioned table:

Once you have the partitioned table, use the following example to exchange the data of heap table to partitioned table. In this example, the existing data will be copied to a single partition in the partitioned table.
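A minimal illustrative example of that exchange, with placeholder names (SALES as the existing heap table, SALES_PART as the partitioned target):

```sql
-- Pre-created partitioned table with a single catch-all partition
CREATE TABLE sales_part (id NUMBER, sale_date DATE)
PARTITION BY RANGE (id)
(PARTITION p1 VALUES LESS THAN (MAXVALUE));

-- Swap the heap table's segment into partition P1 in one quick operation
ALTER TABLE sales_part
  EXCHANGE PARTITION p1 WITH TABLE sales
  INCLUDING INDEXES WITHOUT VALIDATION;
```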


With the 12cR2 ALTER TABLE ... MODIFY option, a non-partitioned table can be easily transformed into a partitioned table, either offline or online. The examples below demonstrate the offline and online procedures:

Offline procedure:

ALTER TABLE table_name MODIFY
PARTITION BY RANGE (id)
(partition p1 values less than (100),
partition p2 values less than (1000));

Online procedure:

ALTER TABLE table_name MODIFY
PARTITION BY RANGE (id)
(partition p1 values less than (100),
partition p2 values less than (1000))
ONLINE UPDATE INDEXES (index1, index2 LOCAL);


Oracle Private Cloud Appliance (PCA) - when and why?

Sun, 2017-04-30 13:44

What has become critical in today's competitive business is the ability to fulfill sudden and unpredictable demands as they arise. It requires data center agility, rapid deployments, and cloud-ready solutions. To succeed in today's modern business, companies must be ready to deploy innovative applications and quickly adapt to changes in the market.

Oracle Private Cloud Appliance (PCA) is an integrated, 'wire once' converged system designed for fast cloud and rapid application deployments at the data centers. PCA is a one stop system for all your applications, where mixed operating systems (Linux, Solaris, RHEL and Windows) workloads can be consolidated into a single machine.

It has been observed of late, especially here in the GCC, that more and more organizations are moving towards PCA adoption. Hence, I thought of writing a blog explaining the prime features and functionalities of PCA. Once I get some hands-on experience (in the very near future), I would love to write about some advanced concepts of PCA and how organizations really benefit from it.

Here are the key features of PCA:
  • Engineered system comes with fully prebuilt and preconfigured setup
  • Cost effective solution for most of the Oracle and non-Oracle workloads
  • Automated installation and configuration software controller
  • Prebuilt OVM to speed-up the Oracle deployments
  • Single-button DR solutions through OEM
  • Pay-only-for-what-you-use policy
  • Flexibility to use Oracle storage or any pre-existing storage
  • PCA certifies all Oracle software that is certified to run on OVM  
  • Deployment of PCA at the data center is very straightforward and simple. The system will be ready within minutes.
  • You can add virtual machines (OVM) either with some basic configuration or use the standard OVM templates
  • No additional software licenses are required on PCA
  • Greatly reduces the time required for deployments. A new deployment can be achieved in hours rather than days, in contrast to traditional infrastructure
  • Easy integration into existing data center models
  • OVM included with no additional cost

The picture below depicts the typical architecture and what PCA comprises and supports:

A pair of management servers is installed in an active/standby configuration for HA. The master management node runs the full set of services, whereas the standby node runs only a subset of services.

The compute nodes (Oracle X-series servers) constitute the virtualization platform and provide the processing power and memory capacity for the virtual machines they host. The entire functionality is orchestrated by the master management node.


Stay tuned for more updates on this.


How to stabilize, improve and tune Oracle EBS Payroll and Retropay process

Sat, 2017-04-15 05:45
I visited a few customers of late to review performance issues pertaining to their Oracle EBS Payroll and Retro-pay processes. Not sure many are aware of the tools Oracle provides to analyze and improve Oracle EBS modules, including Payroll and Retro-pay. To get proactive with Oracle EBS, refer to the following note:

  • Get Proactive with Oracle E-Business Suite - Product Support Analyzer Index (Doc ID 1545562.1)

I must say, after running the analyzers (Retro and Payroll) and implementing the suggestions, significant performance improvement was achieved without making any change to the queries. I would strongly recommend running the analyzers on different Oracle EBS modules to get proactive and achieve performance improvements and stability. Below is the Payroll analyzer report screenshot, which explains the findings and recommendations: