Syed Jaffar

Whatever topic is discussed on this blog reflects my own findings and views, and will not necessarily match those of others. I strongly recommend you test any piece of advice given on this blog before implementing it. — The Human Fly

Exadata Cloud Machine - Hardware capacity

Sun, 2018-11-18 04:13
Everyone who is aware of and uses Exadata Database Machine certainly knows the performance it can deliver. I have been involved in many Exadata migration projects and witnessed how satisfied customers were with the database performance gains post-migration. I am not talking about the cost, the need, etc.; as a technical guy, I know the capabilities of the box and how it can help customers fulfill their current needs and future demand.

We all know about cloud technologies and how every software company and organization is racing to keep up with the trend and the need for them. In some countries, cloud adoption is a bit slower than in other parts of the world, but gradually the majority of companies will adopt cloud technologies; this is for sure. Certainly, the cloud has its own share of advantages and disadvantages. Whoever uses it smartly can gain much flexibility and benefit.

To meet customer demand for Exadata availability in the cloud, Oracle started the Exadata Cloud Service offering, which provides Exadata machines in the cloud. Still, some organizations cannot adopt the public cloud due to industry regulations, corporate policies, security compliance, etc. Therefore, Oracle announced the availability of the Exadata Cloud Machine. With this model, customers who want an on-premises cloud backed by Exadata hardware have an option.

I would like to highlight the hardware capabilities that the Exadata Cloud Machine (ExaCM) offers:

  • 40Gbps InfiniBand Networking
  • Ultra-fast NVMe Flash storage
  • Up to 257GB/sec Throughput
  • Up to 3.6 Million 8k I/Os per sec
  • 1/4 millisecond response time
  • Fastest Compute 
  • Fastest x86 processors
  • Large Memory Capacity - 720GB per compute node
  • Complete Redundancy
I will soon talk more about Exadata Cloud Machine migrations. Stay tuned, and stay hungry for knowledge.

Few Exadata MOS Docs to review

Wed, 2018-10-10 04:20
If you have MOS login credentials and manage Exadata database machines, below is a list of a few MOS documents that are worth reading:

  • 888828.1, "Exadata Database Machine and Exadata Storage Server Supported Versions"
  • 1070954.1, "Oracle Exadata Database Machine Exachk or HealthCheck"
  • 1353073.2, "Exadata Diagnostic Collection Guide"
  • 1500257.1, "Exadata Write-Back Flash Cache - FAQ"
  • 1553103.1, "Updating Exadata Database Server Software using the DBNodeUpdate Utility and patchmgr"
  • 1589868.1, "Procedure to check for corrupted root file system on Exadata Storage Servers and Linux database servers"

(EX42) Flash disk failure may lead to ASM metadata corruption when using write-back flash cache

Mon, 2018-10-08 07:19
While reviewing the latest Exachk report on an X5-2 machine, the following critical alarms were observed (the Exachk screenshots are not reproduced here).

MOS Note 1270094.1 lists this among the critical Exadata issues, and according to MOS Doc 2356460.1 the behavior is due to bug 27372426, which applies to specific Exadata storage software version ranges (see the note for the exact versions).


If you are running an affected GI release (see the note) with the above-mentioned Exadata version and the flash cache configured in WriteBack mode, the following ORA errors may be encountered during ASM rebalancing operations, disk group mounts, and disk group consistency checks; review the ASM alert.log:

ORA-00600: internal error code, arguments: [kfdAuDealloc2]

WARNING: cache read a corrupt block: group=1(DATA) fn=381 indblk=27 disk=110 (DATA_CD_04_DM01CEL01)
ORA-15196: invalid ASM block header [kfc.c:26411] [endian_kfbh]

ORA-00600: internal error code, arguments: [kfrValAcd30]

ORA-00600: internal error code, arguments: [kfdAuPivotVec2], [kfCheckDG]

ERROR: file +DATADG1.3341.962251267: F3341 PX38530 => D55 A765853 => F1677
PX1647463: fnum mismatch
ERROR: file +DATADG1.3341.962251267: F3341 PX38531 => D15 A205431 => F3341
PX56068: xnum mismatch

To fix the bug, the following action plan needs to be applied:

1) Update the storage server software to a release containing the fix (the exact versions are listed in the MOS note)
2) Apply patch 27510959 and scan the ASM metadata

Note:

The issue does not impact GI 12.2 or the higher Exadata software versions mentioned in the bug.
The bug also does not apply if the flash cache mode is WriteThrough.
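As a quick sanity check before worrying about this bug, you can confirm the flash cache mode and storage software version across all cells. This is a sketch only; `cell_group` is assumed to be a file listing your storage server hostnames.

```shell
# Report the flash cache mode of every storage server (WriteBack vs WriteThrough)
dcli -g cell_group -l root cellcli -e "list cell attributes name,flashCacheMode"

# Report the installed Exadata storage software version, to compare
# against the affected ranges in MOS 2356460.1
dcli -g cell_group -l root imageinfo -ver
```

If the mode is WriteThrough, the bug does not apply regardless of version.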


Exadata Critical Issues (Doc ID 1270094.1)

(EX42) Flash disk failure may lead to ASM metadata corruption when using write-back flash cache (Doc ID 2356460.1)

Updating Exadata Software summary

Sat, 2018-09-15 07:06
Updating Exadata software is one of the crucial tasks for any Database Machine Administrator (DMA). Though one does not necessarily have to patch the environment whenever Oracle releases a new patch, it is highly recommended to patch the systems at least twice a year to fix known and unknown bugs, security vulnerabilities, and other issues.

This blog post gives an overall overview of software updates on an Exadata Database Machine: which components need updates, the update order of the components, the prerequisites, etc.

Typically, Exadata database machine updates are divided in the following categories:

  • Exadata Infrastructure Software 
  • Grid Infrastructure and Oracle Database Software

Updating the Exadata software comprises the following components:

  • Storage Servers
  • Database Servers
  • InfiniBand Switches
Software upgrades for the cell and DB nodes typically contain updates for the following:
  • Oracle Linux OS
  • Exadata software
  • Firmware (disk, flash, RAID controller, HCA, ILOM, etc.)


The following pre-upgrade activities are highly recommended before upgrading the Exadata software in any environment:

  • Review MOS Doc 888828.1 and download the target version software
  • Download from MOS Doc 1553103.1
  • Review MOS Doc 1270094.1 for any critical issues
  • Run the latest version of the ExaCHK utility; fix any FAIL and WARNING issues reported in the ExaCHK report, and review the version recommendations in the MAA Scorecard section
  • Ensure you have the latest upgrade/patching utilities, such as patchmgr and opatch (MOS Doc 1070954.1)
  • Perform prerequisites checks
  • Backup the Exadata database servers before the update 
Rolling vs Non-rolling upgrades

Software updates can be performed in an online or offline (rolling or non-rolling) fashion. For online (rolling) updates, high ASM disk group redundancy is highly recommended to avoid any data or service loss.
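Before a rolling update, a minimal check of the disk group redundancy level can be run from the ASM instance (TYPE shows EXTERN, NORMAL, or HIGH):

```sql
-- HIGH redundancy lets a disk group tolerate a further failure while a
-- cell is offline during the rolling update window.
SELECT name, type, total_mb, free_mb
FROM   v$asm_diskgroup;
```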

As part of best practices, the following update order is recommended and considered safe:
  1. GI and Oracle Database home
  2. Database Servers
  3. Storage Servers
  4. IB Switches
patchmgr update utility

The patchmgr update utility is used to patch the Exadata infrastructure components. Following are the capabilities of patchmgr:
  • Single invocation for Database servers, storage servers and IB Switches
  • Updates firmware, OS, and Exadata software
  • Online update advantage
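As an illustration only (take the exact syntax, patch directory, and options from MOS Doc 1553103.1 and the patch README), a rolling storage server update with patchmgr looks roughly like this, where `cell_group` is a file listing the cell hostnames:

```shell
cd /u01/patches/cell/<patch_directory>        # placeholder staging path

# Prerequisite check against all cells, rolling mode
./patchmgr -cells cell_group -patch_check_prereq -rolling

# Perform the rolling update itself
./patchmgr -cells cell_group -patch -rolling

# Clean up staged files after a successful update
./patchmgr -cells cell_group -cleanup
```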
Conclusion: Though the procedure looks pretty straightforward and simple when reading, in my past experience patching each environment comes with surprises, and we need to be ready for them, unless we are very lucky on that particular day and have a smooth patching experience.

In upcoming posts, I will talk about how to use patchmgr and the other update utilities to update the Exadata software, database servers, storage servers, and IB switches.

Exadata and Capacity on Demand (CoD)

Fri, 2018-09-14 06:10
As most of us know, the Exadata Database Machine comes in different sizes with different resource capacities. I am not sure how many of you are aware that the Capacity on Demand (CoD) option enables customers to start with a limited number of active processor cores and increase them dynamically on demand. If the CoD option is not enabled during the initial EDM configuration, then all cores are active by default and cannot be reduced later.

With X4-2 or higher, the number of active cores can be reduced during installation and later increased based on demand. For X4-2, cores are increased in two (2) core increments, whereas X4-8 cores are increased in eight (8) core increments; see the table below.

The below example demonstrates the procedure to increase the active processor cores.

Using the DBMCLI utility:

DBMCLI> LIST DBSERVER attributes coreCount

DBMCLI> ALTER DBSERVER pendingCoreCount = new_core_count

DBMCLI> LIST DBSERVER attributes coreCount

Note: Once active cores are enabled (increased), there is no procedure to reduce them again. Restart the database servers after increasing the core count.

The table below depicts the capacity-on-demand core configuration for the various EDM types and releases (table not reproduced here):

Cloud Access Security Broker (CASB) for Oracle Cloud Infrastructure (OCI)

Thu, 2018-09-13 08:33
Customer adoption of cloud services (IaaS, PaaS, SaaS) has grown rapidly and continues to grow. The most challenging aspect of moving to the cloud is securing the applications and data that are put there. Oracle's heterogeneous security solution, Cloud Access Security Broker (CASB), helps customers protect their cloud-based infrastructure, platforms, and applications across vendors. CASBs have emerged as the go-to cloud security solution, with the ability to secure an entire cloud footprint (SaaS, PaaS, IaaS).

Most importantly, for all Oracle Cloud Infrastructure (OCI) deployments, it provides visibility, threat protection, data security, and compliance. Following are a few key advantages of CASB:

  • Governance of privileged activities
  • Unified incident management
  • Proactive remediation
  • Continuous security compliance for OCI deployments
As part of complete visibility, it provides a holistic view of the entire cloud environment, including users and devices.

Threat detection with User Behavior Analytics (UBA) builds a baseline for typical behavior, down to the user and application, and maintains a log of when and how a user deviates from that baseline. With the help of predictive analytics, you can easily identify risky users who change folder permissions, change user privileges, or tamper with configuration settings.

All your cloud compliance configuration settings can be easily maintained. Once the settings are made, CASB monitors them and alerts you whenever a setting changes.
CASB provides three key components to secure your data in the cloud:
  1. Data visibility
  2. Data inspection
  3. Data Accessibility
It can easily integrate with existing cloud security solutions, such as SWG, NGFW, IDaaS, DLP, and SIEM.

For more details and information, visit the Oracle website.

All about 'Autonomous Transaction Processing' - Part I

Wed, 2018-09-12 03:47
There has been a lot of buzz about 'self-driving and self-tuning databases', 'autonomous', 'automation', etc. since Oracle 18c was announced. I decided to do my homework and test/validate some of these claims. This blog post focuses on 'Autonomous Transaction Processing (ATP)', how it helps an organization, and what role a DBA can play.

It is, at heart, another cloud offering from Oracle. To begin with, Oracle ATP is built upon the Oracle database and is designed and optimized to deliver scalable transaction performance across all standard business applications. As a service, ATP requires no DBA intervention for installation, configuration, or management activities. It handles all the DB-related activities, such as DB creation, backup, patching, upgrades, space management, etc.

It is a completely elastic service: you can dynamically increase and decrease resources (OCPUs and storage capacity) without any service interruption or database downtime. Using the cloud-based console, you can easily manage the service, for example scaling and monitoring it. Additionally, a cloud-based notebook application provides easy querying, collaboration, and data-visualization capabilities.

The picture below (source: Oracle documentation; not reproduced here) describes the ATP architecture:

Below are some of the key features of Oracle Autonomous Transaction Processing:

  • Simplified management: rapid provisioning of new databases, dynamic resource management (allocation and de-allocation of CPU and storage), patching & upgrades, and backup & recovery
  • Completely elastic service
  • Supports existing, cloud, and on-premises applications
  • Supports high query performance and concurrent workloads
  • Easy data migration
  • BI tools support
  • Building reports and dashboards with analytics
  • All data is stored in encrypted format to secure it
  • Strong authentication for connections and data access control
In part II, I will discuss subscription details, creation, and user management.
I am trying to avail the 30-day free account on Oracle Cloud. If I succeed in getting the credentials, I will run through it practically and post the configuration and management tasks.

Oracle 18c Autonomous Health Framework (AHF) - Part 1

Thu, 2018-05-31 10:03
Recently I presented at the eProseed annual tech conference in Luxembourg, where I was asked to present a topic related to Oracle 18c.

Obviously I did not want to repeat what has already been said about the Autonomous Database; many experts have said much on this. I decided instead to pick a topic that really helps DBAs, administrators, and ultimately the organization. I was fascinated by the Oracle 18c Autonomous Health Framework concepts and decided to present on this topic.

Working in complex and huge Oracle environments, I know where most of our energy and time is spent as DBAs or system administrators: we focus on avoiding run-time availability and performance issues. In a complex and critical environment, every other day brings a new challenge, and you must be on your toes as a DBA during business hours.

Most importantly, as DBAs we need to ensure database availability and, at the same time, ensure it delivers consistent performance 24x7. Imagine getting stuck with latches, an instance crash, a node crash, or someone changing binary permissions/ownership; you will spend hours finding the root cause and fixing the issue.

With the 18c Autonomous Health Framework, it is easier to avoid and auto-fix run-time availability and performance issues. There are 8 components that make up this framework. Though some of them were already present in 12.2, all of them are configured automatically with 18c and run 24x7 in daemon mode. Also, 3 of the components have machine-learning capabilities to fix issues automatically.

I will start discussing the 8 components in the next posts of this series. Stay tuned for more on this topic.

utlrp weird behavior with INVALID objects in an Oracle EBS database

Wed, 2018-05-30 07:05
In one of my recent database migration and upgrade activities, I migrated an Oracle EBS database to a new DB host and upgraded it. After the migration and post-upgrade steps, utlrp.sql was run to recompile the roughly 100k invalid objects in the database.

Weirdly, during the utlrp execution, when the INVALID object count had come down to about 200, it started to increase again, ultimately reaching the 100k number again. utlrp was stuck in a kind of loop and never exited.

As a workaround, I manually compiled all the invalid objects and then ran utlrp again, which completed successfully. I have yet to work out the theory behind what caused this situation.
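For reference, utlrp.sql is essentially a wrapper around the built-in UTL_RECOMP package, so manual recompilation can be sketched as below (4 parallel threads is an arbitrary choice):

```sql
-- Recompile all invalid objects in dependency order, in parallel
EXEC UTL_RECOMP.RECOMP_PARALLEL(4);

-- Verify what is still invalid afterwards
SELECT owner, object_type, COUNT(*) AS invalid_count
FROM   dba_objects
WHERE  status = 'INVALID'
GROUP  BY owner, object_type;
```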

Stay tuned for more updates on this post.

Traditional Export/Import Fail With ORA-01031 in DB Vault Environment Starting Release

Wed, 2018-05-30 06:27
Oracle Database Vault was configured on a RAC database (v11.2.0.3), and the appropriate permissions were granted to allow regular duties such as Data Pump export/import.

However, when a classic/traditional import (of a dump file from a v9 database) was performed, the import failed with the following errors:

Import: Release - Production on Tue May 29 16:56:23 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

IMP-00058: ORACLE error 1031 encountered
ORA-01031: insufficient privileges
Username:

Data Pump jobs were executing successfully without any issues; the only problem was with traditional export/import. According to the Oracle notes below, this is expected behavior with DB Vault in place.

Classic Export/Import Fail With ORA-01031 in Database Vault Environment Starting Release (Doc ID 1419687.1)
Bug 13615338 - EXP/IMP disabled in Database Vault environment (Doc ID 13615338.8)

According to the notes,

Export and Import tools are no longer recommended to be used, especially in Database Vault environments.
When it is not possible to use data pump (for instance when the export has been created from a pre-10g database),
the patch 13615338 must be installed to allow the usage of classic export/import utilities.

The other option would be to upgrade the database to a release above those affected (see the note).

I will apply the patch and update this blog on whether the issue is fixed.

ORA-12545 TNS: Host or Object Does not Exist , while using SCAN

Sat, 2018-03-31 11:30
As part of an SSL configuration between the WebLogic tier and the DB tier, after the sqlnet.ora changes and a SCAN listener restart, we encountered an ORA-12545 error on a 3-node RAC database when connecting through the SCAN listener.

We verified the listener status and found all the services in READY status. Connecting via the VIP worked perfectly, so the only issue was connecting through the SCAN. We were fortunate to find a solution after a quick search on the net: MOS Doc 364855.1 explained the issue and provided the solution.

As part of the configuration, we had set LOCAL_LISTENER to a VIP hostname. After modifying the LOCAL_LISTENER string to point to the VIP IP address instead, we managed to connect through the SCAN.
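A sketch of the change (the IP address, port, and SID below are placeholders; local_listener is dynamic, so no restart is needed):

```sql
-- Point LOCAL_LISTENER at the VIP IP address instead of the VIP hostname
ALTER SYSTEM SET local_listener =
  '(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.11)(PORT=1521))'
  SCOPE=BOTH SID='ORCL1';
```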

Logically the VIP hostname should have worked, but we noticed many references to the same issue.

Client Connection to RAC Intermittently Fails-ORA-12545 TNS: Host or Object Does not Exist (Doc ID 364855.1)

Backup2Cloud - a backup/restore automation utility

Wed, 2018-03-28 10:38
For cost optimization and flexibility, it's a good idea to consider the Oracle Database Backup Cloud Service. The service can complement an existing backup strategy by providing an off-site storage location in the cloud.

The bck2cloud utility is easy to install and use, and it secures the data. It simplifies RMAN backup/restore operations when using the Oracle Database Backup Cloud Service. Once you subscribe to the service, you simply download the module from OTN and install it on the database host. Once the module is successfully installed, you configure the RMAN settings. With the utility, all data transfers occur strictly between the database instance and the Oracle backup cloud account controlled by the customer.
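A typical RMAN configuration for the cloud (SBT) module looks like the sketch below; the library and config file paths are assumptions and depend on where the module was installed:

```sql
RMAN> CONFIGURE CHANNEL DEVICE TYPE SBT
        PARMS 'SBT_LIBRARY=/home/oracle/lib/libopc.so,
        ENV=(OPC_PFILE=/home/oracle/config/opcORCL.ora)';
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO SBT;
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;  -- backups to the service must be encrypted
```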

Following operations are supported by the utility:

  • RMAN Operations
  • Cloud Storage operations
  • Password Encryption operation
  • Install Oracle Public Cloud (OPC) Module operation

Bck2Cloud simplifies and performs many RMAN operations:
  • Backup : all levels/types of backups (full, incremental, spfile, controlfile, etc.)
  • BckReport : provides information about all backups
  • BValidate : validates backups
  • Crosscheck
  • DelBck
  • ListBck
  • ListInc
  • Recovery
  • Restore
  • RPreview
  • RValidate

For more details and examples, read the Oracle white paper mentioned below.


Benefits of the Oracle Database Backup Cloud Service

Sun, 2018-03-25 08:13
Organizations treat their data as an asset, so their top priority is data availability and protection from all sorts of disasters. We have seen more and more organizations investing huge amounts of money to achieve continuous data availability and to safeguard their data in every respect. Keeping backups off-site has also become one of the critical factors in a data protection policy. Traditionally, many organizations follow the model below (image taken from an Oracle white paper; not reproduced here).

The Oracle Database Backup Cloud Service offers very flexible and strong protection from disasters:
  1. Backups are always available over the internet for immediate recovery needs
  2. Data is transparently replicated across multiple nodes in the same geographic region
  3. End-to-end security is provided
  4. Supports all Oracle editions, database versions 10.2.0.5 and above, and compression
  5. Supports all RMAN operations
  6. Low-cost solution
  7. Leverages MAA configurations and best practices
  8. Guaranteed backup and recovery operations
High-level OCB architecture:

On-premises end-to-end flow example:

Cloud end-to-end flow example:


The Oracle Cloud backup service is a low-cost solution with an MAA configuration to protect Oracle databases.

I will discuss the 1-Button automated backup tool in the next blog post.

White papers:

Oracle database backup cloud service
Best Practices for on-premises and cloud databases

1-Button Cloud backup/restore automation utility

Tricky Oracle data guard situation

Sat, 2018-03-24 14:43
I had an interesting situation configuring a Data Guard setup for a 7TB Oracle 12c database at one of my clients. After long rounds of discussion, considering the constraints and limitations, we agreed to use the traditional way of setting up Data Guard: back up the database, replicate the backups to the DR site, create a standby controlfile, restore the controlfile on the standby host, perform the DB restore and recovery, and configure synchronization.

The restore failed because the data file locations (the same as production) were unavailable on the DR host: the application tablespace data files on PROD did not use the OMF format and were stored under ASM with a .dbf extension. Because of this, only the default tablespace data files (which did use OMF in ASM) were restored. A PROD-like directory structure was created (using ALTER DISKGROUP ADD DIRECTORY) and all the data files were then restored successfully. The PROD-like directory (+DATA/PROD/DATAFILE) contained only soft links; the data files themselves were created under the standby database SID directory. The entire process took more than 24 hours, and a subsequent incremental backup was taken to close the gap and bring the data into sync using the roll-forward procedure.

As part of the roll-forward, a new standby controlfile was created on PROD and restored on the DR database. When the CATALOG START WITH command was issued, only a few (system-related) data files could be cataloged; the soft-linked data files could not. This stopped us from doing the roll-forward recovery. We tried every possible option; nothing materialized.

We then cleaned up the DR database data files to do a fresh restore, this time setting the db_file_name_convert parameter before starting the traditional DR configuration procedure over. This time all the data files were restored under the standby SID directory, and soft links for the non-OMF data files were created in the same directory, unlike in the first attempt. Once the restore was done, we successfully performed the roll-forward procedure to bring the PROD and DR DBs into sync.

If you have non-OMF files with a .dbf extension under ASM, ensure you use db_file_name_convert to avoid the mess.

Well, if you have an alternative, you are most welcome to suggest it.

Exadata X7 installation: some Whitney+ issues

Sun, 2018-03-18 05:45
One of our DB teams from Europe highlighted the following issues while installing and configuring VMs on an Exadata X7.

"Today we started an X7-2 Exadata installation, but we faced some issues. The first problem: when installing the cluster, resources come up and then suddenly go down. There are internal bugs for this problem:

Bug 27275655 - Running tcpdump on bondeth0 kills the Whitney LOM interface
Bug 27195117 - Finisar FTLX8574D3BCV-SU transceiver does not link up on X7-2 LOM

These are internal bugs that required re-imaging the whole stack with a newer image. Also, there is another problem with PXE (used for the re-imaging).

Watch out for those bugs when you are installing Exadata X7."


ORA-01033 when Connecting to the DB Via Scan Listener

Sun, 2018-03-18 04:51
I had a very interesting and tricky scenario a few days ago, something I had never encountered in my DBA career. Hence, I thought of sharing the details of the story here today.


During mid-day, an application team reported that they were suddenly getting an ORA-01033 when connecting to a 3-node RAC database. Quick basic validation revealed that the issue happened only when connecting through the SCAN IP; the VIP and direct (physical IP) connections had no issues. So it was clear that the issue was with the SCAN IPs. We verified all the configuration settings, the network, and the firewall to ensure there were no issues accessing the SCAN IPs. To our surprise, everything was perfectly okay, which really puzzled us. We also suspected the Data Guard configuration of this database, but that wasn't the case either.


After a quick search over the internet, we came across MOS Doc 1446132.1: ORA-01033 is Thrown When Connecting to the Database Via Scan Listener.

The issue was that one of the team members was restoring a database backup on a new host. The tricky part: the new host was on the same network/subnet as the 3-node RAC database and could access the SCAN IPs too. The instance being restored had registered itself with the SCAN, so whenever a new connection request came in through the SCAN, it could be directed to the instance in MOUNT state (restoring); hence the ORA-01033 error.


After reviewing the note, the restore was immediately stopped and things went back to normal. Nullifying the remote_listener parameter on the restoring instance, to de-register it from the SCAN, would also have worked in this case.

This issue can be prevented through configuring Class of Secure Transport (COST).
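A rough sketch of what COST looks like in the SCAN listeners' listener.ora (the listener names are placeholders; see MOS 1340831.1 for the complete procedure, including the TCPS variant):

```
# Only allow instance registration over the named transports; a foreign
# host restoring a clone can then no longer register with the SCAN.
SECURE_REGISTER_LISTENER_SCAN1 = (IPC)
SECURE_REGISTER_LISTENER_SCAN2 = (IPC)
SECURE_REGISTER_LISTENER_SCAN3 = (IPC)
```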

The probability of hitting this issue is very low, but I felt it was interesting and worth sharing.


ORA-01033 is Thrown When Connecting to the Database Via Scan Listener (Doc ID 1446132.1)
NOTE:1340831.1 - Using Class of Secure Transport (COST) to Restrict Instance Registration in Oracle RAC
NOTE:1453883.1 - Using Class of Secure Transport (COST) to Restrict Instance Registration 

ORA-19527: physical standby redo log must be renamed

Thu, 2018-03-08 13:59
Data Guard had been configured a few days earlier at one of our customer sites. An ORA-19527 error was continuously reported in the standby database's alert.log (the database name has been changed):

Errors in file /u01/app/oracle/diag/rdbms/orcl/ORCL1/trace/ORCL1_rfs_3504.trc:
ORA-19527: physical standby redo log must be renamed
ORA-00312: online log 9 thread 1: '+DATAC1/ORCL/ONLINELOG/group_9.459.960055023'

Though this didn't interrupt Data Guard replication, the customer wanted to get rid of the error message. Starting with 10g, this is expected behavior, introduced to improve switchover and failover times: when MRP is started, it attempts to clear the online redo log files immediately rather than at the time of role transition.

This is also expected behavior when the log_file_name_convert parameter is not set on the standby database. In our case, however, log_file_name_convert was not thought to be required, as PRIMARY and STANDBY have the same directory structure (the same DATA and RECO disk groups).

The workaround to get rid of the message, when the directory structures do not differ, is simply to set dummy values for the parameter, as shown in the example below:

SQL> ALTER SYSTEM SET log_file_name_convert='dummy','dummy' SCOPE=SPFILE;

After the parameter was set (it is a static parameter, so the standby instance was restarted for it to take effect), the ORA message was no longer seen in the alert.log.

ORA-19527 reported in Standby Database when starting Managed Recovery (Doc ID 352879.1)
ORA-19527: Physical Standby Redo Log Must Be Renamed...during switchover (Doc ID 2194825.1)

Oracle ERP DB Exadata migration + AC-50480: Internal error occurred + ORA-12154

Sun, 2018-02-18 06:24
I was recently involved in a project to migrate an Oracle ERP database to an Exadata server. The ERP database was a non-RAC Oracle 12cR1 database running on RHEL with Oracle E-Business Suite 12.1.3.

The migration involved only moving the ERP database from traditional storage/server technology to the Exadata machine, together with an upgrade from a non-RAC database to RAC; the application remained on the same host. Though similar migration projects had been completed successfully before, this time I faced a tough challenge resolving autoconfig issues. Here are the details of the issue and how I resolved it.

Downtime was not an issue, hence I opted for an RMAN backup-and-restore approach. In a nutshell, the following was done:

  • Prepared the Oracle home for ERP on Exadata and applied all recommended patches on the home.
  • Performed all mandatory steps required for ERP Oracle Home.
  • After a graceful shutdown of the application tier, the ERP database was stopped and started in MOUNT state for an RMAN full backup (this is the approach I used, though many different approaches are possible).
  • Copied the files to the target (Exadata) system and completed the recovery procedure.
  • Converted the non-RAC database to RAC manually and completed the post-migration steps.
It is mandatory to run the catbundle script after the database migration to avoid issues like a blank login page and problems changing passwords. Some of you might differ with my advice, but I have faced this at multiple clients, so I have made it a standard practice right after migration.
  • Autoconfig on the database nodes was successful. However, when autoconfig was executed on the application tier, it completed with warnings. The autoconfig.log had the following errors:
Updating s_tnsmode to 'generateTNS'
UpdateContext exited with status: 0
AC-50480: Internal error occurred: java.lang.Exception: Error while generating listener.ora.
Error generating tnsnames.ora from the database, temporary tnsnames.ora will be generated using templates
Instantiating Tools tnsnames.ora
Tools tnsnames.ora instantiated
Web tnsnames.ora instantiated

The NetServiceHandler.log reported the following error:

SQLPLUS Executable : /oracle/PROD/apps/tech_st/10.1.2/bin/sqlplus

SQL*Plus: Release - Production on Thu Feb 15 21:27:02 2018

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Enter value for 1: Enter value for 2: ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified

Unable to generate listener.ora from database
Using default listener.ora file

After googling for a solution, the common workaround was the following:
  1. Clean up the entries in FND_NODES using EXEC FND_CONC_CLONE.SETUP_CLEAN;
  2. Run autoconfig on the DB tier, then run autoconfig on the app tier
Unfortunately, this didn't help in our case.

When I looked at the hostnames registered in FND_NODES for the database, the hostname registered was the management hostname (Exadata has separate management and public IPs). Though the client had opened ports for the public hostname, they had not opened the firewall for the management network. Though the machine has multiple IPs (management and public), the DB server can take only one hostname. So, if you are on Exadata, you need to check which hostname is actually registered on the server, despite the different IPs.
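To see which hostname EBS has registered, a quick look at FND_NODES helps (the columns shown are the common ones; verify them against your release):

```sql
-- List the hostnames EBS believes each node runs on
SELECT node_name, host, server_address, status
FROM   apps.fnd_nodes;
```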
After the firewall was opened for the listener ports on the management IP, autoconfig on the app tier ran successfully and we managed to connect to the application.

The bottom line: when you run ERP databases on Exadata, watch out for the hostnames, and for which IPs have their ports opened from the application server.

Oracle Cloud Infrastructure services and third-party backup tools (CloudBerry)

Mon, 2018-02-12 02:46
As part of its Cloud Infrastructure services, Oracle offers the below persistent, I/O-intensive block storage and high-throughput storage options, manageable through the console and the CLI:

  • Oracle Cloud Infrastructure Block Volumes
  • Oracle Cloud Infrastructure Object Storage
  • Oracle Cloud Infrastructure Archive Storage
  • Oracle Cloud Infrastructure File Storage
  • Oracle Cloud Infrastructure Data Transfer Service
To understand the details, features, pricing, etc., visit the Oracle Cloud Infrastructure website.

One of our customers was struggling to delete backup data from the Oracle Cloud to avoid being charged for it. Oracle Support suggested using the CloudBerry tools for easier management. Below is an excerpt from the CloudBerry website:

"With CloudBerry Backup you can use Oracle Storage Cloud Service as a cost-effective, remote backup solution for your enterprise data and applications. By backing up your data and applications to Oracle Storage Cloud Service, you can avoid large capital and operating expenditures in acquiring and maintaining storage hardware. By automating your backup routine to run at scheduled intervals, you can further reduce the operating cost of running your backup process. In the event of a disaster at your site, the data is safe in a remote location, and you can restore it quickly to your production systems"

CloudBerry offers the below tools for Oracle:
  • CloudBerry Explorer
  • CloudBerry Backup
  • CloudBerry Managed Backup
  • CloudBerry Drive
Visit their website to explore more about these tools.

We then used the CloudBerry Backup tool (15-day free trial) to move the backup files out of Oracle cloud storage and remove them from the cloud.

You may download the 15-day free trial and give it a try.

Oracle Cloud - Database Services

Thu, 2018-01-18 03:05
This blog post highlights some of the essentials of the Oracle cloud database service offerings and their advantages. It also discusses the benefits of Database deployments (DBaaS) and the tools that can be used to automate backup/recovery operations and maintenance tasks.

No doubt, most software houses are now pushing their clients toward cloud services. Though the cloud provides several benefits, one should really understand the benefits, threats, and offerings of the different cloud vendors. Here I am going to discuss Oracle's cloud database service offerings.

Database service
  • Oracle Database Exadata cloud at customer (Full Oracle Databases hosted on an Oracle Exadata Database Machine inside the customer's DC)
  • DB Service on Bare Metal (dedicated database instances with full administrative control)
  • Exadata Express Service
  • Oracle Database Exadata Cloud Service (Full Oracle Databases hosted on an Oracle Exadata Database Machine inside the Oracle Cloud)
  • Database Schema service (A dedicated schema with a complete development and deployment platform managed by Oracle)
The service provides:

  • Rapid provisioning to use in minutes
  • Grows as your business grows
  • Provides tight security to protect the data
  • Off-loads your day-to-day maintenance work

Database deployment (earlier known as DBaaS) is a compute environment which provides:
  • A Linux VM
  • Oracle software
  • A pre-created database
  • Cloud tools for automated and on-demand backup and recovery, automated patching and upgrades, a web monitoring tool, etc.

Patching a deployment database:
  • Use the UI from the Oracle Cloud DB Service console, or use the command-line utility dbaascli
Backup and recovery of a deployment database:
  • bkup_api : automated backup at the service level
  • dbaascli : automated recovery at the service level
For more information about the features, overview, and pricing of Oracle Cloud database services, visit the Oracle website.

I will be blogging about each topic separately in the coming posts.

Stay tuned.