Amardeep Sidhu

Little bit of fun with Oracle and the related technologies...

Understanding grid disks in Exadata

Mon, 2019-02-18 07:07

The use of Exadata storage cells seems to be a poorly understood concept. Many people are confused about how exactly ASM makes use of the disks from the storage cells; a common assumption is that some sort of RAID is configured at the storage layer, whereas there is nothing of the kind. I will try to explain some of the concepts in this post.

Let’s take the example of an Exadata quarter rack, which has 2 db and 3 storage nodes (node means a server here). A few things to note:

  • The space for binaries installation on the db nodes comes from the local disks installed in the db nodes (600GB * 4 (expandable to 8) configured in RAID5). In case you are using OVM, the same disks are used for keeping configuration files, virtual disks for VMs etc.
  • All of the ASM space comes from storage cells. The minimum configuration is 3 storage cells.

So let’s try to understand what makes a storage cell. There are 12 disks in each storage cell (the latest X7 cells come with 10 TB disks). As mentioned above, there are 3 storage cells in a minimum configuration, so we have a total of 36 disks. There is no RAID configured in the storage layer; all the redundancy is handled at the ASM level. So to create a disk group:

  • First of all, cell disks are created on each storage cell. 1 physical disk makes 1 cell disk, so a quarter rack has 36 cell disks.
  • To divide the space among the various disk groups (by default only two disk groups are created: DATA & RECO; you can choose how much space to give to each of them), grid disks are created. A grid disk is a partition on a cell disk – a slice of the disk, in other words. A slice from each cell disk must be part of every disk group; we can’t have something like DATA using 18 disks out of 36 and RECO using the other 18. That is not supported. Let’s say you decide to allocate 5 TB to the DATA grid disks and 4 TB to the RECO grid disks (out of 10 TB on each disk, approx 9 TB is what you get as usable). So you will divide each cell disk into 2 parts – 5 TB and 4 TB – and you will have 36 slices of 5 TB each and 36 slices of 4 TB each (a command-level sketch follows this list).
  • The DATA disk group will be created using the 36 slices of 5 TB, where the grid disks from each storage cell constitute one failgroup.
  • Similarly, the RECO disk group will be created using the 36 slices of 4 TB.
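
To make it concrete, here is a minimal sketch of the commands involved. The prefixes, sizes and the disk discovery string are illustrative, not from an actual deployment (OEDA/OneCommand normally does all of this for you):

# On each storage cell, carve grid disks out of the cell disks (CellCLI)
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=DATA, size=5T
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=RECO
# (no size on the second command = use all the remaining space on each cell disk)

# From a db node, create the disk group on those grid disks (SQL*Plus, as sysasm)
SQL> CREATE DISKGROUP DATA NORMAL REDUNDANCY
  2  DISK 'o/*/DATA*'
  3  ATTRIBUTE 'compatible.asm'='12.1.0.2', 'cell.smart_scan_capable'='TRUE';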

What we have discussed above is a quarter rack scenario with High Capacity (HC) disks. There can be somewhat different configurations too:

  • Instead of HC disks, you can have the Extreme Flash (EF) configuration, which uses flash cards in place of disks. Everything remains the same except the numbers: instead of 12 HC disks, there are 8 flash cards per cell.
  • With X3, I think, Oracle introduced the eighth rack configuration. In an eighth rack, the db nodes come with half the cores (of quarter rack db nodes) and the storage cells come with 6 disks in each cell. So here you have only 18 disks in total. Everything else works in the same way.

Hope this clarifies some of the doubts about grid disks.



ORA-04080: trigger ‘PRICE_HISTORY_TRIGGERV1’ does not exist

Tue, 2019-01-22 07:45

It is actually a dumb one. I was disabling triggers in a schema and ran this SQL to generate the disable statements (example from here):

HR@test> select 'alter trigger '||trigger_name|| ' disable;' from user_triggers where table_name='PRODUCT';

'ALTERTRIGGER'||TRIGGER_NAME||'DISABLE;'
--------------------------------------------------------------------------------
alter trigger PRICE_HISTORY_TRIGGERv1 disable;

HR@test> alter trigger PRICE_HISTORY_TRIGGERv1 disable;
alter trigger PRICE_HISTORY_TRIGGERv1 disable
*
ERROR at line 1:
ORA-04080: trigger 'PRICE_HISTORY_TRIGGERV1' does not exist


HR@test>

WTF? It is there, but the disable didn’t work. I was in a hurry, so I tried to connect through SQL Developer and disable it, and it worked! Double WTF! Then I spotted the problem: someone had created it with one lowercase letter in the name. So to make it work, we need to use double quotes.

HR@test> alter trigger "PRICE_HISTORY_TRIGGERv1" disable;

Trigger altered.

HR@test>
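
If you are generating such statements anyway, it is safer to make the generator quote the trigger names itself, so that case sensitive names come out right:

HR@test> select 'alter trigger "'||trigger_name||'" disable;' from user_triggers where table_name='PRODUCT';

alter trigger "PRICE_HISTORY_TRIGGERv1" disable;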

This is one of the reasons why you shouldn’t use case sensitive names in Oracle. It is just stupid.


Error while running ggsci

Sat, 2019-01-12 08:21

This was another issue that I faced while trying to configure GoldenGate in HA mode. ggsci was working fine after the normal installation, but after configuring it in HA mode, running ggsci resulted in this:

[oragg@node2 product]$ ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 12.3.0.1.4 OGGCORE_12.3.0.1.0_PLATFORMS_180415.0359_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Apr 16 2018 00:53:30
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2018, Oracle and/or its affiliates. All rights reserved.
2019-01-08 16:28:37.913
CLSD: An error occurred while attempting to generate a full name. Logging may not be active for this process
Additional diagnostics: CLSU-00100: operating system function: sclsdgcwd failed with error data: -1
CLSU-00103: error location: sclsdgcwd2
(:CLSD00183:)
GGSCI (exadatadb02.industowers.com) 1>

No obvious clues in the error message, but a little searching revealed that it had something to do with permissions. It was on Exadata, so I tried to do a strace of ggsci to see if it could give some clues. There we go:

[oragg@node2 product]$ strace ggsci
.
.
mkdir("/u01/app/oracle/product/12.1.0.2/dbhome_4/log/exadatadb02", 01777) = -1 EACCES (Permission denied)

That is the Oracle database home GoldenGate is supposed to use. It is trying to create a directory at the mentioned path and isn’t able to do it. Another directory called client was needed inside it. I created both of them and set the needed permissions & the sticky bit, and it worked fine. It was working fine on the other node, so I could check the permissions over there and do the same on this node.
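
For reference, the fix was roughly the following; the 01777 mode comes from the mkdir call in the strace output above, but verify the ownership and permissions against the working node before copying them blindly:

# as root, in the database home that GoldenGate uses
cd /u01/app/oracle/product/12.1.0.2/dbhome_4/log
mkdir -p exadatadb02/client
chmod 1777 exadatadb02 exadatadb02/client    # sticky bit plus world write, as in the strace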


Failed to execute the command “”/u01/app/xag/bin/clsecho”

Tue, 2019-01-08 11:22

I was configuring GoldenGate in HA mode by following this document. Everything worked okay, but at the end, while running agctl config goldengate to view the configuration of the GoldenGate resource, it was failing with the following error:

[oracle@exadatadb02 ~]$ agctl config goldengate GG_TARGET
Failed to execute the command ""/u01/app/xag/bin/clsecho" -p xag -f xag -m 5080 "GG_TARGET"" (rc=134), with the message:
Oracle Clusterware infrastructure fatal error in clsecho.bin (OS PID 126367_140570897783808): Internal error (ID (:CLSB00107:)) - Error -1 (ORA-08275) determining Oracle base
/u01/app/xag/bin/clsecho: line 45: 126367 Aborted (core dumped) ${CRS_HOME}/bin/clsecho.bin "$@"
Failed to execute the command ""/u01/app/xag/bin/clsecho" -p xag -f xag -m 5081 "/u01/app/oragg/product"" (rc=134), with the message:

Looking at the ORA-08275 error about determining Oracle base, it sounds kind of obvious that it is not able to figure out where the ORACLE_BASE is. But somehow it didn’t strike me at that moment, so I started looking around. The command it is running is clsecho, which is simply a shell script that in turn calls $CRS_HOME/bin/clsecho.bin. The script sets various environment variables, and that is where the problem was. There are lines like:

ORACLE_BASE=
export ORACLE_BASE

Nowhere in the script is the value of ORACLE_BASE actually set, and that was causing the issue. I changed the first line to set the ORACLE_BASE location and it worked fine after that. There was another issue I faced with ggsci after doing the xag configuration. Will do another blog post on that.
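
For reference, the change was along these lines (a sketch; the actual base path depends on the environment):

ORACLE_BASE=/u01/app/oracle    # assumed location; use your environment's actual Oracle base
export ORACLE_BASE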


dbca doesn’t list diskgroups

Wed, 2018-12-26 09:31

This is an Exadata machine running GI version 18.3.0.0.180717 and DB version 12.1.0.2.180717. On one of the DB nodes, dbca doesn’t list the diskgroups while it works fine on the other node.

I checked the dbca trace and found that the kfod command was failing. I tried to run it manually and got the same error:

[oracle@exadb01 ~]$ /u01/app/18.0.0.0/grid/bin/kfod op=groups verbose=true
KFOD-00300: OCI error [-1] [OCI error] [Could not fetch details] [-105777048]

KFOD-00105: Could not open pfile 'init@.ora'
[oracle@exadb01 ~]$

I ran it with strace then:

[oracle@exadb01 ~]$ strace /u01/app/18.0.0.0/grid/bin/kfod op=groups verbose=true
execve("/u01/app/18.0.0.0/grid/bin/kfod", ["/u01/app/18.0.0.0/grid/bin/kfod", "op=groups", "verbose=true"], [/* 18 vars */]) = 0
brk(0) = 0x2641000
.
.
.
.
.
open("/u01/app/18.0.0.0/grid/dbs/ab_+ASM1.dat", O_RDONLY) = -1 EACCES (Permission denied)
geteuid() = 1003
open("/u01/app/18.0.0.0/grid/rdbms/mesg/kfodus.msb", O_RDONLY) = 13
fcntl(13, F_SETFD, FD_CLOEXEC) = 0
lseek(13, 0, SEEK_SET) = 0
read(13, "\25\23\"\1\23\3\t\t\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"…, 280) = 280
lseek(13, 512, SEEK_SET) = 512
read(13, "\352\3\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"…, 512) = 512
lseek(13, 1024, SEEK_SET) = 1024
read(13, ".\1=\1E\1M\1X\1\352\3\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"…, 512) = 512
lseek(13, 1536, SEEK_SET) = 1536
read(13, "\n\0d\0\0\0D\0e\0\1\0e\0f\0\1\0\230\0g\0\1\0\306\0h\0\2\0\325\0"…, 512) = 512
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 3), …}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f43f85f2000
write(1, "KFOD-00300: OCI error [-1] [OCI "…, 78KFOD-00300: OCI error [-1] [OCI error] [Could not fetch details] [-132605848]
) = 78

The “Permission denied” line just before the kfod error caught my attention. When I checked, the oracle user indeed wasn’t able to read the file. The permissions looked like this:

[root@exadb01 dbs]# ls -ltr
total 20
-rw-r--r-- 1 oragrid oinstall 3079 May 14 2015 init.ora
-rw-r--r-- 1 oragrid oinstall 587 Dec 12 15:33 initbackuppfile.ora
-rw-rw---- 1 oragrid asmadmin 1656 Dec 20 14:26 ab_+ASM1.dat
-rw-rw---- 1 oragrid oinstall 1544 Dec 20 14:26 hc_+APX1.dat
-rw-rw---- 1 oragrid oinstall 1544 Dec 21 16:57 hc_+ASM1.dat
[root@exadb01 dbs]#

Whereas on node2 they were like:

[oracle@exadb02 dbs]$ ls -ltr 
total 16
-rwxrwxrwx 1 oragrid oinstall 3079 Dec 12 14:52 init.ora
-rwxrwxrwx 1 oragrid oinstall 1544 Dec 21 16:57 hc_+ASM2.dat
-rw-rw---- 1 oragrid oinstall 1720 Dec 21 16:57 ab_+ASM2.dat
-rwxrwxrwx 1 oragrid oinstall 1544 Dec 21 16:57 hc_+APX2.dat
[oracle@exadb02 dbs]$

Since the oracle user isn’t a member of the asmadmin group, it is not able to read the mentioned file. Changing the owner to oracle:oinstall fixed the issue.
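
In other words, the fix came down to a one-liner, run as root on the affected node (path from the listing above):

[root@exadb01 dbs]# chown oracle:oinstall /u01/app/18.0.0.0/grid/dbs/ab_+ASM1.dat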


New web based OEDA for Exadata

Wed, 2018-11-21 03:17

It started with an xls sheet (that was called the dbm configurator). Then OEDA (Oracle Exadata Deployment Assistant), a Java based GUI tool for entering all the information needed to configure an Exadata machine, was introduced. Now, with the latest patch released in October, OEDA has changed again to become a web based tool. It is deployed on WebLogic and comes with some new features as well. SuperCluster deployments will continue to use the Java based OEDA tool. The new interface has support for Exadata, ZDLRA and ExaCC. It is backward compatible and can import the XMLs generated by older versions of OEDA. Some of the new features include the ability to configure single instance homes, create more than 2 diskgroups, create more than one database home and database, allow ILOMs to have a different subnet etc.

To configure the OEDA application, you need to unzip the contents and run the installWls script with the -p switch (which specifies the port). It will deploy the application on WebLogic and give you the URL to access OEDA. The interface is similar to the older version, just that it runs in a browser and there are some new features added. MOS note 2460104.1 and the Exadata documentation have more details:

Using Oracle Exadata Deployment Assistant
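
The install itself is only a couple of commands; a sketch, where the archive name, target directory and port are illustrative (check the README shipped with the patch for the exact script name):

$ unzip -q oeda.zip -d /u01/oeda
$ cd /u01/oeda
$ ./installWls.sh -p 7001    # -p sets the port the OEDA web application listens on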



Garbled display while running FMW installer on Linux

Sat, 2017-11-18 04:56

A colleague faced this while running the FMW installer on a Linux machine. The installer window came up garbled, with unreadable text.

This thread gave a clue that it could have something to do with fonts. So I checked which font related packages were installed:

[root@someserver ~]# rpm -aq |grep -i font
stix-fonts-1.1.0-5.el7.noarch
xorg-x11-font-utils-7.5-20.el7.x86_64
xorg-x11-fonts-cyrillic-7.5-9.el7.noarch
xorg-x11-fonts-ISO8859-1-75dpi-7.5-9.el7.noarch
xorg-x11-fonts-ISO8859-9-100dpi-7.5-9.el7.noarch
xorg-x11-fonts-ISO8859-9-75dpi-7.5-9.el7.noarch
libXfont-1.5.2-1.el7.x86_64
xorg-x11-fonts-ISO8859-14-100dpi-7.5-9.el7.noarch
xorg-x11-fonts-ISO8859-1-100dpi-7.5-9.el7.noarch
xorg-x11-fonts-75dpi-7.5-9.el7.noarch
xorg-x11-fonts-ISO8859-2-100dpi-7.5-9.el7.noarch
libfontenc-1.1.3-3.el7.x86_64
xorg-x11-fonts-ethiopic-7.5-9.el7.noarch
xorg-x11-fonts-100dpi-7.5-9.el7.noarch
xorg-x11-fonts-misc-7.5-9.el7.noarch
fontpackages-filesystem-1.44-8.el7.noarch
fontconfig-2.10.95-11.el7.x86_64
xorg-x11-fonts-ISO8859-2-75dpi-7.5-9.el7.noarch
xorg-x11-fonts-ISO8859-14-75dpi-7.5-9.el7.noarch
xorg-x11-fonts-Type1-7.5-9.el7.noarch
xorg-x11-fonts-ISO8859-15-75dpi-7.5-9.el7.noarch
[root@someserver ~]#

stix-fonts looked suspicious to me. So I removed that with rpm -e stix-fonts.

That actually fixed the issue. After this, the installer window displayed fine.


root.sh fails with CRS-2101:The OLR was formatted using version 3

Sat, 2017-11-18 04:33

Got this while trying to install 11.2.0.4 RAC on Red Hat Linux 7.2. root.sh fails with a message like:

ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2017-11-09 15:43:37.883:
[client(37246)]CRS-2101:The OLR was formatted using version 3.

This is bug 18370031. The patch needs to be applied before running root.sh.
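
Applying it means patching the freshly installed (but not yet configured) grid home before root.sh; a sketch, with illustrative paths and the patch unzipped to a hypothetical /tmp/18370031:

# as the grid software owner, after the software-only install and before root.sh
/u01/app/11.2.0.4/grid/OPatch/opatch napply -oh /u01/app/11.2.0.4/grid -local /tmp/18370031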


Presenting at Cloud day event of North India Chapter of AIOUG

Mon, 2017-11-06 05:47

I will be presenting a session titled “An 18 pointers guide to setting up an Exadata machine” at the Cloud Day being organized by the North India chapter of AIOUG. Vivek Sharma is doing multiple sessions on various cloud and performance related topics. You can register for the event here:

https://www.meraevents.com/event/aioug-nic-cloud-day 



ksplice kernel updates and Exadata patching

Sun, 2017-11-05 11:32

If you have installed a one-off ksplice fix for the kernel on Exadata, remember to uninstall it before you do a kernel upgrade, e.g. regular Exadata patching. Such fixes are kernel version specific, so they may not work with the newer version of the kernel.
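
Assuming the fix was delivered as an RPM (the usual packaging for such one-offs; the package name below is purely illustrative), the check and removal look roughly like this:

# rpm -qa | grep -i ksplice        # list any installed ksplice fix packages
# rpm -e ksplice-fix-<version>     # remove the one-off before the kernel upgrade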


ORA-15040 ORA-15042 with EXTERNAL redundancy Diskgroup

Fri, 2017-11-03 12:57

A colleague was working on an ASM issue (a standalone setup, version 11.2.0.3 on AIX) at one of the customer sites. Later on, I also joined him. The issue was that the customer had added a few new disks to an existing diskgroup. Everything went well and the rebalance kicked in. After some time, something happened and all of a sudden the diskgroup was dismounted. Trying to mount the diskgroup again was giving:

ORA-15032: not all alterations performed
ORA-15040: diskgroup is incomplete
ORA-15042: ASM disk "27" is missing from group number "2"

Here is the relevant text from the ASM alert log

ORA-27063: number of bytes read/written is incorrect
IBM AIX RISC System/6000 Error: 19: No such device
Additional information: -1
Additional information: 1048576
WARNING: Write Failed. group:2 disk:27 AU:1005 offset:0 size:1048576
Fri Nov 03 10:55:27 2017
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_dbw0_58983380.trc:
ORA-27063: number of bytes read/written is incorrect
IBM AIX RISC System/6000 Error: 19: No such device
Additional information: -1
Additional information: 4096
WARNING: Write Failed. group:2 disk:27 AU:0 offset:16384 size:4096
NOTE: cache initiating offline of disk 27 group DATADG
NOTE: process _dbw0_+asm1 (58983380) initiating offline of disk 27.3928481273 (DISK_01) with mask 0x7e in group 2
Fri Nov 03 10:55:27 2017
WARNING: Disk 27 (DISK_01) in group 2 mode 0x7f is now being offlined
WARNING: Disk 27 (DISK_01) in group 2 in mode 0x7f is now being taken offline on ASM inst 1
NOTE: initiating PST update: grp = 2, dsk = 27/0xea27ddf9, mask = 0x6a, op = clear
ERROR: failed to copy file +DATADG.263, extent 1952
GMON updating disk modes for group 2 at 36 for pid 9, osid 58983380
ERROR: Disk 27 cannot be offlined, since diskgroup has external redundancy.
ERROR: too many offline disks in PST (grp 2)
ERROR: ORA-15080 thrown in ARB0 for group number 2
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_57672234.trc:
ORA-15080: synchronous I/O operation to a disk failed
Fri Nov 03 10:55:27 2017
NOTE: stopping process ARB0
WARNING: Disk 27 (DISK_01) in group 2 mode 0x7f offline is being aborted
WARNING: Offline of disk 27 (DISK_01) in group 2 and mode 0x7f failed on ASM inst 1
NOTE: halting all I/Os to diskgroup 2 (DATADG)
Fri Nov 03 10:55:28 2017
NOTE: cache dismounting (not clean) group 2/0xDEB72D47 (DATADG)
NOTE: messaging CKPT to quiesce pins Unix process pid: 62128816, image: oracle@tiiproddb1.murugappa.co.in (B000)
NOTE: dbwr not being msg'd to dismount
Fri Nov 03 10:55:28 2017
NOTE: LGWR doing non-clean dismount of group 2 (DATADG)
NOTE: LGWR sync ABA=124.7138 last written ABA 124.7138
NOTE: cache dismounted group 2/0xDEB72D47 (DATADG)
SQL> alter diskgroup DATADG dismount force /* ASM SERVER */ 

At this stage disk 27 was not readable even with dd, so something was wrong with the disk itself. Since it is an external redundancy diskgroup, not much can be done until the disk becomes available.

Speaking to the storage team cleared the air. One, the disk had gone offline at the storage level, which is why even dd was not able to read it. Two, all these disks were thin provisioned (over provisioning of the storage space to improve utilization; similar to over provisioning of CPU cores in the virtualization world) from the storage. This particular disk 27 was meant for some other purpose but got wrongly allocated to this diskgroup. The actual space available in the pool (of this disk) was less than what was needed. The moment the disks were added to the diskgroup, the rebalance kicked in and ASM started writing data to the disk. Within a few minutes the space became full and the storage software took the disk offline. Since ASM couldn’t write to the disk, the diskgroup was dismounted.

Fortunately, in the same pool, there was another disk that was still unused. So the storage guy dropped that disk, which freed up some space in the pool, and brought disk 27 online after that. The diskgroup got mounted and the rebalance kicked in again. Finally, we dropped this disk 27 and the rebalance started once more. Once the rebalance completed, the disk was free to be taken offline.
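
That last drop is a plain ASM operation; a sketch, with the diskgroup and disk names from the log above and an arbitrary rebalance power:

SQL> alter diskgroup DATADG drop disk DISK_01 rebalance power 4;
SQL> select operation, state, est_minutes from v$asm_operation;   -- wait until the rebalance finishes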



TNS-12543: TNS:destination host unreachable

Fri, 2017-07-14 23:53

Scenario: setting up a physical standby from Exadata to a non-Exadata single instance. tnsping from the standby to the primary works fine, but tnsping from the primary to the standby fails with:

TNS-12543: TNS:destination host unreachable

I was able to ssh to the standby from the primary and could ping it as well, but tnsping didn’t work. From the error description we can figure out that something is blocking the access. In this case it was iptables, which was enabled on the standby server.

Stopping the service resolved the issue.

service iptables stop
chkconfig iptables off
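
If the firewall has to stay enabled, an alternative would be to open just the listener port instead of turning iptables off altogether; a sketch, assuming the default port 1521:

iptables -I INPUT -p tcp --dport 1521 -j ACCEPT
service iptables save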

The error is an obvious one, but sometimes it just doesn’t strike you that it could be something as simple as that.


ORA-12154 in Data Guard environment

Wed, 2017-05-31 10:54

Hit this silly issue in one of the Data Guard environments today. The primary is a 2 node RAC running 11.2.0.4 and the standby is also a 2 node RAC. Archive logs from node2 weren’t shipping and the error being reported was:

ORA-12154: TNS:could not resolve the connect identifier specified

We tried the usual things like going to $TNS_ADMIN, checking the entry in tnsnames.ora and also trying to connect using sqlplus sys@target as sysdba. Everything seemed to be good, but the logs were not shipping and the same problem was being reported repeatedly. As everything on node1 was working fine, it looked even more weird.

From the error it is clear that the issue is with the tnsnames entry. I finally found the problem after some 30 minutes. It was an Oracle EBS environment, so TNS_ADMIN was set to the standard $ORACLE_HOME/network/admin/*hostname* path (on both the nodes). On node1 there was no tnsnames.ora file in $ORACLE_HOME/network/admin, so it was connecting to the standby using the Apps tnsnames.ora, which had the correct entry for the standby. On node2 there was a file called tnsnames.ora in $ORACLE_HOME/network/admin but it had no entry for the standby. It was trying to connect using that file (the default tns path) and failing with ORA-12154. Once we removed that file, it started using the Apps tnsnames.ora and the logs started shipping.
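
In hindsight, a quick way to see which tnsnames.ora a client actually reads is to strace it; a sketch (the connect identifier is illustrative):

strace -f -e trace=open sqlplus "sys@standby as sysdba" 2>&1 | grep tnsnames.ora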


Failed to create voting files on disk group RECOC1

Fri, 2017-04-28 04:01

Long story short, I faced this issue while running OneCommand for one Exadata system. The root.sh step (Initialize Cluster Software) was failing with the following error on the screen:

Checking file root_dm01dbadm02.in.oracle.com_2017-04-27_18-13-27.log on node dm01dbadm02.somedomain.com
Error: Error running root scripts, please investigate…
Collecting diagnostics…
Errors occurred. Send /u01/onecommand/linux-x64/WorkDir/Diag-170427_181710.zip to Oracle to receive assistance.

That doesn’t make much sense, so let us check the log file of this step:

2017-04-27 18:17:10,463 [INFO][  OCMDThread][        ClusterUtils:413] Checking file root_dm01dbadm02.somedomain.com_2017-04-27_18-13-27.log on node inx321dbadm02.somedomain.com
2017-04-27 18:17:10,464 [INFO][  OCMDThread][        OcmdException:62] Error: Error running root scripts, please investigate…
2017-04-27 18:17:10,464 [FINE][  OCMDThread][        OcmdException:63] Throwing OcmdException… message:Error running root scripts, please investigate…

So we need to go to the root.sh log file now. That shows:

Failed to create voting files on disk group RECOC1.
Change to configuration failed, but was successfully rolled back.
CRS-4000: Command Replace failed, or completed with errors.
Voting file add failed
2017/04/27 18:16:37 CLSRSC-261: Failed to add voting disks

Died at /u01/app/12.1.0.2/grid/crs/install/crsinstall.pm line 2068.
The command ‘/u01/app/12.1.0.2/grid/perl/bin/perl -I/u01/app/12.1.0.2/grid/perl/lib -I/u01/app/12.1.0.2/grid/crs/install /u01/app/12.1.0.2/grid/crs/install/rootcrs.pl’ execution failed

That makes some sense, but we still can’t tell what happened while creating the voting files on RECOC1. Let us check the ASM alert log also:

NOTE: Creating voting files in diskgroup RECOC1
Thu Apr 27 18:16:36 2017
NOTE: Voting File refresh pending for group 1/0x39368071 (RECOC1)
Thu Apr 27 18:16:36 2017
NOTE: Attempting voting file creation in diskgroup RECOC1
NOTE: voting file allocation (replicated) on grp 1 disk RECOC1_CD_00_DM01CELADM01
NOTE: voting file allocation on grp 1 disk RECOC1_CD_00_DM01CELADM01
NOTE: voting file allocation (replicated) on grp 1 disk RECOC1_CD_00_DM01CELADM02
NOTE: voting file allocation on grp 1 disk RECOC1_CD_00_DM01CELADM02
NOTE: voting file allocation (replicated) on grp 1 disk RECOC1_CD_00_DM01CELADM03
NOTE: voting file allocation on grp 1 disk RECOC1_CD_00_DM01CELADM03
ERROR: Voting file allocation failed for group RECOC1
Thu Apr 27 18:16:36 2017
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_ora_228588.trc:
ORA-15274: Not enough failgroups (5) to create voting files

So we can see the issue here. We can look at the above trace file also for more detail.

Now, why did this happen?

RECOC1 is a HIGH redundancy disk group, which means that if we want to place voting files there, it should have at least 5 failure groups. In this configuration there are only 3 cells, and that doesn’t meet the minimum failure groups condition (1 cell = 1 failgroup in Exadata).
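
You can check the failure group count yourself from the ASM instance; a small sketch:

SQL> select g.name, count(distinct d.failgroup) failgroups
  2  from v$asm_diskgroup g, v$asm_disk d
  3  where d.group_number = g.group_number
  4  group by g.name;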

Now, how did it happen?

This one was an Exadata X3 half rack and we planned to deploy it (for testing purposes) as 2 quarter racks: the 1st cluster with db1, db2 + cell1, cell2, cell3 and the 2nd cluster with db3, db4 + cell4, cell5, cell6, cell7. All the disk groups were to be in High redundancy.

Before a certain 12.x Exadata software version it was not even possible to have all disk groups in High redundancy in a quarter rack, because to have the voting disk in a High redundancy disk group you need a minimum of 5 failure groups (as mentioned above), and in a quarter rack you have only 3 failgroups. With a certain 12.x Exadata software version a new feature, quorum disks, was introduced, which made this configuration possible. Read this link for more details. Basically, we take a slice of disk from each DB node and add it to the disk group where we want to have the voting file. 3 cells + 2 disks from the DB nodes makes 5, so all is good.

Now, while starting with the deployment we noticed that db node1 had some hardware issues. As we needed the machine for testing, we decided to build the first cluster with 1 db node only. So the final configuration of the 1st cluster had 1 db node + 3 cells. We imported the XML back into OEDA, modified the cluster 1 configuration to 1 db node and generated the configuration files. That is where the problem started. The RECO disk group was still High redundancy, but as we had only 1 db node at this stage, the configuration was not even a candidate for quorum disks. Hence the above error. Changing DBFS_DG to Normal redundancy fixed the issue, as when DBFS_DG is Normal redundancy, OneCommand will place the voting files there.

Ideally this shouldn’t have happened, as OEDA shouldn’t allow a configuration that is not doable. The case here is that originally the configuration had 2 db nodes + 3 cells, so High redundancy for all disk groups was allowed in OEDA. When one db node was removed from the cluster while modifying the configuration, OEDA probably didn’t re-run the redundancy check on the disk groups and allowed us to go past that screen. If you try to create a new configuration with 1 db node + 3 cells, it will not allow you to choose High redundancy for all disk groups. DBFS will remain in Normal redundancy; you can’t change that.


OneCommand Step 1 error

Mon, 2017-04-10 11:50

Hit this silly issue while doing an Exadata deployment for a customer. Step 1 was giving the following error:

ERROR: 192.168.99.102 configured on dm01celadm01.example.com as dm01dbadm02 does not match expected value dm01dbadm02.example.com

I wasn’t able to make sense of it for quite some time, until a colleague pointed out that the reverse lookup entries should be done for the FQDN only. As is clear from the above message, a reverse lookup of the IP 192.168.99.102 returns dm01dbadm02 instead of dm01dbadm02.example.com. Fixing this in DNS resolved the issue.

Actually, the customer had created reverse lookup entries for both the short hostname and the FQDN. As DNS can return the results in any order, the error message was a bit random: whenever the short hostname was returned first, Step 1 gave an error for that IP, but when the FQDN was returned first, there was no error.
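
A quick way to verify what a reverse lookup returns, using the IP from the error above; after the fix, only the FQDN should come back:

$ dig -x 192.168.99.102 +short
dm01dbadm02.example.com.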


Oracle RAC 12.1 – lsnodes exited with code 9

Tue, 2017-03-28 11:31

I was trying to do a 2 node RAC setup on Solaris 11.3 where Oracle Solaris Cluster 4.3 was already configured. The installer was running, but the Cluster Node Information screen was coming up without listing the cluster nodes.

The install log shows this:

INFO: Checking cluster configuration details

INFO: Found Vendor Clusterware. Fetching Cluster Configuration

INFO: Executing [/tmp/OraInstall2017-03-28_12-50-48PM/ext/bin/lsnodes]

with environment variables {TERM=xterm, LC_COLLATE=, SHLVL=3, JAVA_HOME=, XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt, SSH_CLIENT=172.16.64.55 56370 22, LC_NUMERIC=, LC_MESSAGES=, MAIL=/var/mail/oracle, PWD=/export/software/grid/grid, XTERM_VERSION=XTerm(320), WINDOWID=2097165, LOGNAME=oracle, _=*50727*/export/software/grid/grid/install/.oui, NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat, SSH_CONNECTION=172.16.64.55 56370 172.16.72.18 22, OLDPWD=/export/oracle, LC_CTYPE=, CLASSPATH=, PATH=/usr/bin:/usr/ccs/bin:/usr/bin:/bin:/export/software/grid/grid/install, LC_ALL=, DISPLAY=localhost:10.0, LC_MONETARY=, USER=oracle, HOME=/export/oracle, XTERM_SHELL=/bin/bash, XAUTHORITY=/tmp/ssh-xauth-mlq21a/xauthfile, A__z=”*SHLVL, XTERM_LOCALE=en_US.UTF-8, TZ=localtime, LC_TIME=, LANG=en_US.UTF-8}

INFO: Starting Output Reader Threads for process /tmp/OraInstall2017-03-28_12-50-48PM/ext/bin/lsnodes

INFO: The process /tmp/OraInstall2017-03-28_12-50-48PM/ext/bin/lsnodes exited with code 9

So we can see the problem: lsnodes is not able to list the nodes. Let us try to run that command manually.

-bash-4.1$ export PATH=/usr/bin:/usr/ccs/bin:/usr/bin:/bin:/export/software/grid/grid/install

-bash-4.1$ /tmp/OraInstall2017-03-28_12-50-48PM/ext/bin/lsnodes

ld.so.1: lsnodes: fatal: libskgxn2.so: open failed: No such file or directory

Killed

-bash-4.1$

So it looks like it is not able to find a library called libskgxn2.so. If we do a find for this file name, we can see that it is present in the directory /usr/cluster/lib/sparcv9.

Some googling and MOS searches revealed that it expects the library to be present at /opt/ORCLcluster/lib, and this directory doesn’t exist here. As a workaround, we can create the directory manually and create a symbolic link to libskgxn2.so.
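
Something along these lines, run as root, with the paths found above:

# mkdir -p /opt/ORCLcluster/lib
# ln -s /usr/cluster/lib/sparcv9/libskgxn2.so /opt/ORCLcluster/lib/libskgxn2.so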

The lsnodes command worked fine after this workaround, and the installer also showed both the nodes listed.


ERROR: SPFile in diskgroup does not match the specified spfile

Tue, 2016-09-20 11:42

Just a stupid error. Posting it so that someone else googling for the same thing can get a clue.

An ASM instance was running with default parameters (no pfile, no spfile). I updated the spfile for the instance with the asmcmd spset command and bounced crs. Even after the reboot, it still wasn’t using the spfile. I got puzzled and checked the GPnP settings again. All looked good. Then in the alert log I came across this:

ERROR: SPFile in diskgroup <> does not match the specified spfile +DATA/asm/asmparameterfile/registry.253.769187275

The problem was that while copying the spfile path, the complete name didn’t get copied; the last character got missed. So the file name it was looking for wasn’t there. Updating GPnP with the correct file name and bouncing crs resolved the issue.
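
asmcmd itself can show and set the registered path, which makes such a typo easy to spot; the path below is the one from the alert log:

$ asmcmd spget     # shows the spfile path currently registered in the GPnP profile
$ asmcmd spset +DATA/asm/asmparameterfile/registry.253.769187275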


addNode.sh, failed root.sh and IB listener

Tue, 2016-09-13 09:14

So this customer has an Exadata quarter rack and they have an IB listener configured on both DB nodes (for DB connections from a multi-racked Exalogic system). We were adding a new DB node to this rack, so we just followed the standard procedure of creating users, directories etc on the new node, setting up ssh equivalence and running addNode.sh. All went fine, but root.sh failed. A little looking into the logs revealed that it failed while running srvctl start listener -n <node_name>.

If we run this command manually, it immediately reveals what the problem is: it is not able to start the IB listener on the new node as the IB VIP doesn’t exist yet. This could happen for any of the additional networks added.

There is a MOS note that describes this exact situation, but the solution it gives is to remove the additional listener, complete addNode.sh & root.sh, and add the additional listener back. That wasn’t possible in this case. After a little bit of googling I stumbled upon this post by Jeremy Schneider. His colleague solved this problem with a very simple and clever workaround: while root.sh is still running, before it gets to srvctl start listener, run the add VIP command from another window. The additional network will have already been added by the time root.sh runs on the new node.

To be able to perform this trick, you have to have the hosts file updated with the new VIP name and IP, and be ready with the command to add the VIP. While root.sh is running, it will show a message like “there is already an active cluster, restarting to join”; at that point, immediately start trying to run the srvctl add vip command in the other window. The moment CRS comes up, the command will succeed. Immediately after that, root.sh is going to run the srvctl start listener command, and this time it shouldn’t fail as the VIP is already added.
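
The add VIP command itself looks roughly like this; the node name, VIP name, netmask, interface and network number are all placeholders (srvctl config network shows the right network number):

srvctl add vip -n newnode -A ibvip-newnode/255.255.255.0/bondib0 -k 2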

Another small mistake we made was not updating cellip.ora on the new node before running root.sh. That caused root.sh to fail as it couldn’t talk to ASM running on the existing cell nodes. Updating cellip.ora with the existing storage node IPs fixed the problem.


OEDA–Things to keep an eye on

Thu, 2016-09-08 05:14

So if you are filling in an OEDA for an Exadata deployment, there are a few things you should take care of. Most of the screens are self explanatory, but there are some bits where one should focus a little more. I am running the Aug version of it, and the notes below are based on that version.

  1. On the Define customer networks screen, the client network is the actual network where your data is going to flow. So typically it is going to be bonded (for high availability), and depending upon the network in your data center you have to select one out of 1/10 G copper and 10 G optical.

  2. If you are going to use trunk VLANs for your client network, remember to enable them by clicking the Advanced button and then entering the relevant VLAN id. Also, if it is going to be an OVM configuration, you may want to have different VMs in different VLAN segments; OEDA allows you to change the VLAN ids for individual VMs on the respective cluster screens.

  3. If all the cores aren’t licensed, remember to enable Capacity on Demand (COD) on the Identify Compute node OS screen.

  4. On the Define clusters screen, make sure that you enter a cluster name that is unique across your environment.

  5. The cluster details screen captures some of the most important details, like:
    1. Whether you want to have flash cache in WriteBack mode instead of WriteThrough.
    2. Whether you want a role separated install or want to install both GI and Oracle binaries with the oracle user itself.
    3. GI & Database versions and the homes for the binaries. It is always good to leave these at the Oracle recommended values as that makes future maintenance easy and less painful.
    4. Disk group names, redundancy and the space allocation.
    5. Default database name and type (OLTP or DW).

Of course it is important to carefully fill in the information on all the screens, but the ones above should be filled in especially carefully, after capturing the required information from other teams where needed.


ORA-56841: Master Diskmon cannot connect to a CELL

Thu, 2016-05-19 11:45

Faced this error while querying v$asm_disk after adding new storage cell IPs to cellip.ora on the DB nodes of an existing cluster on Exadata. The query ends with ORA-03113 end-of-file on communication channel, and ORA-56841 is reported in $ORA_CRS_HOME/log/<hostname>/diskmon/diskmon.log. The reason in my case was that the new cell was using a different subnet for IB. It was pingable from the db nodes, but querying v$asm_disk wasn’t working. Changing the IB subnet on the new cell to the one used by the existing cells fixed the issue.
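
For context, cellip.ora on a db node simply lists the IB addresses of the storage cells it should talk to, one cell per line; a sketch with illustrative addresses (all cells must be reachable on the same IB subnet):

cell="192.168.10.3"
cell="192.168.10.4"
cell="192.168.10.5"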


