Feed aggregator

Introducing the Solution Beacon Release 12 Webinar Series

Solution Beacon - Thu, 2007-07-19 17:42
We're pleased to announce our first Release 12 Webinar Series! These live webinars run from 30 to 60 minutes and are intended to introduce people to the new Oracle E-Business Suite Release 12. Topics include a Technical Introduction for Newcomers, Security Recommendations, and reviews of the new features in the apps modules, so whether your interest is functional or technical you're…

Top 10 areas to address before taking Oracle BPEL Process Manager 10.1.3 to a Production Implementation

Arvind Jain - Mon, 2007-07-16 17:33
Here is a summary of the article I am writing on how to adopt BPEL PM in a production environment. It is based on the 10.1.3 release of BPEL PM. If you need specific details, please drop me a line.

Top 10 areas to address before taking Oracle BPEL Process Manager 10.1.3 to a Production Implementation
Arvind Jain
5th July 2007

1) Version Management (Design Time)
When choosing a source-safe or version control system for business processes, the considerations are quite different from those for Java or C++ code components. The average user or designer of business processes is not code savvy and cannot be expected to manually merge code (*.bpel or *.wsdl files, for example). BPEL PM lacks design-time version management of business processes in the JDeveloper IDE. What is needed is a process-based development and merge environment with visibility into the process repository, so the requirements differ from those of a component-based repository. Consider using a good BPMN / BPA tool.

2) Version Governance (Run Time)
While BPEL PM can maintain a version number for deployed BPEL processes, it is still left to an administrator or a business analyst to decide which process version will be active at a given point in time and what the naming and versioning standards will be. Since every deployed BPEL process is a service, it becomes critical to apply SOA governance methodology to control the various deployed and running BPEL processes.

3) SOAP over JMS (over SSL)
Most big corporations and multinationals have policies that restrict HTTP traffic from the outside world into the intranet. Moreover, they have policies that require the use of a messaging layer or an ESB as a service intermediary for persistence, logging, security and compliance reasons. BPEL PM support for bidirectional SSL-enabled JMS communication is not available out of the box. It needs to be tried and tested within your organization, and workarounds need to be implemented.

4) Authentication & Authorization - Integration with LDAP / Active Directory
SOA governance requires authentication and authorization for service access based on a corporate repository and the roles defined within it. This is also critical for BPEL Human Workflow (HWF). Make sure to do a small pilot / POC of the integration with your corporate identity repository before taking BPEL PM to production.

5) Integration with Rules Engine
BPEL should be used for orchestration only, not for coding programming logic or hard-coded rules, hence the importance of a separate rules engine. Many rules engines on the market support Java facts, and the BPEL engine, being a Java engine, should integrate with these out of the box. Some rules engines, however, can take only XML facts, so there is the overhead of marshalling from Java to XML to use the facts and then back to Java. Make sure you have sorted out the rules engine integration prior to a BPEL production implementation.
6) Implementation Architecture
BPEL processes and projects can and will expand to occupy all available resources within your organization. These business processes are highly visible within a company and have strict SLAs to meet. Make sure you have a proven and tested reference architecture for clustering, high availability and disaster recovery. There have to be a provisioning process, a deployment process and a process lifecycle governance methodology in place before you can fire up all engines in a production environment.
7) Throughput Consideration
BPEL PM is by nature an interpretation engine, so there is a performance hit when running long-running processes and doing heavy transformations. Plan on doing some stress and load testing on the servers running your business processes to get a ballpark estimate of the end-to-end processing time and how much load the BPEL server can take. Specifically, do capacity planning based on the results of these pilot load and stress tests.

8) Design of BPEL Process (Payload Size, BPEL Variables - Pass by Reference or by Value)
Designing a business process is more of an art than a science, and the same holds for BPEL business processes. It is important to understand what the best practices will be in your organization in terms of payload size, the length of various processes and how they are orchestrated. Are you passing around big XML payloads, which could be avoided by changing the process and passing by reference instead? Would that also make your process more efficient and create true business services from these processes? Give these questions consideration and spend some whiteboarding sessions with business and IT analysts before creating a BPEL process.
9) Schema Consideration - Canonical Data Model & Minimal Transformations
The most cost- and resource-intensive step in any integration or process orchestration scenario is transformation. In an orchestration engine like BPEL PM especially, the XML payload goes through multiple massaging steps. If you can design your process flow with a minimum of these steps, it will improve the performance of the business process end to end. It is also a best practice to have an enterprise-wide canonical data model derived from an industry-wide standard like OASIS, RosettaNet, ebXML, etc.
10) Administration - Multiple BPEL Console, Central HWF Server, Customized UI or use existing UI?
BPEL PM is easy to use and makes process orchestration almost a zero-coding activity. It is also pretty easy to learn, so once the floodgates are opened there is suddenly a bunch of BPEL processes deployed and a bunch of BPEL developers in the enterprise.

For an enterprise-scale deployment it is critical to figure out ways to provision BPEL server instances and to give the relevant developers selective access to the BPEL Console. The BPEL Console is a powerful tool, and there is not much role-based security functionality in it beyond the concept of domains. The options are to create your own administration / console UI using the BPEL Server APIs or to have a BPEL administrator take care of such requests.
BPEL PM comes with a built-in Human Workflow (HWF) server, but in an enterprise you might want a centralized HWF server. All of this needs to be given thought before putting BPEL PM into a production environment.

10 @ Sun

Siva Doe - Mon, 2007-07-16 15:31

The title should say '10 @ Sun; 15 w. Sun'. When I joined Larsen and Toubro (L&T) in 1992, little did I know that those pizza boxes named SparcStation1 and Sun 3/xx (Motorola CPUs??) were made by a company that I was going to work for in about 4 years' time. It was fun playing with the SS1, writing PostScript programs that drew directly on the root window. The 3 series was running SunView (I am sure quite a few would remember this GUI). My impression is that it was as fast and responsive as my Ultra 20 running GNOME is now ;)
It has been a roller coaster ride with Sun. I have had moments of extreme happiness (probably the news that Sun stock was doing $120+) and also the complete opposite. I have been with Sun IT doing application development, later doing system administration with ITOps, and am now with the engineering teams.
I greatly admire Sun as a company and can't think of working for anyone else. I am afraid I would be too biased to work anywhere else. The freedom that you get here is awesome; one has to work at Sun to believe and feel it. I am proud to be part of Sun's efforts, with open source in particular.
I hope I will be around to write '15 @ Sun' and '20 @ Sun'. Thanks to all my colleagues who have been making my life at Sun a great one. Thank you Sun.

ATG Rollup 4 and my Custom schema

Fadi Hasweh - Mon, 2007-07-16 01:38
After applying the ATG Rollup 4 patch (4676589) on our HP-UX server successfully, we started to receive the following error, but only on our customized schema, not on the standard schemas.
The error appeared whenever we tried to run any procedure from this customized schema, even though it used to work fine before the patch:
"
ORA-00942: table or view does not existORA-06512: at "APPS.FND_CORE_LOG", line 23ORA-06512: at "APPS.FND_CORE_LOG", line 158ORA-06512: at "APPS.FND_PROFILE", line 2468ORA-06512: at "APPS.XX_PACKAGE_PA", line 682ORA-06512: at line 4
"

After checking on Metalink we got a hint from note 370000.1. The note does not apply to this exact case, but it did give us the hint, and the solution was as follows:

connect as APPLSYS
GRANT SELECT ON FND_PROFILE_OPTIONS TO SUPPORT;
GRANT SELECT ON FND_PROFILE_OPTION_VALUES TO SUPPORT;


SUPPORT here is our customized schema.
Have an error free day ;-)

fadi

Blogging away!

Menon - Sat, 2007-07-14 18:00
For a long time, I have wanted to create a web site with articles that reflect my thoughts on databases and J2EE. During my 15-odd years in the software industry, I have realized that there is a huge gap between the middle-tier folks in Java and the database folks (or the backend folks). In fact my book - Expert Oracle JDBC Programming - was largely inspired by my desire to fill this gap for Java developers who develop Oracle-based applications. Although most of my industry experience has been in developing Oracle-based applications, during the last 2 years or so I have had the opportunity to work with MySQL and SQL Server databases as well. This has given me a somewhat unique perspective on developing Java applications that use a database (a pretty large spectrum of applications).

This blog will contain my opinions on this largely controversial subject (think database-independence for example), on good practices related to Java/J2EE and database programming (Oracle, MySQL and SQL Server). From time to time, it will also include any other personal ramblings I may choose to add.

Feel free to give comments on any of my posts here.

Enjoy!

Using dbx collector

Fairlie Rego - Sat, 2007-07-14 08:45
It is quite possible to have a single piece of SQL which consumes more and more CPU over time without any increase in logical I/O for the statement or in the amount of hard parsing.

The reason could be extra CPU burned over time in an Oracle source code function which has not been instrumented as a wait in the RDBMS kernel. One way to find out which function in the Oracle source code is the culprit is via the dbx collector feature available in Sun Studio 11. I guess dtrace would also help, but I haven't played with it. This tool can also be used to diagnose increased CPU usage of Oracle tools across different RDBMS versions.

Let us take a simple example of running this tool against a simple insert statement.

SQL> create table foo ( a number);

Table created.

> sqlplus

SQL*Plus: Release 10.2.0.3.0 - Production on Sat Jul 14 23:46:03 2007

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Enter user-name: /

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> set sqlp sess1>>
sess1>>

Session 2
Find the server process servicing the previously spawned sqlplus session and attach to it via the debugger.

> ps -ef | grep sqlplus
oracle 20296 5857 0 23:47:38 pts/1 0:00 grep sqlplus
oracle 17205 23919 0 23:46:03 pts/4 0:00 sqlplus
> ps -ef | grep 17205
oracle 20615 5857 0 23:47:48 pts/1 0:00 grep 17205
oracle 17237 17205 0 23:46:04 ? 0:00 oracleTEST1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle 17205 23919 0 23:46:03 pts/4 0:00 sqlplus

> /opt/SUNWspro/bin/dbx $ORACLE_HOME/bin/oracle 17237

Reading oracle
==> Output trimmed for brevity.

dbx: warning: thread related commands will not be available
dbx: warning: see `help lwp', `help lwps' and `help where'
Attached to process 17237 with 2 LWPs
(l@1) stopped in _read at 0xffffffff7bfa8724
0xffffffff7bfa8724: _read+0x0008: ta 64
(dbx) collector enable


Session 1
==================================================================
begin
for i in 1..1000
loop
insert into foo values(i);
end loop;
end;
/

Session 2
==================================================================

(dbx) cont
Creating experiment database test.3.er ...
Reading libcollector.so

Session 1
==================================================================
PL/SQL procedure successfully completed.

sess1>>exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

Session 2
=========

execution completed, exit code is 0
(dbx) quit

The debugger creates a directory called test.3.er (the numeric suffix increments with each new experiment).
You can analyse the collected data using the Analyzer GUI tool.

> export DISPLAY=10.59.49.9:0.0
> /opt/SUNWspro/bin/analyzer test.3.er



You can also generate a callers-callees report using the following syntax:

/opt/SUNWspro/bin/er_print test.3.er
test.3.er: Experiment has warnings, see header for details
(/opt/SUNWspro/bin/er_print) callers-callees

A before-and-after image of the performance problem would help in diagnosing which function in the code consumes more CPU over time.

Architectural Differences in Linux

Solution Beacon - Fri, 2007-07-13 16:55
In this second edition in the Evaluating Linux series of posts I want to discuss what is both one of the strengths and one of the weaknesses of Linux, namely the architectural differences between it and the traditional UNIX platforms. The relevant architectural differences between Linux and UNIX (AIX, HP-UX, Solaris: take your pick) can be viewed as several broad categories: hardware differences, filesystem…

Oracle Ireland employee # 74 signing off...

Donal Daly - Fri, 2007-07-13 08:29
I will shortly be starting my life outside Oracle after some 15 years there. My last day is today.

I've enjoyed it immensely and am proud of our accomplishments. It really doesn't seem like 15 years, and I have been lucky to work on some very exciting projects with some very clever people, many of whom have become friends. I look forward to hearing about all the new releases coming from Database Tools in the future.

Next it is two weeks holidays in France (I hope the weather gets better!) and then the beginning of my next adventure in a new company. More on that later.

I think I'll continue to blog on database tools topics.

Eclipse JSF Tools Turns 1.0

Omar Tazi - Thu, 2007-07-12 18:54
I would like to congratulate Raghu Srinivasan from Oracle (Eclipse JSF Tools Project Lead) and his team for helping the community produce its first official release of the JSF Tools Project. A couple of weeks ago the Eclipse Foundation announced the Europa release which among other things included Web Tools Platform (WTP) 2.0 of which the JSF Tools Project v1.0 is an important piece.

JSF Tools v1.0 is a key milestone, as it simplifies the development of JavaServer Faces applications in the Eclipse environment. The highlights of this release include performance improvements, a new Web Page Editor, and a graphical editor for building HTML/JSP/JSF web pages. This release is also extensible by design: it comes with an extensibility framework that allows third-party developers to come up with their own enhancements.

This release is yet another milestone in delivering "productivity with choice" to our customers. For more information on other recent activities around Oracle's involvement with Eclipse check out this blog entry.

- Download Eclipse Europa: http://download.eclipse.org/webtools/downloads/drops/R2.0
- Release notes for Eclipse WTP 2.0:
http://www.eclipse.org/webtools/releases/2.0

Can a change in execution plan change the results?

Rob Baillie - Thu, 2007-07-12 08:15
We've been using Oracle Domain indexes for a while now in order to search documents and get back a ranked order of things that meet certain criteria. The documents are related to people, and we augment the basic text search with other filters and score metrics based on the 'people' side of things to get an overall 'suitability' score for the results in a search. Without giving too much away about the business I work with, I can't really tell you much more about the product than that, but it's probably enough of a background for this little gem.

We've known for a while that the domain index 'score' returned from a 'contains' clause is based not only on the document to which that score relates, but also on the rest of the set that is searched. An individual document score does not live in isolation; rather it lives in the context of the whole result set. No problem. As I say, we've known this for a while and so have our customers. Quite a while ago they stopped asking what the numbers mean and learned to trust them.

However, today we realised something. Since the results are affected by the result set that is searched, this means that the results can be affected by the order in which the optimizer decides to execute a query. I can't give you a full end to end example, but I can assure you that the following is most definitely the case on one of our production domain indexes (names changed, obviously).

We have a two column table 'document_index', which contains 'id' and 'document_contents'. Both columns have an index: the ID being the primary key and the other being a domain index. The following SQL gives the related execution path:

SELECT id, SCORE( 1 )
FROM document_index
WHERE CONTAINS( document_contents, :1, 1 ) > 0
AND id = :2

SELECT STATEMENT
  TABLE ACCESS BY INDEX ROWID SCOTT.DOCUMENT_INDEX
    DOMAIN INDEX SCOTT.DOCUMENT_INDEX_IDX01

However, the alternative SQL gives this execution path:

SELECT id, SCORE( 1 )
FROM document_index
WHERE CONTAINS( document_contents, 'Some text', 1 ) > 0
AND id = :2

SELECT STATEMENT
  TABLE ACCESS BY INDEX ROWID SCOTT.DOCUMENT_INDEX
    INDEX UNIQUE SCAN SCOTT.DOCUMENT_INDEX_PK

Normally, this kind of change in execution path wouldn't be a problem. But as stated earlier, the result of a score operation against a domain index is not just dependent on the individual records, but on the context of the whole result set. The first execution gives you a score for the single document in the context of all the documents in the table; the second gives you a score within the context of just that document. The scores are different.

Now obviously, this is an extreme example, but more subtle examples will almost certainly exist if you combine the domain index lookups with any other where clause criteria. This is especially true if you're using literal values instead of bind variables, in which case you may find the execution path changing between calls to the 'same' piece of SQL.

My advice? Well, we're going to split our domain index lookups from all the rest of the filtering criteria. That way we can prepare the set of documents we want the search to be within, and know that the scoring algorithm will be applied consistently.
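As a rough illustration of that split (a sketch with invented names, not our actual code): score the documents in an inline view that is kept separate from the filtering, so the scoring context stays the same whichever plan the optimizer picks for the outer filter.

SELECT /*+ NO_MERGE(scored) */ scored.id, scored.doc_score
FROM ( SELECT id, SCORE( 1 ) AS doc_score
       FROM document_index
       WHERE CONTAINS( document_contents, :search_text, 1 ) > 0 ) scored
WHERE scored.id = :wanted_id;

The NO_MERGE hint is only illustrative; to be certain the filter can never be pushed into the scoring query you may need to materialise the scored set first, for example into a temporary table or collection.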

How to OBFUSCATE passwords and ENCRYPT sensitive fields in BPEL PM?

Arvind Jain - Wed, 2007-07-11 15:19
Here is a small tip on security while using Oracle BPEL Process Manager.

Many times you have to supply passwords and other sensitive information in your BPEL PM project files (*.bpel, *.xml, *.wsdl). How do you ensure that these are not visible as clear text to others who do not have access to the source code? Here is a quick tip on using the XML attribute encryption="encrypt".

Where can this be used?

- to obfuscate password info while accessing a partnerlink that refers to a Web Service secured by Basic Authentication (login/password).

Example:

Suppose you have a partnerlink definition with LOGIN / PASSWORD info as shown below, and you want to obfuscate the password, i.e. you do not want to see the clear text "cco-pass".

(sample)
<partnerLinkBinding name="PartnerProfileService">
<property name="wsdlLocation">PartnerProfileWSRef.wsdl</property>
<property name="basicUsername">cco-userid</property>
<property name="basicPassword">cco-pass</property>
<property name="basicHeaders">credentials</property>
</partnerLinkBinding>

Add the attribute encryption="encrypt" to sensitive fields; this will cause the value to be encrypted at deployment. So the new XML will look like:


(sample)
<partnerLinkBinding name="PartnerProfileService">
<property name="wsdlLocation">PartnerProfileWSRef.wsdl</property>
<property name="basicUsername">cco-userid</property>
<property name="basicPassword" encryption="encrypt">cco-pass</property>
<property name="basicHeaders">credentials</property>
</partnerLinkBinding>


Then deploy your process and the password will be encrypted.
Have fun encrypting things !!

Backing Up and Recovering Voting Disks

Pankaj Chandiramani - Tue, 2007-07-10 21:31

Backing Up and Recovering Voting Disks

What is a voting disk & why is it needed?
The voting disk records node membership information. A node must be able to access more than half of the voting disks at any time.

For example, if you have seven voting disks configured, then a node must
be able to access at least four of the voting disks at any time. If a
node cannot access the minimum required number of voting disks it is evicted, or removed, from the cluster.

Backing Up Voting Disks

When should you back up the voting disk?

  1. After installation
  2. After adding nodes to or deleting nodes from the cluster
  3. After performing voting disk add or delete operations

To make a backup copy of a voting disk, use the Linux dd command. Perform this operation on every voting disk as needed, where voting_disk_name is the name of the active voting disk and backup_file_name is the name of the file to which you want to back up the voting disk contents:
dd if=voting_disk_name of=backup_file_name

If your voting disk is stored on a raw device, use the device name in place of voting_disk_name. For example:
dd if=/dev/sdd1 of=/tmp/voting.dmp

Note: When you use the dd command to back up the voting disk, the backup can be performed while the Cluster Ready Services (CRS) process is active; you do not need to stop the crsd.bin process before taking a backup of the voting disk.
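If you have several voting disks, a small shell loop saves some typing. This is an untested sketch: crsctl query css votedisk lists the configured voting disks, but its output format varies by Clusterware release, so adjust the parsing (and the backup directory) for your environment.

# Back up every configured voting disk in one pass (sketch).
for vd in $(crsctl query css votedisk | awk '{print $NF}' | grep '^/'); do
    dd if=$vd of=/backup/$(basename $vd).dmp
done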

Recovering Voting Disks

If a voting disk is damaged and no longer usable by Oracle Clusterware, you can recover it if you have a backup file:

dd if=backup_file_name of=voting_disk_name

Categories: DBA Blogs

Another Trinidad Milestone

Omar Tazi - Tue, 2007-07-10 19:04
Last week the Apache MyFaces Trinidad team announced another milestone, the release of Trinidad v1.2.1. This release comes with a JavaServer Faces 1.2 component library initially based on parts of Oracle's ADF Faces. Featured tags in this release include: breadcrumbs, navigation panels, panes, and tabbed panels. More tags can be found on this page. JSF 1.1 is still supported via Trinidad v1.0.1.

Trinidad 1.2.1 binary and source distributions can be found in the central Maven repository under group id "org.apache.myfaces.trinidad". Downloads are available here.

If you need more frequent information on Trinidad, visit Matthias' blog.

Administering OCR

Pankaj Chandiramani - Mon, 2007-07-09 20:52

Administering OCR
We will see how OCR (Oracle Cluster Registry) backup & recovery is done.

Backup
Oracle Clusterware automatically creates an OCR backup every 4 hours and retains the last 3 backups. The CRSD process also creates and manages a backup for each full day and a weekly backup at the end of the week.
Default backup location: $CRS_HOME/cdata/$clustername

Other than the automated backups, you can export the contents to a file any time you want, e.g.:
$ ocrconfig -export emergency_export.ocr

You can see the list of OCR backups by using:
$ ocrconfig -showbackup

The backups go to the default directory shown above; you can change the directory using the command below:
$ ocrconfig -backuploc <directory>
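For example, to send the automatic backups to a dedicated directory (the path here is purely illustrative):
$ ocrconfig -backuploc /u01/crs/backup/ocr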

Restore
OCR can be restored (if you have a backup) with the command below.

NOTE: Should you need to restore, make sure all CRS daemons on all nodes are stopped.

To perform a restore, execute the command:

$ cd CRS_Home/cdata/crscluster
$ ocrconfig -restore week.ocr

If you had exported using the command above and want to restore, then you can use import.
IMPORTANT: Importing a backup while the CRS daemons are running will only corrupt OCR.

$ ocrconfig -import emergency_export.ocr

If anything is wrong, you can use the OCRDUMP command to dump all the info to a file and check it:
$ ocrdump OCR_DUMP

You can also use:

$ ocrcheck

to check the status of OCR.

Categories: DBA Blogs

On the Road and Upcoming Talks

Marcos Campos - Mon, 2007-07-09 20:51
This week I am going to be in San Francisco. I have been invited to give a talk at the San Francisco Bay ACM Data Mining SIG on Wednesday. The title of the talk is In-Database Analytics: A Disruptive Technology. Here is a link with information on the talk. On Friday morning, I am presenting at the ST Seminar at Oracle's headquarters. The title of that talk is In-Database Mining: The I in BI. …
Categories: BI & Warehousing

SQL Techniques Tutorials: Pattern Matching Over Rows (New SQL Snippets Tutorial)

Joe Fuda - Mon, 2007-07-09 16:00

This topic was inspired by Tom Kyte's "So, in your opinion ..." blog post about a new SQL feature Oracle is considering (described in Pattern matching in sequences of rows).

I'll admit I've never tackled this kind of pattern matching before, and I didn't understand the entire paper. It's a pretty dense read. From what I can tell, though, using the new feature would be a lot like applying regular expressions to rows of values. This got me thinking: instead of adding a whole new feature for this, why not simply convert the rows into strings and then use the existing regular expression support to do the pattern matching?
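As a purely hypothetical illustration of the idea (this is not the article's solution, and the STOCK table, columns and pattern are all invented): map each row to a single character, aggregate the characters into one string, then run a regular expression over it. LISTAGG here needs 11gR2 or later; an older string-aggregation technique such as SYS_CONNECT_BY_PATH would be used before that.

WITH moves AS (
  SELECT day_no,
         CASE
           WHEN price > LAG(price) OVER (ORDER BY day_no) THEN 'U'  -- up
           WHEN price < LAG(price) OVER (ORDER BY day_no) THEN 'D'  -- down
           ELSE 'F'                                                 -- flat
         END AS move
  FROM stock
)
SELECT REGEXP_INSTR( LISTAGG(move) WITHIN GROUP (ORDER BY day_no),
                     'D+U+' ) AS v_shape_starts_at
FROM moves;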

Even if the feature described in the paper does something more sophisticated than this, tackling the requirement with existing functionality using simple string aggregation logic and regular expressions sounded like a fun challenge. Here's my stab at a solution.


...

Using SERVICE_NAMES in Oracle

Hampus Linden - Sun, 2007-07-08 12:42
The use of "SERVICE_NAMES" in Oracle is quite an old and probably well-known feature, but perhaps not everyone is familiar with it yet.
I got asked today about a recovery scenario: the administrator had a failed instance (broken data files, no logs, no backups, just a nightly exp), and a new database had been created with 'dbca', but with a new name, to test importing the exp file.
All worked fine, but there was a problem with the database name. The application had the service name set in a number of config files, and there were also a number of ETL scripts with service names hardcoded. The thinking at the time was to delete the old instance, remove all traces of it (oratab etc.) and then create it *again* with the same name.
Now hold on here: we have tested the imp in a new database, all is fine, and all we want to do is allow connections under the old database instance name?
That's pretty much a one-liner, not a new database.
We can simply add the new name we want to "listen on" to the SERVICE_NAMES parameter.
Easy peasy.

OK, here is what we should do. It's quite a long example for something simple, but hey, I just want to make it clear.
oracle@htpc:admin$ lsnrctl status
-- What's the current db_name and service_names?
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-JUL-2007 18:31:22

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 08-JUL-2007 18:23:39
Uptime 0 days 0 hr. 7 min. 43 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/oracle/10g/network/admin/listener.ora
Listener Log File /u01/oracle/10g/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=htpc)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "peggy" has 2 instance(s).
Instance "peggy", status UNKNOWN, has 1 handler(s) for this service...
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggyXDB" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggy_XPT" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
The command completed successfully

oracle@htpc:admin$ rsqlplus hlinden/hlinden as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:31:24 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter db_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_name string peggy
SQL> show parameter service_names

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
service_names string peggy
SQL>
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

-- What can we connect to?
oracle@htpc:admin$ rsqlplus hlinden/hlinden@//htpc/peggy

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:31:53 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL>
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
oracle@htpc:admin$ rsqlplus hlinden/hlinden@//htpc/dog

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:31:58 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Enter user-name:

-- Ouch, that's the one we want!

oracle@htpc:admin$ rsqlplus hlinden/hlinden as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:32:01 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

-- Here is the 'one-liner'
SQL> alter system set service_names='peggy,dog' scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 599785472 bytes
Fixed Size 2022600 bytes
Variable Size 167772984 bytes
Database Buffers 423624704 bytes
Redo Buffers 6365184 bytes
Database mounted.
Database opened.
SQL>
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

-- Let's see what changed. What can we connect to now?
oracle@htpc:admin$ lsnrctl status

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-JUL-2007 18:33:57

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 08-JUL-2007 18:23:39
Uptime 0 days 0 hr. 10 min. 18 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/oracle/10g/network/admin/listener.ora
Listener Log File /u01/oracle/10g/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=htpc)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "dog" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggy" has 2 instance(s).
Instance "peggy", status UNKNOWN, has 1 handler(s) for this service...
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggyXDB" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggy_XPT" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
The command completed successfully

oracle@htpc:admin$ rsqlplus hlinden/hlinden@//htpc/dog

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:34:18 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
-- It works, but where are we?

SQL> select sys_context('userenv','SERVICE_NAME') from dual;

SYS_CONTEXT('USERENV','SERVICE_NAME')
------------------------------------------------------------------------------------------------------------------------
dog

SQL> select sys_context('userenv','DB_NAME') from dual;

SYS_CONTEXT('USERENV','DB_NAME')
------------------------------------------------------------------------------------------------------------------------
peggy

Issues while installing Oracle Red Hat Linux

Fadi Hasweh - Sun, 2007-07-08 04:25
A couple of days ago, my friend Ghassan (who is also a certified Apps 11i DBA) and I were doing an installation on my new PC. While installing Oracle Linux on the new Intel 965 motherboard, the installer froze at the "PCI: Probing PCI hardware (bus 00)" stage. After checking the Red Hat forums, we found we had to start the installation with the following command from the boot prompt to overcome the issue:
linux all-generic-ide pci=nommconf
That solved the installation problem. After the installation finished and the system rebooted, we had to do the following at the GRUB screen:

At the GRUB menu, select the kernel you want to edit and press (e) to edit it. Then move the cursor down to the "kernel" line, press (e) to edit that line, and add all-generic-ide pci=nommconf to it.
Then press ENTER to accept the changes, and press (b) to boot.

Once the server booted successfully, we made the change permanent by adding all-generic-ide pci=nommconf to the kernel line in /boot/grub/grub.conf, as in the example below.
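The edited kernel line ends up looking something like this (the kernel version and root device are illustrative; take them from your own grub.conf):

kernel /vmlinuz-2.6.9-55.EL ro root=/dev/VolGroup00/LogVol00 rhgb quiet all-generic-ide pci=nommconf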

I guess this issue is not only related to Oracle Linux but applies generally to Red Hat on this Intel motherboard (from what we saw on the Red Hat forums). Anyway, after that we did the installation of the Vision database, and it went successfully apart from one small issue (RW-50004) at step 2 of 5.
The log showed (ORA-01031: insufficient privileges). This error came down to a typo in the oracle user's group: the group was ordba instead of oradba. We fixed that and the installation completed successfully.


Hope that helped, and thank you guys for your help.
Fadi
P.S. I got the Red Hat issue resolved mostly from http://www.linuxquestions.org/questions/showthread.php?t=479778 and some other forums.

SOA Suite 10.1.3.3 patchset and "lost" instances

Clemens Utschig - Fri, 2007-07-06 14:09
It has been a long time since I last blogged - mainly due to the huge number of miles flown over the last 2 months.

After being in Europe for conferences, where I evangelized our next generation infrastructure based on Service Component Architecture, it was time to revisit my roots and do some consulting in a POC, as well as help one of our customers with their performance problems.

Meanwhile, while I was on the road, we released the 10.1.3.3 patchset, which includes, among many small fixes here and there, some really cool enhancements, e.g.:
  1. A fault policy framework for BPEL, which allows you to specify policies for faults (e.g. remoteFault) outside of the BPEL process and trigger retry activities, or submit the activity with a different payload from the console.

  2. Performance fixes for Oracle ESB - which boost the performance way up

  3. Several SOAP related fixes, especially around inbound and outbound WS-Security - if you happen to have Siebel and use WS-Sec features, the patch will make you happy

  4. Adapters: several huge improvements on performance and scalability

  5. BPEL 2 ESB: several fixes for transaction propagation, as well as more sophisticated tracking
You can download it from Metalink - the patch number is 6148874.

After working on 10.1.3.3 for the last 3 weeks, we added an enhancement to implement a federated ESB, where one ESB system binds to another via UDDI. The enhancement request number is 6133448; it will be part of 10.1.3.4 (our next patch release) and works exactly the way it works today in BPEL 10.1.3.1.

Back to my performance adventure.
The customer reported that under high load on their 10.1.3.1 instance, a lot of async instances (that were submitted to the engine) "got lost", meaning they could find no trace of a running instance, nor had the target systems called from the process been updated. Strange, isn't it?

A quick look into the recovery queue (basically a select against the invoke_message table) revealed that a lot of instances had been scheduled (status 0) - but somehow they stayed in the queue. Huh, why? Restarting the server helped - some instances were created - but still way too many weren't.

Checking the settings that we preseed, we figured out that there is an issue with them. The Developer's Guide states:

"the sum of dspMaxThreads of ALL domains should be <= the number of listener threads on the workerbean".

Hmm - checking orion-ejb-jar.xml, section workerbean, in the application-deployments/orabpel/ejb_ob_engine folder revealed
  1. there are no listener-threads set and
  2. there are 40 ReceiverThreads
What does that mean? Given that we seed each domain with dspMaxThreads at 100, if you have five domains, 500 WorkerBean threads would be needed - way too much. And what happened to listener-threads?

<message-driven-deployment name="WorkerBean" instances="100" resource-adapter="BPELjms">

A quick check with JMS engineering enlightened me on that: as we use JMS connectors now, you need to change ReceiverThreads to match the above formula.

<config-property>
  <config-property-name>ReceiverThreads</config-property-name>
  <config-property-value>40</config-property-value>
</config-property>

- and tune the dispatcherThreads on the domains to a reasonable level.
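To make the formula concrete, here is a worked allocation with hypothetical numbers (four domains sharing the default pool of 40):

ReceiverThreads (WorkerBean MDB pool)  = 40
active domains                         = 4
dspMaxThreads per domain               = 10
sum of dspMaxThreads = 4 x 10 = 40  <=  ReceiverThreads  -> OK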

Next question: what are dispatcherThreads, and what does the engine need them for?

"ReceiverThreads specifies the maximum number of MDBs that can process BPEL requests asynchronously across all domains. Each domain can allocate a subset of these threads using the dspMaxThreads property; however, the sum of dspMaxThreads across all domains must not exceed the ReceiverThreads value.

When a domain decides that it needs another thread to execute an activity asynchronously, it will send a JMS message to a queue; this message then gets picked up by a WorkerBean MDB, which will end up asking the dispatcher for some work to execute. If the number of WorkerBean MDBs currently processing activities for the domain is sufficient, the dispatcher module may decide not to request another MDB. The decision to request an MDB is based on the current number of active MDBs, the current number pending (that is, where a JMS message has been sent but an MDB has not picked up the message), and the value of dspMaxThreads.

Setting both ReceiverThreads and dspMaxThreads to appropriate values is important for maximizing throughput and minimizing thread context switching. If there are more dspMaxThreads specified than ReceiverThreads, the dispatcher modules for all the domains will think there are more resources they can request than actually exist. In this case, the number of JMS messages in the queue will continue to grow as long as the request load is high, thereby consuming memory and CPU. If the value of dspMaxThreads is not sufficient for a domain's request load, throughput will be capped.
Another important factor to consider is the value of ReceiverThreads - more threads does not always correlate with higher throughput. The higher the number of threads, the more context switching the JVM must perform. For each installation, the optimal value for ReceiverThreads needs to be found based on careful analysis of the rate of Eden garbage collections and CPU utilization. For most installations, a starting value of 40 should be used; the value can be adjusted up or down accordingly. Values greater than 100 are rarely suitable for small to medium sized boxes and will most likely lead to high CPU utilization just from JVM thread context switching alone.

With all of the above in place, and a tuned dehydration store, we got them back on track; even under high load all messages were picked up and ended up as instances. Recap:
  1. Make sure your ReceiverThreads setting matches the sum of dspMaxThreads across all domains, and that both are set appropriately.

  2. If you have external adapters in use that connect e.g. to AQ, make sure AQ is tuned and also the adapter - this is where you are most likely to get timeouts, which would also contribute to recoverable messages.

Verifying a Virtual X-Server (Xvfb) Setup

Solution Beacon - Thu, 2007-07-05 17:52
With E-Business Suite Release 11i and Release 12, an X-Server display is required for correct configuration. The application framework uses this for generating dynamic images, graphs, etc. It is also needed by reports produced in bit-map format. Note that for functionality using Java technology, the "headless" support feature can be implemented (requires J2SE 1.4.2 or higher). However, reports…
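For reference, the basic setup being verified looks something like this sketch (the display number, screen geometry and the xclock smoke test are illustrative; the full post covers the actual verification steps):

# Start a virtual frame buffer on display :1 and point X clients at it.
Xvfb :1 -screen 0 1024x768x8 &
export DISPLAY=:1
# Quick check: a simple X client should now start without a display error.
xclock &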
