Feed aggregator

Business Integration Journal changes its name - and the last BIJ contains my new article

Clemens Utschig - Wed, 2007-01-10 15:06
"Business Integration Journal (BIJ) is changing to Business Transformation & Innovation (BTIJ) to better reflect the current editorial content being published and new topics readers want covered.

Targeted at the intersection of business and IT, BTIJ carries on the “how-to” and “what-works” tradition of Business Integration Journal helping IT and business managers quickly adapt to changing business needs, continuously innovate successful new products and services, and consistently gain and maintain a competitive advantage.
"

In the last edition of the BIJ - as we know it - my article on SOA projects was published, and I am very proud of it.

"A World of Assumptions That Make an SOA Project the Perfect Storm—and What You Should Know About Them by Clemens Utschig-Utschig. This article shows that, while Service-Oriented Architecture (SOA) brings many advantages, it doesn’t alleviate the need for good planning, organization, and training."

The whole article can be found here
http://www.bijonline.com/index.cfm?section=article&aid=802

Second Life client running on Solaris x64

Siva Doe - Wed, 2007-01-10 14:31

Yes. I was able to build the recently open-sourced Second Life client. It took me a couple of days to get to this stage. Getting it running took another half day: ENDIAN was set to BIG for non-Linux boxes, and that had me stumped for a long time. Of course, being behind the SWAN firewall doesn't let the client connect to the server :-( Does anyone know a way to achieve this?

Anyway, here is the obligatory screenshot. I will keep you posted on updates.

Siva

Remote automated install of Oracle 10g client

Stephen Booth - Wed, 2007-01-10 06:04
We have a situation where we need to rationalise the range of installed Oracle clients (i.e. the bit that sits between the app and the network stack) we have installed. We currently have versions from 7.x through to 10.2 installed across approximately 12,000 desktops (across various locations in an area of around 26 square miles) running various apps on Windows versions from NT4 to XP (mostly …

Auditing on FND_FLEX_VALUES: How to see Audit History

Jo Davis - Tue, 2007-01-09 21:23
To keep an audit trail of the mapping of segment values to external values (such as for extracts to other systems), and of any changes to it:
1) Define a DFF on FND_FLEX_VALUES to contain the mapping values
2) Enable auditing on FND_FLEX_VALUES
3) To query the audit data, start with this:
select *
from FND_FLEX_VALUES_AC1
where attribute1 is not null;
FND_FLEX_VALUES_AC1 is a view over the audit table
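
To see who changed what and when, the audit shadow table itself can be queried. A minimal sketch - the column names (AUDIT_TIMESTAMP, AUDIT_USER_NAME, AUDIT_TRANSACTION_TYPE) follow the standard AuditTrail shadow-table layout, so verify them against your instance before relying on this:

```sql
-- Sketch: audit history for mapped segment values.
-- Verify column names against your FND_FLEX_VALUES_A shadow table first.
select audit_timestamp,
       audit_user_name,
       audit_transaction_type,   -- I = insert, U = update, D = delete
       flex_value_id,
       attribute1                -- the DFF segment holding the mapping value
from   fnd_flex_values_a
where  attribute1 is not null
order  by audit_timestamp;
```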

How to Migrate Personalizations from TEST to PROD on Release 11.5.10.2

Jo Davis - Tue, 2007-01-09 20:35
This applies to iProcurement (11.5.10 only), iExpenses 2nd Generation (i.e. above 11.5.7 and the white screens, not the blue ones) and anything else that uses the new self-service architecture (i.e. NOT iTime or anything else with those blue screens and no bouncing balls across the top.... you know what I mean)

It's all covered in Metalink Note 370734.1, but suffice it to say:

- it can be done, you don't need to redo all the personalizations

- you can only use this on release 11.5.10.2

- you will need the appltest and applprod passwords and a passing familiarity with Unix (or a tame developer or DBA to help you)

Have a great day

Jo

Producing Estimates

Rob Baillie - Sun, 2007-01-07 07:52
OK, so it's about time I got back into writing about software development, rather than software (or running, or travelling, or feeds) and the hot topic for me at the moment is the estimation process.

This topic's probably a bit big to tackle in a single post, so it's post series time. Once all the posts are up I'll slap another up with all the text combined.

So – Producing good medium term estimates...

I'm not going to talk about the process for deriving a short term estimate for a small piece of work, that's already covered beautifully by Mike Cohn in User Stories Applied, and I blogged on that topic some time ago. Rather I'm going to talk about producing an overall estimate for a release iteration or module.

I've read an awful lot on this topic over the last couple of years, so I'm sorry if all I'm doing is plagiarising things said by Kent Beck, Mike Cohn or Martin Fowler (OK, the book's Kent as well, but you get the point), or any of those many people out there that blog, and that I read. Truly, I'm sorry.

I'm not intending to infringe copyright or take credit for other people's work, it's just that my thinking has been heavily guided by these people. This writing is how I feel about estimating, having soaked up those texts and put many of their ideas into practice.

Many people think that XP is against producing longer term estimates, but it isn't. It's just that it's not about tying people down to a particular date 2 and a half years in the future and beating them with a 400 page functional design document at the end of it; it's about being able to say with some degree of certainty when some useful software will arrive, the general scope of that software, and ultimately to allow those people in power to be able to predict some kind of cost for the project.

Any business that allows the development team to go off and do a job without providing some kind of prediction back to the upper management team is being irresponsible at best, grossly negligent at worst.

Providing good medium term estimates is a skill, and a highly desirable one. By giving good estimates and delivering to them you are effectively managing the expectations of upper management and allowing them to plan the larger scale future of the software in their business. When they feel they can trust the numbers coming out of the development teams it means they are less likely to throw seemingly arbitrary deadlines at their IT departments and then get angry when they're not met. Taking responsibility for, and producing good estimates will ultimately allow you to take control of the time-scales within your department. If you continually provide bad estimates or suggest a level of accuracy that simply isn't there by saying "it'll take 1,203 ½ man days to complete this project", then you've only got yourself to blame when that prediction is used to beat you down when your project isn't complete in exactly 1,203 ½ man days.

Whilst this is written within the scope of XP, there's no reason why these practices won't work when you're not using an agile process. All you need to do is split your work into small enough chunks that each one can be estimated at between a few hours and 5 days of work before you start.

In summary, the process is this:
  • Produce a rough estimate for the whole of the release iteration.
  • Iteratively produce more detailed estimates for the work most likely to be done over the next couple of weeks.
  • Feed the detailed estimates into the overall cost.
  • Do some development.
  • Feed the actual time taken back into the estimates and revise the overall cost.
  • Don't bury your head in the sand if things aren't going to plan.


1 – Produce a rough estimate for the whole of the release iteration.

Before you can start work developing a new release you should have some idea of what functionality is going into that release. You should have a fairly large collection of loosely defined pieces of functionality. You can't absolutely guarantee that these stories will completely form the release that will go out in 3 months time, but what you can say is: "Today we believe that it would be most profitable for us to work on these areas of functionality in order to deliver a meaningful release to the business within a reasonable time-frame".

Things may change over the course of the next few months, and the later steps in this process will help you to respond to those changes and provide the relevant feedback to the business.

Kent Beck states (I paraphrase): You don't drive a car by pointing it at the end of the road and driving in a straight line with your eyes closed. You stay alert and make corrections moment by moment.
I state: When you get in a car and start to drive you should at least have some idea where you're going.

The two statements don't oppose each other.

The basic idea is this:
  • In broad strokes, group the stories in terms of cost in relation to each other. If X is medium and Y is huge then Z is tiny.
  • Assign a numbered cost to each group of stories.
  • Inform that decision with past experience.
  • Produce a total.


So, get together the stories that you're putting into this release iteration, grab a room with a big desk, some of your most knowledgeable customers and respected developers and off you go.

Ideally you want around 5 to 8 people in the room; more than this and you'll find you end up arguing over details, fewer and you may find you don't have enough buy-in from the customer and development teams.

As a golden rule: developers who are not working on the project should NOT be allowed to estimate. There's nothing worse for developer buy-in than having an estimate or deadline over which they feel they have no control.

Try to keep the number of stories down; I find around 50 stories is good. If you have more you may be able to group similar ones together into a bigger story, or you may find you have to cull a few. That's OK, you can go through this process several times. You can try to go through the process with more, but I find that you can't keep more than 50 stories in your head at one time. You're less likely to get the relative costs right. And it'll get boring.

Quickly estimate each story:
  • The customer explains the functionality required without worrying too much about the detail;
  • The developers place each story into one of 4 piles: small, medium, large and wooooaaaah man that's big.


People shouldn't be afraid to discuss their thinking, though you should be worried if it's taking 10 minutes to produce an estimate for each story.

You may find that developers will want to split stories down into smaller ones and drive out the details. Don't, if you can avoid it. At this point we want a general idea of the relative size of each story. We don't need to understand the exact nature of each individual story, we don't need enough to be able to sit down and start developing. All we need is a good understanding of the general functionality needed and a good idea of the relative cost. Stories WILL be re-estimated before they're implemented and many of them will be split into several smaller ones to make their development simpler. But we'll worry about that later.

As the stories are being added to piles be critical of past decisions. Allow stories to be moved between piles. Allow piles to be completely rebuilt. You may start off with 10 stories that spread across all 4 piles, then the 11th 12th and 13th stories come along and have nowhere to go – they're all an order of magnitude bigger than the ones that have gone before. That's OK, look at the stories you've already grouped in light of this new knowledge and recalibrate the piles.

Once all the stories are done, spread the piles out so you can see every story in each one. Appraise your groupings. Move stories around. It's important that you're comfortable that you've got them in the right pile relative to each other.

If the developers involved in this round of estimating recently worked on another project or a previous release of this project then add some recently completed stories from that work. Have the developers assign those stories into the piles. You know how long those stories took and you can use that to help guide your new estimates. There's nothing more useful in estimating than past experience... don't be afraid to use that knowledge formally.

Once you're confident that the piles are accurate, put a number of days against each pile. The idea is to get to 'on average, a single story in this pile will take x days'.

If you have to err, then err on the side of bigger. Bear in mind that people always estimate in a perfect world where nothing ever goes wrong. The stories you added from the last project can be used to guide the number. You know exactly how long it took to develop those stories and you'd expect the new numbers to be similar.

The resulting base estimate is then just the sum, over the piles, of (number of stories in the pile * cost of that pile). This number gives you a general estimate of the overall cost.

You'll probably want to add a contingency, and past experience on other projects will help you guide that. If you've gone through this process before, pull out the numbers from when you last performed this process and compare the starting estimate with the actual amount of time taken. Use that comparison for the contingency.

The gut feeling is to say, "Hang on, the last time we did this we estimated these 50 stories, but only built these 20 and added these other 30. That means that the starting estimate was nothing like what we actually did". But the point is that if every release has the same kind of people working on it, working for the same business with the same kinds of pressures then this release is likely to be the same as the last. Odds are that THIS release will only consist of 20 of these stories plus 30 other unknown ones... and the effect on the estimate is likely to be similar. Each release is NOT unique, each release is likely to go ahead in exactly the same way as the last one. Use that knowledge, and use the numbers you so painstakingly recorded last time.

Aside: Project managers tend to think that I have a problem with them collecting numbers (as they love to do). They're wrong... I have a problem with numbers being collected and never being used. The estimation process is the place where this is most apparent. Learn from what happened yesterday to inform your estimate for tomorrow.


Once the job's done you should have enough information to allow you to have the planning game. For those not familiar with XP, that is the point at which the customer team (with some advice from the development team) prioritises and orders the stories. It gives you a very good overall view of the direction of the project and a clear goal for the next few weeks of work.

At this point you should have a good idea of cost and scope and have a good degree of confidence in both. Now's the time to tell your boss...

Next up – using the shorter term estimates to inform the longer term one...


A new SOA year has begun - what will the SOA Santa bring?

Clemens Utschig - Thu, 2007-01-04 16:00
Wow, the new SOA year is here - and many cool things are about to arrive. But before looking into the future, let's look back into 2006 - and what happened there.

My first year living in San Francisco and working within a wonderful group, envisioning the future and creating our next generation products, went by - and I must admit it was far better than I thought it'd be.

First, and most important - we delivered SOA Suite 10.1.3.1, the first major, integrated SOA platform. Not just integrated, but also with two new key components, the Oracle Enterprise Service Bus (ESB) and Rules. Since OOW 2006, when we presented the first publicly available release, our customers have been using it heavily to build their next generation SOA - and according to many of them, they enjoy doing so :D

Also, we signed an OEM agreement with IDS Scheer to build the Oracle BPA Suite on top of it. Using Business Process Architect, for the first time business processes will make it all the way down to being executable, and all the way back to feed real world data into simulation and continuous refinement.

Another highlight was the overwhelming number of SOA related sessions at Oracle Open World, and the great feedback we received. It was a blast: more than 40,000 people turned the city red, Oracle red, for one week. Elton John played during the big Oracle party, and so on.

Open SOA - www.osoa.org - went live, and with it the big players in the SOA market space joined to drive next generation, easy to use standards.

So after all what's in for 2007?

While we all work hard on delivering 11g - our next generation SOA infrastructure, based on Service Component Architecture (SCA) and supporting Service Data Objects (SDO) - we expect a patchset for 10.1.3.1 (which will offer a ton of new features, and little fixes here and there).

BPEL 2.0 will see the light of day - after more than two years, and with the last public review finished, some more polishing needs to happen - and then we are ready to go. With it, BPEL will gain a number of long awaited features, such as the new variable layout ($inputVariable) and scoped partner links.

Evangelism on SOA will continue to drive the spirit of SOA and help customers adopt it - I really look forward to even more presence around the globe, and many customers going live on SOA Suite.

Personally, for me it's moving on with my SOA best practices series on OTN, supporting people around the globe and on the OTN forums, doing evangelism on SOA & helping create 11g. More than enough.

10G SQL Access Advisor and SQL Tuning Advisor

Vidya Bala - Thu, 2006-12-28 15:45
I was able to take advantage of the holidays to complete a 9i to 10g cross platform migration. While I was quite happy with the migration process (especially with how Data Pump has made this effort much easier), I was a little disappointed in what the SQL Access and SQL Tuning Advisors had to offer.

There were a bunch of queries that we had been meaning to tune for quite some time, so I thought I would create a tuning task with the 10g SQL Tuning Advisor and see if I could get some valuable recommendations.

The recommendations I got were far from anything of significance (e.g. add an index to a small lookup table).

I couldn't help but wonder if there is much success/help using the Oracle 10g SQL Access/Tuning Advisors in the industry
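
For readers who want to repeat the experiment, a tuning task can be created along these lines. This is a sketch: the sql_id and task name are placeholders, and the time limit is arbitrary.

```sql
-- Create, execute and report a tuning task for one statement
-- ('abcd1234efgh5' and 'tune_lkup_query' are placeholder values)
DECLARE
  l_task VARCHAR2(30);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id     => 'abcd1234efgh5',
              time_limit => 300,
              task_name  => 'tune_lkup_query');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'tune_lkup_query');
END;
/

-- Show the advisor's recommendations
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_lkup_query') FROM dual;
```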
Categories: Development

Merry Christmas, Happy New Year, and a Poll

Marcos Campos - Sun, 2006-12-24 04:30
It has been a great year. My daughter was born, as was this blog. I launched this blog at the beginning of the year (January first, to be more precise) and the readership has been great. Amongst the posts, Time Series and Automatic Pivoting were probably the most viewed. I am on vacation in Brazil right now enjoying a family reunion. I have a big family and it is hard to get everyone …
Categories: BI & Warehousing

ASM and NetApp Filer

Vidya Bala - Thu, 2006-12-21 15:37
Link to ASM and NetApp

I have been spending the last few days looking into what advantages we would have using ASM on an NFS mount as opposed to having the database files directly on NFS. If you're on RAC then ASM is mandatory, but for non-RAC 10g instances on NetApp, ASM is not mandatory.

The biggest benefit I see is volume management features with ASM.

Does this mean I can change volume sizes online? Maybe I/O balancing across different volumes is an added feature. I am walking into a totally new area: with ASM on any kind of direct attached storage I can for sure see it being beneficial, but on NFS I am not so sure.

Anybody with success stories?
Categories: Development

Cross Platform Migration 9i to 10g

Vidya Bala - Thu, 2006-12-21 15:05
Migration Procedure Implemented.

Cross Platform Migration 9.2.0.6(Suse SLES8) Standard Edition to 10g Rel 2 (Solaris 10)

This is a quick overview of a migration procedure I have just finished implementing on a test environment– If you see anything else in the procedure that should be added or should be noted, please feel free to post comments – as always there has been a lot of mutual learning and help from my blog readers.

Step1: Server A :
Clone the production database to the preprod environment (datafiles, redo log files, control files all on a shared NFS file system).
Database Release : 9.2.0.6
SuSE Linux version : SLES 8
Database Size : 50.89 G


Step2: Server B:
Install 10g Release 2 on a new SLES 9 server. Note 10g Release 2 is not supported on SLES8.
Make sure shared file systems on ServerA are mounted on Server B.
Copy parameter file from Server A to Server B . Make appropriate path changes to parameter file on Server B.
Database Release : 10g Release 2
SuSE Linux version : SLES9


Step3: Upgrade 9i database to 10G
On Server B upgrade 9.2.0.6 database to 10G

sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP UPGRADE

CREATE TABLESPACE sysaux DATAFILE 'sysaux01.dbf'
SIZE 500M REUSE
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;

Set the system to spool results to a log file for later verification of success:

SQL> SPOOL upgrade.log

Run catupgrd.sql:

SQL> @catupgrd.sql

Run utlu102s.sql to display the results of the upgrade:

SQL> @utlu102s.sql

Turn off the spooling of script results to the log file:

SQL> SPOOL OFF

Shut down and restart the instance to reinitialize the system parameters for normal operation.

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

Run utlrp.sql to recompile any remaining stored PL/SQL and Java code.

SQL> @utlrp.sql

Verify that all expected packages and classes are valid:

SQL> SELECT count(*) FROM dba_objects WHERE status='INVALID';
SQL> SELECT distinct object_name FROM dba_objects WHERE status='INVALID';

Exit SQL*Plus.

Total Time to Upgrade Database : 38 minutes

Step 4 – Upgrade 10g Database (using expdp)

Now that we have the database migrated to 10g Rel 2 on Server B (SLES9), we can export the database using 10g Data Pump. We will export only the application related tablespaces. The tablespaces excluded are listed below:

PERFSTAT
SYSAUX
SYSTEM
UNDOTBS1
TEMP
DRSYS
INDX --- no application related objects in this tablespace
TOOLS
USERS
XDB
UNDOTBS2
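
A quick sanity check before building the export parameter list: list every tablespace that is not on the exclusion list above. This is a sketch; adjust the exclusion list to match your instance.

```sql
-- List candidate application tablespaces (everything not excluded above)
select tablespace_name
from   dba_tablespaces
where  tablespace_name not in
       ('PERFSTAT','SYSAUX','SYSTEM','UNDOTBS1','TEMP','DRSYS',
        'INDX','TOOLS','USERS','XDB','UNDOTBS2')
order  by tablespace_name;
```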

Before running the export, the OWM and OLAP options need to be de-installed (if not being used) to avoid export errors.

If the Oracle Workspace Manager feature is not used in this database: de-install the Workspace Manager:
SQL> CONNECT / AS SYSDBA
SQL> @$ORACLE_HOME/rdbms/admin/owmuinst.plb

Clean up AW procedural objects:
SQL> CONNECT / AS SYSDBA
SQL> delete from sys.exppkgact$ where package = 'DBMS_AW_EXP';

Afterwards, run the export.

CREATE OR REPLACE DIRECTORY pump_dir AS 'xxxxxxxxxxxxxxxx';

Export only application related tablespaces

$ORACLE_HOME/bin/expdp system/manager tablespaces=\(t1,t2 \) directory=pump_dir dumpfile=pump.dmp logfile=pump.log

The full export of the 50+ GB database took about 80 minutes.

Step5 – Prepare Target environment – Server C with 10g Release 2

Install 10g Release 2 on Solaris 10 Servers (SERVERC).

Installing Oracle Database 10g Products from the Companion CD
The Oracle Database 10g Companion CD contains additional products that you can install. Whether you need to install these products depends on which Oracle Database products or features you plan to use. If you plan to use the following products or features, then you must complete the Oracle Database 10g Products installation from the Companion CD:
· JPublisher
· Oracle JVM
· Oracle interMedia
· Oracle JDBC development drivers
· Oracle SQLJ
· Oracle Database Examples
· Oracle Text supplied knowledge bases
· Oracle Ultra Search
· Oracle HTML DB
· Oracle Workflow server and middle-tier components

On Server C use DBCA to create database creation scripts. Select the config parameters you need for your database as you go through the DBCA wizard.

The scripts will create a standard database with no application related objects yet. Run create scripts to create database.

Tablespaces created (this is assuming none of the additional components were installed)
SYSTEM
UNDOTBS1
SYSAUX
TEMP
USERS

Make sure the listener is up. The EM console for the database is at
http://ServerC:1158/em

Note the below before proceeding with the em console
Oracle Enterprise Manager 10g Database Control is designed for managing a single database, which can be either a single instance or a cluster database. The following premium functionality contained within this release of Enterprise Manager 10g Database Control is available only with an Oracle license:
Database Diagnostics Pack
Automatic Workload Repository
ADDM (Automated Database Diagnostic Monitor)
Performance Monitoring (Database and Host)
Event Notifications: Notification Methods, Rules and Schedules
Event history/metric history (Database and Host)
Blackouts
Dynamic metric baselines
Memory performance monitoring
Database Tuning Pack
SQL Access Advisor
SQL Tuning Advisor
SQL Tuning Sets
Reorganize Objects
Configuration Management Pack
Database and Host Configuration
Deployments
Patch Database and View Patch Cache
Patch staging
Clone Database
Clone Oracle Home
Search configuration
Compare configuration
Policies

Step6 – Prepare Target environment – Server C with Application related Objects

IMPDP will be used to import application related objects into this database.
Before running IMPDP, the target database will need to be prepared with the application tablespaces and application schemas. This is also a great opportunity to reorg objects if you need to. Scripts to create the application tablespaces and schemas are prepared.
This is the most important step in preparing the target environment.

Once the Target environment is prepared – import the dumpfile using the following command.
$ORACLE_HOME/bin/impdp system/manager full=y directory=pump_dir1 dumpfile=pump.dmp logfile=pump_import.log

Before opening the database for public connections:
1) Recompile invalid objects (run utlrp.sql)
2) Gather statistics for the entire database
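
The two steps above can be sketched as follows (run as SYSDBA; the sampling choice is an assumption, tune it to your window):

```sql
-- 1) Recompile any remaining invalid objects
@?/rdbms/admin/utlrp.sql

-- 2) Gather statistics for the whole database
--    (AUTO_SAMPLE_SIZE is a reasonable default; adjust as needed)
BEGIN
  DBMS_STATS.GATHER_DATABASE_STATS(
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);
END;
/
```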


Regression tests will be run at the end of all this to test application functionality, long datatypes, etc.
Categories: Development

Future Direction - 10g Forms/Reports Developer vs JDeveloper

Vidya Bala - Wed, 2006-12-20 15:15
We are on a new effort to move some legacy rbase programs to Oracle. We are required to evaluate whether it would be better to go with Oracle 10g Forms/Reports or with JDeveloper. The data is going to reside on Oracle database servers. The concern about going with Oracle Forms/Reports was: will Oracle support it in the future? My instant reaction was of course Oracle will... we have bigger problems if Forms/Reports go away, considering that the EBS Suite uses Forms/Reports technology as well.

So, the final question: while migrating new apps, would it be better to use Oracle Forms/Reports or JDeveloper? My 2 cents:
1) If the application is database centric without too much business logic involved, and if your team has a PL/SQL background as opposed to a Java background, then 10g Forms/Reports may be a better bet.
2) If the team is pretty much a J2EE development team, then JDeveloper may be the route to go.

I am not too worried about Oracle's strategy to support Forms/Reports (I think they will). The above is just my 2 cents; any input from my blog readers will be greatly appreciated.
Categories: Development

Run Flashback commands only from Sql*Plus 10.1.x or newer

Mihajlo Tekic - Fri, 2006-12-15 23:24
I was getting ORA-08186: invalid timestamp specified each time I tried to run a flashback version query (FVQ).

Well... take a look at the following example.

First, I wanted to make sure that the format I used was the correct one.


1* select to_char(systimestamp,'DD-MON-RR HH.MI.SSXFF AM') from dual
SQL> /

TO_CHAR(SYSTIMESTAMP,'DD-MON-RR
-------------------------------
14-DEC-06 10.27.26.622829 AM


Now, when I tried to run the FVQ, I got "ORA-30052: invalid lower limit snapshot expression". That was an expected result, since my lower limit did not fall within the (SYSDATE-UNDO_RETENTION, SYSDATE] range (the UNDO_RETENTION parameter was set to 900).
But you will agree that Oracle successfully processed the timestamp values I used in this query.


SQL> ed
Wrote file afiedt.buf

1 select comm
2 from scott.emp
3 versions between timestamp
4 to_timestamp('14-DEC-06 09.45.00.000000 AM','DD-MON-RR HH.MI.SSXFF AM') and
5 to_timestamp('14-DEC-06 10.00.00.000000 AM','DD-MON-RR HH.MI.SSXFF AM')
6 where
7* empno = 7369
SQL> /
from scott.emp
*
ERROR at line 2:
ORA-30052: invalid lower limit snapshot expression

So I modified the lower limit to fit in the right range, and I got ORA-08186: invalid timestamp specified. !?!?!?

SQL> ed
Wrote file afiedt.buf

1 select comm
2 from scott.emp
3 versions between timestamp
4 to_timestamp('14-DEC-06 10.20.00.000000 AM','DD-MON-RR HH.MI.SSXFF AM') and
5 to_timestamp('14-DEC-06 11.00.00.000000 AM','DD-MON-RR HH.MI.SSXFF AM')
6 where
7* empno = 7369
SQL> /
from scott.emp
*
ERROR at line 2:
ORA-08186: invalid timestamp specified

After some time spent trying to resolve this issue (I didn't dare open an SR about it :-)), I remembered I had had similar problems a while ago while trying to test some flashback features (flashback table to before drop) on SQL*Plus 9.2.x… and I was using SQL*Plus 9.2 again.

I tried the same example on Sql*Plus 10.1.0.2

… and everything worked well.

Are you exporting and importing compressed partitions?

Dong Jiang - Fri, 2006-12-15 11:45

Your luck just ran out.
The Oracle imp utility uses conventional inserts exclusively, and partitions will lose compression after import because the inserts are not direct-path. The shiny 10g Data Pump has the same limitation. You will have to recompress the partitions afterwards, like this:
1. Insert (append) into an empty compressed table from the uncompressed partition.
2. Exchange the partition with that table.
3. Truncate the table.
Then repeat for every uncompressed partition.
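
The three steps above can be sketched as follows. Table and partition names (SALES, P2006Q4, SALES_EXCHANGE) are placeholders; indexes and constraints on the real table will need extra care during the exchange.

```sql
-- 1. Build an empty compressed table with the same shape as SALES
CREATE TABLE sales_exchange COMPRESS AS
  SELECT * FROM sales WHERE 1 = 0;

-- 2. Direct-path (append) insert re-applies compression
INSERT /*+ APPEND */ INTO sales_exchange
  SELECT * FROM sales PARTITION (p2006q4);
COMMIT;

-- 3. Swap the compressed copy in, then truncate and reuse the table
ALTER TABLE sales EXCHANGE PARTITION p2006q4
  WITH TABLE sales_exchange;
TRUNCATE TABLE sales_exchange;
```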


Announcement: Oracle Data Mining Consultants Partnership Program

Marcos Campos - Fri, 2006-12-15 10:03
We're starting a program to work with qualified data mining consultants. You and your colleagues are invited to participate in a 2 day hands-on session designed for data mining consultants here in the Oracle Burlington, MA office, February 7 & 8, 2007. It is also possible to attend remotely via webinar. Space is limited, so please RSVP asap. The Oracle Data Mining Consultants Partnership Program has …
Categories: BI & Warehousing

online reorg options if you are on Standard Edition

Vidya Bala - Wed, 2006-12-13 14:16
A couple of the online reorg options may not be available if you're on Standard Edition. Quest Central (Space Management) Live Reorg options should help you get past this problem.

Quest Central Space Management has the following reorg options:
a) Standard Reorg (offline mode; no DML activity allowed on the table being reorganized)
b) Live Reorg (online mode), which has 2 options:
1) TLOCK switch (can be used if you're on Standard Edition - a copy table is created and a trigger based approach keeps the copy in sync; when you are ready, the TLOCK switch can be performed - during the switch you will need some downtime, but nothing significant)
2) Online Switch (this option needs Oracle Partitioning enabled - you will pretty much need Enterprise Edition for it)
Categories: Development

From JavaPolis to Nordwijk aan Zee

Clemens Utschig - Wed, 2006-12-13 11:41
After two intense and really cool days at JavaPolis - and our slot on standards based applications, which caught a lot of interest - it's now off to an internal architects camp meeting in the Netherlands to discuss future product strategy.

For the organizers of JavaPolis: it was a great time there, and two thumbs up for a VERY well organized conference. For me it was something new, speaking in a cinema, and definitely in front of the biggest screen ever (15 * 8 meters).



More pictures can be found here

So I'd recommend putting JavaPolis 2007 into your calendar for next year and joining me and 1000 other Java geeks :D

Siebel Analytics Install and Siebel Analytics Administration:

Vidya Bala - Tue, 2006-12-12 17:17
In this post I will review the Siebel Analytics Administration Tool. But before we begin you will need to install the following.
Log on to http://edelivery.oracle.com/ - finding the Siebel Analytics download can be tricky on edelivery:


Product – pick Oracle Business Intelligence and the appropriate Platform.

Pick the Business Intelligence media pack.
If you plan to evaluate on Windows, download
B30721-01 Part 1 of 2 and B30721-01 Part 2 of 2.
These 2 parts will give you the Siebel Analytics Server, Siebel Analytics Web, Siebel Analytics Scheduler, Siebel Analytics Java Host, and Siebel Analytics Cluster.

Now if you're looking to download third-party products like Informatica, Actuate, etc., you will have to download B27745-01 Parts 1 through 4. This post will focus on the Siebel Analytics Server.

Once you have downloaded B30721-01 Part 1 of 2 and Part 2 of 2, extract the zip files, find the installer, and walk through the install. The install is pretty intuitive. If you run into any issues with it, post comments on the blog and I can help you out.

Once you have completed the install, if you are on Windows you will see 5 services created: Siebel Analytics Server, Siebel Analytics Web, Siebel Analytics Scheduler, Siebel Analytics Java Host, and Siebel Analytics Cluster – these are the key components of the Siebel Analytics Server.
Make sure the Siebel Analytics Server and Siebel Analytics Web services are started.

A couple of Siebel Analytics shortcuts will be installed on your desktop.

The first step in using Siebel Analytics to generate reports is to define the metadata layer. The metadata layer is defined using the Siebel Analytics Administration Tool. Click on the Siebel Analytics Administration Tool shortcut.

You can see the Administration Tool has 3 layers: the Physical Layer, the Business Model and Mapping Layer, and the Presentation Layer.

Step1: Physical Layer:

Define your data sources in this layer. Create an ODBC data source for the source database. For the purpose of this test we will be connecting to the perfstat schema on a DEV1 instance.
In the Physical Layer, right-click and create your database connection.

Once you create your database folder, import your database objects.

Select the schema you want to import and click Import (choose FK constraints if you want to import the objects with FK constraints). Once you have imported the schema you should see it in the dev1 folder.



The perfstat schema has been chosen just for illustration purposes; ideally your source database would be a warehouse or a mart. In the absence of one, an OLTP system can also be your source (note that if an OLTP database is your source, it will call for more work in the Business Mappings Layer). However, in this post I will attempt to design this schema for reporting purposes.

Assuming that the crux of your reporting is reporting on SQL statement statistics:
SQL_STMTS_STATS will be our fact table in the Business Mappings Layer.
Some dimensions around it will be:
Instance Details
Execution Plan Cost Details
This is like a 2-dimensional star schema.

Now let us see how the following objects exist in our physical layer and model it in our business layer.
The 4 objects we will be looking at is
STATS$DATABASE_INSTANCE
STATS$SNAPSHOT
STATS$SQL_PLAN_USAGE
STATS$SQL_SUMMARY
Select the above 4 objects in your Physical Layer, right-click, and view the physical diagram of the selected objects.
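Once modeled as a star, a report over these objects boils down to joining the SQL summary fact to its dimensions. A rough sketch of such a query follows; the column names are based on common STATSPACK definitions but may differ in your version, so treat them as illustrative:

```sql
-- Illustrative star-style join over the four STATSPACK objects.
SELECT di.instance_name,
       di.db_name,
       ss.sql_text,           -- fact attributes: per-snapshot SQL statistics
       ss.fetches,
       ss.executions,
       ss.disk_reads,
       pu.cost                -- plan cost from the plan-usage side
FROM   stats$sql_summary       ss
JOIN   stats$snapshot          sn ON sn.snap_id    = ss.snap_id
JOIN   stats$database_instance di ON di.dbid       = sn.dbid
JOIN   stats$sql_plan_usage    pu ON pu.hash_value = ss.hash_value;
```

This is exactly the shape of query the Business Layer modeling below will let the BI Server generate for you.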



Now create a new business model folder and drag and drop the 4 objects to the business model layer.



Once you have dropped the objects in the Business Layer you can define the relationships between the 4 objects (select the objects, right-click, and define the relationships in the Business Diagram area) - this is where a traditional normalized OLTP schema in the Physical Layer lends itself to a star or snowflake schema in the Business Layer.


Now let us also look into what attributes we actually need for the Presentation layer and what dimensions we need.


The Business Layer is where I start modeling and mapping objects with the business model in mind. For instance, if STATS$DATABASE_INSTANCE is a good candidate for a dimension, then right-click on the object in the Business Model Layer and choose Create Dimension.

Once I have modeled my Business Layer the way I want it, I can drag and drop objects into the Presentation Layer.


So we started with the perfstat schema, and this is what we came up with in the Presentation Layer:


Instance Details
- Instance Name
- Database Name
Sql Stats Details
- Sql Statement
- Fetches
- Executions
- Loads
- Parse Calls
- Disk Reads
Sql Plan Cost
- Hash Value
- Cost

All the underlying relationships and hierarchies are masked at the Presentation Layer. All you see there are the key presentation elements that a business user really cares about.

The next post will cover how the Presentation Layer can be used to build reports using the Siebel Analytics Web Console (typically by the power users).

For questions, please feel free to post them in the comments section.
Make sure to save your work in the Siebel Analytics Administration Tool - a repository consistency check is done when you save your work, and checking in changes will also trigger a repository consistency check.

Categories: Development

expdp of 9i database from 10g Oracle_Home

Vidya Bala - Tue, 2006-12-12 11:47
For some reason I thought I would be able to export a 9i database using the 10g expdp utility - obviously not supported, as mentioned below.

Compatibility Matrix for Export & Import Between Different Oracle Versions
Doc ID: Note:132904.1

That leaves me with the option of using the traditional "exp", which can be very slow on a 100G+ database; I'm not sure whether Cross-Platform Migration is an option with Standard Edition.
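If traditional exp is the only route, direct path mode usually helps with throughput. A sketch of the invocation follows - the connect string, file names, and buffer size are placeholders, so check the 9i Utilities guide for the options that apply to your setup:

```
exp system/password@dev1 full=y direct=y recordlength=65535 \
    file=full_db.dmp log=full_db.log
```

Direct path bypasses the SQL evaluation layer on read, which is where most of the speedup over a conventional path export comes from.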
Categories: Development

Siebel Analytics

Vidya Bala - Mon, 2006-12-11 10:41
BI Suite Enterprise Edition Getting Started:

I have been spending the last few days setting up BI Reports using BI Suite Enterprise Edition as a proof of concept for Business users to evaluate the Product.

I have liked the product so far:
(i) very user friendly
(ii) once the metadata layer is defined, business users are shielded from underlying tables and relationships they don't need to know about
(iii) a lot of sleek display features in the product
(iv) the skill level required is not extreme - if you have worked in the database and BI world, it should be fairly easy to learn how to use the product
(v) I will post a step-by-step evaluation of the product once I have it installed on my system.

Details on the Product:

Key feature of the Product (mainly Siebel Analytics/Answers):

(i) Has a BI Server, a BI Web Console, and a BI Admin client tool.
(ii) The BI Server is not integrated with the 10g Application Server (it runs separately and is not a container in your Application Server, the way Discoverer was).
(iii) I believe there are claims that with the 11g App Server the OEM Console will be able to manage the BI Server as well.
(iv) When a request is sent to the web server, a logical query is sent to the BI Server. The BI Server then checks whether the data is in the cache; if not, a physical SQL is sent to the database.
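To make point (iv) concrete, here is roughly what the two query stages look like. Both statements are hypothetical - the actual logical SQL depends on your Presentation Layer names, and the physical rewrite depends on your mappings:

```sql
-- Logical SQL, as Answers might send it
-- (the presentation-layer names here are made up)
SELECT "Instance Details"."Instance Name",
       "Sql Stats Details"."Executions"
FROM   "Perfstat Reporting";

-- Physical SQL the BI Server might generate on a cache miss
-- (also illustrative, not the exact rewrite)
SELECT di.instance_name, ss.executions
FROM   stats$database_instance di,
       stats$sql_summary       ss
WHERE  ss.dbid = di.dbid;
```

The business user only ever sees the logical names; the table joins live entirely in the metadata.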
(v) All metadata information is stored in flat files as opposed to a repository, so it should be easy to move across environments; the metadata flat files also support multiuser capability.
(vi) Security services are available with your BI Server (VPD security).
(vii) The BI Admin Tool has pretty much 3 layers: (a) the Physical Layer, where data sources are defined (ODBC is used to define them); (b) the Business Model and Mappings Layer, where you build your business mappings; and (c) the Presentation Layer, where you define how your data needs to be presented.
(viii) The web console has: (a) Answers – used by your business users to create reports from the data items available in your Presentation Layer; (b) Dashboards – where reports built with Answers can be published; (c) Admin – to manage user accounts, the analytics catalog, dashboard permissions, etc.; (d) Delivers – can schedule jobs such as web cache refreshes. Also, Oracle XML Publisher is part of the BI EE Suite.

I am excited about having all our reports moved to Siebel Analytics – I will have an end-to-end sneak preview of the product posted on my blog soon.
Categories: Development

Pages

Subscribe to Oracle FAQ aggregator