Feed aggregator

A day of conferences with the Swiss Oracle User Group

Yann Neuhaus - Sun, 2019-11-17 10:00
Introduction

I’m not usually that excited about all these events around Oracle technologies (and beyond), but they are always a good place to learn new things and, maybe most important, to discover new ways of thinking. On that point, I was not disappointed.

Franck Pachot: serverless and distributed database

Franck talked about scaling out, which means avoiding monoliths. Most database servers today are monoliths of this kind. He advises us to think microservices. That’s not so easy for the database component, but it could surely simplify the management of different modules across different developer teams. Achieving scale-out also means getting rid of the old cluster technologies (think about RAC) and adopting “shared nothing” instead: no storage sharing, no network sharing, etc.
It also means the need for database replication, and scaling the writes as well: that point is more complicated. Sharding is a key point for scaling out (put the associated data where the users reside).

I discovered the CAP theorem, a very interesting result that shows there is actually no ultimate solution. You need to choose your priority: Consistency and Availability, Availability and Partition tolerance, or Consistency and Partition tolerance. Just remember to keep your database infrastructure adapted to your needs: a Google-like infrastructure is probably nice, but do you really need the same?

Kamran Aghayer: Transition from DBA to data engineer

Times are changing. I’ve known that for several years, but now it’s obvious: as a traditional DBA, I will soon be deprecated. Old-school DBA jobs will be replaced by a lot of new jobs: data architect, data engineer, data analyst, data scientist, machine learning engineer, AI engineer, …

Kamran focused on the Hadoop ecosystem, and especially on Spark, which he used when he needed to archive data from Exadata to Hadoop (he also explained how Hadoop manages data through the HDFS filesystem and datanodes, sort of like ASM). He used a dedicated connector, a sort of wrapper based on external tables. This is actually also what’s inside the Big Data Appliance from Oracle. This task was out of the scope of a traditional DBA, as a good knowledge of the data was needed. So, the traditional DBA is dead.

Stefan Oehrli – PDB isolation and security

Since Oracle announced the availability of 3 free PDBs with each container database, interest in Multitenant has increased.

We had an overview of the top 10 security risks, all about privileges: privilege abuse, unauthorized privilege elevation, platform vulnerabilities, SQL injection, etc. If you’re already in the cloud with PaaS or DBaaS, the risks are the same.

We had a presentation of several measures for risk mitigation (a small example follows the list):
– PATH_PREFIX: some kind of chroot for the PDB
– PDB_OS_CREDENTIAL (still buggy, but…): concerns credentials and dbms_scheduler
– lockdown profiles: a tool for restricting database features like queuing, partitioning, Java OS access, or altering the database. Restrictions work with inclusion or exclusion.
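
To make this more concrete, here is a minimal sketch (all names are invented, and the exact clauses should be checked in the documentation for your release) of a lockdown profile plus a PDB created with a PATH_PREFIX:

-- a lockdown profile that disables a feature bundle and a statement
CREATE LOCKDOWN PROFILE app_lockdown;
ALTER LOCKDOWN PROFILE app_lockdown DISABLE FEATURE = ('NETWORK_ACCESS');
ALTER LOCKDOWN PROFILE app_lockdown DISABLE STATEMENT = ('ALTER SYSTEM');
-- activate it, e.g. for the current PDB
ALTER SYSTEM SET pdb_lockdown = app_lockdown;

-- the chroot-like restriction: file paths of the PDB must live under PATH_PREFIX
CREATE PLUGGABLE DATABASE app_pdb ADMIN USER pdb_admin IDENTIFIED BY change_me  -- placeholder password
  PATH_PREFIX = '/u02/oradata/APP_PDB/';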

Paolo Kreth and Thomas Bauman: The role of the DBA in the era of Cloud, Multicloud and Autonomous Database

We had already heard today that the classic DBA will soon be dead, and now comes the second bullet. The fact is that Oracle has worked hard to improve autonomous features over the last 20 years, and as it was presented, you realize that it’s clearly true. Who cares about extent management now?

But there is still hope. The DBA of tomorrow is starting today. As the DBA role actually sits between the infrastructure team and the data scientists, there is a way to architect your career. Keep a foot in the technical stuff, but become a champion in data analysis and machine learning.

Or focus on development with opensource and cloud. The DBA job can shift, don’t miss this opportunity.

Nikitas Xenakis – MAA with 19c and GoldenGate 19c: a real-world case study

Hey! Finally, the DBA is not dead yet! Some projects still need technical skills and complex architecture. The presented project was driven by downtime costs: for some kinds of businesses, a serious downtime can kill the company. The customer concerned by this project cannot afford more than 1 hour of global downtime.

We had an introduction to MAA (standing for Maximum Availability Architecture – see the Oracle documentation).

You first need to estimate:
– the RPO: how much data you can afford to lose
– the RTO: how quickly you’ll be up again
– the performance you expect after the downtime: because it matters

The presented infrastructure was composed of RHEL, RAC with Multitenant (1 PDB only), Active Data Guard and GoldenGate. The middleware was not from Oracle but was configured to work with Transparent Application Failover.

For sure, you still need several old-school DBAs to set up and manage this kind of infrastructure.

Luiza Nowak: Error when presenting your data

You can refer to the blog from Elisa USAI for more information.

For me, it was very surprising to discover how a presentation can be boring, confusing, or miss the point just because of inappropriate slides. Be precise, be captivating, and make good use of graphics instead of sentences if you want your presentation to have the expected impact.

Julian Frey: Database cloning in a multitenant environment

Back to pure DBA stuff. A quick reminder of why we need to clone, and what we need to clone (data, metadata, partial data, refreshed data only, anonymised data, etc.). And now, always considering GDPR compliance!

Cloning before 12c was mainly done with these well-known tools: RMAN duplicate, Data Pump, GoldenGate, dblinks, storage cloning, and the embedded clone.pl script (I hadn’t heard about this one before).

Starting from 12c, and only if you’re using multitenant, new convenient tools are available for cloning: PDB snapshot copy, snapshot carousel, refreshable copy, …

I discovered that you can duplicate a PDB without actually putting the source PDB in read-only mode: you just need to put your source PDB in begin backup mode, copy the files, generate the metadata file and create the database with resetlogs. Nice feature.
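
Put into SQL, the recipe looks roughly like this. This is a loose sketch from my notes, not a tested procedure: names and paths are invented, DBMS_PDB.DESCRIBE is my assumption for the metadata step, and the exact resetlogs detail from the talk is not shown.

-- switch into the source PDB and enable backup mode
ALTER SESSION SET CONTAINER = src_pdb;
ALTER DATABASE BEGIN BACKUP;
-- copy the PDB datafiles at OS level while the PDB stays open
host cp /u02/oradata/SRC_PDB/*.dbf /u02/oradata/CLONE_PDB/
ALTER DATABASE END BACKUP;
-- generate the XML manifest describing the source PDB
exec dbms_pdb.describe(pdb_descr_file => '/tmp/src_pdb.xml', pdb_name => 'SRC_PDB');
-- back in the root, plug the copied files in as a new PDB
ALTER SESSION SET CONTAINER = CDB$ROOT;
CREATE PLUGGABLE DATABASE clone_pdb AS CLONE USING '/tmp/src_pdb.xml'
  SOURCE_FILE_NAME_CONVERT = ('/u02/oradata/SRC_PDB/', '/u02/oradata/CLONE_PDB/')
  NOCOPY;
ALTER PLUGGABLE DATABASE clone_pdb OPEN;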

You have to know that cloning a PDB is native with multitenant, a database always being a clone of something (at the very least, an empty PDB is created from PDB$SEED).

Note that snapshot copy of a PDB is limited to certain kinds of filesystems, the best known being NFS and ACFS. If you decide to go for multitenant without actually having the option, don’t forget to limit the maximum number of PDBs in your CDB settings: it’s actually a parameter, MAX_PDBS. Another interesting feature is the possibility to create a PDB from a source PDB without the data (but the tablespaces and tables are created).
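
As an illustration, a minimal sketch assuming a source PDB named SRC_PDB:

ALTER SYSTEM SET max_pdbs = 3;                                   -- cap the number of user PDBs in the CDB
CREATE PLUGGABLE DATABASE pdb_nodata FROM src_pdb NO DATA;       -- tablespaces and tables, but no rows
CREATE PLUGGABLE DATABASE pdb_snap   FROM src_pdb SNAPSHOT COPY; -- needs a supporting filesystem (NFS, ACFS, ...)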

Finally, and against all odds, Data Pump is still a great tool for most cases. You’d better keep considering this tool too.

Conclusion

This was a great event from great organizers, and if pure Oracle DBA is probably not a job that makes younger people dream, jobs dealing with data are not going to disappear in the near future.

The post A day of conferences with the Swiss Oracle User Group appeared first on Blog dbi services.

Library Cache Stats

Jonathan Lewis - Sun, 2019-11-17 03:36

In response to a comment that one of my notes references a call to a package “snap_libcache”, I’ve posted this version of the SQL that can be run by SYS to create the package, with a public synonym, and privileges granted to public to execute it. The package doesn’t report the DLM (RAC) related activity, and is suitable only for 11g onwards (older versions require a massive decode of an index value to convert indx numbers into names).

rem
rem Script: snap_11_libcache.sql
rem Author: Jonathan Lewis
rem Dated: March 2001 (updated for 11g)
rem Purpose: Package to get snapshot start and delta of library cache stats
rem
rem Notes
rem Lots of changes needed by 11.2.x.x where x$kglst holds
rem two types - TYPE (107) and NAMESPACE (84) - but no
rem longer needs a complex decode.
rem
rem Has to be run by SYS to create the package
rem
rem Usage:
rem set serveroutput on size 1000000 format wrapped
rem set linesize 144
rem set trimspool on
rem execute snap_libcache.start_snap
rem -- do something
rem execute snap_libcache.end_snap
rem

create or replace package snap_libcache as
procedure start_snap;
procedure end_snap;
end;
/

create or replace package body snap_libcache as

cursor c1 is
select
indx,
kglsttyp lib_type,
kglstdsc name,
kglstget gets,
kglstght get_hits,
kglstpin pins,
kglstpht pin_hits,
kglstrld reloads,
kglstinv invalidations,
kglstlrq dlm_lock_requests,
kglstprq dlm_pin_requests,
-- kglstprl dlm_pin_releases,
-- kglstirq dlm_invalidation_requests,
kglstmiv dlm_invalidations
from x$kglst
;

type w_type1 is table of c1%rowtype index by binary_integer;
w_list1 w_type1;
w_empty_list w_type1;

m_start_time date;
m_start_flag char(1);
m_end_time date;

procedure start_snap is
begin

m_start_time := sysdate;
m_start_flag := 'U';
w_list1 := w_empty_list;

for r in c1 loop
w_list1(r.indx).gets := r.gets;
w_list1(r.indx).get_hits := r.get_hits;
w_list1(r.indx).pins := r.pins;
w_list1(r.indx).pin_hits := r.pin_hits;
w_list1(r.indx).reloads := r.reloads;
w_list1(r.indx).invalidations := r.invalidations;
end loop;

end start_snap;

procedure end_snap is
begin

m_end_time := sysdate;

dbms_output.put_line('---------------------------------');
dbms_output.put_line('Library Cache - ' ||
to_char(m_end_time,'dd-Mon hh24:mi:ss')
);

if m_start_flag = 'U' then
dbms_output.put_line('Interval:- ' ||
trunc(86400 * (m_end_time - m_start_time)) ||
' seconds'
);
else
dbms_output.put_line('Since Startup:- ' ||
to_char(m_start_time,'dd-Mon hh24:mi:ss')
);
end if;

dbms_output.put_line('---------------------------------');

dbms_output.put_line(
rpad('Type',10) ||
rpad('Description',41) ||
lpad('Gets',12) ||
lpad('Hits',12) ||
lpad('Ratio',6) ||
lpad('Pins',12) ||
lpad('Hits',12) ||
lpad('Ratio',6) ||
lpad('Invalidations',14) ||
lpad('Reloads',10)
);

dbms_output.put_line(
rpad('-----',10) ||
rpad('-----',41) ||
lpad('----',12) ||
lpad('----',12) ||
lpad('-----',6) ||
lpad('----',12) ||
lpad('----',12) ||
lpad('-----',6) ||
lpad('-------------',14) ||
lpad('------',10)
);

for r in c1 loop
if (not w_list1.exists(r.indx)) then
w_list1(r.indx).gets := 0;
w_list1(r.indx).get_hits := 0;
w_list1(r.indx).pins := 0;
w_list1(r.indx).pin_hits := 0;
w_list1(r.indx).invalidations := 0;
w_list1(r.indx).reloads := 0;
end if;

if (
(w_list1(r.indx).gets != r.gets)
or (w_list1(r.indx).get_hits != r.get_hits)
or (w_list1(r.indx).pins != r.pins)
or (w_list1(r.indx).pin_hits != r.pin_hits)
or (w_list1(r.indx).invalidations != r.invalidations)
or (w_list1(r.indx).reloads != r.reloads)
) then

dbms_output.put(rpad(substr(r.lib_type,1,10),10));
dbms_output.put(rpad(substr(r.name,1,41),41));
dbms_output.put(to_char(
r.gets - w_list1(r.indx).gets,
'999,999,990')
);
dbms_output.put(to_char(
r.get_hits - w_list1(r.indx).get_hits,
'999,999,990'));
dbms_output.put(to_char(
(r.get_hits - w_list1(r.indx).get_hits)/
greatest(
r.gets - w_list1(r.indx).gets,
1
),
'999.0'));
dbms_output.put(to_char(
r.pins - w_list1(r.indx).pins,
'999,999,990')
);
dbms_output.put(to_char(
r.pin_hits - w_list1(r.indx).pin_hits,
'999,999,990'));
dbms_output.put(to_char(
(r.pin_hits - w_list1(r.indx).pin_hits)/
greatest(
r.pins - w_list1(r.indx).pins,
1
),
'999.0'));
dbms_output.put(to_char(
r.invalidations - w_list1(r.indx).invalidations,
'9,999,999,990')
);
dbms_output.put(to_char(
r.reloads - w_list1(r.indx).reloads,
'9,999,990')
);
dbms_output.new_line;
end if;

end loop;

end end_snap;

begin
select
startup_time, 'S'
into
m_start_time, m_start_flag
from
v$instance;

end snap_libcache;
/

drop public synonym snap_libcache;
create public synonym snap_libcache for snap_libcache;
grant execute on snap_libcache to public;

You’ll note that there are two classes of data, “namespace” and “type”. The dynamic view v$librarycache reports only the namespace rows.

PostgreSQL 12 : Setting Up Streaming Replication

Yann Neuhaus - Sat, 2019-11-16 11:29

PostgreSQL 12 was released a few months ago. When setting up replication, there is no longer a recovery.conf file in the PGDATA directory: all the parameters that used to go into recovery.conf are now set in the postgresql.conf file. And in the cluster data directory of the standby server, there should be a file named standby.signal to trigger the standby mode.
In this blog I am just building a streaming replication between 2 servers to show these changes. The configuration we are using is:
Primary server dbi-pg-essentials : 192.168.56.101
Standby server dbi-pg-essentials-2 : 192.168.56.102

The primary server is up and running on dbi-pg-essentials server.

postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12] pg12

********* dbi services Ltd. *********
                  STATUS  : OPEN
         ARCHIVE_COMMAND  : test ! -f /u99/pgdata/12/archived_wal/%f && cp %p /u99/pgdata/12/archived_wal/%f
            ARCHIVE_MODE  : on
    EFFECTIVE_CACHE_SIZE  : 4096MB
                   FSYNC  : on
          SHARED_BUFFERS  : 128MB
      SYNCHRONOUS_COMMIT  : on
                WORK_MEM  : 4MB
              IS_STANDBY  : false
*************************************

postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12]

Step 1 : Prepare the user for the replication on the primary server
For streaming replication, we need a user to read the WAL stream. We can do it with a superuser but it is not required; we will create a user with the REPLICATION and LOGIN privileges. Unlike the SUPERUSER privilege, the REPLICATION privilege gives very high permissions but does not allow the user to modify any data.
Here we will create a user named repliuser

postgres=# create user repliuser with password 'postgres'  replication ;
CREATE ROLE
postgres=#

Step 2 : Prepare the authentication on the primary server
The user used for the replication should be allowed to connect for replication. We then need to adjust the pg_hba.conf file on the two servers.

postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12] grep repliuser pg_hba.conf
host    replication     repliuser        192.168.56.101/32        md5
host    replication     repliuser        192.168.56.102/32        md5
postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12]

Step 3 : Create a replication slot on the primary server
Replication slots provide an automated way to ensure that the master does not remove WAL segments until they have been received by all standbys, and that the master does not remove rows which could cause a recovery conflict even when the standby is disconnected.

psql (12.1 dbi services build)
Type "help" for help.

postgres=# SELECT * FROM pg_create_physical_replication_slot('pg_slot_1');
 slot_name | lsn
-----------+-----
 pg_slot_1 |
(1 row)

postgres=#

Step 4 : Do a backup of the primary database and restore it on the standby
From the standby server launch the following command

postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] pg_basebackup -h 192.168.56.101 -D /u02/pgdata/12/PG1 --wal-method=fetch -U repliuser

Step 5 : Set the primary connection info for the streaming on the standby side
The host name and port number of the primary, the connection user name, and the password are specified in primary_conninfo. Here there is a little change: as there is no longer a recovery.conf file, primary_conninfo should now be specified in postgresql.conf.

postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] grep primary postgresql.conf
primary_conninfo = 'host=192.168.56.101 port=5432 user=repliuser password=postgres'
primary_slot_name = 'pg_slot_1'                 # replication slot on sending server

Step 6 : Create the standby.signal file on standby server
In the cluster data directory of the standby, create a file standby.signal

postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] pwd
/u02/pgdata/12/PG1
postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] touch standby.signal

Step 7 : Then start the standby cluster

postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] pg_ctl start

If everything is fine, you should find the following in the log file:

2019-11-16 17:41:21.552 CET [1590] LOG:  database system is ready to accept read only connections
2019-11-16 17:41:21.612 CET [1596] LOG:  started streaming WAL from primary at 0/5000000 on timeline 1
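
Another quick check (not part of the original output, just a standard catalog view) is to query pg_stat_replication on the primary; a row in state 'streaming' for 192.168.56.102 should show up:

postgres=# select client_addr, state, sync_state from pg_stat_replication;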

As confirmed by the dbi DMK tool, the master is now streaming to the standby server:

********* dbi services Ltd. *********
                  STATUS  : OPEN
         ARCHIVE_COMMAND  : test ! -f /u99/pgdata/12/archived_wal/%f && cp %p /u99/pgdata/12/archived_wal/%f
            ARCHIVE_MODE  : on
    EFFECTIVE_CACHE_SIZE  : 4096MB
                   FSYNC  : on
          SHARED_BUFFERS  : 128MB
      SYNCHRONOUS_COMMIT  : on
                WORK_MEM  : 4MB
              IS_STANDBY  : false
               IS_MASTER  : YES, streaming to 192.168.56.102/32
*************************************

postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12]

The post PostgreSQL 12 : Setting Up Streaming Replication appeared first on Blog dbi services.

Elapsed time of Oracle Parallel Executions are not shown correctly in AWR

Yann Neuhaus - Fri, 2019-11-15 10:08

As the elapsed time (the time a task takes from start to end, often called wall-clock time) per execution of parallel queries is not shown correctly in AWR reports, I set up a testcase to find a way to get an elapsed time closer to reality.

REMARK: To use AWR (Automatic Workload Repository) and ASH (Active Session History) as described in this Blog you need to have the Oracle Diagnostics Pack licensed.

I created a table t5 with 213K blocks:

SQL> select blocks from tabs where table_name='T5';
 
    BLOCKS
----------
    213064

In addition I enabled Linux I/O throttling at 300 IOs/sec through a cgroup on my device sdb, to ensure the parallel statements take a couple of seconds to run:

[root@19c ~]# CONFIG_BLK_CGROUP=y
[root@19c ~]# CONFIG_BLK_DEV_THROTTLING=y
[root@19c ~]# echo "8:16 300" > /sys/fs/cgroup/blkio/blkio.throttle.read_iops_device

After that I ran my test:

SQL> select sysdate from dual;
 
SYSDATE
-------------------
14.11.2019 14:03:51
 
SQL> exec dbms_workload_repository.create_snapshot;
 
PL/SQL procedure successfully completed.
 
SQL> set timing on
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.63
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.62
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.84
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.73
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.63
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.74
SQL> exec dbms_workload_repository.create_snapshot;
 
PL/SQL procedure successfully completed.

Note the elapsed time of about 5.7 seconds per execution.

The AWR-report shows the following in the “SQL ordered by Elapsed time”-section:

        Elapsed                  Elapsed Time
        Time (s)    Executions  per Exec (s)  %Total   %CPU    %IO    SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
            67.3              6         11.22   94.5   37.4   61.3 04r3647p2g7qu
Module: SQL*Plus
select /*+ parallel(t5 2) full(t5) */ count(*) from t5

I.e. 11.22 seconds per execution on average. However, as we can see above, the average execution time is around 5.7 seconds. The reason for the wrong elapsed time per execution is that the elapsed times of the parallel slaves are added to the total, even though the processes worked in parallel. Thanks to the column SQL_EXEC_ID (very useful) we can get the sum of the elapsed times per execution from ASH:

SQL> break on report
SQL> compute avg of secs_db_time on report
SQL> select sql_exec_id, qc_session_id, qc_session_serial#, count(*) secs_db_time from v$active_session_history
  2  where sql_id='04r3647p2g7qu' and sample_time>to_date('14.11.2019 14:03:51','dd.mm.yyyy hh24:mi:ss')
  3  group by sql_exec_id, qc_session_id, qc_session_serial#
  4  order by 1;
 
SQL_EXEC_ID QC_SESSION_ID QC_SESSION_SERIAL# SECS_DB_TIME
----------- ------------- ------------------ ------------
   16777216	      237                  16626           12
   16777217	      237                  16626           12
   16777218	      237                  16626           10
   16777219	      237                  16626           12
   16777220	      237                  16626           10
   16777221	      237                  16626           10
                                             ------------
avg                                                    11
 
6 rows selected.

I.e. the 11 secs correspond to the 11.22 secs in the AWR-report.

How do we get the real elapsed time for the parallel queries? If the queries take a couple of seconds we can get the approximate time from ASH as well by subtracting the sample_time at the beginning from the sample_time at the end of each execution (SQL_EXEC_ID):

SQL> select sql_exec_id, extract (second from (max(sample_time)-min(sample_time))) secs_elapsed 
  2  from v$active_session_history
  3  where sql_id='04r3647p2g7qu'
  4  and sample_time>to_date('14.11.2019 14:03:51','dd.mm.yyyy hh24:mi:ss')
  5  group by sql_exec_id
  6  order by 1;
 
SQL_EXEC_ID SECS_ELAPSED
----------- ------------
   16777216         5.12
   16777217        5.104
   16777218         4.16
   16777219        5.118
   16777220        4.104
   16777221        4.171
 
6 rows selected.

I.e. those numbers reflect the real execution time much better.

REMARK: If the queries take minutes (or hours) to run then you have to extract the minutes (and hours) as well, of course. See also the example at the end of this blog.
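
For instance, a variant of the ASH query above that folds hours, minutes and seconds into a single number of seconds (my own generalisation, same assumptions as above):

select sql_exec_id,
       extract(hour   from (max(sample_time)-min(sample_time)))*3600
     + extract(minute from (max(sample_time)-min(sample_time)))*60
     + extract(second from (max(sample_time)-min(sample_time))) secs_elapsed
from v$active_session_history
where sql_id='04r3647p2g7qu'
group by sql_exec_id
order by 1;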

The info in V$SQL is also not very helpful:

SQL> set lines 200 pages 999
SQL> select child_number, plan_hash_value, elapsed_time/1000000 elapsed_secs, 
  2  executions, px_servers_executions, last_active_time 
  3  from v$sql where sql_id='04r3647p2g7qu';
 
CHILD_NUMBER PLAN_HASH_VALUE ELAPSED_SECS EXECUTIONS PX_SERVERS_EXECUTIONS LAST_ACTIVE_TIME
------------ --------------- ------------ ---------- --------------------- -------------------
           0      2747857355    67.346941          6                    12 14.11.2019 14:05:17

I.e. for the QC we have the column executions > 0 and for the parallel slaves we have px_servers_executions > 0. You may actually get different child cursors for the Query Coordinator and the slaves.

So in theory we should be able to do something like:

SQL> select child_number, (sum(elapsed_time)/sum(executions))/1000000 elapsed_time_per_exec_secs 
  2  from v$sql where sql_id='04r3647p2g7qu' group by child_number;
 
CHILD_NUMBER ELAPSED_TIME_PER_EXEC_SECS
------------ --------------------------
           0                 11.2244902

Here we do see the number from the AWR again.

So in future, be careful when checking the elapsed time per execution of statements which ran with parallel slaves: the number will be too high in AWR or V$SQL, and further analysis is necessary to get the real elapsed time per execution.

REMARK: As the numbers in AWR do come from e.g. dba_hist_sqlstat, the following query provides “wrong” output for parallel executions as well:

SQL> column begin_interval_time format a32
SQL> column end_interval_time format a32
SQL> select begin_interval_time, end_interval_time, ELAPSED_TIME_DELTA/1000000 elapsed_time_secs, 
  2  (ELAPSED_TIME_DELTA/EXECUTIONS_DELTA)/1000000 elapsed_per_exec_secs
  3  from dba_hist_snapshot snap, dba_hist_sqlstat sql 
  4  where snap.snap_id=sql.snap_id and sql_id='04r3647p2g7qu' 
  5  and snap.BEGIN_INTERVAL_TIME > to_date('14.11.2019 14:03:51','dd.mm.yyyy hh24:mi:ss');
 
BEGIN_INTERVAL_TIME              END_INTERVAL_TIME                ELAPSED_TIME_SECS ELAPSED_PER_EXEC_SECS
-------------------------------- -------------------------------- ----------------- ---------------------
14-NOV-19 02.04.00.176 PM        14-NOV-19 02.05.25.327 PM                67.346941            11.2244902

To take another example I did run a query from Jonathan Lewis from
https://jonathanlewis.wordpress.com/category/oracle/parallel-execution:

SQL> @jonathan
 
19348 rows selected.
 
Elapsed: 00:06:42.11

I.e. 402.11 seconds

AWR shows 500.79 seconds:

        Elapsed                  Elapsed Time
        Time (s)    Executions  per Exec (s)  %Total   %CPU    %IO    SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
           500.8              1        500.79   97.9   59.6   38.6 44v4ws3nzbnsd
Module: SQL*Plus
select /*+ parallel(t1 2) parallel(t2 2)
 leading(t1 t2) use_hash(t2) swa
p_join_inputs(t2) pq_distribute(t2 hash hash) ca
rdinality(t1,50000) */ t1.owner, t1.name, t1.typ

Let’s check ASH with the query I used above (this time including minutes):

select sql_exec_id, extract (minute from (max(sample_time)-min(sample_time))) minutes_elapsed,
extract (second from (max(sample_time)-min(sample_time))) secs_elapsed 
from v$active_session_history
where sql_id='44v4ws3nzbnsd'
group by sql_exec_id
order by 1;
 
SQL_EXEC_ID MINUTES_ELAPSED SECS_ELAPSED
----------- --------------- ------------
   16777216	              6       40.717

I.e. 06:40.72 which is close to the real elapsed time of 06:42.11

The post Elapsed time of Oracle Parallel Executions are not shown correctly in AWR appeared first on Blog dbi services.

Iconic South African Retailer Boosts Agility with Oracle

Oracle Press Releases - Thu, 2019-11-14 08:00
Press Release
Iconic South African Retailer Boosts Agility with Oracle
Retail powerhouse Cape Union Mart International goes to the cloud to accelerate growth

REDWOOD SHORES, Calif. and CAPE TOWN, South Africa—Nov 14, 2019

Outdoor and Fashion retailer and manufacturer, Cape Union Mart International Pty Ltd, Inc. has selected Oracle to modernize its retail operations. With the Oracle Retail Cloud, the company plans to fuel growth across all sales channels with better inventory visibility and more sophisticated merchandise assortments that keep shoppers coming back for more.

“This is a complex project, touching virtually every part of our business. The Oracle team has partnered with us from start to finish; building our trust and giving us an insight into what we can expect in the implementation of the transformational project – we look forward to working with them and rebuilding our retail IT landscape into a world class environment, taking Cape Union Mart to the next level,” said Grant De Waal-Dubla, Group IT Executive, Cape Union Mart.

Cape Union Mart strives to deliver what their customers need with the right product in the right store at the right time. Until now, the brand has managed its retail assortments with a talented team and a well-defined process in Excel spreadsheets. As Cape Union Mart continued to grow, they needed a better way to manage their operations. With Oracle Retail, the brand can fully embrace automated, systemized workflows driven by dashboards and end-to-end reporting with a common user interface. This will lead to more seamless fulfillment and accurate demand forecasts.

“By choosing Oracle, Cape Union Mart can focus on business objectives and results, not technology. As a cloud provider, we take great pride in building appropriate real-time integration across the Oracle portfolio so our customers can get the information and results they needed quickly – whether that’s moving existing inventory or anticipating next seasons fashion trends and ensuring they are available for customers,” said Mike Webster, senior vice president and general manager, Oracle Retail.

Cape Union Mart International Pty Ltd will implement several solutions in the Oracle Retail modern platform including Oracle Retail Merchandising Cloud Service, Oracle Retail Allocation Cloud Service, Oracle Retail Pricing Cloud Services, Oracle Retail Invoice Matching Cloud Service, Oracle Retail Integration Cloud Services, Oracle Retail Merchandise Financial Planning Cloud Services, Oracle Retail Assortment and Item Planning Cloud Service, Oracle Retail Science Platform Cloud Services, Oracle Retail Demand Forecasting Cloud Service, Oracle Retail Store Inventory Operations Cloud Service, Oracle Middleware Cloud Services, Oracle Warehouse Management Cloud and Oracle Financials Cloud. Cape Union Mart has partnered with Oracle Retail Consulting for the implementation.

Contact Info
Kaitlin Ambrogio
Oracle PR
+1.781.434.9555
kaitlin.ambrogio@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kaitlin Ambrogio

  • +1.781.434.9555

Oracle Cloud Applications Achieves Department of Defense Impact Level 4 Provisional Authorization

Oracle Press Releases - Thu, 2019-11-14 07:00
Press Release
Oracle Cloud Applications Achieves Department of Defense Impact Level 4 Provisional Authorization

Redwood Shores, Calif.—Nov 14, 2019

Oracle today announced that Oracle Cloud Applications has achieved Impact Level 4 (IL4) Provisional Authorization from the Defense Information Systems Agency (DISA) and the U.S. Department of Defense (DoD). With IL4, Oracle can now offer its software-as-a-service (SaaS) cloud suite to additional government agencies within the DoD community. Since the authorization was granted, the DoD has selected Oracle Human Capital Management (HCM) Cloud to help transform its HR operations in support of 900,000 civilian employees.

All organizations need comprehensive and adaptable technology to stay ahead of changing business and technology demands. For federal government agencies in particular, it’s even more critical to have a reliable, highly secure solution to navigate time-sensitive workflows and make strategic mission decisions. To meet these demands, Oracle Cloud Applications enables customers to benefit from best-in-class functionality, robust security, high-end scalability, mission-critical performance, and strong integration capabilities.

“At Oracle, our focus is centered on our customers’ needs. For U.S. Federal and Department of Defense customers, they need best in class, agile, and secure software to run their operations – and we can deliver that,” said Mark Johnson, SVP, Oracle Public Sector. “With built-in support for Impact Level 4, the DoD community can now take advantage of Oracle Cloud Applications to break down silos, quickly and easily embrace the latest innovations, and improve user engagement, collaboration, and performance.”

“The Department of Defense awarded a contract to Oracle HCM Cloud to support its enterprise human resource portfolio. The award modernizes its existing civilian personnel business process functions to enable improved streamlined approaches in support of the workforce. The DoD's Defense Manpower Data Center is leading the implementation of the HCM Cloud, which replaces numerous legacy systems and is targeted for full deployment in mid 2020,” according to the DMDC Director, Michael Sorrento.

Oracle has been a long-standing strategic technology partner of the US government, including the Central Intelligence Agency (CIA), the first customer to use Oracle’s flagship database software 35 years ago. Today, more than 500 government organizations take advantage of Oracle’s industry-leading technologies and superior performance.

Contact Info
Celina Bertallee
Oracle
559-283-2425
celina.bertallee@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Celina Bertallee

  • 559-283-2425

Excel Average Function – A Step By Step Tutorial

VitalSoftTech - Wed, 2019-11-13 10:33

Calculating an average when you only have a few entries in your data is one thing, but having to do the same with hundreds of data entries is a whole other story. Even using a calculator to find the average of this many numbers can be highly time-consuming and, to be honest, quite frustrating. After all, […]

The post Excel Average Function – A Step By Step Tutorial appeared first on VitalSoftTech.

Categories: DBA Blogs

The World Bee Project Works to Sustain Buzz with Oracle Cloud and AI

Oracle Press Releases - Wed, 2019-11-13 08:00
Blog
The World Bee Project Works to Sustain Buzz with Oracle Cloud and AI

By Guest Author, Oracle—Nov 13, 2019

The declining bee population is not just a problem for honey lovers; it’s a threat to the global food supply.

Oracle announced a partnership with The World Bee Project CIC in 2018, offering the use of its cloud storage and AI analytics tools to support the organization’s goals and innovations such as its BeeMark honey certification.

The World Bee Project is the first private organization to launch a global honeybee monitoring initiative to inform and implement actions to improve pollinator habitats, create more sustainable ecosystems, and improve food security, nutrition, and livelihoods by establishing a globally coordinated monitoring program for honeybees and eventually for key pollinator groups.

The World Bee Project Hive Network remotely collects data from varying environments through interconnected hives equipped with commercially available IoT sensors. The sensors combine colony-acoustics monitoring with other parameters such as brood temperature, humidity, hive weight, and apiary weather conditions. They also monitor and interpret the sound of a bee colony to assess colony behavior, strength, and health.

The World Bee Project Hive Network’s multiple local data sources provide a far richer view than any single data source to harness and enable global-scale computation to generate new insights into declining pollinator populations.

After the data has been validated by The World Bee Project database it can be fed into Oracle Cloud, which uses analytics tools including AI and data visualization to provide The World Bee Project with new insights into the relationship between bees and their varying environments. These new insights can be shared with smallholder farmers, scientists, researchers, governments, and other stakeholders.

“The partnership with Oracle will absolutely transform the scene as we can link AI with pollination and agricultural biodiversity,” said Sabiha Malik, founder and executive president of The World Bee Project CIC. “We have the potential to help transform the way the world grows food and to protect the livelihoods of hundreds of millions of smallholder farmers, but we depend entirely on stakeholders such as banks, agritech, insurance companies, and governments to sponsor and invest in our work so that we can begin to step toward fulfilling our mission.”

Oracle will be offering cloud computing technology and analytics tools to The World Bee Project to enable it to process data in collaboration with its science partner, the University of Reading, to enable science-based evidence to emerge.

Oracle is currently looking at funding models to support the expansion of The World Bee Project Hive Network to ensure a truly global view of the health of bee populations.

Watch The World Bee Project Video to Learn More


 

Read More Stories from Oracle Cloud

The World Bee Project is one of the thousands of innovative customers succeeding in the cloud. Read about others in Stories from Oracle Cloud: Business Successes

Oracle Apex Social Sign in

Kubilay Çilkara - Wed, 2019-11-13 03:27
In this post I want to show you how I used the Oracle Apex Social Sign in feature for my Oracle Apex app. Try it by visiting my web app beachmap.info.




Oracle Apex Social Sign in gives you the ability to use OAuth2 to authenticate and sign in users to your Oracle Apex apps using social media accounts like Google, Facebook and others.

Google and Facebook are the prominent authentication methods currently available; others will probably follow. Social sign in is easy to use and you don't need to code: all you have to do is obtain project credentials from, say, Google, pass them to the Oracle Apex framework, and put the sign-in button on the page that requires authentication, and the flow will kick in. I would say it is at most a 3-step operation. Step-by-step instructions are available in the blog posts below.


Further reading:




Categories: DBA Blogs

nVision Bug in PeopleTools 8.55/8.56 Impacts Performance

David Kurtz - Tue, 2019-11-12 13:12
I have recently come across an interesting bug in nVision that has a significant performance impact on nVision reports in particular and can impact the database as a whole.
Problem nVision SQL
This is an example of the problematic SQL generated by nVision. The problem is that all of the SQL looks like this: there is never any group by clause, nor any grouping columns in the select clause in front of the SUM().
SELECT SUM(A.POSTED_BASE_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L2, PSTREESELECT10 L1
WHERE A.LEDGER='ACTUAL' AND A.FISCAL_YEAR=2018 AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 8
AND L2.SELECTOR_NUM=159077 AND A.ACCOUNT=L2.RANGE_FROM_10
AND (A.BUSINESS_UNIT='10000')
AND L1.SELECTOR_NUM=159075 AND A.DEPTID=L1.RANGE_FROM_10
AND A.CURRENCY_CD='GBP' AND A.STATISTICS_CODE=' '
Each query only returns a single row, that only populates a single cell in the report, and therefore a different SQL statement is generated and executed for every cell in the report.  Therefore, more statements are parsed and executed, and more scans of the ledger indexes and look-ups of the ledger table and performed.  This consumes more CPU, more logical I/O.
Normal nVision SQL
This is how I would expect normal nVision SQL to look. This example, although obfuscated, came from a real customer system. Note how the query is grouped by TREE_NODE_NUM from two of the tree selector tables, so this one query now populates a block of cells.
SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT05 L2, PSTREESELECT10 L3
WHERE A.LEDGER='S_UKMGT'
AND A.FISCAL_YEAR=2018
AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12
AND (A.DEPTID BETWEEN 'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149'
OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID BETWEEN 'B9165' AND 'B9999'
OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999'
OR A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DEPTID BETWEEN 'H0000' AND 'H9999'
OR A.DEPTID='B9150' OR A.DEPTID=' ')
AND L2.SELECTOR_NUM=10228
AND A.BUSINESS_UNIT=L2.RANGE_FROM_05
AND L3.SELECTOR_NUM=10231
AND A.ACCOUNT=L3.RANGE_FROM_10
AND A.CHARTFIELD1='0012345'
AND A.CURRENCY_CD='GBP'
GROUP BY L2.TREE_NODE_NUM,L3.TREE_NODE_NUM
The Bug
This Oracle note details an nVision bug:
"UPTO SET2A-C Fixes - Details-only nPlosion not happening for Single Chart-field nPlosion Criteria.
And also encountered a performance issue when enabled details-only nPlosion for most of the row criteria in the same layout
Issue was introduced on build 8.55.19.
Condition: When most of the row filter criteria enabled Details-only nPlosion. This is solved in 8.55.22 & 8.56.07.
UPTO SET3 Fixes - Performance issue due to the SET2A-C fixes has solved but encountered new one. Performance issue when first chart-field is same for most of the row criteria in the same layout.
Issue was introduced on builds 8.55.22 & 8.56.07.
Condition: When most of the filter criteria’s first chart-field is same. The issue is solved in 8.55.25 & 8.56.10."
In summary
  • Bug introduced in PeopleTools 8.55.19, fully resolved in 8.55.25.
  • Bug introduced in PeopleTools 8.56.07, fully resolved in 8.56.10.

Basic Replication -- 11 : Indexes on a Materialized View

Hemant K Chitale - Tue, 2019-11-12 08:46
A Materialized View is actually also a physical Table (by the same name) that is created and maintained to store the rows that the MV query is supposed to present.

Since it is also a Table, you can build custom Indexes on it.

Here, my Source Table has an Index on OBJECT_ID:

SQL> create table source_table_1
2 as select object_id, owner, object_name
3 from dba_objects
4 where object_id is not null
5 /

Table created.

SQL> alter table source_table_1
2 add constraint source_table_1_pk
3 primary key (object_id)
4 /

Table altered.

SQL> create materialized view log on source_table_1;

Materialized view log created.

SQL>


I then build a Materialized View with an additional Index on it:

SQL> create materialized view mv_1
2 refresh fast on demand
3 as select object_id as obj_id, owner as obj_owner, object_name as obj_name
4 from source_table_1
5 /

Materialized view created.

SQL> create index mv_1_ndx_on_owner
2 on mv_1 (obj_owner)
3 /

Index created.

SQL>


Let's see if this Index is usable.

SQL> exec  dbms_stats.gather_table_stats('','MV_1');

PL/SQL procedure successfully completed.

SQL> explain plan for
2 select obj_owner, count(*)
3 from mv_1
4 where obj_owner like 'H%'
5 group by obj_owner
6 /

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2523122927

------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 10 | 15 (0)| 00:00:01 |
| 1 | SORT GROUP BY NOSORT| | 2 | 10 | 15 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | MV_1_NDX_ON_OWNER | 5943 | 29715 | 15 (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------

2 - access("OBJ_OWNER" LIKE 'H%')
filter("OBJ_OWNER" LIKE 'H%')



Note how this Materialized View has a column called "OBJ_OWNER"  (while the Source Table column is called "OWNER") and the Index ("MV_1_NDX_ON_OWNER") on this column is used.


You would also have noted that you can run DBMS_STATS.GATHER_TABLE_STATS on a Materialized View and its Indexes.

However, it is NOT a good idea to define your own Unique Indexes (including Primary Key) on a Materialized View.  During the course of a Refresh, the MV may not be consistent and the Unique constraint may be violated.   See Oracle Support Document # 67424.1



Categories: DBA Blogs

Oracle Introduces Cloud Native Modern Monetization

Oracle Press Releases - Tue, 2019-11-12 07:00
Press Release
Oracle Introduces Cloud Native Modern Monetization
Cloud native deployment option gives market leaders the agility to embrace 5G, IoT and future digital business models

Redwood Shores, Calif.—Nov 12, 2019

Digital service providers are transforming their monetization systems to prepare for the upcoming demands of 5G and future digital services. Oracle Communications’ new cloud native deployment option for Billing and Revenue Management (BRM) addresses these demands by combining the features and extensibility of a proven, convergent charging system with the efficiency of cloud and DevOps agility.

Oracle Communications’ cloud native BRM deployment option provides a modern monetization solution to capitalize on the opportunities presented by today’s mobile, fixed and cable digital services. It supports any service, industry or partner-enabled business model and provides a foundation for 5G network slicing and edge monetization.

“As the telecommunications industry prepares itself to take advantage of 5G, architectural agility will be essential to monetize next-generation services quickly and efficiently,” added John Abraham, principal analyst, Analysys Mason. “With its cloud native compliant, microservices-based architecture framework, the latest version of Oracle’s Billing and Revenue Management solution is well positioned to accelerate CSPs ability to support emerging 5G-enabled use cases.“

Cloud native BRM enables internal IT teams to incorporate DevOps practices to more quickly design, test and deploy new services. Organizations can optimize their operations by seamlessly managing business growth with efficient scaling and simplified updates, and by taking advantage of deployment in any public or private cloud infrastructure environment. BRM further increases IT agility when deployed on Oracle’s next generation Cloud Infrastructure, which features autonomous capabilities, adaptive intelligence and machine learning cyber security.

“Service providers and enterprises are looking for agile solutions to quickly monetize 5G and IoT services,” said Jason Rutherford, senior vice president and general manager, Oracle Communications. “Cloud native BRM deployed on Oracle Cloud Infrastructure allows our customers to operate more efficiently, react quickly to competition and to pioneer new price plans and business models that capitalize on the digital revolution.”

Find out more about Oracle Communications Billing and Revenue Management, with modern monetization capabilities for 5G and the connected digital world. 

To learn more about Oracle Communications industry solutions, visit: Oracle Communications, LinkedIn, or join the conversation at Twitter @OracleComms.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
About Oracle Communications

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world from network evolution to digital business to customer experience. www.oracle.com/communications

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Spain’s New York Burger Delivers Sizzling Service with Oracle

Oracle Press Releases - Tue, 2019-11-12 07:00
Press Release
Spain’s New York Burger Delivers Sizzling Service with Oracle
Restaurant sees 50 percent decrease in customer wait times with Oracle MICROS

Redwood Shores, Calif.—Nov 12, 2019

New York Burger set out to shake up the local food scene by bringing American style burgers and barbeque dishes to Madrid. Today, the fast growing chain is doing exactly that. To keep up with the pace of expansion while keeping customers happy, New York Burger has added Oracle MICROS Simphony Point of Sale (POS) System to its technology menu to seamlessly connect servers and the kitchen. With real-time order sharing, cooks can immediately start an order, reducing the time it takes orders to arrive to hungry diners. Since deploying the Oracle Cloud solution, the chain has realized a 50 percent decrease in customer wait-time across its five restaurants.

“As the business grew, we found our existing solution was not up to the challenge, and inefficiencies meant our customers were kept waiting,” said Pablo Colmenares, founder, New York Burger. “Oracle has definitely helped us to streamline our operations. It is simple and fast to use, and utilizing the product helped us become a smarter business. Oracle has a great global reputation, there’s a reason why the biggest brands in the world trust Oracle. Every strong tree needs strong roots and Oracle is our roots.”

Along with improving service efficiencies, Oracle MICROS Simphony POS System has helped New York Burger streamline menu management, gaining immediate data and reporting on their customers’ favorite menu items. These insights have been especially helpful as the restaurant chain has revamped its menu to better match customers’ preferences, removing items that were not popular and reducing food waste.  

New York Burger has also relied on Oracle’s solutions to further its green-friendly approach to operating its restaurants, enabling them to reduce waste and more closely align with its goal of being an environmentally-friendly restaurant. Oracle’s solution specifically helps management minimize excess costs, by reducing any unnecessary ingredient surplus.

“This innovative chain took a chance on bringing a new kind of cuisine to Madrid – to rave reviews. But today, the quality of the experience customers have at a restaurant must be in parallel with the quality of the food,” said Simon de Montfort Walker, senior vice president and general manager for Oracle Food and Beverage. “With Oracle, New York Burger is able to speed service and give servers more time with customers - delivering an unforgettable meal on both sides of the equation. And with better insights into tastes, trends and what’s selling well, New York Burger can reduce waste and conserve revenue while giving customers a menu that will keep them coming back again and again.”

Please view New York Burger’s video: New York Burger Delivers Joyful Food Sustainably with Oracle

Contact Info
Katie Barron
Oracle
+1-202-904-1138
katie.barron@oracle.com
Scott Porter
Oracle
+1-650-274-9519
scott.c.porter@oracle.com
About Oracle Food and Beverage

Oracle Food and Beverage, formerly MICROS, brings 40 years of experience in providing software and hardware solutions to restaurants, bars, pubs, clubs, coffee shops, cafes, stadiums, and theme parks. Thousands of operators, both large and small, around the world are using Oracle technology to deliver exceptional guest experiences, maximize sales, and reduce running costs.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1-202-904-1138

Scott Porter

  • +1-650-274-9519

Oracle Cloud's Competitive Advantage

Oracle Press Releases - Tue, 2019-11-12 06:00
Blog
Oracle Cloud's Competitive Advantage

By Steve Daheb, Senior Vice President, Oracle Cloud—Nov 12, 2019

I just got back from a press tour in New York, where the most common question I heard was: What's Oracle's competitive advantage in the cloud? I believe it's the completeness of our offering. Here's why.

The three main components of the cloud are the application, platform, and infrastructure layers. But most enterprises don't think about the cloud in terms of these silos. They take a single, holistic view of their problems and how to solve them.

Because we play in all layers of the cloud—and are continually adding integrations between the layers—we are in a unique position to help.

The application layer refers to software such as enterprise resource planning, human capital management, supply chain management, and customer engagement. These are core applications that enterprises rely on to run their businesses. Oracle is the established leader in this area, and we're continuing to innovate and differentiate by integrating artificial intelligence, blockchain, and other important new technologies into these applications.

These applications sit on the platform layer, which is powered by the Oracle Autonomous Database. We've taken our 40-plus years of expertise and combined it with advanced machine learning technologies to create the market's only self-driving and self-repairing database.

The platform layer is also where our analytics, security, and integration capabilities live. Analytics are helping businesses answer questions they couldn't answer before—and ask new questions they never would have thought of. And security, which used to be seen as an inhibitor to cloud adoption, is actually now a driver. Enterprises are saying, "Oracle's data center is going to be more secure than what we can manage on our own."

The application and platform layers rest upon Oracle's Generation 2 Cloud Infrastructure. Our compute, storage, and networking capabilities are purpose-built to run new types of workloads in a more secure and performant way than our competitors. We plan to open 20 Oracle Cloud data centers by the end of next year, which works out to one new data center every 23 days. And we're hiring 2,000 new people to support this infrastructure business.

Another differentiator for Oracle is our commitment to openness and interoperability in the cloud. As an example, we have a very strategic relationship with Microsoft. Joint customers can migrate to the cloud, build net new applications and even do things like run Microsoft analytics on top of an Oracle Database. We've also announced a collaboration with VMware to help customers run vSphere workloads in Oracle Cloud and to support Oracle software running on VMware.

We live in a hybrid and multicloud world. Oracle's comprehensive cloud offering, combined with our interoperability and multicloud support, helps customers achieve outcomes they simply couldn't with other vendors.

Watch Steve Daheb discuss the Oracle Cloud advantage on Cheddar and Yahoo Finance.

GRDF Reaches Four Million Smart Meter Milestone with Oracle

Oracle Press Releases - Tue, 2019-11-12 05:00
Press Release
GRDF Reaches Four Million Smart Meter Milestone with Oracle
Leading French gas distributor continues natural gas expansion with world’s largest smart meter roll-out, expected to reach 11 million households by 2023

EUROPEAN UTILITY WEEK, Paris—Nov 12, 2019

Leading French DSO GRDF has rolled out more than four million smart meters, powered by Oracle Meter Data Management (MDM). This milestone is part of GRDF’s larger smart meter initiative that is on track to reach 11 million households by 2023. With this program, GRDF can further realize its vision of improving energy management and enhancing customer satisfaction. GRDF serves 90 percent of France’s gas market.

“The move to smart meters and implementation of new digitized functionalities are critical to delivering a natural gas network that fosters energy transition for our territories,” said Vincent PERTUIS, GRDF Director for Smart Gas Metering Program. “With Oracle MDM, GRDF will be able to use data to continue to reimagine how we serve customers, accelerate decarbonization, and increase the flexibility and reliability of our network.” 

Using Oracle Utilities Meter Data Management (MDM), GRDF is modernizing its natural gas transmission network to make it an effective tool for the energy transition. The result will be a fully digitized and connected network that will deliver benefits to customers and the environment by integrating renewable gas, enhancing safety, providing data to better manage gas supply, and linking with other networks to enhance flexibility and storage capacity. 

The smart meter roll-out will provide GRDF with massive amounts of interval meter data that will be essential to running a more efficient, cleaner network. Oracle Utilities MDM helps energy providers not only capture the data but securely optimise its use and management to support core operations and fuel innovation.

“GRDF’s smart grid modernization project is the largest in the world and hitting the four million smart meter marker represents tremendous progress,” said Francois Vazille, vice president of JAPAC & EMEA, Oracle Utilities. “Oracle MDM is a critical component to GRDF’s digital transformation journey and in opening up new opportunities for GRDF to serve its customers with clean, reliable energy.”

Contact Info
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kristin Reeves

  • +1.925.787.6744

ā·pěks 10 Years Later

Joel Kallman - Tue, 2019-11-12 03:49


Exactly 10 years ago today, I wrote a succinct blog post with the intent of clarifying how to properly pronounce and abbreviate Oracle APEX.  I decided to use the phonetic spelling, ā'pěks, to avoid all ambiguity with the pronunciation.  Was I successful?

  • I still encounter many people who spell this Apex (and not the correct APEX)
  • I routinely hear people pronounce this as ah·pěks or ap·ěks (and not the correct ā'pěks)

Obviously, we still have a ways to go.  However, this hasn't been a complete loss.  With many thanks to the global APEX community, this simple phonetic spelling has resulted in:
...and more.  And did I say stickers?

What I especially love is that all of this was created by the Oracle APEX community.  Instead of Oracle providing merchandise and branding for Oracle APEX, the community embraced this and ran with it themselves.  This has been wonderfully organic and authentic, and completely community-driven.

Going forward, if you come across someone who misspells or mispronounces Oracle APEX, please feel free to direct them to this blog post.  It is:

Oracle APEX

and it's pronounced ā·pěks.

Joining two tables with time range

Tom Kyte - Mon, 2019-11-11 11:49
Dear AskTom-Team! I wonder whether it is possible to join two tables that have time ranges. E.g. a table 'firmname' holds the name of a firm with two columns from_year and to_year that define the years the name is valid. Table 'address' holds the a...
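
The question is cut off in the feed, but the classic pattern for joining two validity ranges is an overlap predicate; a sketch with assumed join key and column names:

select f.name, a.address
from   firmname f
join   address  a
  on   a.firm_id   = f.firm_id        -- assumed join key
 and   a.from_year <= f.to_year       -- the two year ranges overlap
 and   f.from_year <= a.to_year;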
Categories: DBA Blogs

TDE Encryption of local Oracle databases. KEK hosted on cloud service?

Tom Kyte - Mon, 2019-11-11 11:49
Hi, We want to encrypt some on-premise Oracle databases. If possible, we would like to avoid to use a physical HSM or to contract with a third party HSM cloud provider. Is this possible to store the KEK's in GCP or Azure, and to interface our lo...
Categories: DBA Blogs

Migration from oracle 6i and 11g to Oracle APEX

Tom Kyte - Mon, 2019-11-11 11:49
Dear all, I have Oracle Forms that are built on 6i and 11g and I want to migrate them to Oracle APEX. Is it possible to create an application based on the migrated Oracle 6i form? If so, can you provide me with a document or a video showing the pro...
Categories: DBA Blogs

One way Encryption where no one can decrypt

Tom Kyte - Mon, 2019-11-11 11:49
Hi Tom, Kindly suggest some one-way encryption for the data in a table, where no one can decrypt it? Read a few articles about this and was not satisfied. Kindly help.
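
The answer is cut off in the feed, but the usual approach to "encryption that no one can decrypt" is a one-way cryptographic hash rather than encryption; for example (12c syntax, table and column names assumed):

select standard_hash(sensitive_col, 'SHA256') from my_table;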
Categories: DBA Blogs
