Feed aggregator

Restrict Application access to developers in same workspace

Tom Kyte - Thu, 2019-10-31 09:47
How to show a user only a certain set of applications in App Builder (i.e. the user should not be able to see all applications in App Builder)? In an Oracle APEX workspace, I need to create a new user (admin role) such that the new user can only see sele...
Categories: DBA Blogs

Find if event spanning several dates happened via SQL

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I have data like below: event_flag event_date 1 date1 1 date2 0 date3 1 date4 0 date5 0 date6 1 date7 1 date8 1 date9 ...
Categories: DBA Blogs

pragma autonomous_transaction within procedure before creating synonyms

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I have created a stored procedure with in oracle package which creates list of synonyms depending upon the change of dblink server name. I need to execute this procedure to create/replace synonym pointing to another dblink server. My quest...
Categories: DBA Blogs

Undo Tablespaces.

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I have been waiting to ask you this question. What is an Undo Tablespace in 9i? Is this similar to Rollback Segments? What are Non-Standard Block sizes, and why non-standard? Why am I not able to create a RS on a Locally Managed Automatically Si...
Categories: DBA Blogs

How to count no of records in table without count?

Tom Kyte - Thu, 2019-10-31 09:47
2) How to count no of records in table without count? --- Actually, this question was asked when I attended an interview at Dell. I don't know why they ask these types of questions, but I answered it in my own way. --->...
Categories: DBA Blogs

Use of Blockchain Helps Speed Global Shipping Transactions

Oracle Press Releases - Thu, 2019-10-31 06:00
Blog
Use of Blockchain Helps Speed Global Shipping Transactions

By Guest Author, Oracle—Oct 31, 2019

While ocean shipping has been steadily growing in recent years, multiple pressures are impacting the predictability of the far-flung supply chains and pressuring industry profits. Key to navigating these challenges for shippers, ocean carriers, terminal operators, and other parties involved with shipping ocean freight is better visibility and real-time access to information.

That’s a challenge given the sheer number of participants in a single shipment—and often leads to delays, disputes, and significant extra costs for shippers and carriers. But things are changing thanks to the help of emerging technology and the formation of a new blockchain consortium in this space pioneered by CargoSmart.

For the last year, CargoSmart and Oracle have been working together to develop a blockchain solution that aims to simplify the shipping documentation process and deliver a single source of truth for trusted, real-time sharing of information, thereby increasing trust and boosting efficiency.

More recently, CargoSmart announced a significant milestone in forming the Global Shipping Business Network (GSBN) blockchain consortium, comprising nine leading ocean carriers and terminal operators: CMA CGM, COSCO SHIPPING Lines, COSCO SHIPPING Ports, Hapag-Lloyd, Hutchison Ports, OOCL, Port of Qingdao, PSA International, and Shanghai International Port Group. This not-for-profit joint venture aims to accelerate the digital transformation of the shipping industry, with CargoSmart providing the resources underpinning it.

“The close cooperation between our R&D group and Oracle’s blockchain team has already helped to accelerate the development of some pilot applications. By leveraging Oracle’s enriched technical support and advice, CargoSmart has been able to achieve high levels of operational capability, reduce R&D time, and significantly improve the productivity of its blockchain application developers,” said Romney Wong, Chief Technology Officer of CargoSmart.

Watch the CargoSmart Video

Watch this video to learn more about how CargoSmart is leveraging Oracle Blockchain Platform to bring transparency and trust into the complex systems behind CargoSmart’s logistics and shipping management.

 


Read More Stories from Oracle Cloud

CargoSmart is one of the thousands of customers on their journey to the cloud. Read about others in Stories from Oracle Cloud: Business Successes

SLES15 SP1 – New features

Yann Neuhaus - Thu, 2019-10-31 05:20

With SLES15, SUSE introduced the Multimodal OS and the unified installer. That means you only get what you really need: your OS is flexible and you can easily add the features you need and remove them again. But this article isn’t meant as an explanation of the Multimodal OS; instead, it shows you some of the new features of SLES15 SP1.

SUSE supports the migration from SLES15 to SLES15 SP1 in online mode.
You can upgrade in two ways: YaST migration (GUI) or Zypper migration (command line).
Be sure that your system is registered at the SUSE Customer Center or has a local RMT server. Afterwards, just run “zypper migration”, type the number of the product you want to migrate to and accept the license terms. That’s it.
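
For illustration, a minimal sketch of that command sequence (the migration target you pick depends on what your system is registered for):

sles15:~ # SUSEConnect --status-text   # verify the registration first
sles15:~ # zypper migration            # choose the SLES15 SP1 target, accept the license
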
The best way to check whether the migration was successful:

sles15:~ # cat /etc/os-release | grep PRETTY_NAME
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP1"

So let’s have a look at the new features and improvements of SLES15 SP1.

Unified Installer

SUSE Manager Server and Proxy are now available as base products. Both can be installed using the unified installer.
Point of Service and SLE Real Time are also included in the unified installer now.

Transactional Update

In openSUSE Leap and SUSE CaaS, transactional update was already implemented; now it is also possible to run transactional updates with SLE. To use transactional update, the Transactional Server Module needs to be activated first (no additional key is needed). Afterwards, the transactional-update package and its dependencies can be installed.

sle15:~ #  SUSEConnect --product sle-module-transactional-server/15.1/x86_64
Registering system to SUSE Customer Center

Updating system details on https://scc.suse.com ...

Activating sle-module-transactional-server 15.1 x86_64 ...
-> Adding service to system ...
-> Installing release package ...

Successfully registered system
sle15:~ # zypper install transactional-update
Refreshing service 'Basesystem_Module_15_SP1_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_15_SP1_x86_64'.
Refreshing service 'Server_Applications_Module_15_SP1_x86_64'.
Refreshing service 'Transactional_Server_Module_15_SP1_x86_64'.
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following 6 NEW packages are going to be installed:
  attr bc openslp psmisc rsync transactional-update

6 new packages to install.
Overall download size: 686.6 KiB. Already cached: 0 B. After the operation, additional 1.3 MiB will be used.
Continue? [y/n/v/...? shows all options] (y): y

As you may know, SUSE uses Btrfs with snapper as the default for the file system. This builds the basis for the transactional updates. Transactional updates are applied into a new snapshot, so the running system is not touched; the updated snapshot is activated after the next reboot. So this is an update that is
– Atomic: either fully applied or not applied at all.
– Easily revertable: after a failed update, the return to the previous (running) system is easy.
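
As a rough sketch, a typical cycle with transactional-update looks like this (commands as shipped with the transactional-update package):

sle15:~ # transactional-update up        # apply updates into a new snapshot
sle15:~ # reboot                         # activate the updated snapshot
sle15:~ # transactional-update rollback  # if needed: revert to the previous snapshot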

Simplified btrfs layout

There is now only one single subvolume under /var instead of 10, for simplified and consistent snapshots. This only takes effect for fresh installations; upgraded systems still use the old layout.
Starting with SLES15 SP1 there is also the possibility to have each home directory as a single subvolume, but this is not the default.
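
To see which layout a system actually uses, you can simply list the subvolumes (a sketch; the output differs between fresh and upgraded installations):

sles15:~ # btrfs subvolume list / | grep var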

Secure encrypted virtualization (SEV)

Data encryption is an important topic in today’s IT environments. Data stored on disk is widely encrypted, but what about the data in RAM? AMD’s SEV makes it possible to protect Linux KVM virtual machines by encrypting the memory of each VM with a unique key. It can also generate a signature that attests to the correct encryption.
This increases system security considerably and protects VMs against memory scrape attacks from the hypervisor.
With SLES15 SP1, SUSE provides full support for this technology. For further information about SEV, click here.
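
Whether SEV is active on a KVM host can be checked via the kvm_amd module parameter (a sketch, assuming an AMD host with the kvm_amd module loaded):

sles15:~ # cat /sys/module/kvm_amd/parameters/sev   # 1 (or Y) means SEV is enabled
sles15:~ # dmesg | grep -i sev                      # kernel messages about SEV support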

Quarterly Updates

Starting with 15 SP1, SUSE offers quarterly updates of the installation and package media. They will be refreshed every quarter with all maintenance and security updates. So for the setup of new systems there is always a recent and up-to-date state.

Conclusion

This is not the full list of new features, only an overview. But nevertheless, especially the transactional update feature makes it worthwhile to upgrade to SLES15 SP1. And always think about the security improvements that come with every new release.

The post SLES15 SP1 – New features appeared first on Blog dbi services.

How to Generate SSH Key Pair for Oracle Cloud (Windows/Linux)

Online Apps DBA - Thu, 2019-10-31 04:19

How to Generate SSH Key Pair for Oracle Cloud (Windows/Linux) If you are creating a Linux or Database instance, then you will need an SSH key pair to access the instances. An SSH key pair comprises a Private key and a Public key. The Public Key will be used when you create the Instance & the Private […]
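
For reference, such a key pair is typically generated with ssh-keygen (a sketch; the file name oci_key is just an illustrative choice):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/oci_key
# private key: ~/.ssh/oci_key; public key (used when creating the instance): ~/.ssh/oci_key.pub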

The post How to Generate SSH Key Pair for Oracle Cloud (Windows/Linux) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Square root in excel – A Step-By-Step Tutorial

VitalSoftTech - Wed, 2019-10-30 10:30

Have you ever stopped to wonder what life would have been like when there wasn’t a calculator to perform the arithmetical operations for people? It surely sends a shiver down your spine to even imagine the horror of having to survive without a calculator. But in the present times, it wouldn’t be wrong to state […]

The post Square root in excel – A Step-By-Step Tutorial appeared first on VitalSoftTech.

Categories: DBA Blogs

Patroni Operations – Changing Parameters

Yann Neuhaus - Wed, 2019-10-30 10:17

Sooner or later all of us have to change a parameter on the database. But how is this put into practice when using a Patroni cluster? Of course there are some specifics you have to consider.
This post will give you a short introduction to this topic.

When you want to change a parameter on a Patroni cluster you have several possibilities:
– Dynamic configuration in the DCS. These changes are applied asynchronously to every node (the current content of the DCS can be listed as sketched below).
– Local configuration in patroni.yml. This will take precedence over the dynamic configuration.
– Cluster configuration using “alter system”.
– Environment configuration using local environment variables.
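
To inspect what is currently stored in the DCS before changing anything, a short sketch (using the cluster name from this setup):

postgres@patroni1:/home/postgres/ [PG1] patronictl show-config PG1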

Change PostgreSQL parameters using patronictl

1. Change parameters that do not need a restart

If you want to change a parameter (or more) for the whole cluster, you should use patronictl. If you want to change the initial configuration as well, you should also adjust patroni.yml.

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl edit-config PG1

All parameters already set are shown and can be changed like in any other file using vi commands:

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl edit-config PG1

loop_wait: 10
maximum_lag_on_failover: 1048576
postgresql:
  parameters:
    archive_command: /bin/true
    archive_mode: 'on'
    autovacuum_max_workers: '6'
    autovacuum_vacuum_scale_factor: '0.1'
    autovacuum_vacuum_threshold: '50'
    client_min_messages: WARNING
    effective_cache_size: 512MB
    hot_standby: 'on'
    hot_standby_feedback: 'on'
    listen_addresses: '*'
    log_autovacuum_min_duration: 60s
    log_checkpoints: 'on'
    log_connections: 'on'
    log_directory: pg_log
    log_disconnections: 'on'
    log_duration: 'on'
    log_filename: postgresql-%a.log
    log_line_prefix: '%m - %l - %p - %h - %u@%d - %x'
    log_lock_waits: 'on'
    log_min_duration_statement: 30s
    log_min_error_statement: NOTICE
    log_min_messages: WARNING
    log_rotation_age: '1440'
    log_statement: ddl
    log_temp_files: '0'
    log_timezone: Europe/Zurich
    log_truncate_on_rotation: 'on'
    logging_collector: 'on'
    maintenance_work_mem: 64MB
    max_replication_slots: 10
    max_wal_senders: '20'
    port: 5432
    shared_buffers: 128MB
    shared_preload_libraries: pg_stat_statements
    wal_compression: 'off'
    wal_keep_segments: 8
    wal_level: replica
    wal_log_hints: 'on'
    work_mem: 8MB
  use_pg_rewind: true
  use_slots: true
retry_timeout: 10
ttl: 30

Once saved, you get the following:

---
+++
@@ -2,7 +2,8 @@
 maximum_lag_on_failover: 1048576
 postgresql:
   parameters:
-    archive_command: /bin/true
+    archive_command: 'test ! -f /u99/pgdata/PG1/archived_wal/%f && cp %p /u99/pgdata/PG1/archived_wal/%f'
     archive_mode: 'on'
     autovacuum_max_workers: '6'
     autovacuum_vacuum_scale_factor: '0.1'

Apply these changes? [y/N]: y
Configuration changed

When connecting to the database you will see that the parameter is changed now. It is also changed on all the other nodes.

 postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] sq
psql (11.5)
Type "help" for help.

postgres=# show archive_command;
                                  archive_command
------------------------------------------------------------------------------------
 test ! -f /u99/pgdata/PG1/archived_wal/%f && cp %p /u99/pgdata/PG1/archived_wal/%f
(1 row)
2. Change parameters that need a restart

How can parameters that need a restart be changed, especially as we want minimal downtime for the cluster?
First of all, the parameter can be changed the same way as the parameters that do not need a restart, using patronictl edit-config. Once the parameter is changed, the status overview of the cluster gets a new column showing which nodes need a restart.

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |        *        |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+

Afterwards there are two possibilities.

2.1 Restart node by node

If you do not want to restart the whole cluster, you have the possibility to restart each node separately. Keep in mind that you have to restart the Leader node first, otherwise the change does not take effect. It is also possible to schedule the restart of a node.

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1 patroni1
When should the restart take place (e.g. 2019-10-08T15:33)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |        *        |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni1? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni1
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1 patroni2
When should the restart take place (e.g. 2019-10-08T15:34)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |                 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni2? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni2
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1 patroni3
When should the restart take place (e.g. 2019-10-08T15:34)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |                 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |                 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni3? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni3
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
2.2 Restart the whole cluster

In case you don’t want to restart node by node and you have the possibility of a downtime, it is also possible to restart the whole cluster (scheduled or immediately).

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1
When should the restart take place (e.g. 2019-10-08T15:37)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |        *        |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni1, patroni2, patroni3? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni1
Success: restart on member patroni2
Success: restart on member patroni3
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
Change PostgreSQL parameters using “alter system”

Of course you can change a parameter only on one node using “alter system”, too.

 postgres@patroni1:/home/postgres/ [PG1] sq
psql (11.5)
Type "help" for help.

postgres=# show archive_Command;
 archive_command
-----------------
 /bin/false
(1 row)

postgres=# alter system set archive_command='/bin/true';
ALTER SYSTEM

postgres=# select pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)

postgres=# show archive_command;
 archive_command
-----------------
 /bin/true
(1 row)

For sure, the parameter change is not automatically applied to the replicas; the parameter is only changed on that node. All the other nodes will keep the value from the DCS. So you can change a parameter using “patronictl edit-config” or with an “alter system” command on each node. But you also have to keep in mind the order in which the parameters are applied: the “alter system” change will persist over the “patronictl edit-config” change, because it is written to postgresql.auto.conf.
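
A short sketch for tracking down and removing such a local override (prompt and paths as in this setup):

postgres@patroni1:/home/postgres/ [PG1] grep archive_command $PGDATA/postgresql.auto.conf   # “alter system” writes here
postgres@patroni1:/home/postgres/ [PG1] psql -c "alter system reset archive_command;"       # drop the local override
postgres@patroni1:/home/postgres/ [PG1] psql -c "select pg_reload_conf();"                  # fall back to the DCS value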

Conclusion

So, as long as you consider that there are some specifics when changing parameters in a Patroni cluster, it is quite easy to change a parameter. There are some parameters that need the same value on all nodes, e.g. max_connections, max_worker_processes and wal_level. And there are as well some parameters controlled by Patroni, e.g. listen_addresses and port. For more details check the Patroni documentation. And last but not least: if you change the configuration with patronictl and one node still has a different configuration, look for a postgresql.auto.conf file in the PGDATA directory; there you may find the reason for the differing parameters on your nodes.
If you are interested in more “Patroni Operations” blogs, check also this one: Patroni operations: Switchover and Failover.

The post Patroni Operations – Changing Parameters appeared first on Blog dbi services.

PostgreSQL 13 will come with partitioning support for pgbench

Yann Neuhaus - Wed, 2019-10-30 08:32

A lot of people use pgbench to benchmark a PostgreSQL instance and pgbench is also heavily used by the PostgreSQL developers. While declarative partitioning was introduced in PostgreSQL 10, there was no support for that in pgbench, not even in the current version, which is PostgreSQL 12. With PostgreSQL 13, which is currently in development, this will change: pgbench will be able to create a partitioned pgbench_accounts table that you can then run your benchmark against.

Having a look at the parameters of pgbench in PostgreSQL 13, two new ones pop up:

postgres@centos8pg:/home/postgres/ [pgdev] pgbench --help
pgbench is a benchmarking tool for PostgreSQL.

Usage:
pgbench [OPTION]... [DBNAME]

Initialization options:
-i, --initialize         invokes initialization mode
-I, --init-steps=[dtgvpf]+ (default "dtgvp")
run selected initialization steps
-F, --fillfactor=NUM     set fill factor
-n, --no-vacuum          do not run VACUUM during initialization
-q, --quiet              quiet logging (one message each 5 seconds)
-s, --scale=NUM          scaling factor
--foreign-keys           create foreign key constraints between tables
--index-tablespace=TABLESPACE
create indexes in the specified tablespace
--partitions=NUM         partition pgbench_accounts in NUM parts (default: 0)
--partition-method=(range|hash)
partition pgbench_accounts with this method (default: range)
--tablespace=TABLESPACE  create tables in the specified tablespace
--unlogged-tables        create tables as unlogged tables

Options to select what to run:
-b, --builtin=NAME[@W]   add builtin script NAME weighted at W (default: 1)
(use "-b list" to list available scripts)
-f, --file=FILENAME[@W]  add script FILENAME weighted at W (default: 1)
-N, --skip-some-updates  skip updates of pgbench_tellers and pgbench_branches
(same as "-b simple-update")
-S, --select-only        perform SELECT-only transactions
(same as "-b select-only")

Benchmarking options:
-c, --client=NUM         number of concurrent database clients (default: 1)
-C, --connect            establish new connection for each transaction
-D, --define=VARNAME=VALUE
define variable for use by custom script
-j, --jobs=NUM           number of threads (default: 1)
-l, --log                write transaction times to log file
-L, --latency-limit=NUM  count transactions lasting more than NUM ms as late
-M, --protocol=simple|extended|prepared
protocol for submitting queries (default: simple)
-n, --no-vacuum          do not run VACUUM before tests
-P, --progress=NUM       show thread progress report every NUM seconds
-r, --report-latencies   report average latency per command
-R, --rate=NUM           target rate in transactions per second
-s, --scale=NUM          report this scale factor in output
-t, --transactions=NUM   number of transactions each client runs (default: 10)
-T, --time=NUM           duration of benchmark test in seconds
-v, --vacuum-all         vacuum all four standard tables before tests
--aggregate-interval=NUM aggregate data over NUM seconds
--log-prefix=PREFIX      prefix for transaction time log file
(default: "pgbench_log")
--progress-timestamp     use Unix epoch timestamps for progress
--random-seed=SEED       set random seed ("time", "rand", integer)
--sampling-rate=NUM      fraction of transactions to log (e.g., 0.01 for 1%)
--show-script=NAME       show builtin script code, then exit

Common options:
-d, --debug              print debugging output
-h, --host=HOSTNAME      database server host or socket directory
-p, --port=PORT          database server port number
-U, --username=USERNAME  connect as specified database user
-V, --version            output version information, then exit
-?, --help               show this help, then exit

Report bugs to .

That should give us partitions according to the number of partitions and the partitioning method we chose, so let’s populate a new database:

postgres@centos8pg:/home/postgres/ [pgdev] psql -c "create database pgbench" postgres
CREATE DATABASE
Time: 326.715 ms
postgres@centos8pg:/home/postgres/ [pgdev] pgbench -i -s 10 --partitions=10 --partition-method=range --foreign-keys pgbench
dropping old tables...
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
creating tables...
creating 10 partitions...
generating data...
100000 of 1000000 tuples (10%) done (elapsed 0.20 s, remaining 1.78 s)
200000 of 1000000 tuples (20%) done (elapsed 0.40 s, remaining 1.62 s)
300000 of 1000000 tuples (30%) done (elapsed 0.74 s, remaining 1.73 s)
400000 of 1000000 tuples (40%) done (elapsed 1.23 s, remaining 1.85 s)
500000 of 1000000 tuples (50%) done (elapsed 1.47 s, remaining 1.47 s)
600000 of 1000000 tuples (60%) done (elapsed 1.81 s, remaining 1.21 s)
700000 of 1000000 tuples (70%) done (elapsed 2.25 s, remaining 0.97 s)
800000 of 1000000 tuples (80%) done (elapsed 2.46 s, remaining 0.62 s)
900000 of 1000000 tuples (90%) done (elapsed 2.81 s, remaining 0.31 s)
1000000 of 1000000 tuples (100%) done (elapsed 3.16 s, remaining 0.00 s)
vacuuming...
creating primary keys...
creating foreign keys...
done in 5.78 s (drop tables 0.00 s, create tables 0.07 s, generate 3.29 s, vacuum 0.84 s, primary keys 0.94 s, foreign keys 0.65 s).

The pgbench_accounts table should now be partitioned by range:

postgres@centos8pg:/home/postgres/ [pgdev] psql -c "\d+ pgbench_accounts" pgbench
Partitioned table "public.pgbench_accounts"
Column  |     Type      | Collation | Nullable | Default | Storage  | Stats target | Description
----------+---------------+-----------+----------+---------+----------+--------------+-------------
aid      | integer       |           | not null |         | plain    |              |
bid      | integer       |           |          |         | plain    |              |
abalance | integer       |           |          |         | plain    |              |
filler   | character(84) |           |          |         | extended |              |
Partition key: RANGE (aid)
Indexes:
"pgbench_accounts_pkey" PRIMARY KEY, btree (aid)
Foreign-key constraints:
"pgbench_accounts_bid_fkey" FOREIGN KEY (bid) REFERENCES pgbench_branches(bid)
Referenced by:
TABLE "pgbench_history" CONSTRAINT "pgbench_history_aid_fkey" FOREIGN KEY (aid) REFERENCES pgbench_accounts(aid)
Partitions: pgbench_accounts_1 FOR VALUES FROM (MINVALUE) TO (100001),
pgbench_accounts_10 FOR VALUES FROM (900001) TO (MAXVALUE),
pgbench_accounts_2 FOR VALUES FROM (100001) TO (200001),
pgbench_accounts_3 FOR VALUES FROM (200001) TO (300001),
pgbench_accounts_4 FOR VALUES FROM (300001) TO (400001),
pgbench_accounts_5 FOR VALUES FROM (400001) TO (500001),
pgbench_accounts_6 FOR VALUES FROM (500001) TO (600001),
pgbench_accounts_7 FOR VALUES FROM (600001) TO (700001),
pgbench_accounts_8 FOR VALUES FROM (700001) TO (800001),
pgbench_accounts_9 FOR VALUES FROM (800001) TO (900001)

The same should work for hash partitioning:

postgres@centos8pg:/home/postgres/ [pgdev] pgbench -i -s 10 --partitions=10 --partition-method=hash --foreign-keys pgbench
dropping old tables...
creating tables...
creating 10 partitions...
generating data...
100000 of 1000000 tuples (10%) done (elapsed 0.19 s, remaining 1.69 s)
200000 of 1000000 tuples (20%) done (elapsed 0.43 s, remaining 1.71 s)
300000 of 1000000 tuples (30%) done (elapsed 0.67 s, remaining 1.55 s)
400000 of 1000000 tuples (40%) done (elapsed 1.03 s, remaining 1.54 s)
500000 of 1000000 tuples (50%) done (elapsed 1.22 s, remaining 1.22 s)
600000 of 1000000 tuples (60%) done (elapsed 1.59 s, remaining 1.06 s)
700000 of 1000000 tuples (70%) done (elapsed 1.80 s, remaining 0.77 s)
800000 of 1000000 tuples (80%) done (elapsed 2.16 s, remaining 0.54 s)
900000 of 1000000 tuples (90%) done (elapsed 2.36 s, remaining 0.26 s)
1000000 of 1000000 tuples (100%) done (elapsed 2.69 s, remaining 0.00 s)
vacuuming...
creating primary keys...
creating foreign keys...
done in 4.99 s (drop tables 0.10 s, create tables 0.08 s, generate 2.74 s, vacuum 0.84 s, primary keys 0.94 s, foreign keys 0.30 s).
postgres@centos8pg:/home/postgres/ [pgdev] psql -c "\d+ pgbench_accounts" pgbench
Partitioned table "public.pgbench_accounts"
Column  |     Type      | Collation | Nullable | Default | Storage  | Stats target | Description
----------+---------------+-----------+----------+---------+----------+--------------+-------------
aid      | integer       |           | not null |         | plain    |              |
bid      | integer       |           |          |         | plain    |              |
abalance | integer       |           |          |         | plain    |              |
filler   | character(84) |           |          |         | extended |              |
Partition key: HASH (aid)
Indexes:
"pgbench_accounts_pkey" PRIMARY KEY, btree (aid)
Foreign-key constraints:
"pgbench_accounts_bid_fkey" FOREIGN KEY (bid) REFERENCES pgbench_branches(bid)
Referenced by:
TABLE "pgbench_history" CONSTRAINT "pgbench_history_aid_fkey" FOREIGN KEY (aid) REFERENCES pgbench_accounts(aid)
Partitions: pgbench_accounts_1 FOR VALUES WITH (modulus 10, remainder 0),
pgbench_accounts_10 FOR VALUES WITH (modulus 10, remainder 9),
pgbench_accounts_2 FOR VALUES WITH (modulus 10, remainder 1),
pgbench_accounts_3 FOR VALUES WITH (modulus 10, remainder 2),
pgbench_accounts_4 FOR VALUES WITH (modulus 10, remainder 3),
pgbench_accounts_5 FOR VALUES WITH (modulus 10, remainder 4),
pgbench_accounts_6 FOR VALUES WITH (modulus 10, remainder 5),
pgbench_accounts_7 FOR VALUES WITH (modulus 10, remainder 6),
pgbench_accounts_8 FOR VALUES WITH (modulus 10, remainder 7),
pgbench_accounts_9 FOR VALUES WITH (modulus 10, remainder 8).

Looks fine. Now you can easily benchmark against a partitioned pgbench_accounts table.
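
For example, a short benchmark run against the partitioned tables could look like this (a sketch; the flags are the ones shown in the help output above, scale them to your environment):

postgres@centos8pg:/home/postgres/ [pgdev] pgbench -c 10 -j 2 -T 60 -P 10 pgbench   # 10 clients, 2 threads, 60 seconds, progress every 10s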

The post PostgreSQL 13 will come with partitioning support for pgbench appeared first on Blog dbi services.

Cloning Oracle Homes in 19c Part 2

Michael Dinh - Wed, 2019-10-30 08:10

You didn’t think there was going to be a part 2, did you?

$ORACLE_HOME/log is not included by creategoldimage, which makes perfect sense (as I was discussing on Twitter, you don’t want garbage in a gold image), but why is it being read at all?

For ol7-19-rac2, permission for $GRID_HOME/log has not been changed; hence, creategoldimage failed.

-exclFiles $ORACLE_HOME/.patch_storage (succeeded)
-exclFiles $ORACLE_HOME/log (failed)

[oracle@ol7-19-rac1 ~]$ ls /u01/app/19.0.0/grid/.patch_storage/
29401763_Apr_11_2019_22_26_25  29585399_Apr_9_2019_19_12_47   29851014_Jul_5_2019_01_15_35    NApply
29517242_Apr_17_2019_23_27_10  29834717_Jul_10_2019_02_09_26  interim_inventory.txt           record_inventory.txt
29517247_Apr_1_2019_15_08_20   29850993_Jul_5_2019_05_08_35   LatestOPatchSession.properties
[oracle@ol7-19-rac1 ~]$

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -exclFiles $ORACLE_HOME/.patch_storage -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...

Successfully Setup Software.
Gold Image location: /u01/app/19.0.0/grid5/grid_home_2019-10-30_12-05-43PM.zip

[oracle@ol7-19-rac1 ~]$ cd /u01/app/19.0.0/grid5/
[oracle@ol7-19-rac1 grid5]$ unzip -qo grid_home_2019-10-30_12-05-43PM.zip

[oracle@ol7-19-rac1 grid5]$ ls .patch_storage
ls: cannot access .patch_storage: No such file or directory

[oracle@ol7-19-rac1 grid5]$ ls -l
total 3203396
drwxr-xr-x.  3 oracle oinstall         22 Oct  8 20:33 acfs
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsccm
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsccreg
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfscm
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsiob
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsrd
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsrm
drwxr-xr-x.  2 oracle oinstall        102 Oct  1 23:45 addnode
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 advmccb
drwxr-xr-x. 10 oracle oinstall       4096 Apr 17  2019 assistants
drwxr-xr-x.  2 oracle oinstall      12288 Oct 30 12:05 bin
drwxr-x---.  3 oracle oinstall         18 Oct  1 23:47 cdp
drwxr-x---.  4 oracle oinstall         31 Oct  1 23:47 cha
drwxr-xr-x.  4 oracle oinstall         87 Oct  1 23:45 clone
drwxr-xr-x. 12 oracle oinstall       4096 Oct 30 12:05 crs
drwx--x--x.  5 oracle oinstall         41 Oct  1 23:47 css
drwxr-xr-x.  2 oracle oinstall          6 Oct  1 23:47 ctss
drwxrwxr-x.  7 oracle oinstall         71 Apr 17  2019 cv
drwxr-xr-x.  3 oracle oinstall         19 Apr 17  2019 dbjava
drwxr-xr-x.  2 oracle oinstall         39 Oct 30 12:05 dbs
drwxr-xr-x.  5 oracle oinstall       4096 Oct  1 23:45 deinstall
drwxr-xr-x.  3 oracle oinstall         20 Apr 17  2019 demo
drwxr-xr-x.  3 oracle oinstall         20 Apr 17  2019 diagnostics
drwxr-xr-x. 13 oracle oinstall       4096 Apr 17  2019 dmu
-rw-r--r--.  1 oracle oinstall        852 Aug 18  2015 env.ora
drwxr-x---.  6 oracle oinstall         53 Oct 30 12:05 evm
drwxr-x---.  2 oracle oinstall          6 Oct  1 23:47 gipc
drwxr-x---.  2 oracle oinstall          6 Oct  1 23:47 gnsd
drwxr-x---.  5 oracle oinstall         49 Oct 30 12:05 gpnp
-rw-r--r--.  1 oracle oinstall 3280127704 Oct 30 12:12 grid_home_2019-10-30_12-05-43PM.zip
-rwxr-x---.  1 oracle oinstall       3294 Mar  8  2017 gridSetup.sh
drwxr-xr-x.  4 oracle oinstall         32 Apr 17  2019 has
drwxr-xr-x.  3 oracle oinstall         19 Apr 17  2019 hs
drwxr-xr-x. 11 oracle oinstall       4096 Oct 30 12:12 install
drwxr-xr-x.  2 oracle oinstall         29 Apr 17  2019 instantclient
drwxr-x---. 13 oracle oinstall       4096 Oct 30 12:05 inventory
drwxr-xr-x.  8 oracle oinstall         82 Oct 30 12:05 javavm
drwxr-xr-x.  3 oracle oinstall         35 Apr 17  2019 jdbc
drwxr-xr-x.  6 oracle oinstall       4096 Oct 30 12:05 jdk
drwxr-xr-x.  2 oracle oinstall       8192 Oct  8 20:28 jlib
drwxr-xr-x. 10 oracle oinstall       4096 Apr 17  2019 ldap
drwxr-xr-x.  4 oracle oinstall      12288 Oct 30 12:05 lib
drwxr-xr-x.  5 oracle oinstall         42 Apr 17  2019 md
drwxr-x---.  2 oracle oinstall          6 Oct  1 23:47 mdns
drwxr-xr-x. 10 oracle oinstall       4096 Oct 30 12:05 network
drwxr-xr-x.  5 oracle oinstall         46 Apr 17  2019 nls
drwxr-x---.  2 oracle oinstall          6 Oct  1 23:47 ohasd
drwxr-xr-x.  2 oracle oinstall          6 Oct  1 23:47 ologgerd
drwxr-x---. 14 oracle oinstall       4096 Oct  1 23:45 OPatch
drwxr-xr-x.  8 oracle oinstall         77 Apr 17  2019 opmn
drwxr-xr-x.  4 oracle oinstall         34 Apr 17  2019 oracore
drwxr-xr-x.  6 oracle oinstall         52 Apr 17  2019 ord
drwxr-xr-x.  4 oracle oinstall         66 Apr 17  2019 ords
drwxr-xr-x.  3 oracle oinstall         19 Apr 17  2019 oss
drwxr-xr-x.  2 oracle oinstall          6 Oct  1 23:47 osysmond
drwxr-xr-x.  8 oracle oinstall       4096 Oct  1 23:45 oui
drwxr-xr-x.  4 oracle oinstall         33 Apr 17  2019 owm
drwxr-xr-x.  5 oracle oinstall         39 Apr 17  2019 perl
drwxr-xr-x.  6 oracle oinstall         78 Apr 17  2019 plsql
drwxr-xr-x.  5 oracle oinstall         42 Apr 17  2019 precomp
drwxr-xr-x.  2 oracle oinstall         26 Apr 17  2019 QOpatch
drwxr-xr-x.  5 oracle oinstall         42 Apr 17  2019 qos
drwxr-xr-x.  5 oracle oinstall         56 Oct 30 12:05 racg
drwxr-xr-x. 15 oracle oinstall       4096 Oct 30 12:05 rdbms
drwxr-xr-x.  3 oracle oinstall         21 Apr 17  2019 relnotes
drwxr-xr-x.  7 oracle oinstall        102 Apr 17  2019 rhp
-rwxr-xr-x.  1 oracle oinstall        405 Oct  1 23:45 root.sh
-rwx------.  1 oracle oinstall        490 Apr 17  2019 root.sh.old
-rw-r-----.  1 oracle oinstall         10 Apr 17  2019 root.sh.old.1
-rwx------.  1 oracle oinstall        405 Apr 18  2019 root.sh.old.2
-rw-r-----.  1 oracle oinstall         10 Apr 17  2019 root.sh.old.3
-rwxr-xr-x.  1 oracle oinstall        414 Oct  1 23:45 rootupgrade.sh
-rwxr-x---.  1 oracle oinstall        628 Sep  3  2015 runcluvfy.sh
drwxr-xr-x.  5 oracle oinstall       4096 Apr 17  2019 sdk
drwxr-xr-x.  3 oracle oinstall         18 Apr 17  2019 slax
drwxr-xr-x.  5 oracle oinstall       4096 Oct  8 20:26 sqlpatch
drwxr-xr-x.  6 oracle oinstall         53 Oct  1 23:44 sqlplus
drwxr-xr-x.  7 oracle oinstall         66 Oct 30 12:05 srvm
drwxr-x---.  5 oracle oinstall         63 Oct 30 12:05 suptools
drwxr-xr-x.  4 oracle oinstall         29 Apr 17  2019 tomcat
drwxr-xr-x.  3 oracle oinstall         35 Apr 17  2019 ucp
drwxr-xr-x.  7 oracle oinstall         71 Apr 17  2019 usm
drwxr-xr-x.  2 oracle oinstall         33 Apr 17  2019 utl
-rw-r-----.  1 oracle oinstall        500 Feb  6  2013 welcome.html
drwxr-xr-x.  3 oracle oinstall         18 Apr 17  2019 wlm
drwxr-xr-x.  3 oracle oinstall         19 Apr 17  2019 wwg
drwxr-xr-x.  5 oracle oinstall       4096 Oct  8 20:35 xag
drwxr-x---.  6 oracle oinstall         58 Apr 17  2019 xdk

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid/log/
crs  diag  ol7-19-rac1  procwatcher

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid5/log/*
ls: cannot access /u01/app/19.0.0/grid5/log/*: No such file or directory
[oracle@ol7-19-rac1 grid5]$

================================================================================

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...

Successfully Setup Software.
Gold Image location: /u01/app/19.0.0/grid5/grid_home_2019-10-30_12-23-31PM.zip

[oracle@ol7-19-rac1 ~]$ cd /u01/app/19.0.0/grid5/
[oracle@ol7-19-rac1 grid5]$ unzip -qo grid_home_2019-10-30_12-23-31PM.zip

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid/.patch_storage/
29401763_Apr_11_2019_22_26_25  29585399_Apr_9_2019_19_12_47   29851014_Jul_5_2019_01_15_35    NApply
29517242_Apr_17_2019_23_27_10  29834717_Jul_10_2019_02_09_26  interim_inventory.txt           record_inventory.txt
29517247_Apr_1_2019_15_08_20   29850993_Jul_5_2019_05_08_35   LatestOPatchSession.properties

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid5/.patch_storage/
29401763_Apr_11_2019_22_26_25  29585399_Apr_9_2019_19_12_47   29851014_Jul_5_2019_01_15_35    NApply
29517242_Apr_17_2019_23_27_10  29834717_Jul_10_2019_02_09_26  interim_inventory.txt           record_inventory.txt
29517247_Apr_1_2019_15_08_20   29850993_Jul_5_2019_05_08_35   LatestOPatchSession.properties

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid/log/
crs  diag  ol7-19-rac1  procwatcher

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid5/log/*
ls: cannot access /u01/app/19.0.0/grid5/log/*: No such file or directory
[oracle@ol7-19-rac1 grid5]$

================================================================================

[oracle@ol7-19-rac2 ~]$ . oraenv <<< +ASM2
ORACLE_SID = [cdbrac2] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@ol7-19-rac2 ~]$ ls $ORACLE_HOME/log/*
/u01/app/19.0.0/grid/log/diag:
adrci_dir.mif  asmtool  clients

/u01/app/19.0.0/grid/log/ol7-19-rac2:
acfs  admin  afd  chad  client  crsd  cssd  ctssd  diskmon  evmd  gipcd  gnsd  gpnpd  mdnsd  ohasd  racg  srvm  xag

/u01/app/19.0.0/grid/log/procwatcher:
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/prw.sh: Permission denied
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/PRW_SYS_ol7-19-rac2: Permission denied
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/prwinit.ora.org: Permission denied
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/prwinit.ora: Permission denied
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/prw_ol7-19-rac2.log: Permission denied
prwinit.ora  prwinit.ora.org  prw_ol7-19-rac2.log  prw.sh  PRW_SYS_ol7-19-rac2

[oracle@ol7-19-rac2 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -exclFiles $ORACLE_HOME/log -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-32700] The gold image creation failed. Check the install log /u01/app/oraInventory/logs/GridSetupActions2019-10-30_12-40-37PM for more details.
Setup failed.
[oracle@ol7-19-rac2 ~]$

Strange Estimates.

Jonathan Lewis - Wed, 2019-10-30 08:10

A question came up on the Oracle-L list server a couple of days ago expressing some surprise at the following execution plan:


--------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |       |       |   845 (100)|          |
|   1 |  SORT AGGREGATE                      |                 |     1 |     7 |            |          |
|*  2 |   TABLE ACCESS BY INDEX ROWID BATCHED| ANY_TABLE       | 84827 |   579K|   845   (1)| 00:00:01 |
|   3 |    SORT CLUSTER BY ROWID             |                 | 68418 |       |    76   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                 | ANY_INDEX       | 68418 |       |    76   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("X"."ANY_COLUMN1"='J')
   4 - access("X"."ANY_COLUMN2"=89155)

You’ll notice that this is a very simple query accessing a table by index, yet the estimated number of table rows found exceeds the estimated number of index entries used to probe the table. How can this happen? The answer (most frequently) is that there’s a mismatch between the table (or, more commonly, column) statistics and the index statistics. This seems to happen very frequently when you start mixing partitioned tables with global (or globally partitioned) indexes, but it can happen in very simple cases, especially since a call to gather_table_stats() with cascade set to true and using the auto_sample_size will take a small sample from the index while using a 100% “sample” from the table.

Here’s an example I engineered very quickly to demonstrate the point. There’s no particular reason for the choice of DML I’ve used on the data beyond a rough idea of setting up a percentage of nulls and deleting a non-uniform pattern of rows.


rem
rem     Script:         table_index_mismatch.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem
create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum,1000)                n1,
        mod(rownum,1000)                n2,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1, for columns (n1,n2) size 1'
        );
end;
/

create index t1_i1 on t1(n1);

delete from t1 where mod(trunc(sqrt(n1)),7) = 0;
update t1 set n1 = null where mod(n1,10) = 0;
delete from t1 where mod(n1,10) = trunc(dbms_random.value(0,10));

execute dbms_stats.gather_table_stats(user,'t1',estimate_percent=>1)
execute dbms_stats.gather_index_stats(null,'t1_i1',estimate_percent=> 0.01)

Of course you’re not supposed to collect stats with arbitrary samples in any recent version of Oracle, so going for a 1% and 0.01% sample seems a little daft but I’m just doing that to demonstrate the problem with a very small data set.

After generating the data and gathering the stats I ran a few queries to pick out some critical numbers.


select
        table_name, sample_size, num_rows
from
        user_tables
where
        table_name = 'T1'
/

select 
        index_name, sample_size, num_rows, distinct_keys
from
        user_indexes
where
        table_name = 'T1'
and     index_name = 'T1_I1'
/

select
        column_name, sample_size, num_nulls, num_distinct
from
        user_tab_cols
where
        table_name = 'T1'
and     (
            column_name = 'N1'
         or virtual_column = 'YES'
        )
order by
        column_name
/

You’ll notice that I’ve only picked one of my original columns and any virtual columns. My gather_table_stats() call had a method_opt that included the creation of extended stats for the column group (n1, n2) and I want to report the stats on the resulting virtual column.


TABLE_NAME           SAMPLE_SIZE   NUM_ROWS
-------------------- ----------- ----------
T1                          7865     786500


INDEX_NAME           SAMPLE_SIZE   NUM_ROWS DISTINCT_KEYS
-------------------- ----------- ---------- -------------
T1_I1                     385779     713292           714


COLUMN_NAME                      SAMPLE_SIZE  NUM_NULLS NUM_DISTINCT
-------------------------------- ----------- ---------- ------------
N1                                      7012      85300          771
SYS_STUBZH0IHA7K$KEBJVXO5LOHAS          7865          0          855

A couple of observations on the stats

  • the table sample size is, as expected, 1% of the reported num_rows (the actual count is 778,154).
  • The index sample size is much bigger than expected – but that’s probably related to the normal “select 1,100 leaf blocks strategy”. Because of the skew in the pattern of deleted values it’s possible for the sample size in this model to vary between 694,154 and something in the region of 380,000.
  • The n1 sample size is about 10% smaller than the table sample size – but that’s because I set 10% of the column to null.
  • The column group sample size matches the table sample size because column group hash values are never null, even if an underlying column is null.

So let’s check the execution plan for a very simple query:


set autotrace on explain
select id from t1 where n1 = 140 and n2 = 140;
set autotrace off


---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |   920 | 11960 |   918   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |   920 | 11960 |   918   (1)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |   909 |       |     5   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N2"=140)
   2 - access("N1"=140)

The estimate for relevant index rowids is smaller than the estimate for the number of table rows! The difference is not as extreme as the case reported on Oracle-l, but I’m only trying to demonstrate a principle, not reproduce the exact results.

There are several ways in which contradictory results like this can appear – but in this case we can see the following:

  • For the table access: table.num_rows/column.num_distinct = 786,500 / 855 = 919.88  (using the column group num_distinct)
  • For the index range scan: (table.num_rows – column.num_nulls) / column.num_distinct = (786,500 – 85,300) / 771 = 909.47 (using the n1 statistics; both figures can be reproduced straight from the dictionary, as sketched below)
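
A minimal sketch that reproduces both estimates from the gathered stats (run from the shell; the credentials are placeholders, and the column group name is the system-generated one reported above):

sqlplus -s test_user/test_password <<'SQL'
-- expected: table_estimate around 919.88, index_estimate around 909.47
select
        round(t.num_rows / cg.num_distinct, 2)                table_estimate,
        round((t.num_rows - c.num_nulls) / c.num_distinct, 2) index_estimate
from
        user_tables t, user_tab_cols c, user_tab_cols cg
where
        t.table_name   = 'T1'
and     c.table_name   = 'T1'
and     c.column_name  = 'N1'
and     cg.table_name  = 'T1'
and     cg.column_name = 'SYS_STUBZH0IHA7K$KEBJVXO5LOHAS'
;
SQL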

So the change in strategy as it becomes possible for the optimizer to take advantage of the column group means the index and table have been using incompatible sets of stats (in particular there’s that loss of information about NULLs) as their cardinalities are calculated. The question, then, is “how much is that likely to matter”, and the follow-up, if it can matter, is “in what circumstances could the effect be large enough to cause problems”. But that’s a topic for another day.

Update / Footnote

In the case of the Oracle-l example, there was no column group, and in some cases the optimizer would produce a plan where the table estimate was much smaller than the index estimate, and in other cases (like the opening plan above) the table estimate was significantly greater than the index estimate. This was a side effect of adaptive statistics: the low table estimate was due to the basic “multiply separate selectivities”; but with adaptive statistics enabled Oracle started sampling the table to check the correlation between the two columns, produced an SQL Plan Directive to do so, and got to the higher (and correct) result.

 

 

Oracle Recognized as a Leader in Gartner Magic Quadrant for Cloud Financial Close Solutions for Oracle EPM Cloud

Oracle Press Releases - Wed, 2019-10-30 08:00
Press Release
Oracle Recognized as a Leader in Gartner Magic Quadrant for Cloud Financial Close Solutions for Oracle EPM Cloud
Oracle is the only EPM vendor to be named a Leader in both Cloud Financial Close Solutions and Cloud Financial Planning and Analysis Solutions Magic Quadrant reports

Redwood Shores, Calif.—Oct 30, 2019

Oracle has been named a Leader in Gartner’s 2019 “Magic Quadrant for Cloud Financial Close Solutions” report for the third consecutive year. Out of 10 companies evaluated, Oracle is positioned as a Leader based on its ability to execute and completeness of vision. Oracle is the only Enterprise Performance Management (EPM) vendor to be named a Leader in both Cloud Financial Planning and Analysis Solutions and Cloud Financial Close Solutions Magic Quadrant reports. A complimentary copy of the report is available here.

According to the report, “Leaders provide mature offerings that meet market demand and have demonstrated the vision necessary to sustain their market position as requirements evolve. The hallmark of Leaders is that they focus on, and invest in, their offerings to the point where they lead the market and can affect its overall direction. As a result, Leaders can be vendors to watch as you try to understand how new market offerings might evolve. Leaders typically possess a large, satisfied customer base (relative to the size of the market) and enjoy high visibility within the market. Their size and financial strength enable them to remain viable in a challenging economy. Leaders typically respond to a wide market audience by supporting broad market requirements; however, they may fail to meet the specific needs of vertical markets or other more-specialized segments.”

“As the EPM Cloud market matures, we are seeing more organizations looking for an integrated EPM suite to connect their financial and operational planning with financial close and reporting processes,” said Hari Sankar, Group Vice President, Product Management, Oracle.  “We are proud to be the only vendor acknowledged as a Leader by Gartner in these two Magic Quadrant reports related to EPM.”

Oracle EPM Cloud is the only complete EPM solution on a common platform that addresses financial and operational planning, consolidation and close, data management, reporting, and analysis processes. With native integration with the broader Oracle Cloud Applications suite, which includes Enterprise Resource Planning (ERP), Supply Chain Management (SCM), Human Capital Management (HCM) and Customer Experience (CX) applications, Oracle helps customers to stay ahead of changing expectations, build adaptable organizations, and realize the potential of the latest innovations.

Oracle’s portfolio of enterprise performance management applications has garnered industry recognition. Oracle was recently named a Leader in Gartner’s 2019 “Magic Quadrant for Cloud Financial Planning and Analysis Solutions” for Oracle EPM Cloud, making it the only vendor to be named a Leader in both of Gartner’s 2019 Magic Quadrants related to enterprise performance management.

Gartner, Magic Quadrant for Cloud Financial Close Solutions, Robert Anderson, John Van Decker, Greg Leiter, 21 October 2019

Gartner, Magic Quadrant for Cloud Financial Planning and Analysis Solutions, Robert Anderson, John Van Decker, Greg Leiter, 8 August 2019

Gartner Disclaimer
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info
Bill Rundle
Oracle
+1.650.506.1891
bill.rundle@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Bill Rundle

  • +1.650.506.1891

Top 8 Post Limits on Tumblr (and Other Limitations) Revealed

VitalSoftTech - Tue, 2019-10-29 09:57

Do you know about post limits on Tumblr? Are you aware that the blogging platform has certain rules and regulations which social media users must abide by? Tumblr has become one of the best and most entertaining platforms out there for blogging and sharing multimedia content. It is not just immensely popular amongst the younger […]

The post Top 8 Post Limits on Tumblr (and Other Limitations) Revealed appeared first on VitalSoftTech.

Categories: DBA Blogs

Cloning Oracle Homes in 19c

Michael Dinh - Tue, 2019-10-29 09:41

You may find conflicting information in Oracle’s documentation: Cloning an Oracle Database Home shows how to use clone.pl, while the Database Upgrade Guide 19c shows Deprecation of the clone.pl Script.

To clone Oracle software, use createGoldImage and then install the software as usual.

DEMO for DB:

Source: /u01/app/oracle/product/19.0.0/dbhome_1
Target: /u01/app/oracle/product/19.0.0/dbhome_2

[oracle@ol7-19-rac1 ~]$ ls -l /u01/app/oracle/product/19.0.0/dbhome_2/
total 0

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/runInstaller -createGoldImage -destinationLocation /u01/app/oracle/product/19.0.0/dbhome_2 -silent
Launching Oracle Database Setup Wizard...

[oracle@ol7-19-rac1 ~]$ ls -l /u01/app/oracle/product/19.0.0/dbhome_2/
total 3069584
-rw-r--r--. 1 oracle oinstall 3143250100 Oct 29 13:09 db_home_2019-10-29_12-59-52PM.zip

[oracle@ol7-19-rac1 ~]$ cd /u01/app/oracle/product/19.0.0/dbhome_2/
[oracle@ol7-19-rac1 dbhome_2]$ unzip -qo db_home_2019-10-29_12-59-52PM.zip

[oracle@ol7-19-rac1 dbhome_2]$ ls -ld *
drwxr-xr-x. 2 oracle oinstall 102 Oct 2 00:06 addnode
drwxr-xr-x. 3 oracle oinstall 20 Oct 2 00:35 admin
drwxr-xr-x. 6 oracle oinstall 4096 Apr 17 2019 apex
drwxr-xr-x. 9 oracle oinstall 93 Apr 17 2019 assistants
drwxr-xr-x. 2 oracle oinstall 8192 Oct 29 13:00 bin
drwxr-xr-x. 4 oracle oinstall 87 Oct 2 00:06 clone
drwxr-xr-x. 6 oracle oinstall 55 Apr 17 2019 crs
drwxr-xr-x. 3 oracle oinstall 18 Apr 17 2019 css
drwxr-xr-x. 11 oracle oinstall 4096 Apr 17 2019 ctx
drwxr-xr-x. 7 oracle oinstall 71 Apr 17 2019 cv
drwxr-xr-x. 3 oracle oinstall 20 Apr 17 2019 data
-rw-r--r--. 1 oracle oinstall 3143250100 Oct 29 13:09 db_home_2019-10-29_12-59-52PM.zip
drwxr-xr-x. 3 oracle oinstall 19 Apr 17 2019 dbjava
drwxr-xr-x. 2 oracle oinstall 66 Oct 29 12:35 dbs
drwxr-xr-x. 5 oracle oinstall 4096 Oct 2 00:06 deinstall
drwxr-xr-x. 3 oracle oinstall 20 Apr 17 2019 demo
drwxr-xr-x. 3 oracle oinstall 20 Apr 17 2019 diagnostics
drwxr-xr-x. 13 oracle oinstall 4096 Apr 17 2019 dmu
drwxr-xr-x. 4 oracle oinstall 30 Apr 17 2019 drdaas
drwxr-xr-x. 3 oracle oinstall 19 Apr 17 2019 dv
-rw-r--r--. 1 oracle oinstall 852 Aug 18 2015 env.ora
drwxr-xr-x. 3 oracle oinstall 18 Apr 17 2019 has
drwxr-xr-x. 5 oracle oinstall 41 Apr 17 2019 hs
drwxr-xr-x. 10 oracle oinstall 4096 Oct 29 13:08 install
drwxr-xr-x. 2 oracle oinstall 29 Apr 17 2019 instantclient
drwxr-x---. 13 oracle oinstall 4096 Oct 29 13:00 inventory
drwxr-xr-x. 8 oracle oinstall 82 Oct 29 13:00 javavm
drwxr-xr-x. 3 oracle oinstall 35 Apr 17 2019 jdbc
drwxr-xr-x. 6 oracle oinstall 4096 Oct 29 13:00 jdk
drwxr-xr-x. 2 oracle oinstall 4096 Oct 8 20:23 jlib
drwxr-xr-x. 10 oracle oinstall 4096 Apr 17 2019 ldap
drwxr-xr-x. 4 oracle oinstall 12288 Oct 29 13:00 lib
drwxr-x---. 2 oracle oinstall 6 Oct 2 00:10 log
drwxr-xr-x. 9 oracle oinstall 98 Apr 17 2019 md
drwxr-xr-x. 4 oracle oinstall 31 Apr 17 2019 mgw
drwxr-xr-x. 10 oracle oinstall 4096 Oct 29 13:00 network
drwxr-xr-x. 5 oracle oinstall 46 Apr 17 2019 nls
drwxr-xr-x. 8 oracle oinstall 101 Apr 17 2019 odbc
drwxr-xr-x. 5 oracle oinstall 42 Apr 17 2019 olap
drwxr-x---. 14 oracle oinstall 4096 Oct 2 00:06 OPatch
drwxr-xr-x. 7 oracle oinstall 65 Apr 17 2019 opmn
drwxr-xr-x. 4 oracle oinstall 34 Apr 17 2019 oracore
drwxr-xr-x. 6 oracle oinstall 52 Apr 17 2019 ord
drwxr-xr-x. 4 oracle oinstall 66 Apr 17 2019 ords
drwxr-xr-x. 3 oracle oinstall 19 Apr 17 2019 oss
drwxr-xr-x. 8 oracle oinstall 4096 Oct 2 00:06 oui
drwxr-xr-x. 4 oracle oinstall 33 Apr 17 2019 owm
drwxr-xr-x. 5 oracle oinstall 39 Apr 17 2019 perl
drwxr-xr-x. 6 oracle oinstall 78 Apr 17 2019 plsql
drwxr-xr-x. 6 oracle oinstall 56 Oct 29 13:00 precomp
drwxr-xr-x. 2 oracle oinstall 26 Apr 17 2019 QOpatch
drwxr-xr-x. 5 oracle oinstall 52 Apr 17 2019 R
drwxr-xr-x. 4 oracle oinstall 29 Apr 17 2019 racg
drwxr-xr-x. 15 oracle oinstall 4096 Oct 29 13:00 rdbms
drwxr-xr-x. 3 oracle oinstall 21 Apr 17 2019 relnotes
-rwx------. 1 oracle oinstall 549 Oct 2 00:06 root.sh
-rwx------. 1 oracle oinstall 786 Apr 17 2019 root.sh.old
-rw-r-----. 1 oracle oinstall 10 Apr 17 2019 root.sh.old.1
-rwx------. 1 oracle oinstall 638 Apr 18 2019 root.sh.old.2
-rw-r-----. 1 oracle oinstall 10 Apr 17 2019 root.sh.old.3
-rwxr-x---. 1 oracle oinstall 1783 Mar 8 2017 runInstaller
-rw-r--r--. 1 oracle oinstall 2927 Oct 14 2016 schagent.conf
drwxr-xr-x. 5 oracle oinstall 4096 Apr 17 2019 sdk
drwxr-xr-x. 3 oracle oinstall 18 Apr 17 2019 slax
drwxr-xr-x. 4 oracle oinstall 41 Apr 17 2019 sqldeveloper
drwxr-xr-x. 3 oracle oinstall 17 Apr 17 2019 sqlj
drwxr-xr-x. 5 oracle oinstall 4096 Oct 8 20:22 sqlpatch
drwxr-xr-x. 6 oracle oinstall 53 Oct 2 00:05 sqlplus
drwxr-xr-x. 6 oracle oinstall 54 Apr 17 2019 srvm
drwxr-xr-x. 5 oracle oinstall 45 Oct 29 13:00 suptools
drwxr-xr-x. 3 oracle oinstall 35 Apr 17 2019 ucp
drwxr-xr-x. 4 oracle oinstall 31 Apr 17 2019 usm
drwxr-xr-x. 2 oracle oinstall 33 Apr 17 2019 utl
drwxr-xr-x. 3 oracle oinstall 19 Apr 17 2019 wwg
drwxr-x---. 7 oracle oinstall 69 Apr 17 2019 xdk
[oracle@ol7-19-rac1 dbhome_2]$
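
To complete a clone from the gold image, you extract the zip into the new home and run the installer from there, the same as installing from the base 19c image. Here is a minimal sketch of the remaining steps (the dbhome_3 path is hypothetical, and the response file would need to be edited for a software-only install; an interactive ./runInstaller from the new home works as well):

[oracle@ol7-19-rac1 ~]$ mkdir -p /u01/app/oracle/product/19.0.0/dbhome_3
[oracle@ol7-19-rac1 ~]$ cd /u01/app/oracle/product/19.0.0/dbhome_3
[oracle@ol7-19-rac1 dbhome_3]$ unzip -qo /u01/app/oracle/product/19.0.0/dbhome_2/db_home_2019-10-29_12-59-52PM.zip
[oracle@ol7-19-rac1 dbhome_3]$ ./runInstaller -silent -responseFile /u01/app/oracle/product/19.0.0/dbhome_3/install/response/db_install.rsp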

DEMO for GI:

Source: /u01/app/19.0.0/grid
Target: /u01/app/19.0.0/grid5

[root@ol7-19-rac1 ~]# mkdir -p /u01/app/19.0.0/grid5
[root@ol7-19-rac1 ~]# chmod 775 /u01/app/19.0.0/grid5
[root@ol7-19-rac1 ~]# chown oracle:oinstall /u01/app/19.0.0/grid5
[root@ol7-19-rac1 ~]# ls -ld /u01/app/19.0.0/grid5/
drwxrwxr-x. 2 oracle oinstall 6 Oct 29 13:15 /u01/app/19.0.0/grid5/

[oracle@ol7-19-rac1 ~]$ echo $ORACLE_HOME
/u01/app/19.0.0/grid
[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...
[oracle@ol7-19-rac1 ~]$

FAILED:

[oracle@ol7-19-rac1 GridSetupActions2019-10-29_01-20-38PM]$ grep -A1 "^WARNING" gridSetupActions2019-10-29_01-20-38PM.log
WARNING:  [Oct 29, 2019 1:20:54 PM] Validation disabled for the state init
INFO:  [Oct 29, 2019 1:20:54 PM] Completed validating state <init>
--
WARNING:  [Oct 29, 2019 1:20:55 PM] Command to get the files from '/u01/app/19.0.0/grid' not owned by 'oracle' failed.
WARNING:  [Oct 29, 2019 1:20:55 PM] Following files from the source home are not owned by the current user: [/u01/app/19.0.0/grid/acfs, /u01/app/19.0.0/grid/acfs/tunables, /u01/app/19.0.0/grid/acfs/tunables/acfstunables]
INFO:  [Oct 29, 2019 1:20:55 PM] Getting the last existing parent of: /u01/app/19.0.0/grid5
--
WARNING:  [Oct 29, 2019 1:20:57 PM] Files list is null or empty.
INFO:  [Oct 29, 2019 1:20:57 PM] Completed validating state <createGoldImage>
--
WARNING:  [Oct 29, 2019 1:20:58 PM] Following files are not readable: [/u01/app/19.0.0/grid/suptools/orachk/orachk, /u01/app/19.0.0/grid/log/procwatcher/prw.sh, /u01/app/19.0.0/grid/log/procwatcher/PRW_SYS_ol7-19-rac1, /u01/app/19.0.0/grid/log/procwatcher/prwinit.ora, /u01/app/19.0.0/grid/crf/admin/run/crfmond, /u01/app/19.0.0/grid/crf/admin/run/crflogd]
INFO:  [Oct 29, 2019 1:21:00 PM] Verifying whether Central Inventory is locked by any other OUI session...
--
WARNING:  [Oct 29, 2019 1:21:05 PM] Could not create symlink: /tmp/GridSetupActions2019-10-29_01-20-38PM/tempHome_1572355263979/log/procwatcher/prw.sh.
Refer associated stacktrace #oracle.install.ivw.common.driver.job.CreateGoldImageJob:7059
--
WARNING:  [Oct 29, 2019 1:21:34 PM] Could not create symlink: /tmp/GridSetupActions2019-10-29_01-20-38PM/tempHome_1572355294593/log/procwatcher/prw.sh.
Refer associated stacktrace #oracle.install.ivw.common.driver.job.CreateGoldImageJob:7118


[oracle@ol7-19-rac1 GridSetupActions2019-10-29_01-20-38PM]$ ll /u01/app/19.0.0/grid/acfs
total 0
drwxr-xr-x. 2 root root 26 Oct  8 20:33 tunables


[oracle@ol7-19-rac1 GridSetupActions2019-10-29_01-20-38PM]$ grep -i severe gridSetupActions2019-10-29_01-20-38PM.log
SEVERE: [Oct 29, 2019 1:21:11 PM] [FATAL] [INS-32700] The gold image creation failed. Check the install log /u01/app/oraInventory/logs/GridSetupActions2019-10-29_01-20-38PM for more details.
SEVERE: [Oct 29, 2019 1:21:40 PM] [FATAL] [INS-32700] The gold image creation failed. Check the install log /u01/app/oraInventory/logs/GridSetupActions2019-10-29_01-20-38PM for more details.
[oracle@ol7-19-rac1 GridSetupActions2019-10-29_01-20-38PM]$

[oracle@ol7-19-rac1 ~]$

RESEARCH:

Bug 29220079 - Error INS-32700 Creating a GI Gold Image (Doc ID 29220079.8)	
Versions confirmed as being affected: 19.3.0	
The fix for 29220079 is first included in: 
19.3.0.0.190416 (Apr 2019) Database Release Update (DB RU) and 
20.1.0

The fix should therefore already be present in this home (it is on the 19.4 Release Update, as shown below), but it does not seem to be.

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
29851014;ACFS RELEASE UPDATE 19.4.0.0.0 (29851014)
29850993;OCW RELEASE UPDATE 19.4.0.0.0 (29850993)
29834717;Database Release Update : 19.4.0.0.190716 (29834717)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded.
[oracle@ol7-19-rac1 ~]$

You might have to create an SR :=(

UPDATE: Thanks to https://lonedba.wordpress.com/ for the workaround. The root cause is that some files under $ORACLE_HOME/log are owned by root and are not readable by the oracle user; relaxing those directory permissions lets createGoldImage complete.

[oracle@ol7-19-rac1 GridSetupActions2019-10-29_03-06-03PM]$ grep "Permission denied" gridSetupActions2019-10-29_03-06-03PM.log
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/log/procwatcher/prw.sh’: Permission denied
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/log/procwatcher/PRW_SYS_ol7-19-rac1’: Permission denied
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/log/procwatcher/prwinit.ora’: Permission denied
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/crf/admin/run/crfmond’: Permission denied
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/crf/admin/run/crflogd’: Permission denied
[oracle@ol7-19-rac1 GridSetupActions2019-10-29_03-06-03PM]$

[oracle@ol7-19-rac1 ~]$ echo $ORACLE_HOME; cd $ORACLE_HOME/log
/u01/app/19.0.0/grid

[oracle@ol7-19-rac1 log]$ ls -l
total 4
drwxr-x---.  4 oracle oinstall   57 Oct  1 23:57 diag
drwxr-xr-t. 20 root   oinstall 4096 Oct  1 23:55 ol7-19-rac1
drwxr--r--.  3 root   root       66 Oct 25 15:10 procwatcher

[root@ol7-19-rac1 log]# chmod 775 -R ol7-19-rac1/ procwatcher/
[root@ol7-19-rac1 log]# ls -l
total 4
drwxr-xr-x.  2 oracle oinstall    6 Oct  1 23:44 crs
drwxr-x---.  4 oracle oinstall   57 Oct  1 23:50 diag
drwxrwxr-x. 20 root   oinstall 4096 Oct  1 23:47 ol7-19-rac1
drwxrwxr-x.  3 root   root       66 Oct 25 15:08 procwatcher
[root@ol7-19-rac1 log]#

[oracle@ol7-19-rac1 ~]$ . oraenv <<< +ASM1
ORACLE_SID = [+ASM1] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...

Successfully Setup Software.
Gold Image location: /u01/app/19.0.0/grid5/grid_home_2019-10-29_04-36-47PM.zip

[oracle@ol7-19-rac1 ~]$ ll /u01/app/19.0.0/grid5/*
-rw-r--r--. 1 oracle oinstall 4426495995 Oct 29 16:46 /u01/app/19.0.0/grid5/grid_home_2019-10-29_04-36-47PM.zip
[oracle@ol7-19-rac1 ~]$

Penn State Elevates Student Success with Oracle

Oracle Press Releases - Tue, 2019-10-29 07:00
Press Release
Penn State Elevates Student Success with Oracle
University selects Oracle Student Cloud to enhance the student experience

Redwood Shores, Calif.—Oct 29, 2019

The Pennsylvania State University, one of the world’s leading higher education and research institutions, has selected Oracle Student Financial Planning Cloud to support its “One Penn State 2025” strategic vision to transform education delivery, focusing on student engagement and support. The offering will be an integral component in the university’s efforts to help reduce the cost of education while enhancing student outcomes and success.

Student Financial Planning is a key pillar in Oracle Student Cloud, a complete suite of modules designed to support the entire student lifecycle, including traditional and non-traditional learning models, with powerful automation and optimization capabilities.

Consistently ranked among the top one percent of the world’s universities, Penn State includes 24 campuses, nearly 100,000 students and 31,000 faculty and staff, all focused on creating, sharing and applying knowledge to make a positive impact on communities across the world. Acknowledging the ever-increasing student debt and dropout crises that U.S. college students face, Penn State will use Oracle Student Financial Planning Cloud to help empower students to make more informed academic and financial aid decisions to accomplish their academic, professional and personal life goals.

By providing increased transparency and better management of financial aid resources, Oracle Student Financial Planning Cloud and Oracle Student Cloud will streamline financial aid processes and deliver invaluable, data-backed insights into student behaviors and successes, thus freeing administrators to focus more on supporting the academic needs of their students.

“Penn State is pleased to select Oracle’s Student Financial Planning product as a first step in transitioning our student system to the cloud and to further the institution’s strategic goals,” said Michael J. Büsges, senior director, Enterprise Projects at Penn State. “Oracle has an established track record in meeting the needs of its Higher Education customers, and we look forward to working together on this important initiative.”  

Higher education institutions today are under intense pressure to enroll best-fit students, improve outcomes, and ensure student access – and be able to do more with less. At the same time, today’s students demand modern, consumer-like experiences and engagement with institutions. With $2.9 billion in financial aid (representing 15 million automated packages) already processed through Oracle Student Financial Planning Cloud, Oracle has proven solutions to help universities create a seamless student experience that leads to students’ academic success. With Oracle Student Financial Planning, universities are able to spend more time advising students on their financial aid choices and less time packaging their aid, which leads to improved student outcomes, elevated institutional standing and greater operational efficiency. Oracle Student Financial Planning also supports traditional, continued and online educational programs, as well as administrators’ compliance and integration needs.

“Leading institutions such as Penn State know that by simplifying and optimizing their operations they will be able to use resources efficiently and maximize funding sources,” said Vivian Wong, group vice president of Higher Education Development at Oracle. “Oracle’s commitment to supporting the higher education market includes our belief that no student should sacrifice their academic goals due to cost and we have created powerful cloud solutions to help universities tackle this challenge head on.”

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


User-editable Application Setting

Jeff Kemp - Tue, 2019-10-29 01:49

A nice addition to APEX release 18.1 is the Application Settings feature. This allows the developer to define one or more configuration values that are relevant to a particular application. In a recent project this feature came in useful.

I had built a simple questionnaire/calculator application for a client, and they wanted a small “FAQ” box on the left-hand side of the page.

I could have built this as an ordinary HTML region, but the Admin users needed to be able to modify the content later, so the content needed to be stored somewhere. I didn’t feel the users’ requirement was mature enough to design another table to store the boilerplate (not yet, at least), so I thought I’d give the Application Settings feature a go.

An Application Setting is a single value that can be set in Component Settings, and retrieved and modified at runtime via the supplied PL/SQL API (APEX_APP_SETTING). The feature is most useful for “configuration”-type data relevant to the application’s user interface. In the past I would have created a special table to store this sort of thing – and in some cases I think I still would – but in cases like this one, using Application Settings may result in a simpler design for your application.

I went to Shared Components, Application Settings and created a new Setting called “FAQ_BOILERPLATE“. Each Application Setting can have the following attributes configured:

  • Name – although this can be almost anything, I suggest using a naming standard similar to how you name tables and columns, to reduce ambiguity if you need to refer to the setting in your PL/SQL.
  • Value – at first, you would set this to the initial value; if it is changed, it is updated here. Note that the setting can only have one value at any time, and the value is global for all sessions. The value is limited to 4,000 bytes.
  • Value Required – if needed you can make the setting mandatory. In my case, I left this set to “No”.
  • Valid Values – if needed you can specify a comma-delimited list of valid values that APEX will validate against. In my case, I left this blank.
  • On Upgrade Keep Value – if you deploy the application from Dev to Prod, set this to Yes so that if a user has changed the setting your deployment won’t clobber their changes. On the other hand, set this to No if you want the value reset to the default when the application is deployed. In my case, I set this to Yes.
  • Build Option – if needed you can associate the setting with a particular build option. If the build option is disabled, an exception will be raised at runtime if the application setting is accessed.

On the page where I wanted to show the content, I added the following:

  1. A Static Content region titled “FAQ”.
  2. A hidden item in the region named “P10_FAQ_BOILERPLATE“.
  3. A Before Header PL/SQL process.

The Text content for the static content region is:

<div class="boilerplate">
&P10_FAQ_BOILERPLATE!RAW.
</div>

Note that the raw value from the application setting is trusted as it may include some embedded HTML; you would need to ensure that only “safe” HTML is stored in the setting.

The Before Header PL/SQL process has this code:

:P10_FAQ_BOILERPLATE := apex_app_setting.get_value('FAQ_BOILERPLATE');

Side note: a simpler, alternative design (that I started with initially) was just a PL/SQL region titled “FAQ”, with the following code:

htp.p(apex_app_setting.get_value('FAQ_BOILERPLATE'));

I later rejected this design because I wanted to hide the region if the FAQ_BOILERPLATE setting was blank.

I put a Server-side Condition on the FAQ region when “Item is NOT NULL” referring to the “P10_FAQ_BOILERPLATE” item.

Editing an Application Setting

The Edit button is assigned the Authorization Scheme “Admin” so that admin users can edit the FAQ. It redirects to another very simple page with the following components:

  1. A Rich Text Editor item P50_FAQ_BOILERPLATE, along with Cancel and Save buttons.
  2. An After Header PL/SQL process “get value” (code below).
  3. An On Processing PL/SQL process “save value” when the Save button is clicked (code below).

After Header PL/SQL process “get value”:

:P50_FAQ_BOILERPLATE := apex_app_setting.get_value('FAQ_BOILERPLATE');

On Processing PL/SQL process “save value”:

apex_app_setting.set_value('FAQ_BOILERPLATE',:P50_FAQ_BOILERPLATE);
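
Since the saved value is later rendered with the !RAW filter, one way to reduce the risk of unsafe markup (a sketch, not part of the original design) is to pass the value through APEX_ESCAPE.HTML_WHITELIST before saving it; note the default tag whitelist may need extending to cover everything the Rich Text Editor can produce:

apex_app_setting.set_value(
    'FAQ_BOILERPLATE',
    apex_escape.html_whitelist(:P50_FAQ_BOILERPLATE));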

The more APEX-savvy of you may have noticed that this design means that if an Admin user clears out the setting (setting it to NULL), since it has the Server-side Condition on it, the FAQ region will disappear from the page (by design). This also includes the Edit button which would no longer be accessible. In the event this happens, I added another button labelled “Edit FAQ” to the Admin page so they can set it again later if they want.

This was a very simple feature that took less than an hour to build, and was suitable for the purpose. Later, if they find it becomes a bit unwieldy (e.g. if they add many more questions and answers, and need to standardise the layout and formatting) I might replace it with a more complex design – but for now this will do just fine.

Related

So Far So Good with Force Logging

Bobby Durrett's DBA Blog - Mon, 2019-10-28 18:55

I mentioned in my previous two posts that I had tried to figure out if it would be safe to turn on force logging on a production database that does a bunch of batch processing on the weekend: post1, post2. We know that many of the tables are set to NOLOGGING and some of the inserts have the append hint. We put in force logging on Friday and the heavy weekend processing ran fine last weekend.
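
For reference, force logging is set at the database level and is easy to verify (a quick sketch; it requires the ALTER DATABASE privilege):

SQL> alter database force logging;

Database altered.

SQL> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
YES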

I used an AWR report to check the top INSERT statements from the weekend and I only found one that was significantly slower. But the table it inserts into is set for LOGGING, it does not have an append hint, and the parallel degree is set to 1. So, it is a normal insert that was slower last weekend for some other reason. Here is the output of my sqlstatsumday.sql script for the slower insert:

Day        SQL_ID        PLAN_HASH_VALUE Executions Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
---------- ------------- --------------- ---------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
2019-09-22 6mcqczrk3k5wm       472069319        129         36734.0024     20656.8462    462.098677                  0                      0             38.8160385          666208.285         1139.86923             486.323077
2019-09-29 6mcqczrk3k5wm       472069319        130         44951.6935     27021.6031    573.245664                  0                      0             21.8764885           879019.29         1273.52672             522.083969
2019-10-06 6mcqczrk3k5wm       472069319        130         9624.33742     7530.07634    264.929008                  0                      0             1.26370992          241467.023         678.458015             443.427481
2019-10-13 6mcqczrk3k5wm       472069319        130         55773.0864      41109.542    472.788031                  0                      0             17.5326031          1232828.64         932.083969             289.183206
2019-10-20 6mcqczrk3k5wm       472069319        130         89684.8089     59261.2977    621.276122                  0                      0             33.7963893          1803517.19         1242.61069             433.473282
2019-10-27 6mcqczrk3k5wm       472069319        130         197062.591     144222.595    561.707321                  0                      0             362.101267          10636602.9         1228.91603             629.839695

It averaged 197062 milliseconds last weekend but 89684 the previous one. The target table has always been set to LOGGING so FORCE LOGGING would not change anything with it.
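
For anyone who wants to reproduce this kind of per-day summary, a rough equivalent of sqlstatsumday.sql (a sketch, not the actual script) can be written directly against the AWR history tables:

select trunc(sn.end_interval_time,'DD') snapshot_day,
       st.sql_id, st.plan_hash_value,
       sum(st.executions_delta) executions,
       -- elapsed_time_delta is in microseconds; dividing by 1000 gives ms
       sum(st.elapsed_time_delta)/greatest(sum(st.executions_delta),1)/1000 avg_elapsed_ms
from dba_hist_sqlstat st
join dba_hist_snapshot sn
  on sn.snap_id = st.snap_id
 and sn.dbid = st.dbid
 and sn.instance_number = st.instance_number
where st.sql_id = '6mcqczrk3k5wm'
group by trunc(sn.end_interval_time,'DD'), st.sql_id, st.plan_hash_value
order by snapshot_day;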

One of the three INSERT statements that I expected FORCE LOGGING to slow down was actually faster this weekend with FORCE LOGGING than it was last weekend without it:

Day        SQL_ID        PLAN_HASH_VALUE Executions Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
---------- ------------- --------------- ---------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
2019-09-22 0u0drxbt5qtqk       382840242          1         2610257.66         391635    926539.984                  0                      0              13718.453             5483472           745816.5                3689449
2019-09-29 0u0drxbt5qtqk       382840242          1         17127212.3        1507065    12885171.7                  0                      0             14888.4595            18070434          6793555.5             15028884.5
2019-10-06 0u0drxbt5qtqk       382840242          1         3531931.07         420150    2355139.38                  0                      0             12045.0115             5004273            1692754                5101998
2019-10-13 0u0drxbt5qtqk       382840242          1         1693415.59         180730    1250325.41                  0                      0               819.7725           2242638.5           737704.5                2142812
2019-10-20 0u0drxbt5qtqk       382840242          1         5672230.17         536115    3759795.33                  0                      0             10072.9125             6149731            2332038              2806037.5
2019-10-27 0u0drxbt5qtqk       382840242          1         2421533.59         272585    1748338.89                  0                      0               9390.821           3311219.5           958592.5              2794748.5

It ran 2421533 milliseconds this weekend and 5672230 the prior one. So clearly FORCE LOGGING did not have much effect on its overall run time.

It went so well this weekend that we decided to leave FORCE LOGGING in for now to see if it slows down the mid-week jobs and the web-based front end. I was confident on Friday, but I am even more confident now that NOLOGGING writes have minimal performance benefits on this system. But we will let it bake in for a while. Really, we might as well leave it in for good if only for the recovery benefits. Then when we configure GGS for the zero downtime upgrade it will already have been there for some time.

The lesson for me from this experience, and the message of my last three posts, is that NOLOGGING writes may have fewer benefits than you think, or your system may be doing fewer NOLOGGING writes than you think. That was true for me on this one database, and it may be true for other systems that I expect to have a lot of NOLOGGING writes. Maybe someone reading this will find that they can safely use FORCE LOGGING on a database that they think does a lot of NOLOGGING writes, but which really does not need NOLOGGING for good performance.
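
One quick way to see whether a database has actually been doing NOLOGGING writes is to look at the unrecoverable columns of V$DATAFILE, which record the SCN and time of the most recent unrecoverable (NOLOGGING) change to each datafile (a sketch):

select file#, unrecoverable_change#,
       to_char(unrecoverable_time,'YYYY-MM-DD HH24:MI:SS') last_nologging_write
from v$datafile
where unrecoverable_time is not null
order by unrecoverable_time;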

Bobby

Categories: DBA Blogs

Basic Replication -- 10 : ON PREBUILT TABLE

Hemant K Chitale - Mon, 2019-10-28 09:05
In my previous blog post, I showed a Materialized View that was built as an empty MV and subsequently populated by a Refresh call.

You can also define a Materialized View over an *existing*  (pre-populated) Table.

Let's say you have a Source Table and have built a Replica of it in another Schema or Database.  Building the Replica may have taken an hour or even a few hours.  You now know that the Source Table will have some changes every day and want the Replica to be updated as well.  Instead of executing, say, a TRUNCATE and INSERT into the Replica every day, you define a Fast Refresh Materialized View over it and let Oracle identify all the changes (which, on a daily basis, could be a small percentage of the total size of the Source/Replica) and update the Replica using a Refresh call.


Here's a quick demo.

SQL> select count(*) from my_large_source;

COUNT(*)
----------
72447

SQL> grant select on my_large_source to hr;

Grant succeeded.

SQL> connect hr/HR@orclpdb1
Connected.
SQL> alter session enable parallel dml;

Session altered.

SQL> create table my_large_replica
2 as select * from hemant.my_large_source
3 where 1=2;

Table created.

SQL> insert /*+ PARALLEL (8) */
2 into my_large_replica
3 select * from hemant.my_large_source;

72447 rows created.

SQL>


So, now, HR has a Replica of the Source Table in the HEMANT schema.  Without any subsequent updates to the Source Table, I create the Materialized View definition with the "ON PREBUILT TABLE" clause.  (Note that the Materialized View must have the same name as the existing Replica table.)

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> create materialized view log on my_large_source;

Materialized view log created.

SQL> grant select, delete on mlog$_my_large_source to hr;

Grant succeeded.

SQL> connect hr/HR@orclpdb1
Connected.
SQL>
SQL> create materialized view my_large_replica
2 on prebuilt table
3 refresh fast
4 as select * from hemant.my_large_source;

Materialized view created.

SQL> select count(*) from hemant.my_large_source;

COUNT(*)
----------
72447

SQL> select count(*) from my_large_replica;

COUNT(*)
----------
72447

SQL>


I am now ready to add data and Refresh the MV.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> desc my_large_source
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
ID_COL                                    NOT NULL NUMBER
PRODUCT_NAME                                       VARCHAR2(128)
FACTORY                                            VARCHAR2(128)

SQL> insert into my_large_source
2 values (74000,'Revolutionary Pin','Outer Space');

1 row created.

SQL> commit;

Commit complete.

SQL> select count(*) from mlog$_my_large_source;

COUNT(*)
----------
1

SQL>
SQL> connect hr/HR@orclpdb1
Connected.
SQL> select count(*) from hemant.my_large_source;

COUNT(*)
----------
72448

SQL> select count(*) from my_large_replica;

COUNT(*)
----------
72447

SQL>
SQL> execute dbms_mview.refresh('MY_LARGE_REPLICA','F');

PL/SQL procedure successfully completed.

SQL> select count(*) from my_large_replica;

COUNT(*)
----------
72448

SQL>
SQL> select id_col, product_name
2 from my_large_replica
3 where factory = 'Outer Space'
4 /

ID_COL
----------
PRODUCT_NAME
--------------------------------------------------------------------------------
74000
Revolutionary Pin


SQL>
SQL> select count(*) from hemant.mlog$_my_large_source;

COUNT(*)
----------
0

SQL>


Instead of rebuilding / repopulating the Replica Table with all 72,448 rows, I used the MV definition and the MV Log on the Source Table to copy over that 1 new row.
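
If the Replica has to be kept current every day, the Refresh call itself is easy to schedule. Here is a minimal sketch using DBMS_SCHEDULER (the job name and the 2 AM daily schedule are made up for illustration):

begin
  dbms_scheduler.create_job (
    job_name        => 'DAILY_REFRESH_MY_LARGE_REPLICA',  -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin dbms_mview.refresh(''MY_LARGE_REPLICA'',''F''); end;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
end;
/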

The above demonstration is against 19c.

Here are two older posts on an earlier release of Oracle, one from March 2009 and the other from January 2012.


Categories: DBA Blogs
