The Anti-Kyte

Oracle - for when it was like that when you got there

Turtles All the Way ! Removing Clear Text passwords from bash scripts invoking SQL*Plus

Mon, 2022-04-04 14:33

When discussing this issue with a colleague recently, we came to the question of how you protect a password that you need in order to access another password, and he observed that, “before you know it, it’s Turtles all the way!”

Regular readers will be unsurprised to learn that I immediately thought of this…

Looks like we’ll be messing about on a shell then


… which may explain some of the more obscure references that might creep into what follows. Look, I’ll do my best not to get turtly carried away.

You may well have stumbled across this post because you’re supporting a venerable ETL application running on Oracle which employs a number of Linux shell scripts. These scripts connect to the database via SQL*Plus. The password is hard-coded ( or possibly stored in an environment variable). Either way, it’s available in the application in clear text and you’d like to do something about it.
Additionally, you’ve probably just found out that the database schema’s password, which hasn’t changed for about 15 years, is now going to change once a month, and you need a way of stopping your scripts from breaking every time this happens.

In summary then, we want to harden our application and specifically :

  • protect access to the database from the box that our shell scripts are executing on
  • protect the database logon credentials themselves from being exposed

We’re going to start by looking at a typical shell script setup, complete with clear text password.
We’ll then look at the biggest single step we can take to make it more secure.
We’ll then review some of the options available to remove the password from our shell scripts altogether.

The Discworld philosopher Didactylos is on record as saying you can’t trust anyone further than you can throw them. Bearing this in mind, we’ll take a look at the security limitations inherent in each solution.

The options we’re going to look at are :

  • OS Authentication
  • OpenSSL
  • Oracle Wallet

We’ll conclude by looking at an architectural approach which does not render the application vulnerable in the same way.

A typical Application Shell Script

Let’s start with some assumptions :

  • the shell scripts are all part of an application codebase rather than being for any database admin tasks
  • the scripts are all owned by a single OS user
  • that single OS account may be used by one or more people (e.g. a DBA and a Support Developer)
  • the scripts run unattended ( probably as part of an overnight batch, or triggered by a CRON job)

In addition, you should know that the examples that follow were tested on CentOS 7 and Oracle 18c XE. However, it’s worth noting that the functionality of the tools used here has not changed for several releases ( both OS and database).

First, the good news : each script calls an environment setup script, so we only have to change the password in one place. The script is called scriptenv.sh :

export DB_USER=hr
export DB_PWD=N0ttherealpasswordjustanexample!
export DB_NAME=eta

As an example we have this application script (om.sh) :

#!/bin/sh

# Standard environment setup for our application bash scripts
. ./scriptenv.sh

sqlplus -s $DB_USER/$DB_PWD@$DB_NAME @vorbis.sql 

…which runs the query in vorbis.sql :

column db_name format a20
column db_user format a20

select sys_context('userenv', 'con_name') db_name,
    sys_context('userenv', 'current_user') db_user 
from dual;
quit

It’s probably worth noting here that, as far as I can see, putting the password in an environment variable on Linux is not outrageously insecure.

Connecting as the Application Owner

Just a quick side note :

This sort of arrangement often involves connecting to the database as the application owner.
This is quite convenient as the Application Owner schema implicitly has full privileges on all of the application’s database objects. However, it also means that the credentials we’re using to connect to the database are highly privileged.
It’s therefore an outstandingly good idea to consider using a less privileged account to connect.
There are several ways to achieve this (e.g. using an account which has access to the application only via execute permissions on the Application Owner’s PL/SQL packages).

For now though, we’re going to start with one simple change which will harden this application more than implementing any of the tools under discussion …

File Permissions

The solutions to this problem tend to share the premise that if you have access to the files on the OS, then you have access to the database.
Therefore, in this scenario, the most important step you can take in protecting your database from unauthorised access is to ensure that only those OS users that need it have permission to see the application scripts.

Remember that the OS account may be accessed by multiple members of the Support Team. By the very fact of these people having been given the credentials for the OS account, they are also authorised to connect to the database.
In order to run a bash script on Linux, both Read and Execute permissions are required. As the script owner, the OS user will also need write access to change the scripts.
Therefore, our first step should be to ensure that all of the files we’ve got in our app have permissions set such that only the OS user can use them :

chmod 700 om.sh
chmod 700 scriptenv.sh
chmod 600 vorbis.sql

Permissions are now restricted to the script owner only :

-rwx------ 1 atuin atuin  141 Mar 18 21:31 om.sh
-rwx------ 1 atuin atuin   53 Mar 18 21:26 scriptenv.sh
-rw------- 1 atuin atuin  201 Mar 21 14:09 vorbis.sql

There you have it. At a stroke your application is much more secure. Anything beyond that is likely to be a variation on the theme of obfuscation where the password is concerned.

On the other hand, stopping here would make this post a bit short and we’d still have those passwords in our scripts…

Hang on a minute, doesn’t Oracle have an Authentication strategy that explicitly embraces the concept of delegating authentication to the OS ?

OS Authentication

Older readers may well recall a time when it was common for Oracle usernames to include the prefix “OPS$“.
This string is the default value for the os_authent_prefix database initialization parameter.
By creating a database user with this prefix, it’s possible to delegate authentication to the OS, as we’ll now demonstrate by creating an OS user called simony who will connect to the database as the externally identified user ops$simony.

We’ll start by creating the account on the linux server :

useradd -m simony

We’re going to override the standard TNS config files on the server with private ones for this user by setting the TNS_ADMIN environment variable to point to a custom set of files, which we’ll create in a directory under our user’s $HOME :

mkdir tns_admin

This directory needs to contain a tnsnames.ora, which looks like this…

eta =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = didactylos.virtualbox)(PORT = 1522))
      (CONNECT_DATA =
        (SERVER = DEDICATED)
        (SERVICE_NAME = ETA)
      )
    )

…and a sqlnet.ora, which contains a directive specific to OS Authentication :

sqlnet.authentication_services = (NTS)

In the database, we need to check a couple of parameters :

select name, value
from v$parameter
where name in ('os_authent_prefix', 'remote_os_authent')
order by num;

NAME                 VALUE
-------------------- --------------------
remote_os_authent    FALSE
os_authent_prefix    ops$

We need to change the remote_os_authent parameter to TRUE.

So, as an appropriately privileged user :

alter system set remote_os_authent=true sid='*' scope=spfile;

Now, we can create the ops$ user in the database. To do this, we need to make sure we’re in the correct target PDB…

alter session set container=eta;
create user ops$simony identified externally;

grant create session to ops$simony;

We now need to re-start the database for the parameter change to take effect.

Back on the linux server, as simony, we can now connect to the database without specifying a password :
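
For example ( assuming, as above, that $TNS_ADMIN points at our private config directory) :

export TNS_ADMIN=$HOME/tns_admin
sqlplus /@eta

Note the lone “/” where a username/password would normally go.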

So, provided you’ve authenticated as simony on the linux server, Oracle will allow you to connect to the database. Well, not quite…

On a different machine connected to the same network, I’ve created a user called simony and re-created the tns_admin directory and files. Oracle, however, is not fussy about which host I’m connecting from :

Provided you can reach the database, and your current client session is under a user named simony, you can connect to the database. In effect, the database authentication is now based on the username alone.

It’s worth noting that, on Windows clients, configuring OS Authentication requires that the Windows Domain or machine name is included in the database OPS$ username and is therefore evaluated when connecting.
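
For illustration ( the domain and username here are invented), the corresponding externally identified database user on Windows would be created along the lines of :

create user "OPS$MYDOMAIN\SIMONY" identified externally;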

As we can see, the same does not apply on Linux. Therefore, you may well consider this to be rather less secure than having passwords in clear text in your bash scripts.

Encrypted passwords with OpenSSL

The next option to rid ourselves of those vulgar clear text passwords is OpenSSL, which we can use to :

  • encrypt the password and save it to a file
  • read the encrypted password from the file and decrypt it at runtime

The openssl utility comes as standard on most modern linux distros. We can see how it works by encrypting a simple test string :

echo 'Abraxas' | openssl enc -aes-256-cbc \
-md sha512 \
-a -pbkdf2 \
-iter 100000 \
-salt \
-pass pass:'charcoal'

This will produce an output that may look something like this :

U2FsdGVkX1/kpUzVwi4UNxkcjcxyK4cklEwfmZGgiAQ=

Note that you won’t get the same output every time due to the application of the salt.

We can decrypt the string using the command :

echo U2FsdGVkX1/kpUzVwi4UNxkcjcxyK4cklEwfmZGgiAQ= | openssl enc -aes-256-cbc -md sha512 -a -d -pbkdf2 -iter 100000 -salt -pass pass:'charcoal' 

…which returns the string we originally encrypted :

Abraxas

So, the first thing we need is a script to set/change the password.
In this instance, the script ( update_db_creds.sh) is owned by the same OS user that runs the shell scripts, but is designed to run interactively…

#!/bin/sh
# store encrypted db credentials for use in shell scripts

read -p 'Username : ' schema
read -sp 'Password: ' passwd
echo 
read -p 'Database : ' db_name
read -sp 'Passphrase : ' phrase 
echo 

#set filenames for this credential
encr_fname=".sweeper_${schema}_${db_name}.enc"
phrase_fname=".brutha_${schema}_${db_name}.txt"

#create/overwrite the passphrase file. This will be used for the SSL encryption
#and read for the decryption.
echo "$phrase" >$phrase_fname
chmod 600 $phrase_fname

#Encrypt the password and save it 
echo "$passwd" |\
    openssl enc -aes-256-cbc \
    -md sha512 \
    -a \
    -pbkdf2 \
    -iter 100000 \
    -salt \
    -pass file:${phrase_fname} \
    -out ${encr_fname} 

chmod 600 ${encr_fname}
echo 'Done'


This script prompts for four parameters.
The Database Schema and Database Name are associated with the password by the filename in which the encrypted password string is stored.
The password itself, together with the passphrase used in the OpenSSL command, is accepted but not displayed.
Once the script has run, there are two files created in the current directory, both of which have read and write permissions for the file owner only.

An example run ( with the password and passphrase typed but not echoed) would look something like this :
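
$ ./update_db_creds.sh
Username : hr
Password: 
Database : eta
Passphrase : 
Done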

This creates these files :

ls -dl .[^.]* 
-rw------- 1 mike mike 16 Mar 23 09:57 .brutha_hr_eta.txt
-rw------- 1 mike mike 65 Mar 23 09:57 .sweeper_hr_eta.enc

Next, we need the script that’ll do the password retrieval and decryption for us at runtime. This one is called get_db_password.sh :

#!/bin/sh
# Script takes two arguments - Database Username and Database Name and retrieves
# the stored password

function decrypt()
{
    db_schema=$1
    db_name=$2

    pwd_file=".sweeper_${db_schema}_${db_name}.enc"
    phrase_file=".brutha_${db_schema}_${db_name}.txt"

    db_pwd=$(openssl enc -aes-256-cbc -md sha512 -a -d -pbkdf2 -iter 100000 -salt \
    -pass file:"$phrase_file" -in "$pwd_file")

    echo "$db_pwd"
}

Now, we can plug this into our existing environment setup script…

#!/bin/sh

export DB_USER=hr
export DB_NAME=eta

. ./get_db_password.sh
export DB_PWD=$(decrypt $DB_USER $DB_NAME)

…and om.sh will execute as before, this time without the need for any of that mucky clear-text password nonsense…
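
Something like this, in fact :

$ ./om.sh

DB_NAME              DB_USER
-------------------- --------------------
ETA                  HR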

Well, that’s a bit better. We’ve managed to protect the password…to an extent.
As before, access to the database is protected by the os file permissions on the application files.
Whilst the password itself is obfuscated, decryption would be trivial for anyone connected as the script owner.

OK, we’ve “shelled” out all that cash for Oracle, it must include a solution to this problem, right ?

Oracle Wallet

Q : when is an Oracle Client not an Oracle Client ?
A : when you realise that you can’t find the mkstore utility that all the Oracle Wallet write-ups tell you to run as a first step.

If you’re using one or more of the downloads that comprise Oracle Instant Client, you will most likely come up against this issue, as mkstore isn’t among them.
Depending on which components you have installed, you may be able to take advantage of this workaround.

It’s worth noting here that, at least in some circumstances, Oracle’s intention was that the wallet be created on the Database Server and then copied to the required application servers. To quote from Oracle WebLogic Server documentation :

Oracle recommends that you create and manage the Wallet in a database environment. This environment provides all the necessary commands and libraries, including the $ORACLE_HOME/oracle_common/bin/mkstore command. Often this task is completed by a database administrator and provided for use by the client. A configured Wallet consists of two files, cwallet.sso and ewallet.p12 stored in a secure Wallet directory.

If you’re not creating the wallet on your database server, it’s the full-fat client that you need – the one that’s designed to run on Application Servers and which can be found together with the Database Software downloads rather than the Instant Client downloads.
For example, you can find the 21c Oracle client by navigating here, selecting the option to show all downloads, then scrolling all the way down to the Oracle client.

Wallet Setup

Right, now we’ve got that straight, let’s log on to the OS as the script owner ( atuin), create a couple of directories and set the appropriate permissions :

mkdir tns_admin
mkdir wallet
chmod 700 tns_admin
chmod 700 wallet

The next step is to create a wallet – in the wallet directory :

mkstore -create -wrl /home/atuin/wallet

At this point you will be prompted for a password.
NOTE – you’ll be prompted for this password pretty much any time you access the wallet, so don’t forget it !
Also, be warned : if it’s not up to snuff, mkstore will reject your password with :

PKI-01002: Invalid password. Passwords must have a minimum length of eight characters and contain alphabetic characters combined with numbers or special characters.

Now we need to get Oracle to look in the wallet every time we connect from atuin.

By setting the $TNS_ADMIN environment variable we can override the “standard” tns configuration files ( usually found in $ORACLE_HOME/network/admin) with our own.

Next, we’ll create the config files we need in the newly created tns_admin directory.
Let’s start with the sqlnet.ora :

NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

SQLNET.WALLET_OVERRIDE=TRUE
WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/home/atuin/wallet)))


This tells Oracle to use the specified wallet when connecting.

Now for the tnsnames.ora :

hr_on_eta =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = didactylos.virtualbox)(PORT = 1522))
      (CONNECT_DATA =
        (SERVER = DEDICATED)
        (SERVICE_NAME = ETA)
      )
    )

In keeping with our file permissions policy…

chmod 600 *.ora
ls -l
total 8
-rw-------. 1 atuin atuin 157 Mar 25 17:13 sqlnet.ora
-rw-------. 1 atuin atuin 203 Mar 25 17:14 tnsnames.ora

Now all we need to do is to add a credential to our wallet with the same name as the entry in the tnsnames.ora.

You can accomplish this with :

mkstore -wrl /home/atuin/wallet -createCredential hr_on_eta hr <password_for_hr>

…where <password_for_hr> is the password for the database hr user.

However, remember that we’re on Linux here, which has that ever-so-helpful history command.
Whilst the password change may be handled by a single admin, such as a DBA, anyone else connecting as that user (e.g. the rest of the support team) will be able to view its history.
Therefore, if we were to run this command, we could see the unencrypted password in the history :

history |grep mkstore
    7  mkstore -wrl /home/atuin/wallet -createCredential hr_on_eta hr theTurtlem0ves!

Now, we could fiddle around in bash to ensure that the command isn’t captured in the history ( for example, with HISTCONTROL set to ignorespace, any command prefixed with a space is not recorded).
Perhaps a simpler option however, is to not provide the password on the command line and instead make mkstore prompt for it :

mkstore -wrl /home/atuin/wallet -createCredential hr_on_eta hr

As the password no longer appears on the command line, it doesn’t show up in the history :

history |grep createCredential
   22  mkstore -wrl /home/atuin/wallet -createCredential hr_on_eta hr

If we now look in the wallet directory, we can see that the files have all been created with permissions for the current user only :

ls -l

total 8
-rw-------. 1 atuin atuin 573 Mar 25 17:30 cwallet.sso
-rw-------. 1 atuin atuin   0 Mar 25 17:28 cwallet.sso.lck
-rw-------. 1 atuin atuin 528 Mar 25 17:30 ewallet.p12
-rw-------. 1 atuin atuin   0 Mar 25 17:28 ewallet.p12.lck

Let’s do a quick test. From the command line, we can run :

export TNS_ADMIN=/home/atuin/tns_admin
sqlplus /@hr_on_eta
select sys_context('userenv', 'con_name') db_name,
    sys_context('userenv', 'current_user') db_user
from dual;

…which should result in something like this :
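
DB_NAME              DB_USER
-------------------- --------------------
ETA                  HR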

This means that, by amending our scriptenv.sh to be this :

export TNS_ADMIN=/home/atuin/tns_admin
export CONN_STR=/@hr_on_eta

…and our om.sh application script to this :

#!/bin/sh

# Standard environment setup for our application bash scripts
. ./scriptenv.sh

sqlplus -s $CONN_STR @vorbis.sql

we can again connect to the database without exposing the password in clear text :

By default this will only work for the user who owns the wallet, due to the permissions on the wallet’s component files.
By setting the permissions to this :

$ ls -l
total 0
drwx------. 2 atuin atuin 57 Mar 25 17:49 scripts
drwxrwxrwx. 2 atuin atuin 44 Mar 25 17:14 tns_admin
drwxrwxrwx. 2 atuin atuin 90 Mar 25 17:28 wallet

$ ls -l tns_admin
total 8
-rw-rw-rw-. 1 atuin atuin 157 Mar 25 17:13 sqlnet.ora
-rw-rw-rw-. 1 atuin atuin 203 Mar 25 17:14 tnsnames.ora

$ ls -l wallet
total 8
-rw-rw-rw-. 1 atuin atuin 573 Mar 25 17:30 cwallet.sso
-rw-rw-rw-. 1 atuin atuin   0 Mar 25 17:28 cwallet.sso.lck
-rw-rw-rw-. 1 atuin atuin 528 Mar 25 17:30 ewallet.p12
-rw-rw-rw-. 1 atuin atuin   0 Mar 25 17:28 ewallet.p12.lck

… I can then use the same wallet (and tns_admin) to connect as “mike”…

I may have overdone it a bit with the chmod a+rwx there, but you get the point.
As with the OpenSSL solution, access to the database is ultimately controlled by OS file permissions.

Restricting the wallet to the original host

It’s worth noting that a wallet created in this fashion can be copied for use on another machine that does not have mkstore, or any of its associated jar files, installed.

I was able to copy the contents of the wallet directory to my local pc, create the appropriate TNS_ADMIN config files and connect to the database using the same connect string.

If you’d rather the wallet was used only on the machine on which it was created, you can specify the auto_login_local parameter when creating the wallet, or even update the existing wallet using orapki :

orapki wallet create -wallet /home/atuin/wallet -auto_login_local

Note that the create option here does not overwrite the existing wallet, but merely changes the specified property.

This time, when I copy the wallet to my local pc, I get the following when trying to connect :

$ sqlplus /@hr_on_eta

SQL*Plus: Release 12.2.0.1.0 Production on Sun Apr 3 13:22:52 2022

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

ERROR:
ORA-12578: TNS:wallet open failed

I’ve read that it’s possible to circumvent this restriction by renaming the remote machine to the same name as the host server, but I have not been able to replicate this.

Keeping database passwords out of the bash history

Any mkstore commands which include the database password as an argument can be run in such a way as to ensure that the password is prompted for, rather than included on the command line.

For example, to change a password :

mkstore -wrl /home/atuin/wallet -modifyCredential hr_on_eta hr

There are several other options to the mkstore command that you can list by running :

mkstore -help

Checking the contents of the wallet

If you want to list the credentials held in the wallet :

mkstore -wrl /home/atuin/wallet -listCredential

Note that for all of these commands, mkstore will prompt you for the passphrase before it coughs up any information. This means that the database password itself has an additional layer of protection.
It’s not entirely inaccessible though…

From the listCredential command, we know we have one credential stored in the wallet :

mkstore -wrl wallet -listCredential
Oracle Secret Store Tool Release 18.0.0.0.0 - Production
Version 18.1.0.0.0
Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:   
List credential (index: connect_string username)
1: hr_on_eta hr
[atuin@didactylos ~]$ 

If we now use the list option, we can see the individual data items the wallet is holding :

mkstore -wrl wallet -list
Oracle Secret Store Tool Release 18.0.0.0.0 - Production
Version 18.1.0.0.0
Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:   
Oracle Secret Store entries: 
oracle.security.client.connect_string1
oracle.security.client.password1
oracle.security.client.username1

We can retrieve the values for each of these entries using the viewEntry option :

mkstore -wrl wallet -viewEntry oracle.security.client.username1
Oracle Secret Store Tool Release 18.0.0.0.0 - Production
Version 18.1.0.0.0
Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:   
oracle.security.client.username1 = hr
mkstore -wrl wallet -viewEntry oracle.security.client.password1
Oracle Secret Store Tool Release 18.0.0.0.0 - Production
Version 18.1.0.0.0
Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:   
oracle.security.client.password1 = haHayou'llneverguess1
mkstore -wrl wallet -viewEntry oracle.security.client.connect_string1
Oracle Secret Store Tool Release 18.0.0.0.0 - Production
Version 18.1.0.0.0
Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:   
oracle.security.client.connect_string1 = hr_on_eta

How secure is the wallet ?

Whilst an Oracle Wallet, by requiring a passphrase, offers an additional layer of protection against the database login credentials being read, that does not render it invulnerable.

If you have read permissions on the wallet’s component files, there’s nothing to stop you from copying them onto a remote machine and then cracking the passphrase there, away from the prying eyes of the server’s audit trail.

How vulnerable is the passphrase ? Well, using my “turtly” awesome cracking skills ( open browser, search “cracking Oracle Wallet passphrase”), I can see that there are multiple scary-looking options to choose from.

There are a number of other database-side precautions we might take to harden our application against unauthorised access from the server on which the scripts are stored.
However, there is a more fundamental question to consider…

Do you really need to connect via shell scripts at all ?

Of course there will be times where the answer to this question is an unequivocal “Yes!”

These are often when you’re having to work around some unfathomable (ineffable ?) restriction such as a decision to revoke execute permissions on DBMS_SCHEDULER from the application user.
Yes, I did once work at a site where this was perpetrated as standard on all Oracle databases.

Generally speaking however, I can’t think of too many things that you can accomplish with a shell script connecting to Oracle that you can’t originate from the database itself.

Consider what a typical ETL process might look like :

  1. A file arrives in a landing directory
  2. The file is moved to a processing directory
  3. The data in the file is loaded into a database table
  4. The file is moved to an archive directory

There are several ways to implement this process such that the flow is controlled from within the database. Any interaction with scripts on the OS is then initiated from within the database, using credentials stored in a database object, the contents of which are held more securely than the database password is by any of the options we’ve looked at here.

For example, the above workflow could be accomplished from within PL/SQL with judicious use of Oracle-supplied packages such as DBMS_SCHEDULER and UTL_FILE.
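
As a rough sketch only ( the directory objects, file name and staging objects that follow are invented for the purposes of illustration), steps 2 to 4 might boil down to something like :

-- assumes directory objects LANDING_DIR, PROC_DIR and ARCH_DIR exist, together with
-- an external table ORDERS_EXT which reads from PROC_DIR
begin
    -- move the file from the landing directory to the processing directory
    utl_file.frename('LANDING_DIR', 'orders.csv', 'PROC_DIR', 'orders.csv', overwrite => true);
    -- load the data into a database table
    insert into orders_staging
    select * from orders_ext;
    -- move the file to the archive directory
    utl_file.frename('PROC_DIR', 'orders.csv', 'ARCH_DIR', 'orders.csv', overwrite => true);
    commit;
end;
/

A DBMS_SCHEDULER job ( or even a Scheduler file watcher) could then run this block whenever a file turns up.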

You can find a simple example of how such a process might be implemented here.

There’s an example of invoking a shell script using DBMS_SCHEDULER here.

You can even invoke a shell script using an external table preprocessor directive.
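
For the record, a preprocessor directive looks something like this ( the script name and directory objects are, once again, invented) :

create table preprocessor_demo_ext (
    message varchar2(4000))
organization external (
    type oracle_loader
    default directory data_dir
    access parameters (
        records delimited by newline
        preprocessor exec_dir:'myscript.sh')
    location ('dummy.txt'))
reject limit unlimited
/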

The key benefit of this approach is that SQL*Plus – and therefore database access from the OS – is taken out of the equation. Database security is handled in the database.

Conclusions

If your application uses shell scripts to connect to Oracle via SQL*Plus, then the most important security measures you can take are to limit access to both the scripts and the server account that owns them.
As for keeping the actual password itself secure, an Oracle Wallet is probably the best option.
OpenSSL may be a valid option if installing/using the tools to create an Oracle Wallet is problematic. Note that the reasons for that being the case are almost always non-technical !
The option of using the functionality available in PL/SQL to initiate and control interaction with the OS from within the Oracle database itself offers significant security benefits over the shell script approach.

As we’ve seen, OS Authentication is probably best avoided if your servers are running on Linux.

References

This article by Dave McKay on OpenSSL was particularly helpful.

There are a number of posts about setting up Oracle Wallet Manager but I found this one by Antoine Roger to be extremely useful.

There is an old, but still relevant, Introduction to Linux Shell Scripting for DBAs by Casimir Saternos which includes some ideas about when to employ bash scripts when interacting with an Oracle database.

If you’ve got this far, I hope you didn’t find it a turtle waste of time.


Working from Home when it’s stormy outside – Using 4G when Broadband is unavailable

Sat, 2022-03-05 04:43

Eunice blew me out the other week…along with several (thousand) others.
The eponymous storm screamed through the place like a Banshee… with a nasty case of wind.
As a result, I was reminded once again, of the importance of electricity in maintaining connectivity to work and how challenging it becomes to access it when there’s an all-day power cut.
One thing we noticed was that, whilst the mobile coverage from my provider was unavailable, Deb, who is on a different physical network, had her access to a 4G signal restored quite a while before mine came back.
This got me thinking about how I might ensure some redundancy in my ability to continue to connect to work remotely when broadband is not available.

In this post, I’m going to look at :

  • using a 4G phone as a Wireless Hotspot when Broadband is unavailable
  • which UK mobile providers use which physical network
  • unlocking a mobile so you can use it with SIM cards from multiple providers

Connecting to Work over 4G

There have been occasions in the past, especially when moving house and waiting for broadband to be switched over, when it’s been necessary for me to use my 4G mobile phone as a wireless hotspot so I can access the internet ( and work) from my PC/laptop.
Accessing the outside world in this way does eat into your mobile data allowance, but perhaps not as much as you’d think.
In my experience, whether connecting via a supplied laptop over a VPN or using Citrix Receiver on my local machine, an average day seems to require around 500MB of data.
Obviously, this figure will vary somewhat depending on the amount of data being downloaded to your local machine, type of work being done etc.
As for the speed, it tends to be “fast enough”, especially if watching videos does not comprise a significant portion of your working day.

To enable your Android phone as a wireless hotspot, you can simply swipe down from the top of the Home screen and scroll across to the Hotspot icon and tap it.

Full details can be found on this Google Help Page.

Once you’ve got your hotspot configured and activated, you can simply choose it from the list of available Wi-Fi networks on your computer. After that, it’ll work just the same as a normal broadband connection.

Physical Mobile Networks in the UK

There are four physical networks in the UK :

  • EE
  • O2
  • Three
  • Vodafone

All other mobile providers “piggyback” on one of these networks.
Details of which provider runs on which network can be found here, on Compare the Market.

So, if you want some additional redundancy in your use-4G-to-connect contingency plan, you can simply acquire a Pay-As-You-Go SIM from a provider on each network.
In doing so, you may think it worth adding a data bundle to the SIM. If so then there are some things you need to consider.

First of all, there appears to be an expiry date on the bundles. You need to activate them ( i.e. use them) within a certain time frame. This can be done easily enough by simply sending a text using the SIM to your main number.
Secondly, the allowances for Text and Calls on the bundle may lapse ( usually one month after activation).
We’re primarily interested in the data allowance here and, on EE, O2 and Vodafone, any unused data will roll over to the following month. However, this is not the case on any bundles I’ve been able to find on the Three network.

Unlocking a Phone from a Network

Up until around 2018, it was common practice for a mobile phone to be locked to the network of whoever was providing it.
Whilst phones provided on a contract are now unlocked, and can therefore operate using a SIM from any network, older phones and Pay-As-You-Go handsets may still be locked.
Operators are now legally obliged to provide an unlock code for a handset free of charge. All you need to provide them with is the handset’s IMEI number. We’ll come onto how to retrieve this for an Android phone shortly.
Before we do though, I think it’s worth adding a health warning here about the numerous sites that promise to give you an unlock code for your phone if you provide the IMEI number.
They invariably take the number then re-direct you to forms where you have to provide lots of personal data. They then finish by demanding that you accept one of half-a-dozen or so “offers” before they give you the code.
As this all looks rather suspicious, I did take the plunge – to a point, and provided some personal data… just not necessarily mine.
Ms Eunice Storm has been bombarded with spam ever since.

My experience was that, within two hours of requesting it, O2 had provided me with an unlock code for an old handset, which worked first time.

An Android handset’s IMEI number can be found on the current Android version ( 11 as I write) by going to the Settings menu and selecting About phone. If you scroll down you’ll find it under Hardware Information.

Alternatively, you can use a dialler code to get this. The advantage is that it appears to work on any Android version ( the oldest I’ve tried it on is Android 4.3).

Open the dialler as if you’re going to dial a number and enter :

*#06#

Incidentally, if you really want to dig into the innards of your handset, you can access the testing menu using the code :

*#*#4636#*#*

NOTE – be careful when using this as it’s possible to change some quite important looking settings if you’re not paying attention !

There is a list of codes here, if you’re still curious.


The most productive code I’ll write this year

Mon, 2022-01-31 13:38

I’m currently sitting here wearing a t-shirt so geeky that the slogan emblazoned upon it requires knowledge of both binary and unicode to understand that deciphering it is probably not worth the effort.
It is therefore not surprising that I have succumbed to the geek catnip that is Wordle.
What follows is neither a Wordle clone nor a Wordle solver. There are those far geekier and cleverer than I who have already shared such creations.

Rather it is :

  • an exploration of the powers of SQL pattern matching
  • some fun and games with Ref Cursors.

Whilst I’ll be sticking firmly to the command line, there may even be a dash of colour.
Some of what follows has been influenced by Connor McDonald’s PL/SQL Wordle clone.

The end result will be a PL/SQL package called “nurdle” – defined on wordnik as

“To gently waffle or muse on a subject which one clearly knows little about”.

Also making an appearance will be several five letter words which may or may not include :

  • color – yes, Wordle does use American spellings
  • cwtch – a Welsh word meaning (according to Deb) to let your wife warm your cold feet on you to avoid having the duvet removed
  • quack – from the Australian for the sound made by an English batsman.

But first, just in case you’ve exhibited rather more willpower than I, let’s take a quick look at…

How to play Wordle

You start with no clues whatsoever other than the word that you’re trying to guess is 5 letters long.
You have six tries to guess the answer.

For each guess you enter, you get feedback as to whether each letter is in the answer as follows :

  • Green – the letter is in the word and you’ve put it in the correct position
  • Yellow – the letter is in the word but not in the position you have it
  • Gray – the letter is not in the word.

It’s worth noting that Josh Wardle, the creator of Wordle, resides in the USA, a land where vowels are strictly rationed. Therefore, it’s quite possible that the answer you’re looking for on any given day is an American spelling.

Simple rules then, which provide a perfect excuse to take a closer look at SQL’s pattern matching capabilities…

A worked example

In the vast majority of cases, when I use a wildcard in SQL, it’s the good old percentage sign (%) – replace zero or more characters. However, there is a second wildcard available – the underscore (_) – replace any single character.

Let’s imagine that we have an index organized table to hold a list of five-letter words…

create table nurdle_words(
    word varchar2(5),
    constraint nurdle_words_pk primary key (word))
    organization index
/   
insert into nurdle_words values('alike');
insert into nurdle_words values('audio');
insert into nurdle_words values('attic');
insert into nurdle_words values('arise');
insert into nurdle_words values('alive');
commit;

Now imagine that the word we’re trying to guess is ‘alike‘.
Obviously, we have no clues to start with so I’ll start by guessing ‘audio‘ because it has lots of vowels in it.

The feedback we get from Wordle is :

A – Green – correct letter and position
U – Grey – not in the word
D – Grey – not in the word
I – Yellow – in the word but not the 4th letter
O – Grey – not in the word

Using this information, we can build a SQL query against our word list to narrow down the possible answers…

select word
from nurdle_words
where word like 'a____' -- starts with 'a' and is 5 letters long
and word like '%i%' -- contains an 'i'
and word not like '___i_' -- i is not the 4th letter
-- exclude the Grey letters
and word not like '%u%'
and word not like '%d%'
and word not like '%o%'
/
WORD 
-----
alike
alive
arise

Whilst the query is perfectly serviceable, I’m feeling brave so I’m going to tidy up a bit by introducing a regular expression…

select word
from nurdle_words
where word like 'a____'
and word like '%i%'
and word not like '___i_'
and not regexp_like(word, '(u|d|o)') -- excluded characters in a single predicate
/

Our next guess is ‘arise‘, which results in :

A – Green
R – Grey
I – Green
S – Grey
E – Green

…which allows us to refine our search query further…

select word
from nurdle_words
where word like 'a_i_e'
and not regexp_like( word, '(u|d|o|r|s)')
/

WORD 
-----
alike
alive

As it’s now a fifty-fifty choice, let’s go for ‘alike’, which means we’ve got it in three goes and can feel smug for the rest of the day.
Then again, all that typing is likely to become a bit tiresome over time, so let’s have a look at implementing this functionality in a PL/SQL package…

The Package

I read somewhere that there are 10,000 words included in Wordle’s dictionary. Connor’s solution contains even more.
Being rather less dedicated, I only managed to add a few hundred words to my dictionary before I got bored.
Therefore, I’ll need a means of adding words to the dictionary.

I’ll want to keep track of the guesses I’ve made so far on the current day. I’m going to use a Global Temporary Table for this purpose as I’ll only require this data to persist for the life of the session.
In order to build my queries easily, the guess words will be split into their component letters in the table.
The resulting structure will look like this :

create global temporary table todays_guesses(
    guess_no number not null,
    letter varchar2(1) not null,
    pos_in_word number(1) not null,
    status varchar2(1) not null,
    constraint todays_guesses_status_chk check (status in ('B', 'Y', 'G')))
    on commit preserve rows
/

NOTE – I’ve replaced the Grey for letters not in the word with Black. This is for two reasons :

  1. I want to use a single letter for feedback and grey and green both start with “G”
  2. SQLcl background colours do not include grey (see below), but do include black, which was the colour used for incorrect letters when I first saw the Wordle result messages people posted on Twitter

I’ll need some means of re-constituting the words if I want to review my guesses so far.

Additionally, if I happen to be using SQLcl, I may want to take advantage of the ability to change the background colours of the output.

Finally, I’ll need a way of taking all of the information gleaned from my guesses so far and retrieving a list of possible words to try for my next guess.

The package header looks like this :

create or replace package nurdle as

    -- Maximum length of a Word
    GC_LEN constant number := 5;

    type wordlist is table of nurdle_words%rowtype;

    -- add to the dictionary
    procedure add_word ( i_word in nurdle_words.word%type);
    
    -- record a guess and the outcome
    procedure guess( i_guess in varchar2, i_result in varchar2);

    -- show guesses so far and the feedback for each (text version)    
    -- usage : 
    -- exec nurdle.show_guesses;
    procedure show_guesses;
        
    -- In SQLcl, turn on sqlformat ansiconsole to get the coloured output
    -- usage : 
    -- set sqlformat ansiconsole
    -- select * from table( nurdle.colour_guesses);
    function colour_guesses return sys.ODCIVarchar2List pipelined;

    -- work out the possible words in the dictionary
    -- usage : 
    -- select * from table( nurdle.next_guess);
    function next_guess return wordlist pipelined;


end nurdle;
/

There are two functions which pipe out the contents of a Ref Cursor, but they achieve this in different ways.

function colour_guesses return sys.ODCIVarchar2List pipelined;

I copied this idea from Connor’s code. The function uses a pre-defined Oracle type to save us the bother of creating our own.
The type definition can be found in the data dictionary :

select text
from all_source
where owner = 'SYS'
and name = 'ODCIVARCHAR2LIST'
order by line
/

TEXT                                    
----------------------------------------
TYPE ODCIVarchar2List FORCE
 AS VARRAY(32767) OF VARCHAR2(4000);
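
As a quick aside, these types can be used directly in a query. For example :

select column_value as word
from table( sys.ODCIVarchar2List('small', 'gods'))
/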

Incidentally, there are other types out there which can be used in a similar fashion for other base types :

  • ODCINUMBERLIST
  • ODCIDATELIST

This approach is ideal for listing guesses which will only ever have a maximum of 6 rows and the order is pre-defined. However, if you want to interact with your refcursor output through a query predicate, then you really do need to have a column name to work with.
This is why we’ve got a type declared in the package…

type wordlist is table of nurdle_words%rowtype;

…which is used as the return type for the next_guess function…

function next_guess return wordlist pipelined;

This allows us to do things like narrowing down our search to include specific letters that we haven’t tried yet. For example, if I’ve used ‘audio’ for my initial guess, I might want to try the remaining vowel in my next guess :

select word
from table(nurdle.next_guess)
where word like '%e%' -- only words that include the letter 'e'
/

Here’s the package body ( suitably commented) :

create or replace package body nurdle as

    procedure verify_wordlen( i_word in varchar2) 
    is
    -- Private procedure to ensure that a word is the required length.
    -- Used by multiple sub-programs in the package.
    begin
        if nvl(length(i_word), 0) != GC_LEN then
            raise_application_error(-20990, 'Word is not '||GC_LEN||' characters long');
        end if;
    end verify_wordlen;    
    
    procedure add_word ( i_word in nurdle_words.word%type)
    is
        v_word nurdle_words.word%type;
    begin
        verify_wordlen( i_word);
        -- Canonicalize the WORD value to make searching simple 
        v_word := lower(i_word);
        
        merge into nurdle_words
        using dual
        on (word = v_word)
        when not matched then
            insert( word)
            values(v_word);
    end add_word;
    
    procedure guess( i_guess in varchar2, i_result in varchar2)
    is
        v_guess varchar2(GC_LEN);
        v_result varchar2(GC_LEN);
        v_guess_no todays_guesses.guess_no%type;
    begin 
        verify_wordlen(i_guess);
        verify_wordlen(i_result);
        
        -- canonicalize word to all lowercase, the result to all uppercase, just for a change !
        v_guess := lower(i_guess);
        v_result := upper(i_result);
        
        select nvl(max(guess_no), 0) + 1
        into v_guess_no     
        from todays_guesses;

        -- split the guessed word and the result into the component letters
        for i in 1..GC_LEN loop
            insert into todays_guesses( guess_no, letter, pos_in_word, status)
            values(v_guess_no, substr(v_guess, i, 1), i, substr(v_result, i, 1));
        end loop;
    end guess;
    
    procedure show_guesses
        is
        v_rc sys_refcursor;
    begin
        open v_rc for
            select guess_no, 
                listagg(letter)  within group (order by pos_in_word) as guess,
                listagg(status) within group (order by pos_in_word) as result
            from todays_guesses
            group by guess_no
            order by guess_no;
        dbms_sql.return_result(v_rc);
    end show_guesses;
    
    function colour_guesses return sys.ODCIVarchar2List pipelined
    is
        v_rc sys_refcursor;
        v_rtn sys.ODCIVarchar2List := sys.ODCIVarchar2List();
    begin
        dbms_output.put_line(q'[Don't forget to "set sqlformat ansiconsole"]');
        open v_rc for
            select 
                listagg(
                    case status 
                        when 'B' then '@|bg_black '
                        when 'Y' then '@|bg_yellow '
                        when 'G' then '@|bg_green '
                    end
                    ||letter
                    ||'|@') 
                within group (order by pos_in_word) as guess
            from todays_guesses
            group by guess_no
            order by guess_no;
        loop
            v_rtn.extend(1);
            fetch v_rc into v_rtn(v_rtn.count);
            exit when v_rc%notfound;
            pipe row( v_rtn( v_rtn.count));
        end loop;
        close v_rc;
        return;
    end colour_guesses;    

    function next_guess return wordlist pipelined
    is
        v_row nurdle_words%rowtype;
        v_stmnt clob;
        v_green varchar2(GC_LEN);
        v_blank varchar2(100);
        v_rc sys_refcursor;

        -- Remember your injection protection !
        -- We're only concatenating values from this table. It's a small table.
        -- Therefore, just scan the whole thing and make sure that the values we'll be using are all
        -- alphabetical only
        function is_dodgy return boolean is
            v_count pls_integer;
        begin
            select count(*)
            into v_count
            from todays_guesses
            where length(letter) != regexp_count(letter, '[[:alpha:]]');
            
            return v_count > 0;
        end is_dodgy;    
    begin 
        if is_dodgy then
            raise_application_error(-20902, 'There are non-alphabetical characters in TODAYS_GUESSES.LETTER');
        end if;    

        -- Specifying the where clause here will save us jumping through hoops as we
        -- build the rest of the query. The optimizer will essentially disregard
        -- "where 1=1" so it shouldn't cost anything in performance terms
        
        v_stmnt := 'select word from nurdle_words where 1=1';
        
        -- Build a pattern containing all of the Green letters - i.e. correct letters in the right place -
        -- using a pile of Common Table Expressions that interact with each other.
        -- Account for the fact that the same "green" letter may appear in multiple guesses...
        with greens as (
            select letter, pos_in_word,
                row_number() over (partition by letter order by guess_no) as recnum
            from todays_guesses
            where status = 'G'),
        ungreens as (
            select '_' as letter, rownum as pos_in_word
            from dual
            connect by rownum <= GC_LEN),
        green_pattern as (
            select nvl(g.letter, u.letter) letter, u.pos_in_word
            from ungreens u
            left outer join greens g
                on g.pos_in_word = u.pos_in_word
                and g.recnum = 1
            order by u.pos_in_word)
        select listagg(letter) within group( order by pos_in_word)
        into v_green
        from green_pattern;

        v_stmnt := v_stmnt||chr(10)||q'[and word like ']'||v_green||q'[']';

        -- Now add in all of the Yellows - letters that are in the word but which we've put in
        -- the wrong place. First we need to include the letter in the search...
        for r_like in ( 
            select letter
            from todays_guesses
            where status = 'Y'
            and letter not in ( select letter from todays_guesses where status = 'G'))
        loop
            v_stmnt := v_stmnt||chr(10)||q'[and word like '%]'||r_like.letter||q'[%']';
        end loop;
 
        -- Now exclude words that have our yellow letters in the wrong place...
        for r_pos in (
            select case pos_in_word 
                when 1 then letter||'____'
                when 2 then '_'||letter||'___'
                when 3 then '__'||letter||'__'
                when 4 then '___'||letter||'_'
                when 5 then '____'||letter
                end yellow_pos
            from todays_guesses
            where status = 'Y'
            and letter not in ( select letter from todays_guesses where status = 'G'))
        loop    
            v_stmnt := v_stmnt||chr(10)||q'[and word not like ']'||r_pos.yellow_pos||q'[']';
        end loop;    

        -- Where we've made a guess with a repeating letter and found only one in the word,
        -- Wordle will return one of the letters as yellow or green and the other as blank.
        -- We can use the blank to infer that the letter in question is not in that position in the word
        
        for r_double in (
            select case pos_in_word 
                when 1 then letter||'____'
                when 2 then '_'||letter||'___'
                when 3 then '__'||letter||'__'
                when 4 then '___'||letter||'_'
                when 5 then '____'||letter
                end double_pos
            from todays_guesses
            where status = 'B'
            and letter in ( select letter from todays_guesses where status in ('G', 'Y')))
        loop
            v_stmnt := v_stmnt||chr(10)||q'[and word not like ']'||r_double.double_pos||q'[']';
        end loop;    

        -- Exclude all the Blanks
        select listagg(letter, '|') within group (order by pos_in_word)
        into v_blank
        from todays_guesses
        where status = 'B'
        -- handle guesses with double-letters where the word only contains one of the letter
        and letter not in (select letter from todays_guesses where status != 'B');
    
        -- Handle the remote possibility of getting all 5 letters on the first try
        if nvl(length( v_blank), 0) > 0 then
            v_stmnt := v_stmnt||chr(10)||q'[and not regexp_like( word, '(]'||v_blank||q'[)')]';
        end if;    
        
        -- Output the statement in case we need to debug
        dbms_output.put_line(v_stmnt);

        -- Open a refcursor for the statement and pipe the contents
        open v_rc for v_stmnt;
        loop
            fetch v_rc into v_row;
            exit when v_rc%notfound;
                pipe row(v_row);
        end loop;
        close v_rc;
        return;

    end next_guess;
end nurdle;
/

Let’s run the worked example again, but this time using the package :

exec nurdle.guess('audio', 'gbbyb');

select word
from table( nurdle.next_guess)
order by 1
/

If we want to see the query the code has generated, then we can enable serveroutput before making our next guess :

set serveroutput on
exec nurdle.guess('arise', 'gbgbg');

Let’s take a look at the guesses so far – boring version first…
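
…that is :

exec nurdle.show_guesses;

…which, for the guesses above, returns something along the lines of :

  GUESS_NO GUESS RESULT
---------- ----- ------
         1 audio GBBYB
         2 arise GBGBG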

…and now the colour version…
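
…which, as per the usage notes in the package header, we get with :

set sqlformat ansiconsole
select * from table( nurdle.colour_guesses);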

Hopefully this package will justify the title of this post and spare me countless evenings racking my brains for five-letter words ending in ‘E’.

Upgrading Plex Media Server on a Raspberry Pi

Thu, 2022-01-20 14:37

I’m running Plex on my RaspberryPi so that I can stream my movies to multiple devices on my home network.
The periodic upgrades to the Plex software can get a bit fiddly.
Therefore, to save me looking up how to do the same things every time I need to upgrade, I thought I’d put them in one place…

Finding information about my Pi

I’m running Raspberry Pi OS (formerly Raspbian) on my Pi.

Connecting to the Pi via ssh, I can run the following commands to discover information about my system.

To find out what OS version I’m running :

cat /etc/os-release |grep 'PRETTY_NAME'

the model of the Pi :

cat /proc/cpuinfo |grep Model

the specific ARM architecture of the Pi :

cat /proc/cpuinfo |grep 'model name'

the name of the Pi on the local (home) network :

hostname -f

the IP address of the Pi on the local network :

hostname -I

Upgrading Raspbian

This is not usually a requirement for upgrading Plex, but I thought I’d mention it here just in case.

There is a big warning about in-place upgrades in the documentation.

In my case, I only really use my Pi for the plex server so the in-place upgrade does not present a huge risk.
If I happen to lose everything, re-installation and configuration would not be too onerous a task.
Therefore, I followed the link provided in the documentation and ended up with these instructions.

Upgrading Plex Server

I run my Pi headless (i.e. no monitor attached), so I’m usually accessing the Plex management web page on another machine when I see it’s upgrade time :

I can click on the link and download the update file here.
Alternatively, you can get it directly from the Plex Downloads site.

On the download site, I need to select “Linux” when asked to choose a platform :

Then click on the “Choose Distribution” button :

Now I want to select one of the ARM distros ( in my case v7) :

I can now copy the file from my machine to the Pi via SFTP.
There is a good tutorial here about using the freeware sftp client Filezilla to transfer files to and from a pi via sftp.
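
If you’d rather stay in the terminal, scp will do the same job over ssh. A minimal example, assuming the default pi user and that the Pi answers to the (hypothetical) hostname mypi on the local network :

scp plexmediaserver_1.25.2.5319-c43dc0277_armhf.deb pi@mypi:~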

Installing the upgrade

For this bit, I need to have a terminal session on the pi ( I usually use SSH to remote in).

We don’t want to go to all the trouble of running the upgrade if we’re already running the latest available version. One way to confirm the current version is :

apt-cache policy plexmediaserver

First, if we’ve not done so already, make sure we have the most up-to-date packages on the Pi :

sudo apt update -y && sudo apt upgrade -y

Now re-start the Pi to make sure any changes take effect :

sudo reboot

Obviously, you’ll lose your connection to the Pi whilst it’s rebooting and will need to reconnect once it’s back up.

Next we need to stop the plexmediaserver service ( assuming it starts on boot, which mine does) :

sudo service plexmediaserver stop

Now we need to manually install the .deb file we’ve downloaded :

sudo apt install ./plexmediaserver_1.25.2.5319-c43dc0277_armhf.deb

The plexmediaserver should now be up and running. You can check by running :

sudo service plexmediaserver status

…and we should be on the version we’ve just installed :
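
If you’d rather confirm from the terminal, re-running the version check from earlier should now report the new release :

apt-cache policy plexmediaserver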

Plex Clients

Once you’ve completed the Server install, you may need to “remind” the Plex clients running on your TVs of the IP address of your server.

Remember, you can get the IP address of your pi by running :

hostname -I

You’re looking for the first bit of information returned by this command.
On a home network, the address will usually be in the format :

192.168.1.nnn

…where nnn is a number between 1 and 254.


Finding an Identity Column Sequence in Oracle

Wed, 2021-12-29 07:51

It’s that time again. It’s the week between Christmas and New Year and I’ve been grappling with the advanced mathematics required to calculate when the next refuse collection is due.
I’ve got some code that does the calculation and inserts the collection date into a table, which I’m running on Oracle 18cXE.
Automating the test for this code is a bit tricky because the table uses an identity column as its key.
Therefore, I need to figure out the name of the sequence used to generate the identity value and find its current value in the session to identify and test the record we’ve just inserted.

What I’m going to cover here is :

  • my search for the sequence associated with an identity column
  • predicting the name of the identity column sequence without having to look it up
  • just using a named sequence instead of an identity column but without having to write a trigger

Fortified by a traditional Betwixmas breakfast of cold sprouts and trifle, I’m ready to get cracking…

Normally, the bin collections happen on a Wednesday. If there are any public holidays in the seven days prior to the collection, it moves back one day per holiday. There are no collections at the weekend so the collection day would be moved to the following Monday if it were otherwise to fall on a Saturday or Sunday.

To start with then, we have a table which holds the Bank Holiday dates :

create table bank_holidays(
    holiday_date date primary key,
    holiday_name varchar2(25))
/

insert into bank_holidays( holiday_date, holiday_name)
values( date '2021-01-01', 'New Year');

insert into bank_holidays( holiday_date, holiday_name)
values( date '2021-04-02', 'Good Friday');

insert into bank_holidays( holiday_date, holiday_name)
values( date '2021-04-05', 'Easter Monday');

insert into bank_holidays( holiday_date, holiday_name)
values( date '2021-05-03', 'Early May Bank Holiday');

insert into bank_holidays( holiday_date, holiday_name)
values( date '2021-05-31', 'Spring Bank Holiday');

insert into bank_holidays( holiday_date, holiday_name)
values( date '2021-08-30', 'Summer Bank Holiday');
    
insert into bank_holidays( holiday_date, holiday_name)
values( date '2021-12-27', 'Christmas Bank Holiday');

insert into bank_holidays( holiday_date, holiday_name)
values( date '2021-12-28', 'Boxing Day Holiday');

insert into bank_holidays( holiday_date, holiday_name)
values( date '2022-01-03', 'New Year Bank Holiday');

commit;

We then have a table into which the collection days are inserted. This table has an identity column :

create table bin_days(
    id number generated by default on null as identity primary key,
    weekday varchar2(25) not null,
    collection_date date not null)
/

The procedure to populate this table looks like this :

create or replace procedure calculate_bin_day( i_date in date)
is
    v_offset pls_integer;
    v_new_date date;
begin
    select count(*)
    into v_offset
    from bank_holidays
    where holiday_date between i_date - 7 and i_date;
    
    v_new_date := i_date + v_offset;
    if to_char(v_new_date, 'DY') in ('SAT', 'SUN') then
        v_new_date := next_day( v_new_date, 'MONDAY');
    end if;
    insert into bin_days( weekday, collection_date)
    values(to_char(v_new_date, 'DY'), v_new_date);
end calculate_bin_day;
/

What’s that ? This example is intentionally convoluted so that I can play with Identity columns ? Wherever did you get that idea ?

I want to write an anonymous block to test the procedure.
Once we’ve executed the procedure in the test, we’ll have access to the sequence’s CURRVAL value in the session.
What we need to know is what the sequence is called. Bear in mind that the test may need to run in multiple pre-production environments ( e.g. DEV, SIT, UAT) and there’s every chance that the sequence will have a different name when it’s created in each environment. Therefore, we need a way of working out the sequence name at runtime…

Finding the Sequence in the Data Dictionary

Unfortunately, Oracle does not give you the option of naming the sequence for an identity column. However, you can find a reference to it in USER_TAB_COLUMNS…

select data_default
from user_tab_columns
where table_name = 'BIN_DAYS'
and column_name = 'ID'
and identity_column = 'YES';

which, in my case, returns :

DATA_DEFAULT                                                                    
-----------------------------
"MIKE"."ISEQ$$_75278".nextval

The DATA_DEFAULT column is a LONG, which may make it a bit fiddly to work with.
A cleaner source for this information is the “purpose-built” USER_TAB_IDENTITY_COLS…

select sequence_name
from user_tab_identity_cols
where table_name = 'BIN_DAYS'
and column_name = 'ID';

SEQUENCE_NAME                                                                                                                   
-------------
ISEQ$$_75278

We can obtain the sequence name using this query and drop it into a bit of dynamic SQL. As with any selected query string being concatenated into a dynamic SQL statement, we would need to be mindful of the possibility of a blind injection and take appropriate steps.
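
One such step : as we’re only splicing in an object name, we can ask DBMS_ASSERT to vouch for it before the string gets anywhere near EXECUTE IMMEDIATE. A minimal sketch ( assuming the procedure has already run in this session, so the sequence has a CURRVAL) :

declare
    v_seq_name user_tab_identity_cols.sequence_name%type;
    v_id bin_days.id%type;
begin
    select sequence_name
    into v_seq_name
    from user_tab_identity_cols
    where table_name = 'BIN_DAYS'
    and column_name = 'ID';

    -- SQL_OBJECT_NAME raises ORA-44002 unless the string names an existing SQL object
    execute immediate 'select '||sys.dbms_assert.sql_object_name(v_seq_name)||'.currval from dual'
        into v_id;

    dbms_output.put_line('Current ID is '||v_id);
end;
/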
However, there is another option which does not require us to select a VARCHAR2 value…

Predicting the identity sequence name

Thanks to this article by Julian Dyke, we know that Oracle uses this pattern to name an identity sequence :

ISEQ$$_<object_id of the table to which the identity column belongs>

USER_OBJECTS gives us the object_id of the BIN_DAYS table :

select object_id
from user_objects
where object_type = 'TABLE'
and object_name = 'BIN_DAYS';

 OBJECT_ID
----------
     75278

…which is indeed the numeric component of the sequence name in question…

select sequence_name
from user_tab_identity_cols
where table_name = 'BIN_DAYS'
and column_name = 'ID';

SEQUENCE_NAME
--------------
ISEQ$$_75278

As OBJECT_ID is a number, not a string, we don’t have to worry about any nefarious SQL injection shenanigans if we concatenate it into a dynamic SQL select string. Therefore, we can write our test like this :

set serverout on size unlimited
clear screen
declare
    v_in_date date;
    v_expected_date date;
    v_actual_date date;
    v_id bin_days.id%type;
    v_seq_name user_sequences.sequence_name%type;
    
    v_result varchar2(4000);
begin
    -- setup
    v_in_date  := date '2021-04-07';
    v_expected_date := date '2021-04-09';

    dbms_output.put_line('Input date : '||to_char(v_in_date, 'DD-MON-YYYY'));
    -- execute
    calculate_bin_day(v_in_date);
    
    -- validate
    
    -- find the sequence name and obtain the id value 
    select 'ISEQ$$_'||object_id
    into v_seq_name
    from user_objects
    where object_type = 'TABLE'
    and object_name = 'BIN_DAYS';
    
    execute immediate 'select '||v_seq_name||'.currval from dual' into v_id;
    
    select collection_date
    into v_actual_date
    from bin_days
    where id = v_id;
    
    if v_actual_date = v_expected_date then
        v_result := 'PASS';
    else
        v_result := 'FAIL : expected '||to_char(v_expected_date, 'DD-MON-YYYY')
            ||' got '||to_char(v_actual_date, 'DD-MON-YYYY');
    end if;
    dbms_output.put_line(v_result);
end;
/

Using a named sequence

Rather than trying to figure out the sequence name at runtime, we could just use an explicitly named sequence and dispense with the identity column altogether :

create sequence bin_days_id_seq;
drop table bin_days;
create table bin_days(
    id number default bin_days_id_seq.nextval primary key,
    weekday varchar2(25) not null,
    collection_date date not null)
/

This makes our test much simpler…

set serverout on size unlimited
clear screen
declare
    v_in_date date;
    v_expected_date date;
    v_actual_date date;
    v_id bin_days.id%type;
   
    v_result varchar2(4000);
begin
    -- setup
    v_in_date  := date '2021-12-27';
    v_expected_date := date '2021-12-31';

    dbms_output.put_line('Input date : '||to_char(v_in_date, 'DD-MON-YYYY'));
    -- execute
    calculate_bin_day(v_in_date);
    
    -- validate
    
    v_id := bin_days_id_seq.currval;
    
    select collection_date
    into v_actual_date
    from bin_days
    where id = v_id;
    
    if v_actual_date = v_expected_date then
        v_result := 'PASS';
    else
        v_result := 'FAIL : expected '||to_char(v_expected_date, 'DD-MON-YYYY')||' got '||to_char(v_actual_date, 'DD-MON-YYYY');
    end if;
    dbms_output.put_line(v_result);
end;
/

…although we do lose some of the built-in functionality that comes with using an identity column.
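
For example, the identity column’s sequence lives and dies with its table, whereas the named sequence is now ours to manage. To illustrate ( assuming we’re happy to lose the data) :

-- the system-generated ISEQ$$ sequence is removed along with the table...
drop table bin_days purge;

-- ...but the named sequence needs to be tidied up explicitly
drop sequence bin_days_id_seq;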

As always, the best option will depend entirely on the circumstances you find yourself in, but it’s nice to know you have a choice.


Predicting the future with DBMS_SCHEDULER.EVALUATE_CALENDAR_STRING

Fri, 2021-11-19 05:38

“Tempus Fugit !”…may well be similar to the collection of syllables that spring to mind when you find out the “fun” way that your carefully crafted DBMS_SCHEDULER calendar string doesn’t quite do what you thought it did.
Fortunately, the gift of clairvoyance is within your grasp if only you follow the Wisdom of the Ancient DBAs and their mantra of Legere mandata (Lit. “Read the instructions”)…

The path of enlightenment will eventually lead to the mystical and ancient ( well, at least since 11g) writings of :

DBMS_SCHEDULER.EVALUATE_CALENDAR_STRING

… which reveal how to acquire the power to foretell the next date and time match for a given calendar string.

Let’s start with a simple example, a calendar for the same time every day :

declare
    v_cal_string varchar2(4000);
    v_next_date timestamp;
begin
    dbms_output.put_line('Now is : '||to_char(sysdate, 'DY DD-MON-YYYY HH24:MI'));
    -- Calendar string for every day at 06:15
    v_cal_string := 'FREQ=DAILY; BYHOUR=6; BYMINUTE=15';
    
    dbms_scheduler.evaluate_calendar_string( 
        calendar_string => v_cal_string,
        start_date => null,
        return_date_after => null,
        next_run_date => v_next_date);
        
    dbms_output.put_line('Next Run Date is : '||to_char(v_next_date, 'DY DD-MON-YYYY HH24:MI'));    
    
end;
/

Run this and we can see that the next run date is tomorrow morning, as expected ( unless you’ve started work especially early today) :

No chickens were sacrificed in the making of this screenshot

Note that all of the parameters to EVALUATE_CALENDAR_STRING are mandatory.
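
For reference, the declaration is :

dbms_scheduler.evaluate_calendar_string(
    calendar_string   in  varchar2,
    start_date        in  timestamp with time zone,
    return_date_after in  timestamp with time zone,
    next_run_date     out timestamp with time zone);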

If you want something a bit more exciting, how about we pretend that payday is the third Thursday of the month…

declare
    v_cal_string varchar2(4000);
    v_next_date timestamp;
begin
    dbms_output.put_line('Now is : '||to_char(sysdate, 'DY DD-MON-YYYY HH24:MI'));
    v_cal_string := 'FREQ=MONTHLY; BYDAY=3 THU; BYHOUR=0; BYMINUTE=0';
    
    dbms_scheduler.evaluate_calendar_string( 
        calendar_string => v_cal_string,
        start_date => null,
        return_date_after => null,
        next_run_date => v_next_date);
        
    dbms_output.put_line('Next Payday is : '||to_char(v_next_date, 'DY DD-MON-YYYY HH24:MI'));    
    
end;
/

That’s simple enough, but what if we want to check something a bit more subtle, such as a string that fires only on weekdays, or only at weekends for that matter ?
If fiddling around with the FIXED_DATE init.ora parameter in your development environment doesn’t appeal, you have the rather simpler option of setting a value for RETURN_DATE_AFTER…

declare
    v_weekday_string varchar2(4000) := 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=6; BYMINUTE=15';
    v_next_run timestamp;
    v_start_from timestamp;
begin
    dbms_output.put_line('Weekday...');
    -- I want to know that the calendar string will hit Monday - Friday but not Saturday or Sunday
    v_start_from := to_timestamp(next_day(sysdate, 'TUESDAY'));

    for i in 1..5 loop
       
        dbms_scheduler.evaluate_calendar_string(
            calendar_string => v_weekday_string,
            start_date => null,
            return_date_after => v_start_from,
            next_run_date => v_next_run);
        dbms_output.put_line(to_char( v_next_run, 'Day DDth Month YYYY HH24:MI'));
        -- set RETURN_DATE_AFTER to be the date last found by the string
        v_start_from := v_next_run;
    end loop;    
    
end;
/

Alternatively, we can specify the start_date parameter. This time, we’ll test a calendar string for weekends…

declare
    v_weekend_string varchar2(4000) := 'FREQ=DAILY; BYDAY=SAT,SUN; BYHOUR=9; BYMINUTE=30'; 
    -- I like a lie-in at weekends 
    
    v_start_date timestamp;
    v_next_run timestamp;

begin
    dbms_output.put_line('Weekend...');
    v_start_date := to_timestamp(next_day(systimestamp, 'MONDAY'));
    for i in 1..2 loop
        
        dbms_scheduler.evaluate_calendar_string(
            calendar_string => v_weekend_string,
            start_date => v_start_date,
            return_date_after => null,
            next_run_date => v_next_run);
        dbms_output.put_line(to_char( v_next_run, 'Day DDth Month YYYY HH24:MI'));
        -- Unlike the RETURN_DATE_AFTER parameter, we need to increment the start date by a day
        -- in each loop iteration
        v_start_date := v_next_run + 1;
    end loop;    
end;
/

Whatever your future may hold, you can now be confident that it does not include further scheduling surprises.


Side-stepping session silliness – ORA-04068 and MODIFY_PACKAGE_STATE

Sat, 2021-10-30 05:50

I’m writing a PL/SQL package. In a surprise development, I’m actually following best practice and using utPLSQL to unit test it.
Even more surprisingly, I’m following the Test-Driven Development (TDD) approach and writing my tests before my application code.
This being PL/SQL, this approach does require a fair amount of re-compilation.
I’m running in SQLDeveloper, which I’ve configured to open each worksheet in a new session ( Jeff Smith has shared the details on how to do this here).

Here’s my test package…

create or replace package test_it as
    --%suite(some_tests)
    
    --%test( always true)
    procedure test_true;
    
    --%test( always false)
    procedure test_false;
end test_it;
/

create or replace package body test_it as

    -- Global variable - this ensures package will have a session state
    g_test_val integer;
    
    -- Helper to set global variable(s)
    procedure set_globals is
    begin
        g_test_val := 1;
    end set_globals;
    
    -- Tests
    procedure test_true is 
    begin
        set_globals;
        ut.fail('Not yet written');
    end test_true;    

    procedure test_false is 
    begin
        set_globals;
        ut.fail('Not yet written');
    end test_false;    
end test_it;
/

… with this query immediately after the package DDL to identify the session we’re in…

select sys_context('userenv', 'sessionid') from dual;

When I execute the test in a separate session…

clear screen
set serveroutput on size unlimited
exec ut.run('test_it');

select sys_context('userenv', 'sessionid') from dual;

…the result is as you’d expect…

Note that this session’s sessionid is different to the one in which the package is being compiled.

Now to “develop” my test package…

create or replace package test_it as
    --%suite(some_tests)
    
    --%test( always true)
    procedure test_true;
    
    --%test( always false)
    procedure test_false;
end test_it;
/

create or replace package body test_it as

    -- Global variable - this ensures package will have a session state
    g_test_val integer;
    
    -- Helper to set global variable(s)
    procedure set_globals is
    begin
        g_test_val := 1;
    end set_globals;
    
    -- Tests
    procedure test_true is 
    begin
        set_globals;
        ut.expect(g_test_val).to_(equal(1));
    end test_true;    

    procedure test_false is 
    begin
        set_globals;
        ut.fail('Not yet written');
    end test_false;    
end test_it;
/

OK, let’s give that a try…

Oh. The recompilation of our test package in a different session has rendered the state of the package variables in the current session obsolete, causing them to be reset. ORA-04068 is Oracle’s way of warning you that this has happened.

If we just re-execute, the error will disappear as the package state has already been reset by the first execution. It’s a bit of an annoyance though.
Fortunately, there is a solution…

clear screen
exec dbms_session.modify_package_state(dbms_session.reinitialize);
set serveroutput on size unlimited
exec ut.run('test_it');

Note that the call to DBMS_SESSION.MODIFY_PACKAGE_STATE needs to take place before set serveroutput on as that SQL*Plus command makes a call to a stateful PL/SQL package (DBMS_OUTPUT) under the covers. If we run it afterwards then it will have the effect of turning serveroutput off.
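
In other words, this ordering would leave us with no output at all :

set serveroutput on size unlimited
-- this resets DBMS_OUTPUT's state, so serveroutput is now effectively off again
exec dbms_session.modify_package_state(dbms_session.reinitialize);
exec ut.run('test_it');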

To test our fix, let’s develop our test package a bit further…

create or replace package test_it as
    --%suite(some_tests)
    
    --%test( always true)
    procedure test_true;
    
    --%test( always false)
    procedure test_false;
end test_it;
/

create or replace package body test_it as

    -- Global variable - this ensures package will have a session state
    g_test_val integer;
    
    -- Helper to set global variable(s)
    procedure set_globals is
    begin
        g_test_val := 1;
    end set_globals;
    
    -- Tests
    procedure test_true is 
    begin
        set_globals;
        ut.expect(g_test_val).to_(equal(1));
    end test_true;    

    procedure test_false is 
    begin
        set_globals;
        ut.expect(g_test_val = 2).to_(be_false());
    end test_false;    
end test_it;
/

…and then execute…

Thanks to DBMS_SESSION.MODIFY_PACKAGE_STATE, I can now move forward, safe in the knowledge that any errors I fall over now are likely to be all my own work.

Further Reading

Mark Hoxey has written an interesting article on the question of ORA-04068 and how you can stop your packages from…well…getting in a state.


The Three Laws of Robotic Vacuum Cleaners and the ANYDATA Datatype

Wed, 2021-10-06 15:32

After some observation of my recently acquired robotic vacuum cleaner, I’ve concluded that Asimov’s Three Laws of Robotics require some revision :

First Law – A robot may not traverse stairs. If you want it to do the whole house then move to a bungalow.


Second Law – A robot must stop in the most inaccessible place possible, such as under a bed or a sofa, unless doing so would conflict with the First Law


Third Law – A robot must seek out and entangle itself in any loose wires/pet toys/socks it can find unless doing so would conflict with the First or Second Laws.

Mind you, these Laws are unlikely to be immutable. I mean, The Daleks never used to be able to climb stairs…until they discovered CGI…

The Application Parameters Table

If we were to capture these settings in an application, we’d probably use some kind of config file. After all, we’d like some flexibility to change the robot’s behaviour if the laws change, without having to rewrite the software. If the application was running on Oracle, then we’d be more likely to use a table like this :

create table vacuum_params(
    parameter_name varchar2(100),
    varchar_value varchar2(4000),
    number_value number,
    date_value date,
    datatype varchar2(10) not null,
    description varchar2(4000) not null,
    constraint vacuum_params_pk primary key (parameter_name),
    constraint vacuum_params_datatype_chk check (datatype in ('VARCHAR2', 'NUMBER', 'DATE')))
/    

With most tables of this sort, it’s fairly safe to assume that any code reading it will “know” the datatype of the parameter it’s looking up. However, that’s not necessarily the case with any DML statements. Therefore, we have to do a bit of work to ensure that :

  • there is one value per parameter, but no more
  • the value is of the correct type

We can enforce both of these rules with a trigger :

create or replace trigger vacuum_params_biu
    before insert or update on vacuum_params
    for each row
declare
    v_param_count pls_integer;
begin
    -- there must be one, and only one, value set for a parameter
    v_param_count := 
        case when :new.varchar_value is null then 0 else 1 end +
        case when :new.number_value is null then 0 else 1 end +
        case when :new.date_value is null then 0 else 1 end;
        
    if v_param_count = 0 then
        raise_application_error(-20990, 'A value must be specified for a parameter');
    elsif v_param_count > 1 then
        raise_application_error(-20991, 'Only one value can be specified for a parameter');
    end if;
    
    if 
        :new.varchar_value is not null and :new.datatype != 'VARCHAR2' or
        :new.number_value is not null and :new.datatype != 'NUMBER' or
        :new.date_value is not null and :new.datatype != 'DATE' 
    then
        raise_application_error(-20992, 'The parameter must be of the type specified in DATATYPE');
    end if;
end;
/
    

This trigger does what we need so we can now safely add some parameters.

insert into vacuum_params( parameter_name, varchar_value, datatype, description)
values('DALEK_MODE', 'Y', 'VARCHAR2', q'[If set to 'Y' then do not ascend or descend stairs. If 'N' then clean stairs.]');

insert into vacuum_params( parameter_name, number_value, datatype, description)
values('TRAILING_WIRES_FOUND', 38, 'NUMBER', 'A count of the trailing wires that the robot has got tangled in');

insert into vacuum_params(parameter_name, date_value, datatype, description)
values('DATE_LAST_STUCK', date '2021-10-02', 'DATE', 'The date on which the robot last played hide-and-seek under the sofa');

commit;

The ANYDATA Datatype

Using Oracle’s ANYDATA datatype, we can regain some of the “for-free” data validation functionality offered by a more conventional table structure :

drop table vacuum_params;
create table vacuum_params(
    parameter_name varchar2(100),
    param_value anydata not null,
    datatype varchar2(10) not null,
    description varchar2(4000),
    constraint vacuum_params_datatype_chk check( datatype in ( 'VARCHAR2', 'NUMBER', 'DATE')),
    CONSTRAINT vacuum_params_pk primary key( parameter_name))
/

As we now have only one value column, we can make the value mandatory by simply adding a NOT NULL constraint. However, a trigger is still required to ensure that each parameter’s value is of the correct type…

create or replace trigger vacuum_params_biu 
    before insert or update on vacuum_params
    for each row
    
begin
    -- No need to count the number of values passed in because there is only one.
    -- Also that column now has a not null constraint.
    -- However, if we try to use the anydata method on a null argument, we run into
    -- ORA-30625: method dispatch on NULL SELF argument is disallowed.
    -- So, if the param_value is null then just get out of the way and let the not null constraint
    -- do its thing.
    if :new.param_value is null then 
        return; 
    end if;
    if 
        anydata.getTypeName(:new.param_value) = 'SYS.VARCHAR2' and :new.datatype != 'VARCHAR2' or
        anydata.getTypeName(:new.param_value) = 'SYS.NUMBER' and :new.datatype != 'NUMBER' or
        anydata.getTypeName(:new.param_value) = 'SYS.DATE' and :new.datatype != 'DATE'
    then    
        raise_application_error(-20992, 'The parameter must be of the type specified in DATATYPE');
    elsif anydata.getTypeName(:new.param_value) not in ('SYS.VARCHAR2', 'SYS.NUMBER', 'SYS.DATE')
    then
        raise_application_error(-20993, 'Parameter values must be of type VARCHAR2, NUMBER or DATE');
    end if;    
end;
/

As PARAM_VALUE is an object type, we now need to use ANYDATA’s methods when performing DML on it :

insert into vacuum_params( parameter_name, param_value, datatype, description)
values(
    'DALEK_MODE', 
    anydata.convertVarchar2('Y'), 
    'VARCHAR2', 
    q'[If set to 'Y' then do not ascend or descend stairs. If 'N' then clean stairs.]');

insert into vacuum_params( parameter_name, param_value, datatype, description)
values(
    'TRAILING_WIRES_FOUND', 
    anydata.convertNumber(38), 
    'NUMBER', 
    'A count of the trailing wires that the robot has got tangled in');

insert into vacuum_params( parameter_name, param_value, datatype, description)
values(
    'DATE_LAST_STUCK', 
    anydata.convertDate(date '2021-10-02'), 
    'DATE', 
    'The date on which the robot last played hide-and-seek under the sofa');


Using the native types directly in the insert tends to generate rather unhelpful error messages :

insert into vacuum_params( parameter_name, param_value, datatype, description)
values(
    'ROBOT_NAME', 
    'HUSBAND 2.0', 
    'VARCHAR2', 
    'More reliable than the original');

SQL Error: ORA-00932: inconsistent datatypes: expected CHAR got CHAR
00932. 00000 -  "inconsistent datatypes: expected %s got %s"
*Cause:    
*Action:

Querying the table is also a bit different :

select parameter_name,
    case anydata.getTypeName(param_value)
        when 'SYS.VARCHAR2' then anydata.accessVarchar2(param_value)
        when 'SYS.NUMBER' then to_char(anydata.accessNumber(param_value))
        when 'SYS.DATE' then to_char(anydata.accessDate(param_value), 'DD-MON-YYYY')
    end as param_value,
    datatype,
    description
from vacuum_params;

…but the result is a bit neater…
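
If you find yourself writing that CASE expression a lot, it may be worth wrapping the access methods in a helper function. Here’s a minimal sketch for the VARCHAR2 case ( the function name and error number are inventions of mine, not anything ANYDATA provides) :

create or replace function get_string_param( i_name in vacuum_params.parameter_name%type)
    return varchar2
is
    v_value anydata;
begin
    select param_value
    into v_value
    from vacuum_params
    where parameter_name = i_name;

    -- refuse to guess if the parameter isn't actually a string
    if anydata.getTypeName( v_value) != 'SYS.VARCHAR2' then
        raise_application_error( -20994, i_name||' is not a VARCHAR2 parameter');
    end if;
    return anydata.accessVarchar2( v_value);
end get_string_param;
/

…which tidies up the call site to a simple select get_string_param('DALEK_MODE') from dual;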

Whilst ANYDATA may not be the ideal solution, it is worth considering if you like to keep your table structures nice and clean with minimal effort…just like my floors.


Defaulting Argument Values in a BASH script

Mon, 2021-09-13 13:49

I recently found out (again) how to default argument values in a shell script, so I thought I’d write it down this time.
The ABBA flavour in what follows is because we’re talking about BASH – the Bourne Again SHell and Bjorn Again are an ABBA tribute band. This is quite handy because the ABBA back catalogue is rather versatile, having already lent itself to an explanation of Oracle password complexity.

Welcome to the inside of my head. Sorry about the mess…

Let’s start with a simple script – abba.sh :

#!/bin/bash

if [ $# = 0 ];  then
    echo 'No argument values passed in'
else
    TRACK=$1
    echo $TRACK
fi;

…which we now make executable :

chmod u+x abba.sh

Run this – first with no arguments passed in and then with a single argument and we get :

If we want to default the value of $TRACK (and let’s face it, who doesn’t love a bit of Dancing Queen), we can do the following (saved as abba2.sh) …

#!/bin/bash

TRACK="${1:-Dancing Queen}"
if [ $# = 0 ];  then
    echo 'No argument values passed in'
fi;
echo $TRACK

Now, when we run this, we can see that it’ll accept an argument as before. However, if no argument is passed in, the argument count is unaffected but the variable is initialized to its default value :
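
Whilst we’re at it, BASH has a couple of sibling expansions which are worth knowing about. A quick sketch ( saved as abba3.sh – a name of my own invention) :

#!/bin/bash

# ${VAR:=default} - assign the default to the variable if it's unset or null
: "${TRACK:=Waterloo}"
echo "$TRACK"

# ${1:?message} - abort with a message if the argument is missing
ALBUM="${1:?Please supply an album name}"
echo "$ALBUM"

Run with no arguments, this prints Waterloo and then stops with the error message; pass an album name and both echo statements run.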


Help, Oracle’s yanking my Chain (count)

Sun, 2021-07-25 09:32

Despite all the hype, it turns out that Football was coming home only to change its underwear before going out again.
As the Azzurri may have said, “Veni Vidi Vici (ai penalti)”…or something like that.
Look, I’m over it, there are far more important things in life than football and it will not impinge further on this post. Well, not much.

What I’m going to explore here is how you persuade Oracle to report the count of chained rows for a table in the _tables dictionary views. The examples that follow were run on Oracle 19c Enterprise Edition. I have observed the same behaviour all the way back to 11gR2.

What is a Chained Row ?

In Oracle, a chained row is one that is too big to fit into a single block.
Usually this happens because a row that fits within a single block is updated and becomes too big to fit into a block ( row migration), or the row is simply too big for a single block to start with.
Either way, reading such rows requires additional blocks to be retrieved, which may impact on query performance.
There are other ways row chaining can occur but what we’re really interested in here is exactly how to identify which tables contain chained rows ( and how many).

A Table with Chained Rows

Time for a simple example involving a table where the rows are created being too large to fit into a single block.
First, let’s check the block size of the tablespace we’re creating the table in :

select ts.tablespace_name, ts.block_size
from dba_tablespaces ts
inner join user_users usr
    on usr.default_tablespace = ts.tablespace_name
/    
TABLESPACE_NAME                BLOCK_SIZE
------------------------------ ----------
USERS                                8192

Now let’s create a table which includes rows larger than 8192 bytes ( 8K) :

create table years_of_hurt as
    -- Winners of all the major tournaments England haven't won since 1966
    select 
        'ITALY' as team, 
        2 as euro_wins, 
        2 as world_cup_wins,
        rpad('Forza', 4000, '.') as text1,
        rpad('Italia',4000,' ') as text2,
        rpad('!', 4000, ' ') as text3
    from dual 
    union all
    select 'BRAZIL', null, 3, null, null, null from dual union all
    select 'GERMANY', 3, 2, rpad('Die mannschaft', 4000, '.'), rpad('Never bet against them', 4000, ' '), rpad('!', 4000, ' ') from dual union all
    select 'CZECH REPUBLIC', 1, null, null, null, null from dual union all
    select 'ARGENTINA', null, 3, null, null, null from dual union all
    select 'FRANCE', 2, 2, rpad( 'Allez', 4000, '.'), rpad('Les', 4000, ' '), rpad('Bleus!', 4000, ' ') from dual union all
    select 'DENMARK', 1, null, 'Which way to the beach ?', null, null from dual union all
    select 'GREECE',  1, null, null, null, null from dual union all
    select 'SPAIN', 2, 1, rpad('tiki', 4000, '.'), rpad('taka', 4000, ' '), rpad('!', 4000, ' ') from dual union all
    select 'PORTUGAL', 1, null, null, null, null from dual
/    

We can see that there are some records which are larger than the tablespace block size…

select team, 
    nvl(length(team||to_char(euro_wins)||to_char(world_cup_wins)
        ||text1||text2||text3), 0) as record_length
from years_of_hurt
order by 2 desc
/

TEAM           RECORD_LENGTH
-------------- -------------
GERMANY                12009
FRANCE                 12008
ITALY                  12007
SPAIN                  12007
DENMARK                   32
CZECH REPUBLIC            15
ARGENTINA                 10
PORTUGAL                   9
GREECE                     7
BRAZIL                     7

10 rows selected. 
DBMS_STATS

I mean, this is going to be a really short post, right ? If we want a stat like the chained row count we just need to run DBMS_STATS.GATHER_TABLE_STATS, don’t we ?

I mean, there’s even a column called CHAIN_CNT in the _tables views.
If we look at the column comments, we can confirm its purpose :

select comments
from all_col_comments
where owner = 'SYS'
and table_name = 'USER_TABLES'
and column_name = 'CHAIN_CNT'
/

COMMENTS                                                                        
-------------------------------------------
The number of chained rows in the table

So, when we gather stats on the table…

exec dbms_stats.gather_table_stats( user, 'years_of_hurt');

…we should see this reflected in the _tables dictionary views…

select num_rows, blocks, empty_blocks, chain_cnt, avg_row_len, 
    last_analyzed, sample_size
from user_tables
where table_name = 'YEARS_OF_HURT'
/
  NUM_ROWS     BLOCKS EMPTY_BLOCKS  CHAIN_CNT AVG_ROW_LEN LAST_ANALYZ SAMPLE_SIZE
---------- ---------- ------------ ---------- ----------- ----------- -----------
        10         10            0          0        4818 24-JUL-2021          10

Hmmm, it seems that DBMS_STATS is convinced that there are no chained rows at all in our table, despite sampling all 10 rows.

The ANALYZE command

Now, I’m old enough to remember a time when England were really bad at football and Italy…were still pretty good at football. This is back in the days before DBMS_STATS came along.
Then, in order to gather stats on a table, you had to run the ANALYZE command.

Looks like this command is still around and one of its purposes is to identify chained rows…

The CHAINED_ROWS table

We’ll come onto running ANALYZE to identify the individual chained rows in a table shortly. Before that, we need to ensure that there is a table to hold the results of this command.

Oracle provide a script for this purpose ( to be run as SYS as SYSDBA) :

$ORACLE_HOME/rdbms/admin/utlchain.sql

In order to access the table, I additionally ran :

grant all on chained_rows to mike;

You can point ANALYZE TABLE at a custom table of your own devising as long as it has the same structure as the CHAINED_ROWS table created by this script – e.g. :

create table local_chained_rows (
    owner_name varchar2(128),
    table_name varchar2(128),
    cluster_name varchar2(128),
    partition_name varchar2(128),
    subpartition_name varchar2(128),
    head_rowid rowid,
    analyze_timestamp date)
/

Either way, you can now analyze the table and write the chained row details to whichever target table you’re using. In this case, we’ll use the default :

analyze table years_of_hurt list chained rows into sys.chained_rows;
Table YEARS_OF_HURT analyzed.

I can now see that, as expected, there are four chained rows in the table :

select head_rowid, analyze_timestamp
from sys.chained_rows
where owner_name = 'MIKE'
and table_name = 'YEARS_OF_HURT'
/
HEAD_ROWID         ANALYZE_TIM
------------------ -----------
AAATiyAAMAAAAHrAAA 24-JUL-2021
AAATiyAAMAAAAHsAAC 24-JUL-2021
AAATiyAAMAAAAHuAAD 24-JUL-2021
AAATiyAAMAAAAHwAAD 24-JUL-2021
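
If you’re checking more than one table, an aggregate over CHAINED_ROWS is rather easier on the eye than a list of rowids – a quick sketch :

select owner_name, table_name, count(*) as chained_rows
from sys.chained_rows
group by owner_name, table_name
order by count(*) desc
/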

Note that using ANALYZE in this way does not change the stats recorded in the _TABLES dictionary views :

select chain_cnt
from user_tables
where table_name = 'YEARS_OF_HURT'
/
 CHAIN_CNT
----------
         0
Old-School Analysis

“You’ll win nothing with kids !”

Also…if the continued lack of an accurate chain count in USER_TABLES has left you feeling as sick as a parrot, you can always analyze your table just like Alan Hansen used to do…

analyze table years_of_hurt estimate statistics;

Finally, we can see that the stats in USER_TABLES have been updated :

select num_rows, blocks, empty_blocks, chain_cnt, avg_row_len, 
    sample_size
from user_tables
where table_name = 'YEARS_OF_HURT'
/
  NUM_ROWS     BLOCKS EMPTY_BLOCKS  CHAIN_CNT AVG_ROW_LEN SAMPLE_SIZE
---------- ---------- ------------ ---------- ----------- -----------
        10         10            6          4        4829          10

It’s worth noting that, even in our restricted column selection, the EMPTY_BLOCKS and AVG_ROW_LEN values have changed in addition to CHAIN_CNT.
If you do decide you need to use analyze in this way, it may be worth re-executing DBMS_STATS afterwards to ensure the stats used by the CBO are accurate. Note that doing so will not overwrite the CHAIN_CNT value :

exec dbms_stats.gather_table_stats(user, 'years_of_hurt');


select num_rows, blocks, empty_blocks, chain_cnt, avg_row_len, 
    sample_size
from user_tables
where table_name = 'YEARS_OF_HURT'
/
  NUM_ROWS     BLOCKS EMPTY_BLOCKS  CHAIN_CNT AVG_ROW_LEN SAMPLE_SIZE
---------- ---------- ------------ ---------- ----------- -----------
        10         10            6          4        4818          10

For the England Men’s team, the next major tournament ( provided they qualify) is only next year, in 2022. That’s 56 years of schadenfreude if you’re using the Scottish Calendar.


Using the INTERVAL data type ( and some pictures) to tell the difference between two Timestamps in Oracle

Mon, 2021-04-05 14:30

Looking after an overnight batch process can be a bit fraught at times.
If it’s a good day, you might begin the morning with a nice cup of coffee and a leisurely scroll through the logs to confirm that all is well.
In contrast, if the batch has overrun you may well find yourself scrambling through those same logs whilst gulping down large quantities of caffeine in a desperate attempt to hot-wire your brain into working out the elapsed time between each entry. Not great. Especially when you consider that, as Terry Pratchett put it,
“Coffee is a way of stealing time that should by rights belong to your older self”.
A better approach might be to get Oracle to do it for you.

What we’re going to look at here is :

  • the INTERVAL data type that holds the difference between two TIMESTAMP values
  • using the LAG SQL function to report the interval time between timestamps across rows of a query
  • creating some graphs using SQLDeveloper User Defined Reports for when it’s just too early for words and numbers.

The Batch Log table

Running on Oracle 18cXE, our overnight batch logs to the BATCH_LOG table :

create table batch_log (
    id number generated always as identity,
    log_ts timestamp not null,
    job_name varchar2(50),
    status varchar2(25))
/ 

I’ve used the following script to populate the table with some test data :

set serverout on size unlimited
clear screen
declare

    tbl_jobs dbms_utility.name_array;
    tbl_statuses dbms_utility.name_array;
    
    start_ts timestamp := to_timestamp('16-MAR-2021 01:00', 'DD-MON-YYYY HH24:MI');
    v_ts timestamp;
    v_duration pls_integer;
begin

    tbl_jobs(1) := 'EXTRACT';
    tbl_jobs(2) := 'TRANSFORM';
    tbl_jobs(3) := 'LOAD';
    tbl_jobs(4) := 'REPORT';

    tbl_statuses(1) := 'STARTED';
    tbl_statuses(2) := 'COMPLETED';
        
    for i in 1..7 loop
        v_ts := start_ts;
        for j in 1..tbl_jobs.count loop
            v_duration := 
                case mod(j, 4) 
                    when 3 then trunc(dbms_random.value(8,11))
                    when 2 then trunc(dbms_random.value(30,41)) 
                    when 1 then trunc(dbms_random.value(25,31)) 
                    else trunc(dbms_random.value(15,21)) 
                end;
            for k in 1..tbl_statuses.count loop
                insert into batch_log( log_ts, job_name, status)
                values( 
                    case when tbl_statuses(k) = 'STARTED' then v_ts else v_ts + (v_duration/1440) end, 
                    tbl_jobs(j), 
                    tbl_statuses(k));
            end loop;
            v_ts := v_ts + (v_duration/1440);
        end loop;
        start_ts := start_ts + interval '1' day;
    end loop;    
end;
/

-- Now simulate a long running Load job
update batch_log 
set log_ts = log_ts + (60/1440)
where id >= (select max(id) from batch_log where job_name = 'LOAD' and status = 'COMPLETED');


commit;


As a result, the table now has a week’s worth of log entries.
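
A quick count confirms this ( four jobs, each logging a STARTED and a COMPLETED row, should give us eight entries per day) :

select trunc(log_ts) as run_date, count(*) as entries
from batch_log
group by trunc(log_ts)
order by 1
/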

The INTERVAL data type

You can find lots of information about Oracle’s DATETIME and INTERVAL data types in the documentation. Here’s the 18c version, for example ( all of the code here is written and tested on 18cXE).

You will see from this that we have two distinct INTERVAL data types – Year to Month and Day to Second.
It’s the latter of these which is relevant in the current context, unless you’re having extreme performance issues on your overnight batch.
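
In fact, you don’t need to do anything special to get hold of one of these – subtracting one TIMESTAMP from another yields an INTERVAL DAY TO SECOND. For example ( the literals here are made up) :

select timestamp '2021-03-16 02:28:00' - timestamp '2021-03-16 01:00:00' as elapsed
from dual;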

If you have any scheduler jobs running, you can see an example of the INTERVAL data type in action in the _scheduler_jobs views :

select owner, job_name, last_run_duration
from dba_scheduler_jobs t
where run_count > 0;
OWNER           JOB_NAME                                 LAST_RUN_DURATION  
--------------- ---------------------------------------- -------------------
SYS             PURGE_LOG                                +00 00:00:00.924991
SYS             PMO_DEFERRED_GIDX_MAINT_JOB              +00 00:00:00.117572
SYS             CLEANUP_NON_EXIST_OBJ                    +00 00:00:00.221356
SYS             CLEANUP_ONLINE_IND_BUILD                 +00 00:00:00.382708
SYS             CLEANUP_TAB_IOT_PMO                      +00 00:00:00.129863
SYS             CLEANUP_TRANSIENT_TYPE                   +00 00:00:00.071285
SYS             CLEANUP_TRANSIENT_PKG                    +00 00:00:00.094254
SYS             CLEANUP_ONLINE_PMO                       +00 00:00:00.024660
SYS             FILE_SIZE_UPD                            +00 00:00:00.166189
SYS             ORA$AUTOTASK_CLEAN                       +00 00:00:00.151000
SYS             DRA_REEVALUATE_OPEN_FAILURES             +00 00:00:00.387121
SYS             BSLN_MAINTAIN_STATS_JOB                  +00 00:00:00.443146
SYS             RSE$CLEAN_RECOVERABLE_SCRIPT             +00 00:00:00.324392
SYS             SM$CLEAN_AUTO_SPLIT_MERGE                +00 00:00:00.023483
APEX_180200     ORACLE_APEX_PURGE_SESSIONS               +00 00:00:00.868204
APEX_180200     ORACLE_APEX_MAIL_QUEUE                   +00 00:00:00.026859
APEX_180200     ORACLE_APEX_WS_NOTIFICATIONS             +00 00:00:01.685787
APEX_180200     ORACLE_APEX_DAILY_MAINTENANCE            +00 00:00:00.943306
ORDS_METADATA   CLEAN_OLD_ORDS_SESSIONS                  +00 00:00:00.640826

19 rows selected. 

According to the documentation, INTERVALs are not subject to a format model in the same way that dates and timestamps are. However, you can re-format their contents using the EXTRACT function.
For example, if we want to convert the LAST_RUN_DURATION from our previous query into seconds, then we can run :

select owner, job_name,
    (extract ( minute from last_run_duration) * 60) +
    extract( second from last_run_duration) as last_run_secs
from dba_scheduler_jobs
where run_count > 0
order by 3 desc
/

…which returns…

OWNER           JOB_NAME                                 LAST_RUN_SECS
--------------- ---------------------------------------- -------------
APEX_180200     ORACLE_APEX_WS_NOTIFICATIONS                  1.685787
APEX_180200     ORACLE_APEX_DAILY_MAINTENANCE                  .943306
SYS             PURGE_LOG                                      .924991
APEX_180200     ORACLE_APEX_PURGE_SESSIONS                     .868204
ORDS_METADATA   CLEAN_OLD_ORDS_SESSIONS                        .640826
SYS             BSLN_MAINTAIN_STATS_JOB                        .443146
SYS             DRA_REEVALUATE_OPEN_FAILURES                   .387121
SYS             CLEANUP_ONLINE_IND_BUILD                       .382708
SYS             RSE$CLEAN_RECOVERABLE_SCRIPT                   .324392
SYS             CLEANUP_NON_EXIST_OBJ                          .221356
SYS             FILE_SIZE_UPD                                  .166189
SYS             ORA$AUTOTASK_CLEAN                                .151
SYS             CLEANUP_TAB_IOT_PMO                            .129863
SYS             PMO_DEFERRED_GIDX_MAINT_JOB                    .117572
SYS             CLEANUP_TRANSIENT_PKG                          .094254
SYS             CLEANUP_TRANSIENT_TYPE                         .071285
APEX_180200     ORACLE_APEX_MAIL_QUEUE                         .026859
SYS             CLEANUP_ONLINE_PMO                              .02466
SYS             SM$CLEAN_AUTO_SPLIT_MERGE                      .023483

19 rows selected.

What we need now is a way to calculate the durations between rows in a single query…

The LAG function

As ever, OracleBase has a pretty good explanation of the LEAD and LAG functions.

All we need to do is to use the LAG function to find the elapsed time between the timestamp of the current record and that of its predecessor in the result set :

select trunc(log_ts) as run_date,
    job_name,
    status,
    log_ts,
    case when status = 'COMPLETED' then
        log_ts - lag( log_ts) over( order by log_ts)
    end as duration,
    case when status = 'COMPLETED' then
        extract( hour from log_ts - lag(log_ts) over (order by log_ts)) * 60 +
        extract( minute from log_ts - lag(log_ts) over (order by log_ts))
    end as duration_mins
from batch_log
order by log_ts
/

We could use what we have so far as the basis for an ad-hoc script to give us information about batch runtimes. Then again, if it’s one of those mornings where you’re struggling a bit, looking at a picture might make things a bit easier. That last listing was from SQLDeveloper, which is handy considering where we’re going next …

SQLDeveloper User Defined Chart Reports – The Line Chart

We can see the total runtime for the batch for each day using the query :

with batch_runs as (
    select job_name, status, log_ts,
        case status when 'COMPLETED' then
            (extract( hour from log_ts - lag( log_ts, 1) over( order by log_ts)) * 60) +
            extract(minute from log_ts - lag(log_ts, 1) over( order by log_ts))
        end as duration_mins
    from batch_log
    where (
        (job_name = 'EXTRACT' and status = 'STARTED')
        or (job_name = 'REPORT' and status = 'COMPLETED'))    
    order by log_ts)
select trunc(log_ts) as run_date,
    duration_mins
from batch_runs    
where duration_mins is not null
order by 1
/

…which returns…

RUN_DATE    DURATION_MINS
----------- -------------
16-MAR-2021            88
17-MAR-2021            83
18-MAR-2021            90
19-MAR-2021            84
20-MAR-2021            94
21-MAR-2021            91
22-MAR-2021           155

7 rows selected. 

Jeff Smith has a nifty way of turning a SQLDeveloper query grid into a chart report, which I’ve largely plagiarised below.

First, we run the query in a SQLDeveloper worksheet. Then, in the result grid, right-click and select Save Grid as Report :

This will take you to the Create Report Window :

As you can see, SQLDeveloper has reformatted the query a bit.

Now we can add a Name for the report.
Before we go any further, it’s probably a good idea to specify a connection at this point. This will help us when we’re configuring the report as we’ll be able to use live data to see how things look.
The other thing we need to do is to specify a series name so that we can produce a line graph.

This means that the report query now looks like this :

SELECT "RUN_DATE", "DURATION_MINS", 
    'Overnight Runs' as series_name 
FROM(
with batch_runs as (
    select job_name, status, log_ts,
        case status when 'COMPLETED' then
            (extract( hour from log_ts - lag( log_ts, 1) over( order by log_ts)) * 60) +
            extract(minute from log_ts - lag(log_ts, 1) over( order by log_ts))
        end as duration_mins
    from batch_log
    where (
        (job_name = 'EXTRACT' and status = 'STARTED')
        or (job_name = 'REPORT' and status = 'COMPLETED'))    
    order by log_ts)
select trunc(log_ts) as run_date,
    duration_mins
from batch_runs    
where duration_mins is not null
order by 1
)

…and with a connection specified, I can check everything is still working as expected by clicking the Test Report button…

…which provides the expected output.

Now I know that my query still works, I can change the report Style from Table to Chart :

Now we need to click on the Property node of the navigation tree in the left-hand pane of the Create Report Window.
Here, we change the Chart Type to Line :

Next we need to specify the Data for the report so we need to click on the Data Tree Node.

As we’re connected to a database, we can Fetch Column Names so we can select the Group, Series and Value columns from a drop-down.

Now we can check the Use Live Data checkbox and click Refresh Report to see what the output will look like :

Finally, click the Apply button in the bottom right of the Create Report window and the new report will appear in the User Defined Reports node of the main Reports pane :

Now we can run the report in the usual way…

A picture paints a thousand words…but in this case just says “Hmmm…”

If we want a more granular view of what’s happening during the batch run, but we’re not quite ready to deal with actual text, we can try using…

A Stacked Bar Chart

This time, we want to know the elapsed time for each process in the batch. Therefore, we’ll use this query as the basis for our report :

select trunc(log_ts) as run_date,
    job_name,
    status,
    log_ts,
    case when status = 'COMPLETED' then
        log_ts - lag( log_ts) over( order by log_ts)
    end as duration,
    case when status = 'COMPLETED' then
        extract( hour from log_ts - lag(log_ts) over (order by log_ts)) * 60 +
        extract( minute from log_ts - lag(log_ts) over (order by log_ts))
    end as duration_mins
from batch_log
order by log_ts

The Report creation is as before, except that this time we choose a Chart Type of Bar – Vertical Stack.
Additionally, in order to view the total time for each day, we go to the Plot Area Property and check the Cumulative box in the Data Tips section :

Running this new report, we can see that the increase in duration is due to the Load process taking rather longer than normal :

Should it really be that colour ?

Of course, you’ll need to investigate further to find out just exactly what’s going on but, thanks to some handy SQL features and a couple of SQLDeveloper Chart Reports, at least you know where to start looking.
