Amis Blog

Friends of Oracle and Java

Serverless computing with Azure Functions – interaction with Event Hub

Thu, 2017-08-31 01:16

In a previous article, I described my first steps with Azure Functions – one of the implementation mechanisms for serverless computing: Serverless Computing – Function as a Service (FaaS) – with Azure Functions – first small steps with a Node/JavaScript function. Functions can be triggered in many ways – by HTTP requests, the clock (scheduled), by database modifications and by events. In this article, I will look at a Function that is triggered by an event on the Azure Event Hub. I will also show how a function (triggered by an HTTP request) can write to Event Hub.

Functions can have triggers and input bindings. The trigger is what causes the function to run – and it can carry a payload. An input binding is a declarative definition of data that the function has (read) access to during execution. Functions can also have output bindings – one for each of the channels to which they write results.


The first steps: arrange an Azure account and create an Event Hubs namespace – the context in which to create individual event hubs (the latter are comparable to Kafka topics).

On the Event Hub side of the world:

  • Create Event Hub
  • Create Shared Access Policy
  • Get Connection String URL for the shared access policy

In Azure Functions –

  • At the function app level: Create Connection String for Connection String URL copied from shared access policy
  • Create a function based on the template Data Processing/JavaScript/EventHub Trigger – a JavaScript function triggered by a message on the indicated Event Hub in the Event Hub namespace addressed through the connection string; save and (test) run the function (this will publish an event to the event hub)
  • Optionally: create a second function, for example triggered by an HTTP Request, and have it write to an output binding to the Event Hub; in that case, an HTTP request to the second function will indirectly – through Event Hub – cause the first function to be executed


In Event Hub Namespace

Create Event Hub GreetingEvents. Set the name and accept all defaults. Press Create.




Once the Event Hub creation is complete, we can inspect the details – such as 1 Consumer Group, 2 Partitions and 1 Day message retention:


This is our current situation:



Now return to the overview and click on the link Connection Strings. We need to create a connection from the Azure Functions app to the Event Hub Namespace, using the URL for the Shared Access Policy we want to leverage for that connection.


Click on Connection Strings to bring up a list of Shared Policies. Click on the Shared Policy to use for accessing the Event Hub namespace from Azure Functions.


Click the copy button to copy the RootManageSharedAccessKey connection string to the clipboard.

In Azure Function App

In order for the Function to access the Event Hub [Namespace], the connection string to the Event Hub [Namespace] needs to be configured as an app setting in the function app [the context in which the Function to be triggered by Event Hub is created]. Note: that is the value in the clipboard.


Scroll down.


Create Connection String to Event Hub Namespace using the value in the clipboard



Save changes in function app


At this point, a link is established between the function app (context) and the Event Hub Namespace. Any function in the app can link to any event hub in the namespace.
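The app setting that results holds the standard Event Hubs connection string format. A sketch of what it looks like (the setting name here is hypothetical – the actual name is whatever you entered on the app settings page; the placeholders come from the value copied to the clipboard):

```json
{
  "MyEventHubNamespaceConnection": "Endpoint=sb://<namespace>;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>"
}
```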



Create Function to be Triggered by Event

With the connection string in place, we can create a function that is executed when an event is published on Event Hub greetingevents. That is done like this:


Type the name of the function, click on the link new and select event hub greetingevents to associate the function with:




Click on create.

The function is created – including the template code:



The configuration of the function is defined in the file function.json. Its contents can be inspected and edited:


The value of connection is a reference to an App Setting that was created along with the function, based on the connection string to the Event Hub Namespace.
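For reference, a function.json for an Event Hub trigger would look roughly like this (the connection value is the generated app setting name, shown here as a placeholder; the binding name follows the template default):

```json
{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "name": "myEventHubMessage",
      "direction": "in",
      "path": "greetingevents",
      "connection": "<generated-app-setting-name>"
    }
  ],
  "disabled": false
}
```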

Click on Save and Run. A test event is published to the Event Hub greetingevents. In the log window – we can see the function reacting to that event. So we have lift off for our function – it is triggered by an event (and therefore presumably by all events) on the Event Hub and processes these events according to the (limited) logic it currently contains.


The setup looks like this:



Publish to Event Hub from Azure Function


To make things a little bit more interesting we will make the Azure Function that was introduced in a previous article for handling HTTP Request “events” also produce output to the Event Hub greetingevents. This means that any HTTP request sent to function HttpTriggerJS1 leads to an event published to Event Hub greetingevents and in turn to function EventHubTrigger-GreetingEvents being triggered.



To add this additional output flow to the function, first open the Integration tab for the function and create a new Output Binding, of type Azure Event Hubs. Select the connection string and the target Event Hub – greetingevents. Define the name of the context parameter that provides the value to be published to the Event Hub – outputEventHubMessage:


We now need to modify the code of the function, to actually set the value of this context parameter called outputEventHubMessage:


At this point, we can test the function – and see how it sends the event


that indirectly triggers our former function.

When the HTTP Request is sent to the function HttpTriggerJS1 – from Postman, for example:


The function returns its response and also publishes the event. We can tell, because in the logging for function EventHubTrigger-GreetingEvents we see the name that was sent as a parameter to the HttpTriggerJS1 function.

(Note: In this receiving function, I have added the line in red to see the contents of the event message.)





Azure Function – Event Hub binding – 

Azure Documentation on Configuring App Settings – 

Azure Event Hubs Overview – 

Azure Functions Triggers and Binding Concepts –

The post Serverless computing with Azure Functions – interaction with Event Hub appeared first on AMIS Oracle and Java Blog.

Serverless Computing – Function as a Service (FaaS) – with Azure Functions – first small steps with a Node/JavaScript function

Wed, 2017-08-30 03:59

If your application does not have internal state – and sometimes it is handling peak loads of requests while at other times it is not doing any work at all, why then should there be one or even more instances of the application (plus container and/or server) continuously and dedicatedly up and running for the application? For peak loads – a single instance is nowhere near enough. For times without any traffic, even a single instance is too much – and yet you pay for it.

Serverless computing – brought to prominence with AWS Lambda – is an answer to this. It is defined on Wikipedia as a “cloud execution model” in which “the cloud provider dynamically manages the allocation of machine resources”. The subscriber to the cloud service provides the code to execute and specifies the events that should trigger execution. The cloud provider takes care of running that code whenever the event occurs. Pricing is based on the combination of the resources used (memory, possibly CPU) and the time it takes to execute the function. No compute node is permanently associated with the function and any function [execution] instance can run on a different virtual server (so it is not really serverless in a strict sense – a server is used for running the function, but it can be a different server with each execution). Of course, function instances can still have and share state by using a cache or backend data store of some kind.

The Serverless Function model can be used for processing events (a very common use case) but also for handling HTTP requests and therefore for implementing REST APIs or even stateless web applications. Implementation languages for serverless functions differ a little across cloud providers. Common runtimes are Node, Python, Java and C#. Several cloud vendors provide a form of Serverless Computing – AWS with Lambda, Microsoft with Azure Functions, Google with Google Cloud Functions and IBM with BlueMix FaaS (Function as a Service). Oracle announced Oracle [Cloud] Functions at Oracle OpenWorld 2016 (Oracle Functions – Serverless architecture on the Oracle PaaS Cloud) and is expected to actually release the service (including support for orchestration of distributed serverless functions) around Oracle OpenWorld 2017 (October 2017) – see for example the list of sessions at OOW2017 on Serverless.

Note: monitoring the execution of the functions, collecting run time metrics and doing debugging on issues can be a little challenging. Special care should be taken when writing the functions – as for example there is no log file written on the server on which the code executes.

In this article, I briefly show an example of working with Serverless Computing using Azure Functions.

Steps for implementing a Function:

  • arrange Azure cloud account
  • create Function App as context for Functions
  • create Function
  • trigger Function – cause the events that trigger the Function.
  • inspect the result from the function
  • monitor the function execution

Taking an existing Azure Cloud Account, the first step is to create a Function App in your Azure subscription – as a context to create individual functions in (“You must have a function app to host the execution of your functions. A function app lets you group functions as a logic unit for easier management, deployment, and sharing of resources.”).


I will not discuss the details for this step – they are fairly trivial (see for example this instruction).

Quick Overview of Steps

Navigate into the function app:


Click on plus icon to create a new Function:


Click on go to quickstart for the easiest way in:


Select scenario WebHook + API; select JavaScript as the language. Note: the JavaScript runtime environment is Node 6.5 at the time of writing (August 2017).

Click on Create this function.


The function is created – with a name I cannot influence


When the function was created, two files were created: index.js and function.json. We can inspect these files by clicking on the View Files tab:


The function.json file is a configuration file where we specify generic meta-data about the function.

The integration tab shows the triggering event(s) for this function – configured for HTTP requests.


The manage tab allows us to define environment variables to pass into the function runtime execution environment:


The Monitor tab allows us to monitor executions of the Function and the logging they produce:


Return to the main tab with the function definition. Make a small change in the template code – to make it my own function; then click on Save & Run to store the modified definition and make a test call to the Function:


The result of the test call is shown on the right as well in the logging tab at the bottom of the page:


To invoke the function outside the Azure Cloud environment, click on Get Function URL.


Click on the icon to copy the URL to the clipboard.

Open a browser, paste the URL and add the name query parameter:


In Postman we can also make a test call:


Both these calls are from my laptop, without any special connection to the Azure Cloud. You can make that same call from your environment. The function is triggerable from anywhere – when an HTTP request arrives for the function, Azure assigns it a runtime environment in which to execute the JavaScript code. Pretty cool.

The logging shows the additional instances of the function:


From within the function, we can write output to the logging. All function execution instances write to the same pile of logging, from within their own execution environments:


Now Save & Run again – and see the log line written during the function execution:


Functions lets you define the threshold trace level for writing to the console, which makes it easy to control the way traces are written to the console from your functions. You can set the trace-level threshold for logging in the host.json file, or turn it off.
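The trace threshold mentioned here lives in host.json. A minimal sketch for the v1 runtime ('verbose' is one of the allowed levels, alongside 'off', 'error', 'warning' and 'info'):

```json
{
  "tracing": {
    "consoleLevel": "verbose"
  }
}
```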

The Monitor tab provides an overview of all executions of the function, including the not so happy ones (I made a few coding mistakes that I did not share). For each instance, the specific logging and execution details are available:



Debug Console and Package Management

At the URL https://<function_app_name> we can access a management/development console where we can perform advanced operations regarding application deployment and configuration:


The CMD console looks like this:


NPM packages and Node modules can be added to a JavaScript Function. See the documentation for details.

A not-so-obvious feature of the CMD console is the ability to drag files from my local Windows operating system into the browser – such as the package.json shown in this figure:


Note: You should define a package.json file at the root of your function app. Defining the file lets all functions in the app share the same cached packages, which gives the best performance. If a version conflict arises, you can resolve it by adding a package.json file in the folder of a specific function.
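A minimal package.json at the function app root might look like this (the dependency is just an example, not one used in this article); after uploading it, run npm install from the console so the packages are actually installed:

```json
{
  "name": "my-function-app",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "^4.17.4"
  }
}
```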


Creating a JavaScript (Node) Function in Azure Functions is pretty straightforward. The steps are logical, the environment reacts intuitively and smoothly. Good fun working with this.

I am looking forward to Oracle’s Cloud service for serverless computing – to see if it provides a similarly good experience, and perhaps even more. More on that next month I hope.

Next steps for me: trigger Azure Functions from other events than HTTP Requests and leveraging NPM packages from my Function. Perhaps also trying out Visual Studio as the development and local testing environment for Azure Functions.



FAQ on AWS Lambda –

Wikipedia on Serverless Computing –

Oracle announced Oracle [Cloud] Functions at Oracle OpenWorld 2016  – Oracle Functions – Serverless architecture on the Oracle PaaS Cloud

Sessions at Oracle OpenWorld 2017 on Serverless Computing (i.e. Oracle Functions) –  list of session at OOW2017 on Serverless

Azure Functions – Create your first Function – 

Azure Functions Documentation – 

Azure Functions HTTP and webhook bindings –

Azure Functions JavaScript developer guide –

How to update function app files – package.json, project.json, host.json –

The post Serverless Computing – Function as a Service (FaaS) – with Azure Functions – first small steps with a Node/JavaScript function appeared first on AMIS Oracle and Java Blog.

Creating JSFiddle for Oracle JET snippet – using additional modules

Tue, 2017-08-29 02:14

My objective in this article: describe how I (and therefore you) can use JSFiddle to create running, shared samples of Oracle JET code. This is useful for question on the JET Forum or on StackOverflow and also as demo/illustration along a blog post or other publication. JSFiddle is an IDE-like web site that allows us to create mini-applications consisting of CSS, HTML and JavaScript resources and run these client side applications in the browser. We can edit the code and re-run. We can easily embed JSFiddle components in articles and we can share JSFiddle entries simply by sharing the URL.

In order to create Oracle JET fiddles, we need a template that takes care of all the scaffolding – the basic dependencies (CSS and JavaScript) that we always need. Ideally, by using the template, we can focus on the code that is specific to the sample we want to create as a fiddle.

The original JSFiddle that I used as a starting point is from John Brock … ehm, Peppertech:

As an external resource the fiddle loads requireJS:

All other required JavaScript modules are loaded by requireJS – as instructed in the configuration of the paths property in requirejs.config. The modules include Oracle JET (core, translation, drag and drop), jQuery, Hammer, Knockout and ES6 Promise.


The custom JavaScript for the specific JET snippet we want to demonstrate in the Fiddle goes into the main function that is passed to require at the beginning of the Fiddle – along with the list of modules required by the main function. This function defines the ViewModel and applies data bindings through knockout, linking the ViewModel to an HTML element.

If we have additional custom JavaScript that is in a separate JavaScript files, we can access these as external dependencies that are added to the fiddle. Note that JSFiddle will only access resources from Content Delivery Networks; we can make use of a trick to add our own custom JavaScript resources to the fiddle:

  • store the files on GitHub
  • create a CDN-style URL to each file, for example using RawGit (a site that serves GitHub project files through MaxCDN)
  • add the URL as external resource to the fiddle

Any file added in this fashion is loaded by JSFiddle when the fiddle is executed.

In my case, I want to load a custom module – through require.js. In that case, I do not have to add the file that contains the module definition to the JSFiddle as an external resource. I can have require.js load the resource directly from the CDN URL (note: loading the file from the raw GitHub URL does not work: “Refused to execute script from ‘’ because its MIME type (‘text/plain’) is not executable, and strict MIME type checking is enabled.”).

My custom module is on GitHub:


I copy the URL to the clipboard. Then, on the RawGit site, I paste the URL:


I then copy the CDN-style URL to the clipboard. In JSFiddle I can add this URL path to the code – in function _getCDNPath(paths). Note: I remove the actual name of the file, so the path itself refers to the directory. In this directory, there could be multiple modules.
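In code, that path extension amounts to something like the following sketch (user, repo, branch and directory in the URL are placeholders – not the article's actual values):

```javascript
// Hedged sketch: add a RawGit-served GitHub directory to the RequireJS path
// configuration used by the fiddle. The value refers to a directory; RequireJS
// appends '/<module-name>.js' when resolving 'custom-modules/<module-name>'.
function addCustomModulePath(paths) {
  paths['custom-modules'] = '';
  return paths;
}
```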


Finally, the module is required into the fiddle through:


Here I refer to custom-modules/my-module, which resolves to the module defined in file my-module.js in the [GitHub] directory referred to by the CDN URL added to newPaths.

The full fiddle looks like this – hardly anything specific, just a tiny little bit of data binding to the ViewModel:


This fiddle now becomes my starting point for any future fiddle for Oracle JET 3.2. As is shown below.

Create New Fiddle from Template

To create any Oracle JET fiddle, I can now (and you can do that as well) go to my template fiddle and click on Fork.


A new fiddle is created as a clone of the template. I should update the metadata of the fiddle (so as not to get confused myself) and can then create the example I want. Here I show a very basic popup example:



The resulting fiddle – created as a clone of the template fiddle, extended with a few lines of code to create the intended effect.

The two fiddles show up on my public JSFiddle Dashboard:



Fiddles can be embedded in articles and other publications. Open the embed option in the top menu and copy the embed-code or link:


Then use that code in the source of whatever you want to embed the fiddle into. For example – down here:



Jim Marion’s blog article


Source in GitHub:

The starting point fiddle by PepperTech:

The final resulting fiddle with the JET Tooltip example:

My public JSFiddle Dashboard

The post Creating JSFiddle for Oracle JET snippet – using additional modules appeared first on AMIS Oracle and Java Blog.

Oracle JET Nested Data Grid for presenting Hierarchical Data Sets – with cell popup, collapse and expand, filter and zoom

Mon, 2017-08-28 01:27

As part of a SaaS Enablement project we are currently working on for a customer using Oracle JET, a requirement came up to present a hierarchical data set – in a way that quickly provides an overview as well as allows access to detail information and the ability to focus on specific areas of the hierarchy. The data describes planning and availability of employees, and the hierarchy is by time: months, weeks and days. One presentation that would fit the requirements was a spreadsheet-like data grid with employees in the rows, the time hierarchy in [nested] columns and the hours planned and available in the cells. A popup that appears when the mouse hovers over a cell presents detail information for the planned activities for that day and employee. Something like this:



This article will not describe in detail how I implemented this functionality using Oracle JET – although I did, and all the source code is available in GitHub.

This article is a brief testament to the versatility of Oracle JET [and modern browsers and the JavaScript ecosystem] and the strength of the Oracle JET UI component set as well as its documentation – both the Cookbook for the components and the JS API documentation. They allowed me to quickly turn the requirements into working code. And throw in some extra functionality while I was at it.

When I first looked at the requirements, it was not immediately clear to me that JET would be able to easily take on this challenge. I certainly did not rush out to give estimates to our customer – depending on the shoulders we could stand on, this could be a job for weeks or more. However, browsing through the Oracle JET Cookbook, it did not take too long to identify the data grid (the smarter sibling of the table component) as the obvious starting point. And to our good fortune, the Cookbook has a recipe for Data Grids with Nested Headers:

With this recipe – which includes source code – as starting point, it turned out to be quite straightforward to plug in our own data, rewrite the code from the recipe to handle our specific data structure and add custom cell styling. When that was done – rather easily – it was very seductive to start adding some features, both to take on the challenge and to further woo (or wow – but not woe as I had mistyped originally) our customer.

Because the data set presented in the grid is potentially quite large, it is convenient to have ways to narrow down what is shown. An intuitive way with hierarchical data is to collapse branches of the data set that are currently not relevant. So we added collapse icons to month column headers; when a month is collapsed, the icon changes to an expand icon. Clicking the icon has the expected effect of collapsing all weeks and days under the month or expanding them. From here it is a small step to allow all months to be collapsed or expanded with a single user action – so we added icons and supporting logic to make that happen.


Also intuitive is the ability to drill down or zoom into a specific column – either a month or a week. We added that feature too – by allowing the month name or week number in the column header to be clicked upon. When that happens, all data outside the selected month and week are hidden.



Finally, and very rough at present, we added a search field. The user can currently enter a number; this is interpreted as the number of the month to filter on. However, it would not be hard to interpret the search value more intelligently – also filtering on cell content for example.



Did we not have any challenges? Well, not major stumbling blocks. Some of the topics that took a little longer to deal with:

  • understand the NestedHeaderDataGridDataSource code and find out where and how to customize it for our own needs
  • create a custom module and use require to make it available in our ViewModel
  • use of a cell template and Knockout tags for conditional custom cell rendering
  • capture the mouseover event in the cell template and pass the event and the cell (data) context to a custom function
  • generate a unique id for the span element in a cell in order to have an identifiable DOM element to attach the popup to
  • programmatically notify all subscribers to Knockout observables that the observable has been updated (and the data grid component should refresh) (use function valueHasMutated() on the observable)
  • programmatically manipulate the contents of the popup
  • take the input from the search field and use it (instantly) to filter and refresh the data grid
  • include images in a on GitHub (yes, that is a very different topic from Oracle JET)
  • create an animated gif (and that too) – see the result below

I hope to describe these in subsequent blog posts.


This animated gif gives an impression of what the prototype I put together does. In short:

– present all data in a hierarchical grid

– show a popup with cell details when hovering over a cell

– collapse (and expand) months (by clicking the icon)

– drill down on (zoom into) a single month or week (by clicking the column header)

– collapse or expand all months and weeks

– filter the data in the grid by entering a value in the search field





Final Words

In the end, the customer decided to have us use the Gantt Chart to present the hierarchical data. Mainly because of its greater visual appeal (the data grid looks too much like Excel) and the more fluid ability to zoom in and out. I am sure our explorations of the data grid will come in handy some other time. And if not, they have been interesting and fun.


Source code for the article (and the nested header data grid component):

Oracle JET JavaScript API Documentation

– Popup –

– Datagrid –

Oracle JET Cookbook:

– Popup Tooltip –

– Nested Headers with Data Grid –

– CRUD with Data Grid –

Documentation for KnockOut –

Documentation for RequireJS –

The foundation of JS Fiddles for JET 3.2 –

Blog Article on AMIS Blog – Oracle JET – Filtering Rows in Table with Multiselect and Search Field Filters

The post Oracle JET Nested Data Grid for presenting Hierarchical Data Sets – with cell popup, collapse and expand, filter and zoom appeared first on AMIS Oracle and Java Blog.

R and the Oracle database: Using dplyr / dbplyr with ROracle on Windows 10

Wed, 2017-08-23 10:14

R uses data extensively. Data often resides in a database. In this blog I will describe installing and using dplyr, dbplyr and ROracle on Windows 10 to access data from an Oracle database and use it in R.

Accessing the Oracle database from R

dplyr makes the most common data manipulation tasks in R easier. dplyr can use dbplyr; dbplyr provides a translation from the dplyr verbs to SQL queries. dbplyr 1.1.0 was released on 2017-06-27. See here. It uses the DBI (R Database Interface). This interface is implemented by various drivers, such as ROracle. ROracle is an Oracle driver based on OCI (Oracle Call Interface), a high-performance native C interface for connecting to the Oracle Database.

Installing ROracle on Windows 10

I encountered several errors when installing ROracle in Windows 10 on R 3.3.3. The steps to take to do this right in one go are the following:

  • Determine your R platform architecture. 32 bit or 64 bit. For me this was 64 bit
  • Download and install the Oracle Instant Client with the corresponding architecture (here). Download the basic and SDK files. Put the sdk directory from the SDK zip in a subdirectory of the extracted basic zip (at the same level as vc14)
  • Download and install RTools (here)
  • Set the OCI_LIB64 or OCI_LIB32 variables to the instant client path
  • Set the PATH variable to include the location of oci.dll
  • Install ROracle (install.packages(“ROracle”) in R)
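On Windows, the two environment variable steps amount to something like the following (cmd syntax; the Instant Client path is an example – use the directory you actually extracted, and use the System Properties dialog or setx to persist the values beyond the current session):

```
set OCI_LIB64=C:\instantclient_12_2
set PATH=%PATH%;C:\instantclient_12_2
```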
Encountered errors

Warning in install.packages :
 package ‘’ is not available (for R version 3.3.3)

You probably tried to install the ROracle package which Oracle provides on an R version which is too new (see here). This will not work on R 3.3.3. You can compile ROracle on your own or use the (older) R version Oracle supports.

Package which is only available in source form, and may need compilation of C/C++/Fortran: ‘ROracle’ These will not be installed

This can be fixed by installing RTools (here). This will install all the tools required to compile sources on a Windows machine.

Next you will get the following question:

Package which is only available in source form, and may need compilation of C/C++/Fortran: ‘ROracle’
Do you want to attempt to install these from sources?

If you say y, you will get the following error:

installing the source package ‘ROracle’

trying URL ''
Content type 'application/x-gzip' length 308252 bytes (301 KB)
downloaded 301 KB

* installing *source* package 'ROracle' ...
** package 'ROracle' successfully unpacked and MD5 sums checked
ERROR: cannot find Oracle Client.
 Please set OCI_LIB64 to specify its location.

In order to fix this, you can download and install the Oracle Instant Client (the basic and SDK downloads).

Mind that when running a 64-bit version of R, you also need a 64-bit version of the Instant Client. You can check with the R version command. In my case: Platform: x86_64-w64-mingw32/x64 (64-bit). Next you have to set the OCI_LIB64 variable (for 64 bit; else OCI_LIB32) to the Instant Client path.

Next it will fail with something like:

Error in inDL(x, as.logical(local), as.logical(now), ...) :
 unable to load shared object 'ROracle.dll':
 LoadLibrary failure: The specified module could not be found.

This is caused when oci.dll from the instant client is not in the path environment variable. Add it and it will work! (at least it did on my machine). The INSTALL file from the ROracle package contains a lot of information about different errors which can occur during installation. If you encounter any other errors, be sure to check it.

How a successful 64 bit compilation looks
> install.packages("ROracle")
Installing package into ‘C:/Users/maart_000/Documents/R/win-library/3.3’
(as ‘lib’ is unspecified)
Package which is only available in source form, and may need compilation of C/C++/Fortran: ‘ROracle’
Do you want to attempt to install these from sources?
y/n: y
installing the source package ‘ROracle’

trying URL ''
Content type 'application/x-gzip' length 308252 bytes (301 KB)
downloaded 301 KB

* installing *source* package 'ROracle' ...
** package 'ROracle' successfully unpacked and MD5 sums checked
Oracle Client Shared Library 64-bit - Operating in Instant Client mode.
found Instant Client C:\Users\maart_000\Desktop\instantclient_12_2
found Instant Client SDK C:\Users\maart_000\Desktop\instantclient_12_2/sdk/include
copying from C:\Users\maart_000\Desktop\instantclient_12_2/sdk/include
** libs
Warning: this package has a non-empty '' file,
so building only the main architecture

c:/Rtools/mingw_64/bin/gcc  -I"C:/PROGRA~1/R/R-33~1.3/include" -DNDEBUG -I./oci    -I"d:/Compiler/gcc-4.9.3/local330/include"     -O2 -Wall  -std=gnu99 -mtune=core2 -c rodbi.c -o rodbi.o
c:/Rtools/mingw_64/bin/gcc  -I"C:/PROGRA~1/R/R-33~1.3/include" -DNDEBUG -I./oci    -I"d:/Compiler/gcc-4.9.3/local330/include"     -O2 -Wall  -std=gnu99 -mtune=core2 -c rooci.c -o rooci.o
c:/Rtools/mingw_64/bin/gcc -shared -s -static-libgcc -o ROracle.dll tmp.def rodbi.o rooci.o C:\Users\maart_000\Desktop\instantclient_12_2/oci.dll -Ld:/Compiler/gcc-4.9.3/local330/lib/x64 -Ld:/Compiler/gcc-4.9.3/local330/lib -LC:/PROGRA~1/R/R-33~1.3/bin/x64 -lR
installing to C:/Users/maart_000/Documents/R/win-library/3.3/ROracle/libs/x64
** R
** inst
** preparing package for lazy loading
** help
*** installing help indices
** building package indices
** testing if installed package can be loaded
* DONE (ROracle)
Testing ROracle

You can read the ROracle documentation here. Oracle has been so kind as to provide developer VMs to play around with the database. You can download them here. I used the ‘Database App Development VM’.

After installation of ROracle you can connect to the database and for example fetch employees from the EMP table. See for example below (make sure you also have DBI installed).

drv <- dbDriver("Oracle")
host <- "localhost"
port <- "1521"
sid <- "orcl12c"
connect.string <- paste(
"(DESCRIPTION=",
"(ADDRESS=(PROTOCOL=tcp)(HOST=", host, ")(PORT=", port, "))",
"(CONNECT_DATA=(SID=", sid, ")))", sep = "")

con <- dbConnect(drv, username = "system", password = "oracle", dbname = connect.string, prefetch = FALSE,
bulk_read = 1000L, stmt_cache = 0L, external_credentials = FALSE,
sysdba = FALSE)

dbReadTable(con, "EMP")

This will yield the data in the EMP table.

  EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
1 7698 BLAKE MANAGER 7839 1981-05-01 00:00:00 2850 NA 30
2 7566 JONES MANAGER 7839 1981-04-02 00:00:00 2975 NA 20
3 7788 SCOTT ANALYST 7566 1987-04-19 00:00:00 3000 NA 20
4 7902 FORD ANALYST 7566 1981-12-02 23:00:00 3000 NA 20
5 7369 SMITH CLERK 7902 1980-12-16 23:00:00 800 NA 20
6 7499 ALLEN SALESMAN 7698 1981-02-19 23:00:00 1600 300 30
7 7521 WARD SALESMAN 7698 1981-02-21 23:00:00 1250 500 30
8 7654 MARTIN SALESMAN 7698 1981-09-27 23:00:00 1250 1400 30
9 7844 TURNER SALESMAN 7698 1981-09-08 00:00:00 1500 0 30
10 7876 ADAMS CLERK 7788 1987-05-23 00:00:00 1100 NA 20
11 7900 JAMES CLERK 7698 1981-12-02 23:00:00 950 NA 30
Using dplyr

dplyr uses dbplyr and it makes working with database data a lot easier. You can see an example here.

Installing dplyr and dbplyr in R is easy:

install.packages("dplyr")
install.packages("dbplyr")

Various functions are provided to work with data.frames, a popular R datatype, in combination with data from the database. dplyr also uses an abstraction above SQL, which makes coding easier for non-SQL coders. You can compare it in some ways with Hibernate, which makes working with databases from the Java object world easier.

Some functions dplyr provides:

  • filter() to select cases based on their values.
  • arrange() to reorder the cases.
  • select() and rename() to select variables based on their names.
  • mutate() and transmute() to add new variables that are functions of existing variables.
  • summarise() to condense multiple values to a single value.
  • sample_n() and sample_frac() to take random samples.

I’ll use the same example data as in the above sample, which uses plain ROracle.


library(DBI)
library(ROracle)
library(dplyr)
library(dbplyr)

# below are required to make the translation done by dbplyr to SQL produce working Oracle SQL
sql_translate_env.OraConnection <- dbplyr:::sql_translate_env.Oracle
sql_select.OraConnection <- dbplyr:::sql_select.Oracle
sql_subquery.OraConnection <- dbplyr:::sql_subquery.Oracle

drv <- dbDriver("Oracle")
host <- "localhost"
port <- "1521"
sid <- "orcl12c"
connect.string <- paste(
  "(DESCRIPTION=",
  "(ADDRESS=(PROTOCOL=tcp)(HOST=", host, ")(PORT=", port, "))",
  "(CONNECT_DATA=(SID=", sid, ")))", sep = "")

con <- dbConnect(drv, username = "system", password = "oracle", dbname = connect.string, prefetch = FALSE,
                 bulk_read = 1000L, stmt_cache = 0L, external_credentials = FALSE,
                 sysdba = FALSE)

emp_db <- tbl(con, "EMP")

The output is something like:

# Source: table<EMP> [?? x 8]
# Database: OraConnection
  EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
<int> <chr> <chr> <int> <dttm> <dbl> <dbl> <int>
1 7839 KING PRESIDENT NA 1981-11-16 23:00:00 5000 NA 10
2 7698 BLAKE MANAGER 7839 1981-05-01 00:00:00 2850 NA 30
3 7782 CLARK MANAGER 7839 1981-06-09 00:00:00 2450 NA 10
4 7566 JONES MANAGER 7839 1981-04-02 00:00:00 2975 NA 20
5 7788 SCOTT ANALYST 7566 1987-04-19 00:00:00 3000 NA 20
6 7902 FORD ANALYST 7566 1981-12-02 23:00:00 3000 NA 20
7 7369 SMITH CLERK 7902 1980-12-16 23:00:00 800 NA 20
8 7499 ALLEN SALESMAN 7698 1981-02-19 23:00:00 1600 300 30
9 7521 WARD SALESMAN 7698 1981-02-21 23:00:00 1250 500 30
10 7654 MARTIN SALESMAN 7698 1981-09-27 23:00:00 1250 1400 30
# ... with more rows

If I now want to select specific records, I can do something like:

emp_db %>% filter(DEPTNO == "10")

Which will yield

# Source: lazy query [?? x 8]
# Database: OraConnection
  EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
<int> <chr> <chr> <int> <dttm> <dbl> <dbl> <int>
1 7839 KING PRESIDENT NA 1981-11-16 23:00:00 5000 NA 10
2 7782 CLARK MANAGER 7839 1981-06-09 00:00:00 2450 NA 10
3 7934 MILLER CLERK 7782 1982-01-22 23:00:00 1300 NA 10

A slightly more complex query:

emp_db %>%
group_by(DEPTNO) %>%
summarise(EMPLOYEES = count())

Will result in the number of employees per department:

# Source: lazy query [?? x 2]
# Database: OraConnection
  DEPTNO EMPLOYEES
<int> <dbl>
1 30 6
2 20 5
3 10 3

You can see the generated query by:

emp_db %>%
group_by(DEPTNO) %>%
summarise(EMPLOYEES = count()) %>% show_query()

Will result in SQL along these lines (the exact text depends on the dbplyr version):

SELECT "DEPTNO", COUNT(*) AS "EMPLOYEES"
FROM ("EMP")
GROUP BY "DEPTNO"

If I want to take a random sample from the dataset to perform analyses on, I can do:

sample_n(as_data_frame(emp_db), 10)

Which could result in something like:

# A tibble: 10 x 8
  EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
<int> <chr> <chr> <int> <dttm> <dbl> <dbl> <int>
1 7844 TURNER SALESMAN 7698 1981-09-08 00:00:00 1500 0 30
2 7499 ALLEN SALESMAN 7698 1981-02-19 23:00:00 1600 300 30
3 7566 JONES MANAGER 7839 1981-04-02 00:00:00 2975 NA 20
4 7654 MARTIN SALESMAN 7698 1981-09-27 23:00:00 1250 1400 30
5 7369 SMITH CLERK 7902 1980-12-16 23:00:00 800 NA 20
6 7902 FORD ANALYST 7566 1981-12-02 23:00:00 3000 NA 20
7 7698 BLAKE MANAGER 7839 1981-05-01 00:00:00 2850 NA 30
8 7876 ADAMS CLERK 7788 1987-05-23 00:00:00 1100 NA 20
9 7934 MILLER CLERK 7782 1982-01-22 23:00:00 1300 NA 10
10 7782 CLARK MANAGER 7839 1981-06-09 00:00:00 2450 NA 10

Executing the same command again will result in a different sample.


There are multiple ways to get data to and from the Oracle database and perform actions on them. Oracle provides Oracle R Enterprise. Oracle R Enterprise is a component of the Oracle Advanced Analytics Option of Oracle Database Enterprise Edition. You can create R proxy objects in your R session from database-resident data. This allows you to work on database data in R while the database does most of the computations. Another feature of Oracle R Enterprise is an R script repository in the database and there is also a feature to allow execution of R scripts from within the database (embedded), even within SQL statements. As you can imagine this is quite powerful. More on this in a later blog!

The post R and the Oracle database: Using dplyr / dbplyr with ROracle on Windows 10 appeared first on AMIS Oracle and Java Blog.

Oracle JET – Filtering Rows in Table with Multiselect and Search Field Filters

Sun, 2017-08-20 04:33


A common requirement in any web application: allow the user to quickly drill down to records of interest by specifying relevant filters. The figure above shows two ways of setting filters: by selecting from the [limited number of] existing values in a certain column – here Location – and by specifying a search string whose value should occur in the records displayed after filtering.

Oracle JET is a toolkit that supports development of rich web applications, and this filtering feature is a common requirement in JET applications too. In this article I take a brief look at how to:

  • create the multi select element and how to populate it with data from the Location attribute of the records in the table
  • handle a (de)selection event in the multi select – leading to filtering of the records shown in the table
  • create the search field and intercept changes in the search field
  • handle resetting the search field
  • invoke the REST API when the search field has changed

I am not claiming to present the best possible way to implement this functionality. I am not fluent enough in JET to make such a claim, and I have seen too many different implementations in Oracle documentation, blog articles, tutorials etc. to be able to point out the one approach that stands out (for the current JET release). However, the implementation I demonstrate here seems good enough as a starting point.

The HRM module is a tab I have added to the Work Better demo application. It has its own ViewModel (hrm.js) and its own HTML view (hrm.html). I have implemented a very simple REST API in Node (http://host:port/departments?name=)  that provides the departments in a JSON document.

Sources are in this Gist:

Starting Point

The starting point in this article is a simple JET application with a tab that contains a  table that displays Department records retrieved from a REST API. The implementation of this application is not very special and is not the topic of this article.


The objective of this article is to show how to add the capability to filter the records in this table – first by selecting the locations for which departments should be shown, using a multiselect widget. The filtering takes place on the client, against the set of departments retrieved from the backend service. The second step adds filtering by name using a search field. This level of filtering is performed by the server that exposes the REST API.


Create and Populate the Multiselect Element for Locations

The multiselect element in this case is the Oracle JET ojSelect component (see cookbook). The element shows a dropdown list of options that can be selected, displays the currently selected options and allows selected options to be deselected.


The HTML used to add the multiselect component to the page is shown here:

<label for="selectLocation">Locations</label>
<select id="selectLocation" data-bind="ojComponent: { component: 'ojSelect' , options: locationOptions, multiple: true , optionChange:optionChangedHandler, rootAttributes: {style:'max-width:20em'}}"></select>

The options attribute references the locationOptions property of the ViewModel that returns the select(able) option values – more on that later. The attribute multiple is set to true to allow multiple values to be selected and the optionChange attribute references the optionChangedHandler, a function in the ViewModel that handles option change events that are published whenever options are selected or deselected.

When the Departments have been fetched from the REST API, the locationOptions are populated by identifying the unique values for the Location attribute in all Department records. Subsequently, all locations are set as selected values on the select component – as we started out with an unfiltered set of departments. Function handleDepartmentsFetch is called whenever fresh data has been fetched from the REST API.

// values for the locations shown in the multiselect
self.locationOptions = ko.observableArray([]);

self.handleDepartmentsFetch = function (collection) {
    //collect distinct locations
    var locations = collection.pluck('Location'); // get all values for Location attribute
    // distill distinct values
    var locationData = new Set(locations.filter((elem, index, arr) => arr.indexOf(elem) === index));

    // rebuild locationOptions
    var uniqueLocationsArray = [];
    for (let location of locationData) {
        uniqueLocationsArray.push({ 'value': location, 'label': location });
    }
    ko.utils.arrayPushAll(self.locationOptions(), uniqueLocationsArray);
    // tell the observers that this observable array has been updated
    // (as a result, the Multiselect UI component will be refreshed)
    self.locationOptions.valueHasMutated();
    // set the selected locations on the select component based on all distinct locations available
    $("#selectLocation").ojSelect({ "value": Array.from(locationData) });
};

I did not succeed in setting the selected values on the select component by updating an observable array that backs the value attribute of the ojSelect component. As a workaround, I now manipulate the ojSelect component programmatically, via the jQuery selection $("#selectLocation").
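As an aside, the distinct-value step itself does not depend on JET or Knockout at all; a minimal plain-JavaScript sketch (the sample location values are made up):

```javascript
// Distill distinct location values, preserving first-seen order (a Set iterates in insertion order).
const locations = ['NEW YORK', 'DALLAS', 'NEW YORK', 'CHICAGO', 'DALLAS'];
const distinct = Array.from(new Set(locations));
// Shape them as ojSelect options: one { value, label } object per distinct location.
const locationOptions = distinct.map(location => ({ value: location, label: location }));
console.log(distinct.join(','));  // NEW YORK,DALLAS,CHICAGO
```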


Handle a (de)selection event in the multi select

When the user changes the set of selected values in the Locations multiselect, we want the set of departments shown in the table to be updated – narrowed down or expanded, depending on whether a location was removed or added to the selected items.

The ojSelect component has the optionChange attribute that in this case references the function optionChangedHandler. This function inspects the type of option change (did the value change?) and then invokes function prepareFilteredDepartmentsCollection, passing the self.deppies collection that was initialized during the fetch from the REST API. That function clones the collection of all departments fetched from the REST API and subsequently filters it based on the selected locations.

// returns an array of the values of the currently selected options in select component with id selectLocation
self.getCurrentlySelectedLocations = function () {
    return $("#selectLocation").ojSelect("option", "value");
};

self.optionChangedHandler = function (event, data) {
    if (data.option == "value") {
        // REFILTER the data in self.deppies into the collection backing the table
        self.prepareFilteredDepartmentsCollection(self.deppies, self.getCurrentlySelectedLocations());
    }
};

// prepare (possibly filtered) set of departments and set data source for table
self.prepareFilteredDepartmentsCollection = function (collection, selectedLocations) {
    if (collection) {
        // prepare filteredDepartmentsCollection
        var filteredDepartmentsCollection = collection.clone();

        var selectedLocationsSet = new Set(selectedLocations);
        var toFilter = [];
        // find all models in the collection that do not comply with the selected locations
        for (var i = 0; i < filteredDepartmentsCollection.size(); i++) {
            var deptModel = filteredDepartmentsCollection.at(i);
            if (!selectedLocationsSet.has(deptModel.attributes.Location)) {
                toFilter.push(deptModel);
            }
        }
        // remove all departments that do not qualify according to the locations that are (not) selected
        filteredDepartmentsCollection.remove(toFilter);

        // update data source with fresh data and inform any observers of data source (such as the table component)
        self.dataSource(new oj.CollectionTableDataSource(filteredDepartmentsCollection));
        self.dataSource.valueHasMutated();
    }// if (collection)
};

When the collection of filtered departments is created, the self.dataSource is refreshed with a new CollectionTableDataSource. With the call to self.dataSource.valueHasMutated(), we explicitly trigger subscribers to the dataSource – the Table component.
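Stripped of the JET collection plumbing, the filtering itself is a plain set-membership test; a small runnable sketch with hypothetical department data:

```javascript
// Keep only departments whose Location is among the currently selected locations.
const departments = [
  { name: 'ACCOUNTING', Location: 'NEW YORK' },
  { name: 'RESEARCH',   Location: 'DALLAS' },
  { name: 'SALES',      Location: 'CHICAGO' }
];
const selectedLocations = new Set(['DALLAS', 'CHICAGO']);
const filtered = departments.filter(d => selectedLocations.has(d.Location));
console.log(filtered.map(d => d.name).join(','));  // RESEARCH,SALES
```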


Create the search field and Intercept Changes in the Search Field

The search field is simply an inputText element with some decoration. Associated with the search field is a button to reset (clear) the search field.


The HTML code for these elements is:

<div class="oj-flex-item oj-sm-8 ">
  <div class="oj-flex-item" style="max-width: 400px; white-space: nowrap">
    <input aria-label="search box" placeholder="search" data-bind="value: nameSearch, valueUpdate: 'afterkeydown' , ojComponent: {component: 'ojInputText', rootAttributes:{'style':'max-width:100%;'} }" />
    <div id="searchIcon" class="demo-icon-sprite demo-icon-search demo-search-position"></div>
    <button id="clearButton" data-bind="click: clearClick, ojComponent: { component: 'ojButton', label: 'Clear', display: 'icons', chroming: 'half', icons: {start:'oj-fwk-icon oj-fwk-icon-cross03'}}"></button>
  </div>
</div>


The search field is bound to nameSearch, an observable in the ViewModel. When the user edits the contents of the search field, the observable is updated and any subscribers are triggered. One such subscriber is a computed Knockout function that has a dependency on nameSearch. When that function is triggered – by a change in the value of nameSearch – it checks whether the search string consists of three or more characters, and if so, it triggers a new fetch of departments from the REST API by calling function fetchDepartments().

// bound to search field
self.nameSearch = ko.observable('');

// this computed function is implicitly subscribed to self.nameSearch;
// any changes in the search field will trigger this function
// (the variable name holding the computed was lost in the original; 'search' is assumed)
self.search = ko.computed(function () {
    var searchString = self.nameSearch();
    if (searchString.length > 2) {
        self.fetchDepartments();
    }
});

function getDepartmentsURL(operation, collection, options) {
    var url = dataAPI_URL + "?name=" + self.nameSearch();
    return url;
}

Function getDepartmentsURL() is invoked just prior to fetching the Departments. It returns the URL to use for fetching from the REST API. This function will add a query parameter to the URL with the value of the nameSearch observable.
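The "only search from three characters" rule can be isolated from the Knockout plumbing; a runnable sketch with a stubbed fetch function (makeSearchHandler and the stub are made up for illustration):

```javascript
// Only invoke the (stubbed) REST fetch when the search string has 3 or more characters.
function makeSearchHandler(fetchDepartments) {
  return function (searchString) {
    if (searchString.length > 2) {
      fetchDepartments(searchString);
    }
  };
}

// Simulate the user typing 'acco' one character at a time.
const fetched = [];
const onSearchChange = makeSearchHandler(s => fetched.push(s));
['a', 'ac', 'acc', 'acco'].forEach(onSearchChange);
console.log(fetched.join(','));  // acc,acco
```

Only the last two keystrokes trigger a fetch; the one- and two-character strings are ignored.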


Handle Resetting the Search Field

The Clear button – shown in the previous HTML snippet – is associated with a click event handler: function clearClick. This function resets the nameSearch observable and explicitly declares its value updated – in order to trigger subscribers to the nameSearch observable. One such subscriber is the computed search function, which will go ahead and refetch the departments from the REST API.

// event handler for reset button (for search field)
self.clearClick = function (data, event) {
    self.nameSearch('');
    self.nameSearch.valueHasMutated();
    return true;
};
The REST API is implemented with Node and Express. It is extremely simple; initially it just returns the contents of a static file (departments.json) with department records. It is slightly extended to handle the name query parameter, to only return selected departments. Note that this implementation is not the most efficient. For the purpose of this article, it will do the job.


var express = require('express');
var app = express();
var departments = JSON.parse(require('fs').readFileSync('./departments.json', 'utf8'));
// add a location to each record (the locations array is defined elsewhere in the application)
for (var i = 0; i < departments.length; i++) {
    departments[i].location = locations[Math.floor(Math.random() * locations.length)];
}
app.get('/departments', function (req, res) {
    var nameFilter = req.query.name; // read query parameter name (/departments?name=VALUE)
    // filter departments by the name filter
    res.send(departments.filter(function (department, index, departments) {
        return !nameFilter || department.DEPARTMENT_NAME.toLowerCase().indexOf(nameFilter) > -1;
    })); //using send to stringify and set content-type
});
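The filter predicate used by the REST API can be exercised on its own (the department names here are hypothetical; note that, as in the article, the filter value itself is not lowercased):

```javascript
// Same predicate shape as in the Express handler: an empty filter matches everything,
// otherwise do a substring match on the lowercased DEPARTMENT_NAME.
const departments = [
  { DEPARTMENT_NAME: 'Accounting' },
  { DEPARTMENT_NAME: 'IT Support' },
  { DEPARTMENT_NAME: 'Finance' }
];
function byName(nameFilter) {
  return d => !nameFilter || d.DEPARTMENT_NAME.toLowerCase().indexOf(nameFilter) > -1;
}
console.log(departments.filter(byName('acc')).length); // 1 - matches Accounting only
console.log(departments.filter(byName('')).length);    // 3 - no filter: all rows
```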
    Complete Source Code GIST

    Putting all source code together:





    Sources for this article in GitHub Gist:

    JET Cookbook on Multiselect

    JET Cookbook on Table and Filtering –

    Blog post by Andrejus Baranovskis – Oracle JET Executing Dynamic ADF BC REST URL

    JET Documentation on Collection  and its API Documentation.

    Knockout Documentation on Computed [Observable] and Observable

    JavaScript Gist on removing duplicates from an array –

    JavaScript Filter, Map and Reduce on Arrays:

    Oracle JET Cookbook – Recipe on Filtering

    The post Oracle JET – Filtering Rows in Table with Multiselect and Search Field Filters appeared first on AMIS Oracle and Java Blog.

    ODA X6-2M – How to create your own ACFS file system

    Mon, 2017-08-14 15:12

    In this post I will explain how to create your own ACFS file system (on the command line) that you can use to (temporarily) store data.

    So you have this brand new ODA X6-2M and need to create or migrate some databases to it. Thus you need space to store data to import into the new databases you will create – or for some other reason. The ODA X6-2M comes with lots of space in the form of (at least) two 3.2 TB NVMe disks. But those have been formatted as ASM disks when you executed the odacli create-appliance command, or when you used the GUI to deploy the ODA.

    If you opted for “External Backups”, most of the disk space will have been allocated to the +DATA ASM diskgroup, with the remainder in the +RECO diskgroup.

    Thus you need to decide which diskgroup you will use to create an ACFS file system on. Since we have 80% of space allocated to +DATA I decided to use some of that.

    Logon to your ODA as root and make a mount point that you will use:

    as root:
    mkdir /migration

    Then su to user grid and set the ASM environment:

    su - grid
    . oraenv
    [+ASM1] <press enter>

    The command below will use asmca to create a volume called migration on the ASM DiskGroup +DATA with initial allocation of 50 GB.

    asmca -silent -createVolume -volumeName migration -volumeDiskGroup DATA -volumeSizeGB 50

    Then you need to find the name of the volume you created in order to create an ACFS file system on it:

    asmcmd volinfo -G DATA migration | grep -oE '/dev/asm/.*'

    Let’s assume that the above command returned:

    /dev/asm/migration-46
    Then you can use the following command to create an ACFS file system on that volume and mount it on /migration:

    asmca -silent -createACFS -acfsVolumeDevice /dev/asm/migration-46 -acfsMountPoint /migration
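As an aside, the device path passed to -acfsVolumeDevice is just the portion of the volinfo output matched by the grep -oE '/dev/asm/.*' pattern used earlier; a quick offline simulation (the sample volinfo text below is assumed, not real ODA output):

```shell
# Simulate extracting the volume device from sample 'asmcmd volinfo' output.
# On a real ODA, pipe the actual asmcmd output through the same grep instead.
volinfo_output='Diskgroup Name: DATA
         Volume Name: MIGRATION
         Volume Device: /dev/asm/migration-46
         State: ENABLED'
device=$(printf '%s\n' "$volinfo_output" | grep -oE '/dev/asm/.*')
echo "$device"
```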

    Next you need to run the generated script as a privileged user (aka root), which is the message you get when executing the previous step:


    To check the system for the newly created file system use:

    df -h /migration

    To get the details of the created file system use:

    acfsutil info fs /migration

    Or to just check the autoresize parameter, autoresizemax or autoresizeincrement use:

    /sbin/acfsutil info fs -o autoresize /migration
    /sbin/acfsutil info fs -o autoresizemax /migration
    /sbin/acfsutil info fs -o autoresizeincrement /migration

    To set the autoresize on with an increment of 10GB:

    /sbin/acfsutil size -a 10G /migration

    And to verify that it worked as expected:

    acfsutil info fs /migration
    acfsutil info fs -o autoresize /migration

    To use the file system as the oracle user you might want to set the permissions and ownership:

    ls -sla /migration
    chown oracle:oinstall /migration
    chmod 775 /migration
    ls -sla /migration

    And you are good to go!

    Of course you can also use the GUI to do this: just start asmca as the grid user without parameters and follow similar steps there.

    HTH – Patrick

    The post ODA X6-2M – How to create your own ACFS file system appeared first on AMIS Oracle and Java Blog.

    Adding a Cross Instance, Cross Restarts and Cross Application Cache to Node Applications on Oracle Application Container Cloud

    Sat, 2017-08-12 06:00

    In a previous post I described how to do Continuous Integration & Delivery from Oracle Developer Cloud to Oracle Application Container Cloud on simple Node applications: Automating Build and Deployment of Node application in Oracle Developer Cloud to Application Container Cloud. In this post, I am going to extend that very simple application with the functionality to count requests. With every HTTP request to the application, a counter is incremented and the current counter value is returned in the response.


    The initial implementation is a very naïve one: the Node application contains a global variable that is increased for each request that is handled. This is naïve because:

    • multiple instances are running concurrently and each is keeping its own count; because of load balancing, subsequent requests are handled by various instances and the responses will show a somewhat irregular request counter pattern; the total number of requests is not known: each instance only has a subtotal for that instance
    • when the application is restarted – or even a single instance is restarted or added – the request counter for each instance involved is reset

    Additionally, the request count value is not available outside the Node application and it can only be retrieved by calling the application – which in turn increases the count.
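The first shortcoming is easy to simulate in plain Node, without ACCS: two "instances" each keep their own private count, and round-robin "load balancing" produces the irregular pattern described above (this simulation is mine, not part of the deployed application):

```javascript
// Each simulated instance has its own private request counter.
function makeInstance() {
  let requestCounter = 0;
  return () => ++requestCounter;
}
const instanceA = makeInstance();
const instanceB = makeInstance();

// Round-robin six requests over the two instances.
const counts = [];
for (let i = 0; i < 6; i++) {
  counts.push((i % 2 === 0 ? instanceA : instanceB)());
}
console.log(counts.join(','));  // 1,1,2,2,3,3 - not one increasing series
```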

    A much better implementation would be one that uses a cache – that is shared by the application instances and that survives application (instance) restarts. This would also potentially make the request count value available to other microservices that can access the same cache – if we allow that to happen.

    This post demonstrates how an Application Cache can be set up on Application Container Cloud Service and how it can be leveraged from a Node application. It shows that the request counter will be shared across instances and survives redeployments and restarts.


    Note: there is still the small matter of race conditions that is not addressed in this simple example, because read, update and write are not performed as an atomic operation and no locking has been implemented.

    The steps are:

    • Add (naïve) request counting capability to greeting microservice
    • Demonstrate shortcomings upon multiple requests (handled by multiple instances) and by instance restart
    • Implement Application Cache
    • Add Application Cache service binding to ACCS Deployment profile for greeting in Developer Cloud Service
    • Utilize Application Cache in greeting microservice
    • Redeploy greeting microservice and demonstrate that request counter is shared and preserved

    Sources for this article are in GitHub: .

    Add (naïve) request counting capability to greeting microservice

    The very simple HTTP request handler is extended with a global variable requestCounter that is displayed and incremented for each request:


    It’s not hard to demonstrate shortcomings upon multiple requests (handled by multiple instances) :


    Here we see how subsequent requests are handled (apparently) by two different instances that each have their own, independently increasing count.

    After application restart, the count is back to the beginning.

    Implement Application Cache

    To configure an Application Cache we need to work from the Oracle Application Container Cloud Service console.



    Specify the details – the name and possibly the sizing:



    Press Create and the cache will be created:


    I got notified about its completion by email:



    Add Application Cache service binding to ACCS Deployment profile for greeting in Developer Cloud Service

    In order to be able to access the cache from within an application on ACCS, the application needs a service binding to the Cache service. This can be configured in the console (manually) as well as via the REST API, psm cli and the deployment descriptor in the Deployment configuration in Developer Cloud Service.

    Manual configuration through the web ui looks like this:


    or though a service binding:



    and applying the changes:



    I can then utilize the psm command line interface to inspect the JSON definition of the application instance on ACCS and so learn how to edit the deployment.json file with the service binding for the application cache. First setup psm:


    And inspect the greeting application:

    psm accs app -n greeting -o verbose -of json


    to learn about the JSON definition for the service binding:


    Now I know how to update the deployment descriptor in the Deployment configuration in Developer Cloud Service:


    The next time this deployment is performed, the service binding to the application cache is configured.

    Note: the credentials for accessing the application cache have to be provided and yes, horrible as it sounds and is, the password is in clear text!

    It seems that the credentials are not required. The value of password is now BogusPassword – which is not the true value of my password – and still accessing the cache works fine. Presumably the fact that the application is running inside the right network domain qualifies it for accessing the cache.

    The Service Binding makes the following environment variable available to the application – populated at runtime by the ACCS platform:


    Utilize Application Cache in greeting microservice

    The simplest way to make use of the service binding’s environment variable is demonstrated here (note that this does not yet actually use the cache):


    and the effect on requests:


    Now to actually interact with the cache – through REST calls, as explained here – we will use the Node module node-rest-client. This module is added to the application using:

    npm install node-rest-client --save


    Note: this instruction will update package.json and download the module code. Only the changed package.json is committed to the git repository. When the application is next built in Developer Cloud Service, it will perform npm install prior to zipping the Node application into a single archive. That action of npm install ensures that the sources of node-rest-client are downloaded and will get added to the file that is deployed to ACCS.
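After the npm install --save, package.json will contain a dependency entry roughly like this (the exact version number will differ per install date):

```json
{
  "dependencies": {
    "node-rest-client": "^3.1.0"
  }
}
```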

    Using this module, the app.js file is extended to read from and write to the application cache. See here the changed code (also in GitHub):

    var http = require('http');
    var Client = require("node-rest-client").Client;
    var version = '1.2.3';
    // Read Environment Parameters
    var port = Number(process.env.PORT || 8080);
    var greeting = process.env.GREETING || 'Hello World!';
    var requestCounter = 0;
    var server = http.createServer(function (request, response) {
      getRequestCounter( function (value) {
         requestCounter = (value?value+1:requestCounter+1);
         // put new value in cache - but do not wait for a response
         console.log("write value to cache "+requestCounter);
         writeRequestCounter(requestCounter);
         response.writeHead(200, {"Content-Type": "text/plain"});
         response.end( "Version "+version+" says an unequivocal: "+greeting
                     + ". Request counter: "+ requestCounter +". \n");
      });
    });
    server.listen(port);

    // functionality for cache interaction
    // host of the cache, from the environment variable populated through the service binding
    var CCSHOST = process.env.CACHING_INTERNAL_CACHE_URL;
    var baseCCSURL = 'http://' + CCSHOST + ':8080/ccs';
    var cacheName = "greetingCache";
    var client = new Client();
    var keyString = "requestCount";

    function getRequestCounter(callback) {
        client.get(baseCCSURL + '/' + cacheName + '/' + keyString,
            function(data, rawResponse){
                var value;
                // If nothing there, return not found
                if(rawResponse.statusCode == 404){
                  console.log("nothing found in the cache");
                  value = null;
                } else {
                  // Note: data is a Buffer object.
                  console.log("value found in the cache "+data.toString());
                  value = JSON.parse(data.toString()).requestCounter;
                }
                callback(value);
            });
    }// getRequestCounter

    function writeRequestCounter(requestCounter) {
        var args = {
            data: { "requestCounter": requestCounter},
            headers: { "Content-Type" : "application/json" }
        };
        client.put(baseCCSURL + '/' + cacheName + '/' + keyString, args,
            function (data, rawResponse) {
                // Proper response is 204, no content.
                if(rawResponse.statusCode == 204){
                  console.log("Successfully put in cache "+JSON.stringify(args.data));
                } else {
                  console.error("Error in PUT "+rawResponse);
                  console.error('writeRequestCounter returned error '.concat(rawResponse.statusCode.toString()));
                }
            });
    }// writeRequestCounter

    Redeploy greeting microservice and demonstrate that request counter is shared and preserved

    When we make multiple invocations to the greeting service, we see a consistently increasing series of count values:


    Even when the application is restarted or redeployed, the request count is preserved and when the application becomes available again, we simply resume counting.

    The logs from the two ACCS application instances provide insight in what takes place – how load balancing makes these instances handle requests intermittently – and how they read each others’ results from the cache:




    Sources for this article are in GitHub: .

    Blog article by Mike Lehmann, announcing the Cache feature on ACCS:

    Documentation on ACCS Caches:

    Tutorials on cache enabling various technology based applications on ACCS:

    Tutorial on Creating a Node.js Application Using the Caching REST API in Oracle Application Container Cloud Service

    Public API Docs for Cache Service –

    Using psm to retrieve deployment details of ACCS application: (to find out how Application Cache reference is defined)

    The post Adding a Cross Instance, Cross Restarts and Cross Application Cache to Node Applications on Oracle Application Container Cloud appeared first on AMIS Oracle and Java Blog.

    Oracle Mobile Cloud Service (MCS): Overview of integration options

    Fri, 2017-08-11 04:40

    Oracle Mobile Cloud Service has a lot of options which allows it to integrate with other services and systems. Since it runs JavaScript on Node.js for custom APIs, it is very flexible.

    Some features allow it to extend its own functionality, such as the Firebase configuration option to send notifications to mobile devices, while for example the connectors allow wizard-driven integration with other systems. The custom API functionality, running on a recent Node.js version, ties it all together. In this blog article I’ll provide a quick overview and some background of the integration options of MCS.

MCS is very well documented here and there are many YouTube videos available explaining/demonstrating various MCS features here. So if you want to know more, I suggest looking at those.

    Some recent features

    Oracle is working hard on improving and expanding MCS functionality. For the latest improvements to the service see the following page. Some highlights I personally appreciate of the past half year which will also get some attention in this blog:

    • Zero footprint SSO (June 2017)
    • Swagger support in addition to RAML for the REST connector (April 2017)
    • Node.js version v6.10.0 support (April 2017)
    • Support for Firebase (FCM) to replace GCM (December 2016)
    • Support for third party tokens (December 2016)
Feature integration

Notification support

    In general there are two options for sending notifications from MCS. Integrating with FCM and integrating with Syniverse. Since they are third party suppliers, you should compare these options (license, support, performance, cost, etc) before choosing one of them.

    You can also use any other notification provider if it offers a REST interface by using the REST connector. You will not get much help in configuring it through the MCS interface though; it will be a custom implementation.

    Firebase Cloud Messaging / Google Cloud Messaging

    Notification support is implemented by integrating with Google cloud messaging products. Google Cloud Messaging (GCM) is being replaced with Firebase Cloud Messaging (FCM) in MCS. GCM has been deprecated by Google for quite a while now so this is a good move. You do need a Google Cloud Account though and have to purchase their services in order to use this functionality. See for example here on how to implement this from a JET hybrid application.


Syniverse

Read more on how to implement this here. You first have to create a Syniverse account. Next subscribe to the Syniverse Messaging Service, register the app and get credentials. These credentials you can register in MCS, client management.


    Beacon support

Beacons create packages which can be detected over Bluetooth by mobile devices. The package structure the beacons broadcast can differ. There are samples available for iBeacon, altBeacon and Eddystone, but others can be added if you know the corresponding package structure. See the following presentation for some background on beacons and how they can be integrated in MCS. How to implement this for an Android app can be watched here.


    Client support

    MCS comes with several SDKs which provide easy integration of a client with MCS APIs. Available client SDKs are iOS, Android, Windows, Web (plain JavaScript). These SDKs provide an easy alternative to using the raw MCS REST APIs. They provide a wrapper for the APIs and provide easy access in the respective language the client uses.

Authentication options (incoming)

SAML, JWT

Third party token support for SAML and JWT is available. Read more here. A token exchange is available as part of MCS which creates MCS tokens from third party tokens based on specifically defined mappings. These MCS tokens can be used by clients in subsequent requests. This does require some work on the client side, but the SDKs of course help with this.

    Facebook Login

    Read here for an example on how to implement this in a hybrid JET application.

    OAuth2 and Basic authentication support.

    No third party OAuth tokens are supported. This is not strange since the OAuth token does not contain user data and MCS needs a way to validate the token. MCS provides its own OAuth2 STS (Secure Token Service) to create tokens for MCS users. Read more here.

    Oracle Enterprise Single Sign-on support.

    Read here. This is not to be confused with the Oracle Enterprise Single Sign-on Suite (ESSO). This is browser based authentication of Oracle Cloud users which are allowed access to MCS.

    These provide the most common web authentication methods. Especially the third party SAML and JWT support provides for many integration options with third party authentication providers. OKTA is given as an example in the documentation.

    Application integration: connectors

MCS provides connectors which allow wizard driven configuration in MCS. Connectors are used for outgoing calls. There is a connector API available which makes it easy to interface with the connectors from custom JavaScript code. The connectors support the use of Oracle Credential Store Framework (CSF) keys and certificates. TLS versions up to TLS 1.2 are supported. You are of course warned that older versions might not be secure. The requests the connectors make are over HTTP, since no other transport is currently directly supported. You can of course use REST APIs and ICS as wrappers should you need them.

    Connector security settings

    For the different connectors, several Oracle Web Service Security Manager (OWSM) policies are used. See here. These allow you to configure several security settings and for example allow usage of WS Security and SAML tokens for outgoing connections. The policies can be configured with security policy properties. See here.


It is recommended to use the REST connector instead of doing calls directly from your custom API code, because the connectors integrate well with MCS and provide security and monitoring benefits, for example out of the box analytics.
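As an illustration of calling a connector from custom API code, here is a hedged sketch. The connector name myRestConnector and the resource path are invented, and the shape of the connector client on `req.oracleMobile` should be verified against the MCS custom code service documentation:

```javascript
// Calling a REST connector from custom API code via the connector client that
// MCS is assumed to place on the request object; "myRestConnector" and the
// "orders" resource are made-up names for this sketch
function listOrders(req, res) {
  req.oracleMobile.connectors.myRestConnector.get('orders', null, { inType: 'json' })
    .then((result) => res.send(result.statusCode, result.result))
    .catch((error) => res.send(500, error.error));
}
```

Going through the connector this way keeps the outgoing call visible in MCS analytics, which a raw HTTP call from custom code would not be.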


    The SOAP connector can do a transformation from SOAP to JSON and back to make working with the XML easier in JavaScript code. This has some limitations however:

    Connector scope

    There are also some general limitations defined by the scope of the API of the connector:

    • Only SOAP version 1.1 and WSDL version 1.2 are supported.
    • Only the WS-Security standard is supported. Other WS-* standards, such as WS-RM or WS-AT, aren’t supported.
    • Only document style and literal encoding are supported.
    • Attachments aren’t supported.
    • Of the possible combinations of input and output message operations, only input-output operations and input-only operations are supported. These operations are described in the Web Services Description Language (WSDL) Version 1.2 specification.
    Transformation limitations

    The transformation from SOAP to JSON has limitations; the following constructs aren’t supported:

    • A choice group with child elements belonging to different namespaces having the same (local) name. This is because JSON doesn’t have any namespace information.
    • A sequence group with child elements having duplicate local names. For example, <Parent><ChildA/><ChildB/>…<ChildA/>…</Parent>. This translates to an object with duplicate property names, which isn’t valid.
    • XML Schema Instance (xsi) attributes.
    Integration Cloud Service connector

Read more about this connector here. This connector allows you to call ICS integrations. You can connect to your ICS instance and select an integration from a drop-down menu. For people who also use ICS in their cloud architecture, this will probably be the most commonly used connector.

    Fusion Applications connector

    Read more about this connector here. The flow looks similar to that of the ICS Cloud Adapters (here). In short, you authenticate, a resource discovery is done and local artifacts are generated which contain the connector configuration. At runtime this configuration is used to access the service. The wizard driven configuration of the connector is a great strength. MCS does not provide the full range of cloud adapters as is available in ICS and SOA CS.

Finally

Flexibility

    Oracle Mobile Cloud Service allows you to define custom APIs using JavaScript code. Oracle Mobile Cloud Service V17.2.5-201705101347 runs Node.js version v6.10.0 and OpenSSL version 1.0.2k (process.versions) which are quite new! Because a new OpenSSL version is supported, TLS 1.2 ciphers are also supported and can be used to create connections to other systems. This can be done from custom API code or by configuring the OWSM settings in the connector configuration. It runs on Oracle Enterprise Linux 6 kernel 2.6.39-400.109.6.el6uek.x86_64 (JavaScript: os.release()). Most JavaScript packages will run on this version so few limitations there.

ICS also provides an option to define custom JavaScript functions (see here). I haven’t looked at the engine used in ICS though, but I doubt this will be a full blown Node.js instance and suspect (please correct me if I’m wrong) that a JVM JavaScript engine is used, as in SOA Suite / SOA CS. This provides less functionality and performance compared to a Node.js instance.

What is missing?

Integration with other Oracle Cloud services

Mobile Cloud Service does lack out of the box integration options with other Oracle Cloud Services. Only 4 HTTP based connectors are available. Thus if you want to integrate with an Oracle Cloud database (a different one than the one provided), you have to use the external DB’s REST API (with the REST connector or from custom API code) or use for example the Integration Cloud Service connector or the Application Container Cloud Service to wrap the database functionality. This of course requires a license for the respective services.

    Cloud adapters

    A Fusion Applications Connector is present in MCS. Also OWSM policies are used in MCS. It would therefore not be strange if MCS would be technically capable of running more of the Cloud adapters which are present in ICS. This would greatly increase the integration options for MCS.

    Mapping options for complex payloads

    Related to the above, if the payloads become large and complex, mapping fields also becomes more of a challenge. ICS does a better job at this than MCS currently. It has a better mapping interface and provides mapping suggestions.

    The post Oracle Mobile Cloud Service (MCS): Overview of integration options appeared first on AMIS Oracle and Java Blog.

    Automating Build and Deployment of Node application in Oracle Developer Cloud to Application Container Cloud

    Fri, 2017-08-11 02:57

    A familiar story:

    • Develop a Node application with one or more developers
    • Use Oracle Developer Cloud Service to organize the development work, host the source code and coordinate build jobs and the ensuing deployment
    • Run the Node application on Oracle Application Container Cloud

    I have read multiple tutorials and blog posts that each seemed to provide a piece of puzzle. This article shows the full story – in its simplest form.

    We will:

    • Start a new project on Developer Cloud Service
    • Clone the Git repository for this new project
    • Locally work on the Node application and configure it for Application Container Cloud
    • Commit and push the sources to the Git repo
    • Create a Build job in Developer Cloud service that creates the zip file that is suitable for deployment; the job is triggered by changes on the master branch in the Git repo
    • Create a Deployment linking to an existing Oracle Application Container Cloud service instance; associate the deployment with the build task (and vice versa)
    • Run the build job – and verify that the application will be deployed to ACCS
    • Add the ACCS Deployment descriptor with the definition of environment variables (that are used inside the Node application)
    • Make a change in the sources of the application, commit and push and verify that the live application gets updated

    Prerequisites: access to a Developer Cloud Instance and an Application Container Cloud service. Locally access to git and ideally Node and npm.

    Sources for this article are in GitHub: .

    Start a new project on Developer Cloud Service

    Create the new project greeting in Developer Cloud





    After you press Finish, the new project is initialized along with all associated resources and facilities, such as a new Git repository, a Wiki, an Issue store.


    When the provisioning is done, the project can be accessed.



    Locally work on the Node application

    Copy the git URL for the source code repository.


    Clone the Git repository for this new project

    git clone


    Start a new Node application, using npm init:


    This will create the package.json file.

    To prepare the application for eventual deployment to Application Container Cloud, we need to add the manifest.json file.
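A minimal manifest.json along these lines would do; the Node major version shown is an assumption based on the runtime version mentioned elsewhere on this blog, and the command must match your actual entry point:

```json
{
  "runtime": {
    "majorVersion": "6"
  },
  "command": "node app.js"
}
```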


    We also need to create a .gitignore file, to prevent node_modules from being committed and pushed to Git.


    Implement the application itself, in file app.js. This is a very simplistic application – that will handle an incoming request and return a greeting of some sort:


    Note how the greeting can be read from an environment variable, just like the port on which the requests should be listened to. When no environment values are provided, defaults are used instead.

    Commit and push the sources to the Git repo

The Git repository in the Developer Cloud Service project is empty except for the readme file that is added when the project is first created:


    Now we commit and push the files created locally:




    A little while later, these sources show up in Developer Cloud Service console:


    Create a Build job in Developer Cloud service

To have the application built, we can create a build job in Developer Cloud Service that creates the zip file that is suitable for deployment; this zip file needs to contain all sources from Git and all dependencies (all node modules) specified in package.json. The job is triggered by changes on the master branch in the Git repo. Note: the build job ideally should also perform automated tests – such as described by Yannick here.



    Specify free-style job. Specify the name – here BuildAndDeploy.

    Configure the Git repository that contains the sources to build; this is the repository that was first set up when the project was created.


    Configure the build job to be performed whenever sources are committed to (the master branch in) the Git repository:



    Create a Build Step, of type Execute Shell:



    Enter the following shell-script commands:

git config --global url. git://

    npm install

    zip -r .

    This will download all required node modules and package all sources in a single zip-file called


    Define as post build step that the file should be archived. That makes this zip file available as artifact produced by the build job – for use in deployments or other build jobs.



    Run the job a first time with Build Now.




    The console output for running the shell commands is shown. Note that the implicit first steps performed in a build include the retrieval of all sources from the git repositories on to the file system of the build server. The explicit shell commands are executed subsequently – and can make use of these cloned git repo sources.


    The build job produces as artifact:


    Create a Deployment linking to an existing Oracle Application Container Cloud service instance

    The build job produces an artifact that can be deployed to an ACCS instance. We need a Deployment to create an ACCS instance based on that artifact. The Deployment is the bridge between the build artifact and a specific target environment – in this case an ACCS instance.


    Specify name of the configuration – for use within Developer Cloud Service – and of the application – that will be used in Application Container Cloud. Specify the type of Deployment – we want On Demand because that type of Deployment can be associated with a Build job to be automatically performed at the end of the build. Specify the Deployment Target – New of type Application Container Cloud.


    Provide the connection details for an ACCS instance. Press Test Connection to verify these details.


    Upon success, click on Use Connection.


    Specify the type of Runtime – Node in this case. Select the Build Job and Artifact to base this Deployment on:



Note: for now, the Deployment is tied to a specific instance of the build job. When we add the Deployment as a Post Build step to the Build Job, it will always use the artifact produced by that specific build instance.

    When the Deployment is saved, it starts to execute the deployment immediately:


    In the Application Container Cloud Console, we can see the new Node application greeting being created



    After some time (actually, quite some time) the application is deployed and ready to be accessed:


And here is the result of opening the application in a browser:


    Now associate the build job with the Deployment, in order to have the deployment performed at the end of each successful build:


    Go to the Post Build tab, check the box for Oracle Cloud Service Deployment and add a Deployment Task of type Deploy:


    Select the Deployment we created earlier:


    And press Save to save the changes to the build job’s definition.


    Run the build job – and verify that the application will be deployed to ACCS (again)

    If we now run the build job, as its last action it should perform the deployment:




    The ACCS console shows that now we have Version 2.0, deployed just now.



    Add the ACCS Deployment descriptor with the definition of environment variables

    The app.js file contains the line

    var greeting = process.env.GREETING || ‘Hello World!’;

This line references the environment variable GREETING – that currently is not set. By defining a deployment descriptor as part of the Deployment definition, we can specify the number of instances and their size, as well as any Service Bindings and the values of Environment Variables such as GREETING.



    Add the Deployment Descriptor json:


    {
      "memory": "1G",
      "instances": "1",
      "environment": {
        "GREETING": "Greetings to you"
      }
    }




    Note: variable APPLICATION_PREFIX is not currently used.


    Save and the deployment will be performed again:



      When done, the application can be accessed. This time, the greeting returned is the one specified in the deployment descriptor deployment.json (as environment variable) and picked up by the application at run time (using process.env.GREETING).



    Make a change in the sources of the application and Do the End To End Workflow

    If we make a change in the application and commit and push the change to Git then after some time we should be able to verify that the live application gets updated.

    Make the change – a new version label and a small change in the text returned by the application.


      Then commit the change and push the changes – to the Developer CS Git repo:



      The changes arrive in the Git repo:


      Now the Git repo has been updated, the build job should be triggered:



      Some of the console output – showing that deployment has started:


      The ACCS Service Console makes it clear too


      When the deployment is done, it is clear that the code changes made it through to the running application:


      So editing the source code and committing plus pushing to git suffices to trigger the build and redeployment of the application – thanks to the set up made in Developer Cloud Service.

      Next Steps

      Show how multiple instances of an application each have their own state – and how using an Application Cache can make them share state.

      Show how an ACCS application can easily access a DBaaS instance through Service Bindings (and in the case of Node applications through the oracle node driver and OCI libraries that come prepackaged with the ACCS Node Runtime).

      Show how Oracle Management Cloud APM can be setup as part of an ACCS instance in order to perform application monitoring of applications running on ACCS; probably works for Log Analytics as well.



      Sources for this article are available in GitHub:

      Oracle Community Article by Abhinav Shroff –Oracle Developer Cloud to build and deploy Nodejs REST project on Application Container Cloud

      A-Team Chronicle by Yannick Ongena- Automated unit tests with Node.JS and Developer Cloud Services

      Article by Fabrizio Marini – Oracle Application Container Cloud & Developer Cloud Service: How to create a Node.js application with DB connection in pool (in cloud) and how to deploy it on Oracle Application Container Cloud (Node.js) using Developer Cloud Service

      Create Node.js Applications (Oracle Documentation) –

      Developer Cloud Service Docs – Managing Releases in Oracle Developer Cloud Service

      Oracle Documentation – Creating Meta Files for ACCS deployments –

      The post Automating Build and Deployment of Node application in Oracle Developer Cloud to Application Container Cloud appeared first on AMIS Oracle and Java Blog.

      When Screen Scraping became API calling – Gathering Oracle OpenWorld 2017 Session Catalog with Node

      Thu, 2017-08-10 02:57

      A dataset with all sessions of the upcoming Oracle OpenWorld 2017 conference is nice to have – for experiments and demonstrations with many technologies. The session catalog is exposed at a website – 


      With searching, filtering and scrolling, all available sessions can be inspected. If data is available in a browser, it can be retrieved programmatically and persisted locally in for example a JSON document. A typical approach for this is web scraping: having a server side program act like a browser, retrieve the HTML from the web site and query the data from the response. This process is described for example in this article – – for Node and the Cheerio library.

      However, server side screen scraping of HTML will only be successful when the HTML is static. Dynamic HTML is constructed in the browser by executing JavaScript code that manipulates the browser DOM. If that is the mechanism behind a web site, server side scraping is at the very least considerably more complex (as it requires the server to emulate a modern web browser to a large degree). Selenium has been used in such cases – to provide a server side, programmatically accessible browser engine. Alternatively, screen scraping can also be performed inside the browser itself – as is supported for example by the Getsy library.

      As you will find in this article – when server side scraping fails, client side scraping may be a much too complex solution. It is very well possible that the rich client web application is using a REST API that provides the data as a JSON document. An API that our server side program can also easily leverage. That turned out to be the case for the OOW 2017 website – so instead of complex HTML parsing and server side or even client side scraping, the challenge at hand resolves to nothing more than a little bit of REST calling.

      Server Side Scraping

      Server side scraping starts with client side inspection of a web site, using the developer tools in your favorite browser.


      A simple first step with cheerio to get hold of the content of the H1 tag:


      Now let’s inspect in the web page where we find those session details:


      We are looking for LI elements with a CSS class of rf-list-item. Extending our little Node program with queries for these elements:


      The result is disappointing. Apparently the document we have pulled with request-promise does not contain these list items. As I mentioned before, that is not necessarily surprising: these items are added to the DOM at runtime by JavaScript code executed after an Ajax call is used to fetch the session data.

      Analyzing the REST API Calls

      Using the Developer Tools in the browser, it is not hard to figure out which call was made to fetch these results:


      The URL is there: Now the question is: what headers and parameters are sent as part of the request to the API – and what HTTP operation should it be (GET, POST, …)?

      The information in the browser tools reveals:


      A little experimenting with custom calls to the API in Postman made clear that rfWidgetId and rfApiProfileId are required form data.


      Postman provides an excellent feature to quickly get going with source code in many technologies for making the REST call you have just put together:


      REST Calling in Node

      My first stab:


      With the sample generated by Postman as a starting point, it is not hard to create the Node application that will iterate through all session types – TUT, BOF, GEN, CON, etc.:


      To limit the size of the individual (requests and) responses, I have decided to search the sessions of each type in 9 blocks – for example CON1, CON2, CON3 etc. The search string is padded with wild cards – so CON1 will return all sessions with an identifier starting with CON1.

      To be nice to the OOW 2017 server – and prevent being blocked out by any filters and protections – I will fire requests spaced apart (with a 500 ms delay between each of them).
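The block-wise, paced fetching described above can be sketched as follows. The actual POST (with the rfWidgetId and rfApiProfileId form data and the endpoint URL) is omitted, so fetchFn stands in for the real HTTP call, and the session type list is a plausible subset, not the full one:

```javascript
// the session types to fetch (illustrative subset of the real list)
const sessionTypes = ['TUT', 'BOF', 'GEN', 'CON', 'HOL'];

// each type is split into nine search blocks, padded with a wildcard:
// CON1*, CON2*, ... CON9* - limiting the size of each individual response
function searchTerms(types) {
  const terms = [];
  for (const type of types) {
    for (let block = 1; block <= 9; block++) {
      terms.push(`${type}${block}*`);
    }
  }
  return terms;
}

// fire one request per search term, spaced apart to be nice to the server
function scheduleFetches(terms, fetchFn, delayMs = 500) {
  terms.forEach((term, i) => setTimeout(() => fetchFn(term), i * delayMs));
}
```

Each invocation of fetchFn would perform the Postman-derived POST and append the returned sessions to the local collection.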

      Because this code is for one time use only, and is not constrained by time limits, I have not put much effort into parallelizing the work, creating the most elegant code in the world etc. It is simply not worth it. This will do the job – once – and that is all I need. (although I want to extend the code to help me download the slide decks for the presentations in an automated fashion; for each conference, it takes me several hours to manually download slide decks to take with me on the plane ride home – only to find out each year that I am too tired to actually browse through those presentations).

      The Node code for constructing a local file with all OOW 2017 sessions:

      The post When Screen Scraping became API calling – Gathering Oracle OpenWorld 2017 Session Catalog with Node appeared first on AMIS Oracle and Java Blog.

      Oracle Mobile Cloud Service (MCS) and Integration Cloud Service (ICS): How secure is your TLS connection?

      Wed, 2017-07-26 08:27

      In a previous blog I have explained what cipher suites are and the role they play in establishing SSL connections, and I have provided some suggestions on how you can determine which cipher suites are strong. In this blog post I’ll apply this knowledge to look at incoming connections to Oracle Mobile Cloud Service and Integration Cloud Service. Outgoing connections are a different story altogether. These two cloud services do not allow you to control cipher suites to the extent that, for example, Oracle Java Cloud Service does, and you are thus forced to use the cipher suites Oracle has chosen for you.

      Why should you be interested in TLS? Well, ‘normal’ application authentication uses tokens (like SAML, JWT, OAuth). Once an attacker obtains such a token (and no additional client authentication is in place), it is more or less free game for the attacker. An important mechanism which prevents the attacker from obtaining the token is TLS (Transport Layer Security). The strength of the provided security depends on the choice of cipher suite. The cipher suite is chosen by negotiation between client and server. The client provides options and the server chooses the one which has its preference.

      Disclaimer: my knowledge is not at the level that I can personally exploit the liabilities in different cipher suites. I’ve used several posts I found online as references. I have used the OWASP TLS Cheat Sheet extensively which provides many references for further investigation should you wish.

      Method

      Cipher suites

      The supported cipher suites for the Oracle Cloud Services appear to be (at first glance) host specific and not URL specific. The APIs and exposed services use the same cipher suites. Also, the specific configuration of the service is irrelevant since we are testing the connection, not the message. Using tools described here (easiest for public URLs) you can check if the SSL connection is secure. You can also check yourself with a command like: nmap --script ssl-enum-ciphers -p 443 hostname. Also there are various scripts available. See here for some suggestions.

      I’ve looked at two Oracle Cloud services which are available to me at the moment:


      It was interesting to see the supported cipher suites for Mobile Cloud Service and Integration Cloud Service are the same and also the supported cipher suites for the services and APIs are the same. This could indicate Oracle has public cloud wide standards for this and they are doing a good job at implementing it!

      Supported cipher suites

      TLS 1.2
      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014) ECDH secp256r1 (eq. 3072 bits RSA) FS
      TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (0xc027) ECDH secp256r1 (eq. 3072 bits RSA) FS
      TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013) ECDH secp256r1 (eq. 3072 bits RSA) FS
      TLS_RSA_WITH_AES_256_CBC_SHA256 (0x3d)
      TLS_RSA_WITH_AES_256_CBC_SHA (0x35)
      TLS_RSA_WITH_AES_128_CBC_SHA256 (0x3c)
      TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)
      TLS 1.1
      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014) ECDH secp256r1 (eq. 3072 bits RSA) FS
      TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013) ECDH secp256r1 (eq. 3072 bits RSA) FS
      TLS_RSA_WITH_AES_256_CBC_SHA (0x35)
      TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)
      TLS 1.0
      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014) ECDH secp256r1 (eq. 3072 bits RSA) FS
      TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013) ECDH secp256r1 (eq. 3072 bits RSA) FS
      TLS_RSA_WITH_AES_256_CBC_SHA (0x35)
      TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)
      Liabilities in the cipher suites

      You should not read this as an attack against the choices made in the Oracle Public Cloud for SSL connections. Generally the cipher suites Oracle chose to support are pretty secure and there is no need to worry unless you want to protect yourself against groups like the larger security agencies. When choosing cipher suites for your own implementations outside the mentioned Oracle cloud products, I would go for stronger cipher suites than those provided. Read here.

      TLS 1.0 support

      TLS 1.0 is supported by the Oracle Cloud services. This standard is outdated and should be disabled. Read the following for some arguments why you should do this. It is possible Oracle chose to support TLS 1.0 since some older browsers (really old ones like IE6) do not support TLS 1.1 and 1.2. This is a trade-off between compatibility and security.

      TLS_RSA_WITH_3DES_EDE_CBC_SHA might be a weak cipher

      There are questions whether TLS_RSA_WITH_3DES_EDE_CBC_SHA could be considered insecure (read here, here and here why). Also SSLLabs says it is weak. You can mitigate some of the vulnerabilities by not using CBC mode, but that is not an option in the Oracle cloud since GCM is not supported (see more below). If a client indicates it only supports TLS_RSA_WITH_3DES_EDE_CBC_SHA, this cipher suite is used for the SSL connection, making you vulnerable to collision attacks like Sweet32. Also it uses a SHA1 hash, which can be considered insecure (read more below).

      Weak hashing algorithms

      There are no cipher suites available which provide SHA384 hashing; only SHA256 and SHA. SHA1 (SHA) is considered insecure (see here and here; plenty of other references to this can be found easily).

      No GCM mode support

      GCM provides data authenticity (integrity) checking in addition to confidentiality. It is more efficient and performant compared to CBC mode. CBC only provides confidentiality but no authenticity/integrity checking. GCM uses a so-called nonce. You cannot use the same nonce to encrypt data with the same key twice.

      Wildcard certificates are used

      As you can see in the screenshot below, the certificate used for my Mobile Cloud Service contains a wildcard: *

      This means the same certificate is used for all Mobile Cloud Service hosts in a data center unless specifically overridden. See here: Rule – Do Not Use Wildcard Certificates. Wildcard certificates violate the principle of least privilege. If you decide to implement two-way SSL, I would definitely consider using your own certificates since you want to avoid trust on the data center level. They also violate the EV Certificate Guidelines. Since the certificate is per data center, there is no difference between the certificate used for development environments and the one used for production environments. In addition, everyone in the same data center will use the same certificate. Should the private key be compromised (of course Oracle will try not to let this happen!), this will be an issue for the entire data center and everyone using the default certificate.

      Oracle provides the option to use your own certificates and even recommends this. See here. This allows you to manage your own host specific certificate instead of the one used by the data center.

      Choice of keys

      Only RSA and ECDHE keys are used and no DSA/DSS keys. The ECDHE suites are given priority over the RSA suites. ECDHE provides forward secrecy; read more here. DHE, however, is preferred over ECDHE (see here) since ECDHE uses elliptic curves and there are doubts whether they are really secure (read here and here). Oracle does not provide DHE support in their list of cipher suites.

      Strengths of the cipher suites

      Is it all bad? No, definitely not! You can see Oracle has put thought into choosing their cipher suites and only provide a select list. Maybe it is possible to request stronger cipher suites to be enabled by contacting Oracle support.

      Good choice of encryption algorithm

      AES is the preferred encryption algorithm (here). WITH_AES_256 is supported, which is a good thing. WITH_AES_128 is also supported; this one is obviously weaker, but it is not really terrible that it is still used. For compatibility reasons, OWASP even recommends TLS_RSA_WITH_AES_128_CBC_SHA as a cipher suite (also SHA1!), so they are not completely against it.

      Good choice of ECDHE curve

      The ECDHE curve used is secp256r1, the default and most commonly used curve, which is roughly equivalent in strength to 3072-bit RSA. OWASP recommends > 2048 bits, so this is OK.

      No support for SSL2 and SSL3

      Of course SSL2 and SSL3 are not secure anymore and usage should not be allowed.

      So why these choices? Considerations

      I’ve not been involved with these choices and have not talked to Oracle about them. In short: these are just my guesses at the considerations.

      I can imagine the cipher suites have been chosen to strike a balance between compatibility, performance and security. The choices could also be related to export restrictions / government regulations. The supported cipher suites do not all require the installation of JCE (here), but some do: for example, usage of AES_256 and ECDHE requires the JCE cryptographic provider, while AES_128 and RSA do not. Compatibility is of course also taken into consideration; the supported cipher suites are common suites supported by most web browsers (see here). When taking performance into consideration (although this is hardware dependent; certain cipher suites perform better on ARM processors, others better on, for example, Intel), using ECDHE is not at all strange, while not using GCM might not be a good idea (try for example the following: gnutls-cli --benchmark-ciphers). For Oracle, using a single wildcard certificate per data center is of course an easy and cheap default solution.

      • Customers should consider using their own host specific certificates instead of the default wildcard certificate.
      • Customers should try to put constraints on their clients. Since the public cloud offers support for weak ciphers, the negotiation between client and server determines the cipher suite (and thus strength) used. If the client does not allow weak ciphers, relatively strong ciphers will be used. It of course depends if you are able to do this since if you would like to provide access to the entire world, controlling the client can be a challenge. If however you are integrating web services, you are more in control (unless of course a SaaS solution has limitations).
      • Work with Oracle support to see what is possible and where the limitations are.
      • Whenever you have more control, consider using stronger cipher suites like TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384.

      The post Oracle Mobile Cloud Service (MCS) and Integration Cloud Service (ICS): How secure is your TLS connection? appeared first on AMIS Oracle and Java Blog.

      Industrial IoT Strategy, The Transference of Risk by using a Digital Twin

      Tue, 2017-07-25 02:41

      The Internet of Things (IoT) is all about getting in-depth insight about your customers. It is the inter-networking of physical devices, vehicles (also referred to as “connected devices” and “smart devices”), buildings, and other items embedded with electronics, software, sensors, actuators, and network connectivity which enable these objects to collect and exchange data.

      For me, IoT is an extension of data integration and big data. The past decade I have worked in the integration field and adding smart devices to these systems makes it even more interesting. Connecting the real world with the digital one creates a huge potential for valuable digital services on top of the physical world. This article contains our vision and guidance for a strategy for The Internet of Things based on literature and our own experience. 

      Drivers for business.

      Everybody is talking about the Internet of Things, which is going to become a billion-dollar business in the near future. IoT has become a blanket term for smart, connected devices. Technology is giving these devices the ability to sense and act for themselves, affect their environment and be controlled by us. Especially in the industrial world, the application of smart sensors has the potential to change the landscape of current suppliers of large-scale industrial solutions.

      This is the perfect storm

      For decades we have had devices with sensors and connectivity, but these devices never reached the market potential they have today. IoT is slowly becoming a mainstream technology. Only two years ago, technical limitations in processing power, storage, connectivity and platform accessibility hindered the growth of IoT device usage.

      Now we see a perfect storm: The advances in cloud computing, big data storage, an abundance of fast internet access, machine learning, and smart sensors come together. The past economic crisis has made businesses start focusing more on lean manufacturing, measuring and real-time feedback. And finally, our addiction to social media and direct interaction makes us accustomed to instant feedback. We demand real time process improvement and in-depth, highly personalized services. This can only be achieved by probing deep into data about the behavior of consumers.

      Digital Transformation changes our economy.  

      Smart devices are a driver for efficiency. On one hand, we can save power – by switching off unused machines, for example – and boost effective usage of machines by optimizing their utilization. For example: have cleaning robots visit rooms with a lot of traffic more often, instead of following the same schedule for all rooms. Intensive data gathering offers the possibility to optimize our processes and apply effective usage of machines and resources. These solutions are aimed at saving money. Your customers expect this process data as an inclusive service on top of the product they buy from you. In practice: look at the Nest thermostat; the dashboard and data are perceived as part of the device. Nobody is going to pay extra for the Nest dashboard.

      Create value using a digital twin of your customer

      You can make a real difference with IoT when you consider the long-term strategic goals of your company. Smart devices make it possible to acquire extensive data about your customer. This information is very valuable, especially when you combine the individual sensor data of each customer into a complete digital representation of the customer (also called a digital twin). This is very valuable for both B2B and B2C businesses. Having a digital twin of your customer helps you know exactly what your customer needs and what makes them successful. You can create additional services and a better user experience with the data you acquire. Your customers are willing to pay for an add-on when you are able to convert their data into valuable content and actions. This is how you create more revenue.

      IoT is all about transference of risk and responsibility

      I predict IoT will transform the economy. With IoT, you are able to influence your customer and their buying habits. You are able to measure the status and quality of a heating installation, car engine or security system. You are able to influence the operation of these machines and warn your customer up front about possible outages due to wear and usage. The next logical step for your customer is to transfer the responsibility for these machines to you as a supplier. This has huge consequences for the risk profile of your company and the possible liabilities connected to it. Having an extensive sensor network and an operational digital twin of the customer makes it possible to assess and control this risk. You can implement predictive maintenance and reduce the risk of an outage, since you have a vast amount of data and trained algorithms to predict the future state of the machines and your customers. Customers are prepared to pay an insurance fee if you can guarantee the operational state and business continuity.

      How to create a profitable IoT strategy?

      The first step is to determine what kind of company you want to be in the IoT realm. According to Frank Burkitt and Brian Solis, there are three types of companies building IoT services:

      • Enablers
        These are the companies that develop and implement IoT technology; they deliver pure IoT solutions, ranging from hardware to all kinds of cloud systems. They have no industry focus and deliver generic IoT solutions. The purpose of these companies is to process as high a volume as possible at a low price. The enablers will focus on delivering endpoint networks and cloud infrastructure. This market will be dominated by a small number of global players who deliver devices, sensors and suitable cloud infrastructure.
      • Engagers
        These are the companies who design, create, integrate, and deliver IoT services to customers. The purpose of these companies is to deliver customer intimacy through close interaction with the end users, usually via one specific industry or product stack. The engagers will focus on hubs and market-facing solutions like dashboards and historical data. This market will contain traditional software companies able to offer dashboards on top of existing systems and connecting IoT devices.
      • Enhancers
        These are the companies that deliver their own value-added services on top of the services delivered by the Engagers. These services are unique to IoT and add a lot of value for the end user. Their goal is to provide richer end-user engagement and to surprise and delight the customer by offering new services, using the customer's data and enhancing it with their own experience and third-party sources. This market will contain innovative software companies able to bridge the gap between IoT, Big Data and Machine Learning. These companies need excellent technical and creative skills to offer new and disruptive solutions.
      How to be successful in the IoT World?
      1. Decide the type of company you want to be: Enabler, Engager or Enhancer? If you are an enabler, make sure you offer a distinctive difference compared to existing platforms and devices.
      2. Identify your target market as you need to specialize in making a significant difference.
      3. Hire a designer and a business developer if you aren’t any of these.
      4. Develop using building blocks.
        Enhance existing products and services. Be very selective about what you want to offer. Do not reinvent the wheel: use existing products and services and build on the things that are already offered as SaaS solutions.
      5. Create additional value
        Enhance existing services with insight and algorithm. Design your service in such a way that you create additional value in your network. Create new business models and partner with companies outside your industry.
      6. Invest in your company
        Train your employees and build relationships with other IoT companies.
      7. Experiment with new ideas, create an innovation lab and link to companies outside your comfort zone to add them to your service

      You are welcome to contact us if you want to know more about adding value to your products and services using IoT.
      We can help you make your products and services smart at scale.  Visit our IoT services page

      The post Industrial IoT Strategy, The Transference of Risk by using a Digital Twin appeared first on AMIS Oracle and Java Blog.

      Oracle Compute Cloud – Uploading My Image – Part Two – Linux 7

      Mon, 2017-07-24 14:20

      In this sequel to part one I will show how you can upload your own (Oracle) Linux 7 image into the IaaS cloud of Oracle. This post uses the lessons learnt from using AWS, which I described here.

      The tools used are: VirtualBox, Oracle Linux 7, Oracle IAAS Documentation and lots of time.

      With Oracle as Cloud provider it is possible to use the UEKR3 or UEKR4 kernels in the image that you prepare in VirtualBox. There is no need to temporarily disable the UEKR3 or UEKR4 repos in your installation. I reused the VirtualBox VM that I had prepared for the previous blog: AWS – Build your own Oracle Linux 7 AMI in the Cloud.

      The details:

      The main part here is (again) making sure that the XEN blockfront and netfront drivers are installed in your initramfs. There are multiple ways of doing so. I prefer changing dracut.conf:

       # additional kernel modules to the default
       add_drivers+="xen-blkfront xen-netfront"

      You could also use:

      rpm -qa kernel | sed 's/^kernel-//'  | xargs -I {} dracut -f --add-drivers 'xen-blkfront xen-netfront' /boot/initramfs-{}.img {}

      But it is easy to forget to check whether you need to rebuild your initramfs after you have done a “yum update”. I know, I have been there…

      The nice part of the Oracle tutorial is that you can minimize the size you need to upload by using sparse copy etc. But on Windows or in Cygwin that doesn’t work, nor on my iMac. Therefore I had to jump through some hoops, using another VirtualBox Linux VM that could access the image file, make a sparse copy, create a tar file and copy it back to the host OS (Windows or OS X).

      Then use the upload feature of Oracle Compute Cloud or Oracle Storage Cloud to be exact.

      Tip: If you get errors that your password isn’t correct (like I did) you might not have set a replication policy. (See the Note at step 7 in the documentation link).

      Now you can associate your image file, which you just uploaded, to an image. Use a Name and Description that you like:

      2017-07-14 17_54_30-Oracle Compute Cloud Service - Images

      Then Press “Ok” to have the image created, and you will see messages similar to these on your screen:

      2017-07-14 17_54_40

      2017-07-14 17_54_45-Oracle Compute Cloud Service - Images

      I now have two images created in IAAS. One exactly the same as my AWS image source and one with a small but important change:

      2017-07-14 17_55_16-Oracle Compute Cloud Service - Images

      Now create an instance with the recently uploaded image:

      2017-07-14 17_55_37-Oracle Compute Cloud Service - Images

      2017-07-14 17_56_34-Oracle Compute Cloud Service - Instance Creation

      Choose the shape that you need:

      2017-07-14 17_56_45-Oracle Compute Cloud Service - Instance Creation

      Do not forget to associate your SSH Keys with the instance or you will not be able to logon to the instance:

      2017-07-14 17_58_18-Oracle Compute Cloud Service - Instance Creation

      I left the Network details default:
      2017-07-14 18_01_33-Oracle Compute Cloud Service - Instance Creation

      To change the storage details of the boot disk press the “hamburger menu” on the right (Just below “Boot Drive”):

      2017-07-14 18_02_12-Oracle Compute Cloud Service - Instance Creation

      I changed the boot disk from 11GB to 20GB so I can expand the filesystems if needed later on:

      2017-07-14 18_03_21-Oracle Compute Cloud Service - Instance Creation

      Review your input in the next step and press “Create” when you are satisfied:

      2017-07-14 18_04_16-Oracle Compute Cloud Service - Instance Creation

      You will see some messages passing by with the details of steps that have been put in motion:

      2017-07-14 18_04_27-Oracle Compute Cloud Service - Instances (Instances)

      If it all goes too fast you can press the little clock on the right side of your screen to get the ”Operations History”:

      2017-07-14 18_04_35-Oracle Compute Cloud Service - Instances (Instances)

      On the “Orchestrations” tab you can follow the status of the instance creation steps:

      2017-07-14 18_06_45-Oracle Compute Cloud Service - Orchestrations

      Once they have the status ready you will find a running instance on the instances tab:

      2017-07-14 18_09_21-Oracle Compute Cloud Service - Instances (Instances)

      Then you can connect to the instance and do with it whatever you want. In the GUI you can use the “hamburger” menu on the right to view the details of the instance, and for instance stop it:

      2017-07-14 18_14_22-Oracle Compute Cloud Service - Instance Details (Overview)

      Sometimes I got the error below, but found that after waiting a few minutes and repeating the action, it eventually succeeded:

      2017-07-17 18_01_32-

      A nice feature of the Oracle Cloud is that you can capture screenshots of the console output, just as if you were looking at a monitor:

      2017-07-17 18_46_08-Oracle Compute Cloud Service - Instance Details (Screen Captures)

      And to view the Console Log (albeit truncated to a certain size) if you added the highlighted text to GRUB_CMDLINE_LINUX in /etc/default/grub:

      [ec2-user@d3c0d7 ~]$ cat /etc/default/grub 
      GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
      GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet net.ifnames=0 console=ttyS0"

      If you didn’t you will probably see something like:

      2017-07-17 18_46_28-Oracle Compute Cloud Service - Instance Details (Logs)

      If you did you will see something like:

      2017-07-17 19_01_38-Oracle Compute Cloud Service - Instance Details (Logs)

      I hope this helps building your own Linux 7 Cloud Images.

      The post Oracle Compute Cloud – Uploading My Image – Part Two – Linux 7 appeared first on AMIS Oracle and Java Blog.

      Integrating Vue.js in ADF Faces 12c Web Application – using HTML5 style Document Import

      Mon, 2017-07-24 02:51

      Vue.js is a popular framework for developing rich client web applications, leveraging browsers for all they are worth. Vue.js has attracted a large number of developers that together have produced a large number of quite interesting reusable components. ADF Faces is itself a quite mature framework for the development of rich web applications. It was born over 10 years ago and has evolved since, adopting quite a few browser enhancements along the way. However, ADF Faces is still – and will stay – a server side framework that provides only piecemeal support for HTML5 APIs. When developing in ADF Faces, it feels a bit as if you’re missing out on all those rapid, cool, buzzing developments that take place on the client side.

      Oracle strongly recommends you to stay inside the boundaries of the framework. To use JavaScript only sparingly. To not mess with the DOM as that may confuse Partial Page Rendering, one of the cornerstones of ADF Faces 11g and 12c. And while I heed these recommendations and warnings, I do not want to miss out on all the goodness that is available to me.

      So we tread carefully. Follow the guidelines for doing JavaScript things in ADF Faces. Try to keep the worlds of ADF Faces and Vue.js apart, except for when they need to come into contact.

      In this article, I will discuss how the simplest of Vue.js application code can be integrated in a ‘normal’ ADF Faces web application. Nothing fancy yet, no interaction between ADF Faces client components and Vue.js, no exchange of events or even data. Just a hybrid page that contains ADF Faces content (largely server side rendered) and Vue.js content (HTML based and heavily post processed in JavaScript as is normally the case with Vue.js).

      The steps we have to go through:

      1. Create new ADF Faces Web Application with main page
      2. Import Vue.js JavaScript library into ADF Faces web application main page
      3. Create HTML document with Vue.js application content – HTML tags, custom tags, data bound attributes; potentially import 3rd party Vue.js components
      4. Create JavaScript module with initialization of Vue.js application content (function VueInit() – data structure, methods, custom components, …)
      5. Create a container in the ADF Faces main page to load the Vue.js content into
      6. Import HTML document with Vue.js content into browser and add to main page DOM
      7. Import custom Vue.js JavaScript module into main page; upon ADF Faces page load event, call VueInit()

      When these steps are complete, the application can be run. The browser will bring up a page with ADF Faces content as well as Vue.js content. A first step towards a truly hybrid application with mutually integrated components. Or at least some rich Vue.js components enriching the ADF Faces application, such as the time picker, the Google Charts integrator and many more.

      The source code described in this article is in GitHub:

      A brief overview of the steps and code is provided below. The biggest challenge probably was to get HTML into the ADF Faces page that could not be parsed by the ADF Faces framework (which does not allow the notation used by Vue.js, such as :value="expression" and @click="function"). Using a link element for an HTML document is a workaround, followed by a little DOM manipulation. At this moment, this approach is only supported in the Chrome browser. For Firefox there is a polyfill available, and perhaps an approach based on XMLHttpRequest is viable (see this article).


      Create new ADF Faces Web Application with main page

      Use the wizard to create the new application. Then create a new page: main.jsf. Also create a JavaScript module: main.js and import it into the main page:

      <af:resource type="javascript" source="resources/js/main.js"/>

      Import Vue.js JavaScript library into ADF Faces web application main page

      Add an af:resource tag that references the online resource for the Vue.js 2 framework library.

      <af:resource type="javascript" source=""/>

      Create HTML document with Vue.js application content

      Just create a new HTML document in the application – for example VueContent.html. Add some Vue.js specific content using data bound syntax with : and {{}} notation. Use a third party component – for example the 3D carousel:

      The final HTML tags are in VueContent.html as is an import of the 3D carousel component (straight JavaScript reference). Some local custom components are defined in VueContent.js; that is also where the data is prepared that is leveraged in this document.



      Create JavaScript module with initialization of Vue.js application content

      Create JavaScript module VueContent.js with a function VueInit() that will do the Vue.js application initialization and set up the data structure, methods, …

      In this library, local custom components are defined – such as app-menu, app-menu-list, update, updates-list, status-replies, post-reply – and third party components are registered – carousel-3d and slide.

      The VueInit() function does the familiar hard Vue.js work:

      function VueInit() {
           console.log("Initialize Vue in VueContent.js");
           new Vue({
            el: '#app',
            data: {
              greeting: 'Welcome to your hybrid ADF and Vue.js app!',
              docsURL: '',
              message: 'Hello Vue!',
              value:'Welcome to the tutorial <small>which is all about Vue.js</small>',
              showReplyModal: false,
              slides: 7
            },
            methods: {
              humanizeURL: function (url) {
                return url
                  .replace(/^https?:\/\//, '')
                  .replace(/\/$/, '');
              }
            },
            components: {
              'carousel-3d': Carousel3d.Carousel3d,
              'slide': Carousel3d.Slide
            }
           }); /* new Vue */
      }

      Create a container in the ADF Faces main page to load the Vue.js content into

      The Vue.js content can be loaded in the ADF page into a DIV element. Such an element can best be created in an ADF Faces web page by using an af:panelGroupLayout with layout set to vertical (says Duncan Mills):

      <af:panelGroupLayout id="app" layout="vertical">

      Import HTML document with Vue.js content into browser and add to main page DOM

      JSF 2 allows us to embed HTML in our JSF pages – XHTML and Facelet, jspx and jsff – although as it happens there are more than a few server side parser limitations that make this not so easy. Perhaps this is only for our own good: it forces us to strictly separate the (client side) HTML that Vue.js will work against and the server side files that are parsed and rendered by ADF Faces. We do need a link between these two of course: the document rendered in the browser from the JSF source needs to somehow import the HTML and JavaScript resources.

      The Vue.js content is in a separate HTML document called VueContent.html. To add the content of this document – or at least everything inside a DIV with id="content" – to the main page, add a <link> element (as described in this article) and have it refer to the HTML document. Also specify an onload listener to process the content after it has been loaded. Note: this event will fire before the page load event fires.

      <link id="VueContentImport" rel="import" href="VueContent.html" onload="handleLoad(event)" onerror="handleError(event)"/>

      Implement the function handleLoad(event) in the main.js JavaScript module. Have it get hold of the just loaded document and deep clone it into the DOM, inside the DIV with the app id (the DIV rendered from the panelGroupLayout component).

      Import custom Vue.js JavaScript module into main page and call upon Page Load Event

      Import JavaScript module:

      <af:resource type="javascript" source="resources/js/VueContent.js"/>

      Add a clientListener component to execute function init() in main.js that will call VueInit() in VueContent.js :

      <af:clientListener method="init" type="load"/>

      In function init(), call VueInit() – the function that is loaded from VueContent.js – the JavaScript module that constitutes the Vue.js application together with VueContent.html. In VueInit() the real Vue.js initialization is performed and the data bound content inside DIV app is prepared.

      The overall set up and flow is depicted in this figure:


      And the application looks like this in JDeveloper:


      When running, this is what we see in the browser (note: only Chrome supports this code at the moment); the blue rectangle indicates the Vue.js content:


      And at the bottom of the page, we see the 3D Carousel:



      Next steps would have us exchange data and events between ADF Faces components and Vue.js content. But as stated at the beginning – we tread carefully, stick to the ADF framework as much as possible.


      Vue 2 – Introduction Guide

      Vue Clock Picker component – Compare · DomonJi/vue-clock-picker

      Google Charts plugin for Vue – Google Charts Plugin For Vue.js – Vue.js Script

      How to include HTML in HTML (W3 Schools)

      HTML Imports in Firefox

      Chrome – HTML5 Imports: Embedding an HTML File Inside Another HTML File

      Me, Myself and JavaScript – Working with JavaScript in an ADF World, Duncan Mills, DOAG 2015 –,_Myself_and_JavaScript-Praesentation.pdf

      The post Integrating Vue.js in ADF Faces 12c Web Application – using HTML5 style Document Import appeared first on AMIS Oracle and Java Blog.

      Get going with Node.js, npm and Vue.js 2 on Red Hat & Oracle Linux

      Sun, 2017-07-23 11:56

      A quick and rough guide to getting going with Node, npm and Vue.js 2 on an Enterprise Linux platform (Oracle Linux, based on Red Hat Linux).

      Install Node.js on an Oracle Enterprise Linux system:


      as root:

      curl --silent --location | bash -


      yum -y install nodejs

      (in order to disable the inaccessible proxy server that was set up for my yum environment, I had to remove the proxy line from /etc/yum.conf)

      (see instruction at:



      For Vue.js


      still as root:

      npm install vue

      npm install --global vue-cli


      Now again as the [normal] development user:

      Create and run your first Vue.js application

      A single HTML document that loads the Vue.js library and contains the Vue.js “application” – it can be opened directly in a local browser (no web server required):

      vue init simple my-first-app






      # create a new project using the "webpack" template

      vue init webpack my-second-app



      # install dependencies and go!

      cd my-second-app

      npm install

      npm run dev


      Open the generated Vue.js application in the local browser – or in a remote one:



      Optional – though recommended – is the installation of a nice code editor. One that is to my liking is Microsoft Visual Studio Code – free, light weight, available on all platforms. See for installation instructions:

      To turn the application – simplistic as it is – into a shippable, deployable application, we can use the build feature of webpack:

      npm run build


      The built resources are in the /dist folder of the project. These resources can be shipped and placed on any web server, such as nginx, Apache, Node.js and even WebLogic (co-located with Java EE web application).

      The build process can be configured through the file /build/, for example to have the name of the application included in the name of the generated resources:



      The post Get going with Node.js, npm and Vue.js 2 on Red Hat & Oracle Linux appeared first on AMIS Oracle and Java Blog.

      Using Vue.JS Community Component in My Own Application

      Sun, 2017-07-23 00:34

      In a recent blog article, I fiddled around a little with Vue.js – Auto suggest with HTML5 Data List in Vue.js 2 application. For me, it was a nice little exercise to get going with properties and events – the basics for creating a custom component. It was fun to do, and easy to achieve some degree of success.

      Typing into a simple input field lists a number of suggestions – using the HTML5 data list component.


      At that moment, I was not yet aware of the wealth of reusable components available to Vue.js developers.

      I decided to try my hand at reusing just one of those components, expecting that to give me a good impression of what it is like in general to reuse components. I stumbled across a nice little carousel component and thought that it might be nice to display the news items for the selected news source in a carousel. How hard can that be?

      (well, in many server-side web development frameworks, integrating third-party components actually can be quite hard. And I am not sure it is that simple in all client-side frameworks either).

      The steps with integrating the Carousel in my Vue.js application turned out to be:

      1. Install the component into the application’s directory structure:

      npm install -S vue-carousel-3d

      This downloads a set of files into the node_modules directory, in the child folder vue-carousel-3d.


      2. Import the component into the application

      In main.js add an import statement:

      import Carousel3d from 'vue-carousel-3d';

      To install the plugin – make it globally available throughout the application – add this line, also in main.js:

      Vue.use(Carousel3d);

      At this point, the carousel component is available and can be added in templates.

      3. To use the carousel, follow the instructions in its documentation:

      In the Newslist component from the original sample application (based on this article) I have introduced the carousel and slide components that have become available through the import of the carousel component:

        <div class="newslist">
          <carousel-3d controlsVisible="true">
            <slide :index="index"  v-for="(article,index) in articles">
              <div class="media-left">
                <a v-bind:href="article.url" target="_blank">
                  <img class="media-object" v-bind:src="article.urlToImage">
                </a>
              </div>
              <div class="media-body">
                <h4 class="media-heading"><a v-bind:href="article.url" target="_blank">{{article.title}}</a></h4>
                <h5><i>by {{}}</i></h5>

      Note: comparing with the code as it was before, only two lines were meaningfully changed – the ones with the carousel-3d tag and the slide tag.

      The result: news items displayed in a 3d carousel.


      The post Using Vue.JS Community Component in My Own Application appeared first on AMIS Oracle and Java Blog.


      TWO WAY SSL

      Fri, 2017-07-21 09:51

      How it works in a simple view

      Many implementations use two-way SSL certificates – but are you still wondering how it actually works?

      Two-way SSL means that a client and a server communicate over a mutually verified connection. The verification is done with certificates that identify each party: both the server and the client have a private key certificate and a public key certificate installed.

      In short and simple terms.

      The server has a private certificate which will be accepted by the client; the client also has a private certificate which will be accepted by the server. This mutual verification is called the handshake, and once it completes it is safe to send messages to each other. The process resembles a cash withdrawal at an ATM. Putting in your bank card corresponds to sending a hello to the server; your card will be accepted if it is valid for that machine. You are then asked for your PIN code – with two-way SSL, the server sends a challenge which the client must answer. Back at the ATM, you enter the right code and the server accepts the connection; in the two-way SSL process, the client sends a thumbprint which has to be accepted by the server. When this is done, at the ATM you can enter the amount you want to withdraw – on the two-way SSL connection, a message can be sent. The ATM responds with cash and probably a receipt; the two-way SSL connection responds with a response message.

      In detail.

      These are the basic components necessary to communicate with two-way SSL over HTTPS.

      Sending information to an http address is done in plain text: anyone intercepting the communication can read the information in clear text. That is unacceptable for most internet traffic – you do not want to send passwords in plain text over the internet. So HTTPS, and therefore a certificate, is necessary.

      So the first part to describe is the public key.

      A public key consists of a root certificate with one or more intermediate certificates. A certificate authority generates a root certificate, issues an intermediate certificate on top of it, and possibly another intermediate certificate on top of that one. This is done to narrow down the set of clients that can communicate with you: a root certificate is used by several intermediates, and an intermediate certificate may in turn be used by other intermediate certificates, so trusting the root certificate means accepting connections from all of its intermediates. A public key is not protected by a password and can be shared freely.

      The second part is the private key.

      A private key is built like a public key, but on top of the chain there is a private key installed. This key is client specific and protected by a password. The private key represents you as a firm or as a person, so you do not want to share this key with other people.

      What happens when setting up a two-way SSL connection?

      The first step in the communication is that the client sends a hello to the server, after which information is exchanged. The server sends a request to the client with an encoded string: the thumbprint of its private key. The chain of public keys below it is sent along, to ask whether the client will accept the communication. When the public key in the request corresponds to a public key on the client, an OK sign is sent back. The server also asks for the encoded string of the client, so the client sends its encoded thumbprint to the server. When the server accepts this – in case of a match with a public key it trusts – the connection between client and server is established and messages can be sent.
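      The handshake described above can be sketched with Python's standard ssl module – a hypothetical illustration of the two configurations, not code from a real deployment; the certificate file names in the comments are assumptions:

      ```python
      import ssl

      # Server side: present our own certificate and demand one from the client
      server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
      # server_ctx.load_cert_chain("server.pem", "server.key")    # the server's private key certificate
      # server_ctx.load_verify_locations("trusted_clients.pem")   # public keys of clients the server accepts
      server_ctx.verify_mode = ssl.CERT_REQUIRED  # demanding a client certificate is what makes the SSL "two-way"

      # Client side: verify the server certificate and present our own when asked
      client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
      # client_ctx.load_cert_chain("client.pem", "client.key")    # the client's private key certificate
      # client_ctx.load_verify_locations("trusted_servers.pem")   # public keys of servers the client accepts
      ```

      With both contexts loaded with real certificates, wrapping a socket on each side performs exactly the mutual handshake described above.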

      A certificate has an expiration date, so a certificate (public and private) only works until that date is reached. It normally takes some time to receive a new certificate, so request one well before the old one expires.
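      How much time is left can be checked programmatically – a hypothetical Python sketch, where the notAfter value is a made-up example of the string found in a parsed certificate:

      ```python
      import ssl
      from datetime import datetime, timezone

      # 'notAfter' as it appears in a certificate parsed by ssl.getpeercert() – hypothetical value
      not_after = "Jun 1 12:00:00 2026 GMT"
      expiry = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
      days_left = (expiry - datetime.now(timezone.utc)).days  # negative once the certificate has expired
      ```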

      A certificate also carries a version number; version 3 is the current standard. The term SHA will come up as well: certificates started out with SHA-1, but that algorithm is no longer considered safe, so we now use SHA-2 certificates (usually shown as SHA256).

      The post TWO WAY SSL appeared first on AMIS Oracle and Java Blog.

      Machine Learning in Oracle Database – Classification of Conference Abstracts based on Text Analysis

      Tue, 2017-07-18 01:53

      Machine Learning is hot. The ability to have an automated system predict, classify, recommend and even decide based on models derived from past experience is quite attractive. And with the number of obvious applications of machine learning – Netflix and Amazon recommendations, intelligent chat bots, license plate recognition in parking garages, spam filters in email servers – the interest further grows. Who does not want to apply machine learning?

      This article shows that the Oracle Database (platform) – with the Advanced Analytics option – is perfectly capable of doing ‘machine learning’, and has been able to do such learning for many years. From the comfort of their SQL & PL/SQL zone, database developers can play data scientist. The challenge is as follows:

      For the nlOUG Tech Experience 2017 conference, we have a set of about 90 abstracts in our table (title and description). 80 of these abstracts have been classified into the conference tracks, such as DBA, Development, BI & Warehousing, Web & Mobile, Integration & Process. For about 10 abstracts, this classification has not yet been done – they do not currently have an assigned track. We want to employ machine learning to determine the track for these unassigned abstracts.

      The steps we will go through to solve this challenge:

    • Create a database table with the conference abstracts – at least columns title, abstract and track

    • Create an Oracle Text policy object

    • Specify the model configuration settings

    • Create the model by passing the model settings and text transformation instructions to DBMS_DATA_MINING.CREATE_MODEL

    • Test the model/Try out the model – in our case against the currently unassigned conference abstracts

      The volume of code required for this is very small (less than 30 lines of PL/SQL), and the time it takes to go through these steps is very limited as well. Let’s see how this works. Note: the code is in a GitHub repository.

      Note: from the Oracle Database documentation on text mining:

      Text mining is the process of applying data mining techniques to text terms, also called text features or tokens. Text terms are words or groups of words that have been extracted from text documents and assigned numeric weights. Text terms are the fundamental unit of text that can be manipulated and analyzed.

      Oracle Text is a Database technology that provides term extraction, word and theme searching, and other utilities for querying text. When columns of text are present in the training data, Oracle Data Mining uses Oracle Text utilities and term weighting strategies to transform the text for mining. Oracle Data Mining passes configuration information supplied by you to Oracle Text and uses the results in the model creation process.
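      The term extraction and weighting described in that quote can be illustrated with a small Python sketch – purely hypothetical, using a simple relative-frequency weight where Oracle Text applies more sophisticated weighting strategies:

      ```python
      import re
      from collections import Counter

      def extract_terms(text):
          # tokenize: lowercase words of at least 3 characters
          tokens = re.findall(r"[a-z]{3,}", text.lower())
          counts = Counter(tokens)
          total = sum(counts.values())
          # assign each term a numeric weight: its relative frequency in the document
          return {term: count / total for term, count in counts.items()}

      weights = extract_terms("Machine learning in the Oracle Database: machine learning with SQL")
      ```

      The resulting term/weight pairs are the "text features" the mining algorithm actually works with.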

      Create a database table with the conference abstracts

      I received the data in an Excel spreadsheet. I used SQL Developer to import the file and create a table from it. I then exported the table to a SQL file with DDL and DML statements.



      Create an Oracle Text policy object

      An Oracle Text policy specifies how text content must be interpreted. You can provide a text policy to govern a model, an attribute, or both the model and individual attributes.

      declare
        l_policy     VARCHAR2(30):='conf_abstrct_mine_policy';
        l_preference VARCHAR2(30):='conference_abstract_lexer';
      begin
        ctx_ddl.create_preference(l_preference, 'BASIC_LEXER');
        ctx_ddl.create_policy(l_policy, lexer => l_preference);
      end;

      Note: the database user you use for this requires two privileges, granted by the DBA: execute on ctx_ddl and create mining model.

      Specify the text mining model configuration settings

      When the Data Mining model is created with a PL/SQL command, we need to specify the name of a table that holds key-value pairs (columns setting_name and setting_value) with the settings that should be applied.

      Create this settings table.

      CREATE TABLE text_mining_settings
      ( setting_name  VARCHAR2(30)
      , setting_value VARCHAR2(4000)
      );
      Choose the algorithm to use for classification – in this case Naïve Bayes – and indicate the Oracle Text policy to use – in this case conf_abstrct_mine_policy – through INSERT statements.

      declare
        l_policy     VARCHAR2(30):='conf_abstrct_mine_policy';
      begin
        -- Populate settings table
        INSERT INTO text_mining_settings VALUES (dbms_data_mining.algo_name, dbms_data_mining.algo_naive_bayes);
        INSERT INTO text_mining_settings VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_on);
        INSERT INTO text_mining_settings VALUES (dbms_data_mining.odms_text_policy_name, l_policy);
        commit;
      end;

      Pass the model settings and text transformation instructions to DBMS_DATA_MINING.CREATE_MODEL

      I do not like the elaborate, unintuitive syntax required for creating a model, and I do not like the official Oracle documentation around this. It is not as naturally flowing as it should be; the pieces do not fit together nicely. It feels a little like the SQL Model clause – something that never felt quite right to me.

      Well, this is how it is. To specify which columns must be treated as text (configure text attribute) and, optionally, provide text transformation instructions for individual attributes, we need to use a dbms_data_mining_transform.TRANSFORM_LIST object to hold all columns and/or SQL expressions that contribute to the identification of each record. The attribute specification is a field (attribute_spec) in a transformation record (transform_rec). Transformation records are components of transformation lists (xform_list) that can be passed to CREATE_MODEL. You can view attribute specifications in the data dictionary view ALL_MINING_MODEL_ATTRIBUTES.

      Here is how we specify the text attribute abstract:

      dbms_data_mining_transform.SET_TRANSFORM( xformlist, 'abstract', NULL, 'abstract', NULL, 'TEXT(TOKEN_TYPE:NORMAL)');

      where xformlist is a local PL/SQL variable of type dbms_data_mining_transform.TRANSFORM_LIST.

      In the call to create_model, we specify the name of the new model, the table (or view) against which the model is to be built, the target column name for which the model should predict the values, the name of the database table with the key-value pairs holding the settings for the model, and the list of text attributes:

      declare
        xformlist dbms_data_mining_transform.TRANSFORM_LIST;
      begin
        -- add columns abstract and title as columns to parse and use for text mining
        dbms_data_mining_transform.SET_TRANSFORM( xformlist, 'abstract', NULL, 'abstract', NULL, 'TEXT(TOKEN_TYPE:NORMAL)');
        dbms_data_mining_transform.SET_TRANSFORM( xformlist, 'title', NULL, 'title', NULL, 'TEXT(TOKEN_TYPE:NORMAL)');
        DBMS_DATA_MINING.CREATE_MODEL(
          model_name           => 'ABSTRACT_CLASSIFICATION'
        , mining_function      => dbms_data_mining.classification
        , data_table_name      => 'OGH_TECHEXP17'
        , case_id_column_name  => 'title'
        , target_column_name   => 'track'
        , settings_table_name  => 'text_mining_settings'
        , xform_list           => xformlist);
      end;

      Oracle Data Miner needs one attribute that identifies each record; the name of the column to use for this is passed as the case id.
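      To get a feel for what such a classification model does internally, here is a tiny Naive Bayes classifier in plain Python – a didactic sketch of the algorithm over made-up abstracts and tracks, not a representation of Oracle's actual implementation:

      ```python
      import math
      from collections import Counter, defaultdict

      def train(examples):
          """examples: list of (abstract_text, track) pairs."""
          track_counts = Counter(track for _, track in examples)
          word_counts = defaultdict(Counter)  # per track: word frequencies
          for text, track in examples:
              word_counts[track].update(text.lower().split())
          return track_counts, word_counts

      def predict(model, text):
          track_counts, word_counts = model
          total = sum(track_counts.values())
          vocab = {w for c in word_counts.values() for w in c}
          best_track, best_score = None, float("-inf")
          for track, n in track_counts.items():
              # log prior + sum of log likelihoods with add-one (Laplace) smoothing
              score = math.log(n / total)
              denom = sum(word_counts[track].values()) + len(vocab)
              for word in text.lower().split():
                  score += math.log((word_counts[track][word] + 1) / denom)
              if score > best_score:
                  best_track, best_score = track, score
          return best_track

      # hypothetical miniature training set: (abstract, track)
      model = train([
          ("sql tuning indexes optimizer", "DBA"),
          ("backup recovery dataguard", "DBA"),
          ("javascript vue components browser", "Web & Mobile"),
          ("responsive ui mobile app", "Web & Mobile"),
      ])
      track = predict(model, "query optimizer and indexes")
      ```

      The Oracle model works analogously, except that Oracle Text performs the term extraction and weighting, and CREATE_MODEL persists the trained model in the database for use with the PREDICTION function.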


      Test the model/Try out the model – in our case against the currently unassigned conference abstracts

      Now that the model has been created, we can make use of it for predicting the value of the target column for selected records.

      First, let’s have the model classify the abstracts without track:

      SELECT title
      ,      abstract
      ,      PREDICTION(ABSTRACT_CLASSIFICATION USING *) AS predicted_track
      FROM   OGH_TECHEXP17
      where  track is null


      We can use the model also to classify data on the fly, like this (using two abstracts from a different conference that are not stored in the database at all):

      with sessions_to_judge as
      ( select 'The Modern JavaScript Server Stack' title
        , 'The usage of JavaScript on the server is rising, and Node.js has become popular with development shops, from startups to big corporations. With its asynchronous nature, JavaScript provides the ability to scale dramatically as well as the ability to drive server-side applications. There are a number of tools that help with all aspects of browser development: testing, packaging, and deployment. In this session learn about these tools and discover how you can incorporate them into your environment.' abstract
        from dual
        UNION ALL
        select 'Winning Hearts and Minds with User Experience' title
        , 'Not too long ago, applications could focus on feature functionality alone and be successful. Today, they must also be beautiful, responsive, and intuitive. In other words, applications must be designed for user experience (UX) because when they are, users are far more productive, more forgiving, and generally happier. Who doesnt want that? In this session learn about the psychology behind what makes a great UX, discuss the key principles of good design, and learn how to apply them to your own projects. Examples are from Oracle Application Express, but these principles are valid for any technology or platform. Together, we can make user experience a priority, and by doing so, win the hearts and minds of our users. We will use Oracle JET as well as ADF and some mobile devices and Java' abstract
        from dual
      )
      SELECT title
      ,      abstract
      ,      PREDICTION(ABSTRACT_CLASSIFICATION USING *) AS predicted_track
      FROM   sessions_to_judge



      Both abstracts are assigned tracks within the boundaries of the model. If these abstracts had been submitted to the Tech Experience 2017 conference, they would have been classified like this. It would be interesting to see which changes to, for example, the second abstract on user experience would get it assigned to the more fitting Web & Mobile track.

      One final test: find all abstracts for which the model predicts a different track than the track that was actually assigned:

      select *
      from ( SELECT title
             ,      track 
             ,      PREDICTION(ABSTRACT_CLASSIFICATION USING *) AS predicted_track
             FROM   OGH_TECHEXP17
             where  track is not null
           )
      where track != predicted_track


      It seems not unreasonable to take a second look at these track assignments.


      Source code in GitHub: 

      Oracle Advanced Analytics Database Option: 

      My big inspiration for this article:  Introduction to Machine Learning for Oracle Database Professionals by Alex Gorbachev –

      Oracle Documentation on Text Mining:

      Toad World article on Explicit Semantic Analysis setup using SQL and PL/SQL:

      Sentiment Analysis Using Oracle Data Miner – OTN article by Brendan Tierney – 

      My own blogs on Oracle Database Data Mining from PL/SQL – from long, long ago: Oracle Datamining from SQL and PL/SQL and Hidden PL/SQL Gem in 10g: DBMS_FREQUENT_ITEMSET for PL/SQL based Data Mining

      The post Machine Learning in Oracle Database – Classification of Conference Abstracts based on Text Analysis appeared first on AMIS Oracle and Java Blog.

      Virtualization on Windows 10 with Virtual Box, Hyper-V and Docker Containers

      Mon, 2017-07-17 16:17

      Recently I started working on a brand new HP ZBook 15-G3 with Windows 10 Pro. And I immediately tried to return to the state I had my previous Windows 7 laptop in: Oracle Virtual Box for running most software in virtual machines, using Docker Machine (and Kubernetes) for running some things in Docker Containers and using Vagrant to spin up some of these containers and VMs.

      I quickly ran into some issues that made me reconsider – and realize that some things are different on Windows 10. In this article a brief summary of my explorations and findings.

      • Docker for Windows provides near native support for running Docker Containers; the fact that under the covers there is still a Linux VM running is almost hidden, and from the command line (Powershell) and a GUI I have easy access to the containers. I do not believe though that I can run containers that expose a GUI – except through a VNC client
      • Docker for Windows leverages Hyper-V. Hyper-V lets you run an operating system or computer system as a virtual machine on Windows (Hyper-V is built into Windows as an optional feature; it needs to be explicitly enabled). Hyper-V on Windows is very similar to VirtualBox
      • In order to use Hyper-V or Virtual Box, hardware virtualization must be enabled in the system’s BIOS
      • And the one finding that took longest to realize: Virtual Box will not work if Hyper-V is enabled. So the system at any one time can only run Virtual Box or Hyper-V (and Docker for Windows), not both. Switching Hyper-V support on and off is fairly easy, but it does require a reboot

      Quick tour of Windows Hyper-V

      Creating a virtual machine is very easy. A good example is provided in an article that describes how a Hyper-V virtual machine is created with Ubuntu Linux.

      I went through the following steps to create a Hyper-V VM running Fedora 26. It was easy enough. However, the result is not as good in terms of GUI experience as I had hoped it would be. Some of my issues: low resolution, only a 4:3 aspect ratio, and I cannot get out of full screen mode (that requires CTRL-ALT-BREAK and my keyboard does not have a Break key; all alternatives I have found do not work for me).

        • Download the ISO image for Fedora 26 (Fedora-Workstation-Live-x86_64-26-1.5.iso) – using Fedora Media Writer or a direct download
        • Enable Virtualization in BIOS
        • Enable Hyper-V (First, open Control Panel. Next, go to Programs. Then, click “Turn Windows features on or off”. Finally, locate Hyper-V and click the checkbox (if it isn’t already checked))
        • Run Hyper-V Manager – click on search, type Hype… and click on Hyper-V Manager
        • Create Virtual Switch – a Network Adapter that will allow the Virtual Machine to communicate to the world
        • Create Virtual Machine – specify the name, size and location of the virtual hard disk (well, real enough inside the VM, virtual on your host), the size of memory, select the network switch (created in the previous step), and specify the operating system and the ISO file it will be installed from
        • Start the virtual machine and connect to it. It will boot and allow you to run through the installation procedure
        • Potentially change the screen resolution used in the VM. That is not so simple: see this article for instructions. Note: this is one of the reasons why I am not yet a fan of Hyper-V
        • Restart the VM and connect to it (note: you may have to eject the ISO file from the virtual DVD player, as otherwise the machine could boot again from the ISO image instead of from the now properly installed (virtual) hard disk)


      Article that explains how to create a Hyper-V virtual machine that runs Ubuntu (including desktop): 

      Microsoft article on how to use local resources (USB, Printer) inside Hyper-V virtual machine: 

      Microsoft documentation: introduction of Hypervisor Hyper-v on Windows 10:

      Two articles on converting Virtual Box VM images to Hyper-V: and (better)

      And: how to create one’s own PC into a Hyper-V VM:

      Rapid intro to Docker on Windows

      Getting going with Docker on Windows is surprisingly simple and pleasant. Just install Docker for Windows (see for example this article for instructions). Make sure that Hyper-V is enabled – because Docker for Windows leverages Hyper-V to run a Linux VM: the MobyLinuxVM that you see the details for in the next figure.


      At this point you can interact with Docker from the Powershell command line – simply type docker ps, docker run, docker build and other docker commands. To just run containers based on images – local or in public or private registries – you can use the Docker GUI Kitematic. Getting Kitematic installed is a separate install action that is largely automated. It is well worth the extremely small trouble it is.


      From Kitematic, you have a graphical overview of your containers as well as an interactive UI for starting containers, configuring them, inspecting them and interacting with them. All things you can do from the command line – but so much simpler.


      In this example, I have started a container based on the ubuntu-xfce-vnc image, which runs the Ubuntu Linux distribution with a “headless” VNC session, the Xfce4 UI and preinstalled Firefox and Chrome browsers.


      The Kitematic IP & Ports tab specifies that port 5901 – the VNC port – is mapped to port 32769 on the host (my Windows 10 laptop). I can run the MobaXterm tool and open a VNC session with it, directed at port 32769. This allows me to remotely (or at least from outside the container) see the GUI for the Ubuntu desktop:


      Even though it looks okay and it is pretty cool that I can graphically interact with the container, it is not a very good visual experience – especially when things start to move around. Docker for Windows is really best for headless programs that run in the background.

      For quickly trying out Docker images and for running containers in the background – for example a MongoDB database, an Elastic Search index and a Node.js or nginx web server – this seems to be a very usable way of working.
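      A quick way to verify such a port mapping from the host is to check whether anything is listening on the mapped port – a hypothetical Python helper (the port number 32769 matches the mapping from the example above):

      ```python
      import socket

      def port_open(host, port, timeout=2.0):
          """Return True when something is listening on host:port –
          handy to verify a Docker port mapping such as 5901 -> 32769."""
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False

      # e.g. port_open("localhost", 32769) to check the mapped VNC port
      ```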


      Introducing Docker for Windows: Documentation

      Download Docker for Windows Community Edition:

      Article on installation for Kitematic – the GUI for Docker for Windows: 

      Download MobaXterm: 

      Virtual Box on Windows 10

      My first impression of Virtual Box compared to Hyper-V is that, for now at least, I far prefer Virtual Box (for running Linux VMs). The support for shared folders between host and guest, the high resolution GUI for the guest, and the fact that currently many prebuilt images are available for Virtual Box and hardly any for Hyper-V are for now points in favor of Virtual Box. I never run VMs with Windows as guest OS; I am sure that would impact my choice.

      Note – once more – that for VirtualBox to run on Windows 10, you need to make sure that hardware virtualization is enabled in the BIOS and that Hyper-V is not enabled. Failing to take care of either of these two will result in the same error: VT-x is not available (VERR_VMX_NO_VMX):


      Here is a screenshot of a prebuilt VM image running on Virtual Box on Windows 10 – all out of the box.


      No special setup required. It uses the full screen, it can interact with the host, the clipboard is enabled, I can easily toggle between guest and host, and it has good resolution and reasonable responsiveness:



      Article describing setting up two boot profiles for Windows 10 – one for Hyper-V and one without it (for example run Virtual Box):


      HP Forum entry on enabling Virtualization in BIOS for ZBook G2: 


      The post Virtualization on Windows 10 with Virtual Box, Hyper-V and Docker Containers appeared first on AMIS Oracle and Java Blog.