Fusion Middleware

Lizok's Bookshelf

Greg Pavlik - Sun, 2018-09-30 17:34
The first of Eugene Vodolazkin's novels translated into English was, of course, Laurus, which ranks as one of the significant literary works of the current century. I was impressed by the translator's ability to convey not just a feel for what I presume the original has, but a kind of "other-time-yet-our-timeness" that seems an essential part of the author's objective. I recently picked up Vodolazkin's Aviator and thought to look up the translator as well. I was delighted to find her blog on modern Russian literature, which can be found here:

http://lizoksbooks.blogspot.com/2018/09/the-2018-nose-award-longlist.html

Sea of Fertility

Greg Pavlik - Sun, 2018-09-23 18:24
In a discussion of some of my reservations about Murakami's take on 20th century Japanese literature, a friend commented on Mishima's Sea of Fertility tetralogy with some real insights I thought worth preserving and sharing, albeit anonymously (if you're not into Japanese literature, now's a good time to stop reading):

"My perspective is different: it was a perfect echo of the end of “Spring Snow” and a final liberation of the main character from his self-constructed prison of beliefs. Honda’s life across the novels represents the false path: of consciousness the inglorious decay and death of the soul trapped in a repetition of situations that it cannot fathom being forced into waking. He is forced into being an observer of his own life eventually debasing himself into a “peeping Tom” even as he works as a judge. The irony is rich. Honda decays through the four novels since he clings to the memory of his friend (Kiyoaki) and does not understand the constructed nature his experience and desires. He is asleep. He wants Matsugae’s final dream to be the truth (that they will “...meet again under the Falls.”) His desires have been leading him in a circle and the final scene in the garden is his recognition of what the Abbess (Satoko from Spring Snow) was trying to convey to him. When she tells him, “There was no such person as Kiyoaki Matsugae”, it is her attempt to cure him of his delusion (and spiritual illness that has rendered him desperate and weak - chasing the ego illusions of his youth and seeking the reincarnation of his friend everywhere.) Honda lives in the dream of his ego and desire. In the final scene, he wakes up for the first time. I loved the image of the shadows falling on the garden. He is finally dying, stripped of illusion. I found it to be Mishima at his most powerful. I agree about “Sailor”, that is a great novel and much more Japanese in its economy of expression. Now, Haruki Murakami is a world apart from Kawabata and Mishima. I love his use of the unconscious/Id as a place to inform and enthrall: the labyrinth of dreams. Most of his characters are trapped (at least part of the time) in this “place”: eg Kafka on the Shore, Windup Bird Chronicle, Hard-boiled Wonderland and End of the World, etc. Literature has to have room for all of them. I like the other Murakami, Ryu Murkami, whose “Audition” and “Famous Hits of the Shōwa Era” are dark, psychotic tales of unrestrained, escalating violence but redeemed by deep probing of unconscious, hidden motives (the inhuman work of the unconscious that guides the characters like the Greek sense of fate (Moira)) and occasional black humor."
 

PKS - What happens when we create a new namespace with NSX-T

Pas Apicella - Mon, 2018-09-17 07:02
I previously blogged about the integration between PKS and NSX-T on this post

http://theblasfrompas.blogspot.com/2018/09/pivotal-container-service-pks-with-nsx.html

In this post let's look at what occurs within NSX-T when we create a new Namespace in our K8s cluster.

1. List the K8s clusters we have available

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ pks clusters

Name    Plan Name  UUID                                  Status     Action
apples  small      d9f258e3-247c-4b4c-9055-629871be896c  succeeded  UPDATE

2. Fetch the cluster config for our cluster into our local kubectl config

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ pks get-credentials apples

Fetching credentials for cluster apples.
Context set for cluster apples.

You can now switch between clusters by using:
$kubectl config use-context <cluster-name>
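
PKS names the context after the cluster, so switching to our "apples" cluster should just be:

$ kubectl config use-context apples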

3. Create a new Namespace for the K8s cluster as shown below

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ kubectl create namespace production
namespace "production" created

4. View the Namespaces in the K8s cluster

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ kubectl get ns
NAME          STATUS    AGE
default       Active    12d
kube-public   Active    12d
kube-system   Active    12d
production    Active    9s

Using NSX-T Manager, the first thing you will see is a new Tier 1 router created for the K8s namespace "production"



Let's view its configuration via the "Overview" screen


Finally, let's see the default "Logical Routes" as shown below



When we push workloads to the "production" Namespace, it's this dynamically created configuration that we get out of the box, allowing us to expose a "LoadBalancer" service as required across the Pods deployed within the Namespace.
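
To see that in action, a minimal sketch of exposing a test workload in the new Namespace might look like this (the nginx image and deployment name are placeholders of mine, not from the original walkthrough, and assume a reasonably recent kubectl):

$ kubectl create deployment nginx --image=nginx --namespace production
$ kubectl expose deployment nginx --namespace production --port 80 --type LoadBalancer
$ kubectl get svc --namespace production   # EXTERNAL-IP should be allocated by the NSX-T load balancer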

Categories: Fusion Middleware

Pivotal Container Service (PKS) with NSX-T on vSphere

Pas Apicella - Wed, 2018-09-05 06:15
It has taken some time, but I was finally able to officially test PKS with NSX-T rather than using Flannel.

While there is a bit of initial setup to install NSX-T and PKS and then ensure PKS networking uses NSX-T, the ease of rolling out multiple Kubernetes clusters with unique networking is greatly simplified by NSX-T. Here I am going to show what happens after pushing a workload to my PKS K8s cluster.

First, before we can do anything, we need the following...

Pre Steps

1. Ensure you have NSX-T set up and a dashboard UI as follows


2. Ensure you have PKS installed. In this example I have it installed on vSphere, which at the time of this blog is the only supported/applicable platform we can use with NSX-T



The PKS tile needs to be set up to use NSX-T, which is done on this page of the tile configuration



3. You can see from the NSX-T Manager UI that we have a Load Balancer set up as shown below. Navigate to "Load Balancing -> Load Balancers"



And this Load Balancer is backed by a few "Virtual Servers", one for http (port 80) and the other for https (port 443), which can be seen when you select the Virtual Servers link


From here we have logical switches created for each of the Kubernetes namespaces. We see two for our load balancer, and the other three are for the three K8s namespaces (default, kube-public, kube-system)


Here is how we verify the namespaces we have in our K8s cluster

pasapicella@pas-macbook:~/pivotal $ kubectl get ns
NAME          STATUS    AGE
default       Active    5h
kube-public   Active    5h
kube-system   Active    5h

All of the logical switches are connected to the T0 Logical Router by a set of T1 Logical Routers


For these to be accessible, they are linked to the T0 Logical Router via a set of router ports



Now let's push a basic K8s workload and see what NSX-T and PKS give us out of the box...

Steps

Let's create our K8s cluster using the PKS CLI. You will need a PKS CLI user, which can be created by following this doc

https://docs.pivotal.io/runtimes/pks/1-1/manage-users.html

1. Login using the PKS CLI as follows

$ pks login -k -a api.pks.haas-148.pez.pivotal.io -u pas -p ****

2. Create a cluster as shown below

$ pks create-cluster apples --external-hostname apples.haas-148.pez.pivotal.io --plan small

Name:                     apples
Plan Name:                small
UUID:                     d9f258e3-247c-4b4c-9055-629871be896c
Last Action:              CREATE
Last Action State:        in progress
Last Action Description:  Creating cluster
Kubernetes Master Host:   apples.haas-148.pez.pivotal.io
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  In Progress

3. Wait for the cluster to finish creating, checking its status as follows

$ pks cluster apples

Name:                     apples
Plan Name:                small
UUID:                     d9f258e3-247c-4b4c-9055-629871be896c
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   apples.haas-148.pez.pivotal.io
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  10.1.1.10
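
Creation takes a while. Rather than re-running that command by hand, you can poll until the last action succeeds with a tiny loop (plain bash, nothing PKS-specific, just grep against the status output):

$ until pks cluster apples | grep -q "succeeded"; do sleep 30; done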

The PKS CLI is basically telling BOSH to go ahead and, based on the small plan, create a fully functional K8s cluster, from the VMs to all the processes that go along with them, and once it's up, keep it running in the event of failure.

Here is an example of one of the WORKER VMs of the cluster shown in the vSphere Web Client



4. Using the following YAML file, let's push a workload to our K8s cluster

apiVersion: v1
kind: Service
metadata:
  labels:
    app: fortune-service
    deployment: pks-workshop
  name: fortune-service
spec:
  ports:
  - port: 80
    name: ui
  - port: 9080
    name: backend
  - port: 6379
    name: redis
  type: LoadBalancer
  selector:
    app: fortune
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: fortune
    deployment: pks-workshop
  name: fortune
spec:
  containers:
  - image: azwickey/fortune-ui:latest
    name: fortune-ui
    ports:
    - containerPort: 80
      protocol: TCP
  - image: azwickey/fortune-backend-jee:latest
    name: fortune-backend
    ports:
    - containerPort: 9080
      protocol: TCP
  - image: redis
    name: redis
    ports:
    - containerPort: 6379
      protocol: TCP

5. Push the workload as follows once the above YAML is saved to a file

$ kubectl create -f fortune-teller.yml
service "fortune-service" created
pod "fortune" created

6. Verify the pods are running as follows

$ kubectl get all
NAME         READY     STATUS    RESTARTS   AGE
po/fortune   3/3       Running   0          35s

NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                      AGE
svc/fortune-service   LoadBalancer   10.100.200.232   10.195.3.134   80:30591/TCP,9080:32487/TCP,6379:32360/TCP   36s
svc/kubernetes         ClusterIP      10.100.200.1     <none>         443/TCP                                      5h

Great, so now let's head back to our NSX-T Manager UI and see what has been created. From the above output you can see an LB service was created and an external IP address assigned
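
As a quick sanity check before opening the UI, you can hit the external IP directly; a small sketch, assuming the fortune UI really is serving HTTP on port 80 at the EXTERNAL-IP shown above:

$ curl -s http://10.195.3.134 | head -5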

7. The first thing you will notice is that in "Virtual Servers" we have some new entries for each of our containers, as shown below


and ...


Finally, the LB we previously had in place shows our "Virtual Servers" added to its config and routable



More Information

Pivotal Container Service
https://docs.pivotal.io/runtimes/pks/1-1/

VMware NSX-T
https://docs.vmware.com/en/VMware-NSX-T/index.html
Categories: Fusion Middleware

PCF Platform Automation with Concourse (PCF Pipelines)

Pas Apicella - Mon, 2018-08-20 03:28
Previously I blogged about using "Bubble" or bosh-bootloader as per the post below.

http://theblasfrompas.blogspot.com/2018/08/bosh-bootloader-or-bubble-as-pronounced.html

... and from there setting up Concourse

http://theblasfrompas.blogspot.com/2018/08/deploying-concourse-using-my-bubble.html

... of course this was created so I can now use the PCF Pipelines to deploy Pivotal Cloud Foundry's Pivotal Application Service (PAS). At a high level, here is how to achieve this, with some screenshots of the end result

Steps

1. To get started, use this link as follows. In my example I was deploying PCF to AWS

https://github.com/pivotal-cf/pcf-pipelines/tree/master/install-pcf

AWS Install Pipeline

https://github.com/pivotal-cf/pcf-pipelines/tree/master/install-pcf/aws

2. Create a versioned bucket for holding terraform state. On AWS that will look as follows


3. Unless you ensure the AWS pre-reqs are met you won't be able to install PCF, so this link highlights all that you will need for installing PCF on AWS, such as key pairs, limits, etc

https://docs.pivotal.io/pivotalcf/2-1/customizing/aws.html

4. Create a public DNS zone and get its zone ID; we will need that when we set up the pipeline shortly. I also created a self-signed public certificate used for my DNS as part of the setup, which is required as well.





5. At this point we can download the PCF Pipelines from network.pivotal.io, or you can use the link as follows

https://network.pivotal.io/products/pcf-automation/



6. Once you have unzipped the file, change to the directory for the right IaaS, in my case "aws"

$ cd pcf-pipelines/install-pcf/aws


7. Replace all of the CHANGEME values in params.yml with real values for your AWS env. This file is documented so you are clear on what you need to add and where. Most of the values are defaults of course.
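
Before setting the pipeline, it's worth double-checking that no placeholders were missed:

$ grep -n CHANGEME params.yml

If that returns nothing, you're good to go.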

8. Login to concourse using the "fly" command line

$ fly --target pcfconcourse login  --concourse-url https://bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com -k

9. Add pipeline

$ fly -t pcfconcourse set-pipeline -p deploy-pcf -c pipeline.yml -l params.yml

10. Unpause pipeline

$ fly -t pcfconcourse unpause-pipeline -p deploy-pcf

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines/pcf-pipelines/install-pcf/aws$ fly -t pcfconcourse pipelines
name        paused  public
deploy-pcf  no      no

11. The pipeline on concourse will look as follows



12. Now to execute the pipeline you have to manually run 2 tasks

- Run bootstrap-terraform-state job manually




- Run create-infrastructure manually
 


At this point the pipeline will kick off automatically. If you need to re-run due to an issue, you can manually kick off the task after you fix what you need to fix. The “wipe-env” task will take everything for PAS down, and terraform removes all IaaS config as well.
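
As an aside, if you prefer the fly CLI over clicking in the Concourse UI, the same jobs can be triggered and watched from the command line; a minimal sketch, assuming the pipeline and job names used above:

$ fly -t pcfconcourse trigger-job -j deploy-pcf/bootstrap-terraform-state
$ fly -t pcfconcourse watch -j deploy-pcf/bootstrap-terraform-state
$ fly -t pcfconcourse trigger-job -j deploy-pcf/create-infrastructure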

While running each task, the current state is shown as per the image below


If successful, your AWS account will show the PCF VMs created, for example


Verifying that PCF installed correctly is best done using Pivotal Operations Manager, as shown below



More Information

https://network.pivotal.io/products/pcf-automation/


Categories: Fusion Middleware

Deploying concourse using my "Bubble" created Bosh director

Pas Apicella - Fri, 2018-08-17 23:27
Previously I blogged about using "Bubble" or bosh-bootloader as per the post below.

http://theblasfrompas.blogspot.com/2018/08/bosh-bootloader-or-bubble-as-pronounced.html

Now with the bosh director deployed it's time to deploy concourse itself. The process is very straightforward as per the steps below

1. First let's clone the bosh concourse deployment using the GitHub project as follows
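
For reference, the clone itself is a one-liner (assuming the upstream concourse-bosh-deployment repository on GitHub):

$ git clone https://github.com/concourse/concourse-bosh-deployment.git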



2. Target the bosh director and log in. We must set ENV variables to connect to the AWS bosh director correctly using "eval", as we did in the previous post; this will set all the ENV variables we need

$ eval "$(bbl print-env -s state)"
$ bosh alias-env aws-env
$ bosh -e aws-env log-in

3. At this point we need to set the external URL, which is essentially the load balancer we created when we deployed the Bosh Director in the previous post. To get that value, run the following command from where we deployed the bosh director, as shown below

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bbl lbs -s state
Concourse LB: bosh-director-aws-concourse-lb [bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com]

4. Now let's set that ENV variable as shown below

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ export external_url=https://bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com

5. Now from the cloned bosh concourse repository, change to the directory "concourse-bosh-deployment/cluster" as shown below

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ cd concourse-bosh-deployment/cluster

6. Upload stemcell as follows

$ bosh upload-stemcell light-bosh-stemcell-3363.69-aws-xen-hvm-ubuntu-trusty-go_agent.tgz

Verify:

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-bosh stemcells
Using environment 'https://10.0.0.6:25555' as client 'admin'

Name                                     Version  OS             CPI  CID
bosh-aws-xen-hvm-ubuntu-trusty-go_agent  3363.69  ubuntu-trusty  -    ami-0812e8018333d59a6

(*) Currently deployed

1 stemcells

Succeeded
 
7. Now let's deploy concourse with a command as follows. Make sure you set a password via "atc_basic_auth.password"

$ bosh deploy -d concourse concourse.yml \
    -l ../versions.yml \
    --vars-store cluster-creds.yml \
    -o operations/basic-auth.yml \
    -o operations/privileged-http.yml \
    -o operations/privileged-https.yml \
    -o operations/tls.yml \
    -o operations/tls-vars.yml \
    -o operations/web-network-extension.yml \
    -o operations/worker-ephemeral-disk.yml \
    --var network_name=default \
    --var external_url=$external_url \
    --var web_vm_type=default \
    --var db_vm_type=default \
    --var db_persistent_disk_type=10GB \
    --var worker_vm_type=default \
    --var deployment_name=concourse \
    --var web_network_name=private \
    --var web_network_vm_extension=lb \
    --var atc_basic_auth.username=admin \
    --var atc_basic_auth.password=..... \
    --var worker_ephemeral_disk=500GB_ephemeral_disk

8. Once deployed, verify the deployment and VMs created as follows

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env deployments
Using environment 'https://10.0.0.6:25555' as client 'admin'

Name       Release(s)          Stemcell(s)                                      Team(s)
concourse  concourse/3.13.0    bosh-aws-xen-hvm-ubuntu-trusty-go_agent/3363.69  -
           garden-runc/1.13.1
           postgres/28

1 deployments

Succeeded
pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env vms
Using environment 'https://10.0.0.6:25555' as client 'admin'

Task 32. Done

Deployment 'concourse'

Instance                                     Process State  AZ  IPs        VM CID               VM Type  Active
db/db78de7f-55c5-42f5-bf9d-20b4ef0fd331      running        z1  10.0.16.5  i-04904fbdd1c7e829f  default  true
web/767b14c8-8fd3-46f0-b74f-0dca2c3b9572     running        z1  10.0.16.4  i-0e5f1275f635bd49d  default  true
worker/cde3ae19-5dbc-4c39-854d-842bbbfbe5cd  running        z1  10.0.16.6  i-0bd44407ec0bd1d8a  default  true

3 vms

Succeeded

9. Navigate to the LB URL we used above to access the concourse UI, using the username/password you set as per the deployment

https://bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com/


10. Finally, we can see the Bosh Director and Concourse deployment VMs on our AWS EC2 instances page as follows



Categories: Fusion Middleware

bosh-bootloader or "Bubble" as pronounced and how to get started

Pas Apicella - Wed, 2018-08-15 06:50
I decided to try out installing bosh using the bosh-bootloader CLI today. bbl currently supports AWS, GCP, Microsoft Azure, OpenStack and vSphere. In this example I started with AWS, but it won't be long until I try this on GCP.

It's worth noting that this can all be done remotely from your laptop once you give BBL the access it needs for the cloud environment.

Steps

1. First, you're going to need the bosh v2 CLI, which you can install from here

  https://bosh.io/docs/cli-v2/

Verify:

pasapicella@pas-macbook:~$ bosh -version
version 5.0.1-2432e5e9-2018-07-18T21:41:03Z

Succeeded

2. Second, you will need Terraform; having a Mac, I use brew

$ brew install terraform

Verify:

pasapicella@pas-macbook:~$ terraform version
Terraform v0.11.7

3. Now we need to install bbl, which is done as follows on a Mac. I also show how to install the bosh CLI as well in case you missed step 1

$ brew tap cloudfoundry/tap
$ brew install bosh-cli
$ brew install bbl

Further instructions are at this link

https://github.com/cloudfoundry/bosh-bootloader

4. At this point you're ready to deploy BOSH. The instructions for AWS are here

https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/getting-started-aws.md

Pretty straightforward, but here is what I did at this point

5. In order for bbl to interact with AWS, an IAM user must be created. This user will be issuing API requests to create infrastructure such as EC2 instances, load balancers, subnets, etc.

The user must have the following policy, which I just copy into my clipboard to use later:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:*",
                "elasticloadbalancing:*",
                "cloudformation:*",
                "iam:*",
                "kms:*",
                "route53:*",
                "ec2:*"
            ],
            "Resource": "*"
        }
    ]
}


$ aws iam create-user --user-name "bbl-user"

This next command requires you to copy the policy JSON above

$ aws iam put-user-policy --user-name "bbl-user" --policy-name "bbl-policy" --policy-document "$(pbpaste)"

$ aws iam create-access-key --user-name "bbl-user"

You will get a JSON response at this point as follows. Save the output created here as it’s used in the next few steps

{
    "AccessKey": {
        "UserName": "bbl-user",
        "Status": "Active",
        "CreateDate": "2018-08-07T03:30:39.993Z",
        "SecretAccessKey": ".....",
        "AccessKeyId": "........"
    }
}

In the next step bbl will use these credentials to create infrastructure on AWS.
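
Rather than pasting the keys into commands later, bbl can also read them from environment variables; a small sketch, assuming you saved the JSON response above to a file called bbl-user-key.json and have jq installed:

$ export BBL_AWS_ACCESS_KEY_ID=$(jq -r '.AccessKey.AccessKeyId' bbl-user-key.json)
$ export BBL_AWS_SECRET_ACCESS_KEY=$(jq -r '.AccessKey.SecretAccessKey' bbl-user-key.json)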

6. Now we can pave the infrastructure, create a jumpbox, and create a BOSH Director, as well as an LB, which I need as I plan to deploy concourse using BOSH.

$ bbl up --aws-access-key-id ..... --aws-secret-access-key ... --aws-region ap-southeast-2 --lb-type concourse --name bosh-director -d -s state --iaas aws

The process takes around 5-8 minutes.

The bbl state directory contains all of the files that were used to create your bosh director. This should be checked in to version control so that you have all the information necessary to destroy or update this environment at a later date.
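
For example, something as simple as this will do (use a private repository, since the state contains credentials):

$ cd state
$ git init && git add -A && git commit -m "bbl state for bosh director"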

7. Finally we target the bosh director as follows. Keep in mind everything we need is stored in the "state" directory as per above

$ eval "$(bbl print-env -s state)"

8. This will set various ENV variables which the bosh CLI will then use to target the bosh director. Now we just need to prepare ourselves to actually log in. I use a script as follows

target-bosh.sh

bbl director-ca-cert -s state > bosh.crt
export BOSH_CA_CERT=bosh.crt

export BOSH_ENVIRONMENT=$(bbl director-address -s state)

echo ""
echo "Username: $(bbl director-username -s state)"
echo "Password: $(bbl director-password -s state)"
echo ""
echo "Log in using -> bosh log-in"
echo ""

bosh alias-env aws-env

echo "ENV set to -> aws-env"
echo ""

Output when run, with password omitted ->

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ ./target-bosh.sh

Username: admin
Password: ......

Log in using -> bosh log-in

Using environment 'https://10.0.0.6:25555' as client 'admin'

Name      bosh-bosh-director-aws
UUID      3ade0d28-77e6-4b5b-9be7-323a813ac87c
Version   266.4.0 (00000000)
CPI       aws_cpi
Features  compiled_package_cache: disabled
          config_server: enabled
          dns: disabled
          snapshots: disabled
User      admin

Succeeded
ENV set to -> aws-env

9. Finally let's log in as follows

$ bosh -e aws-env log-in

Output ->

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env log-in
Successfully authenticated with UAA

Succeeded

10. Last but not least, let's see what VMs bosh has under management. These VMs are for the concourse I installed. If you would like to install concourse use this link - https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/concourse.md

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env vms
Using environment 'https://10.0.0.6:25555' as client 'admin'

Task 20. Done

Deployment 'concourse'

Instance                                     Process State  AZ  IPs        VM CID               VM Type  Active
db/ec8aa978-1ec5-4402-9835-9a1cbce9c1e5      running        z1  10.0.16.5  i-0d33949ece572beeb  default  true
web/686546be-09d1-43ec-bbb7-d96bb5edc3df     running        z1  10.0.16.4  i-03af52f574399af28  default  true
worker/679be815-6250-477c-899c-b962076f26f5  running        z1  10.0.16.6  i-0efac99165e12f2e6  default  true

3 vms

Succeeded

More Information

https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/getting-started-aws.md

https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/howto-target-bosh-director.md


Categories: Fusion Middleware

Fishbowl Solutions Helps Global Dredging Company Reduce WebCenter Portal Development Costs while Enhancing the Overall Experience to Access Information

A supplier of equipment, vessels, and services for offshore dredging and wet-mining markets, based in Europe with over 3,000 employees and 39 global locations, was struggling to get the most out of their enterprise business applications.

Business Problem

In 2012, the company started a transformation initiative, and as part of the project, they replaced most of their enterprise business applications. The company had over 10 different business applications and wanted to provide employees with access to information through a single web experience or portal view. For example, a field engineer may need information about a ship’s parts from the PLM system (TeamCenter), as well as customer-specific cost information for parts from the company’s ERP system (IFS Applications). It was critical to the business that employees could quickly navigate, search, and view information regardless of where it was stored in the content management system. The company’s business is built on ships dredging, laying cable, etc., so the sooner field engineers are able to find information on servicing a broken part, the sooner the company is able to drive revenue.

Integrating Oracle WebCenter

The company chose Oracle WebCenter Portal because of its best-in-class capabilities for integrating their various business systems, as well as its ability to scale. WebCenter enabled them to build a data integration portal that provided a single pane of glass to all enterprise information. Unfortunately, this single pane of glass did not perform as well as expected. The integrations, menu navigation, and the ability to render part drawings in the portal were all developed using Oracle Application Development Framework (Oracle ADF). Oracle ADF is great for serving up content to WebCenter Portal using taskflows, but it requires a specialized development skill set. The company had limited Oracle ADF development resources, so each time a change or update was requested for the portal it took them weeks and sometimes months to implement the enhancement. Additionally, every change to the portal required a restart, and these restarts took in excess of forty minutes.

Platform Goals

The company wanted to shorten the time-to-market for portal changes, as well as reduce its dependency on Oracle ADF and the overall development and design limitations that came with it. They wanted to modernize their portal and leverage a more designer-friendly front-end development framework. They contacted Fishbowl Solutions after searching for Oracle WebCenter Portal partners and finding out about their single-page application (SPA) approach to front-end portal development.

Fishbowl Solutions’ SPA for Oracle WebCenter Portal is a framework that overhauls the Oracle ADF UI with Oracle JET (JavaScript Extension Toolkit) or other front-end design technologies such as Angular or React. The SPA framework includes components (taskflows) that act as progressive web applications and can be dropped onto pages from the portal resource catalog, meaning that no Oracle ADF development is necessary. Fishbowl’s SPA also enables portal components to be rendered on the client side with a single page load. This decreases the amount of processing being done on the portal application server, as well as how many times the page has to reload. This all leads to an improved experience for the user, as well as the ability for design and development teams to view changes or updates to the portal almost instantaneously.

Outcome

Fishbowl Solutions helped the company implement its SPA framework in under two weeks. Since the implementation, they have observed more return visits to the portal, as well as fewer support issues. They are also no longer constrained by the 40-minute portal restart after changes to the portal, and overall portal downtime has been significantly reduced. Lastly, Fishbowl’s SPA framework provided them with a go-forward design and development approach for portal projects, which will enable them to continue to evolve their portal to best serve their employees and customers alike.

The post Fishbowl Solutions Helps Global Dredging Company Reduce WebCenter Portal Development Costs while Enhancing the Overall Experience to Access Information appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Four Options for Creating Mindbreeze Search Interfaces

A well-designed search interface is a critical component of an engaging search experience. Mindbreeze provides a nice combination of both pre-built search apps and tools for customization. This post explores the following approaches to building a Mindbreeze search interface:

  • The Mindbreeze Default Search Client
  • The Mindbreeze Search App Designer
  • Custom Mindbreeze Web Applications
  • The Mindbreeze REST API
Option 1: The Mindbreeze Default Search Client
Flexibility: Low | Development Effort: None

Mindbreeze includes a built-in search client which offers a feature-rich, mobile-friendly search interface out of the box. Built-in controls exist to configure filter facets, define suggestion sources, and enable or disable export. Features are enabled and disabled via the Client Service configuration interface within the Mindbreeze Management Center. The metadata displayed within the default client is determined by the value of the “visible” property set in the Category Descriptor for the respective data sources. Some of the Mindbreeze features exposed through the default client are not available via a designer-built search app (discussed in Option 2). These include saved searches, result groupings (i.e. summarize-by), the sort-by picker, sources filters, and tabs. Organizations that wish to use these features without much effort would be wise to consider the Mindbreeze Default Search Client.

In order to integrate the built-in client with a website or application, users are typically redirected from the primary website to the Mindbreeze client when performing a search. The default client is served directly from the search appliance and the query term can be passed in the URL from the website’s search box to the Mindbreeze client. Alternately, the built-in client can be embedded directly into a website using an iframe.

What is a Category Descriptor?

Mindbreeze uses an XML file called the Category Descriptor (categorydescriptor.xml) to control various aspects of both indexing and serving for each data source category (e.g. Web, SharePoint, Google Drive, etc.). Each category plugin includes a default Category Descriptor which can be extended or modified to meet your needs. Common modifications include adding localized display labels for metadata field names, boosting the overall impact of a metadata field on relevancy, and changing which fields are visible within the default search client.

Option 2: The Mindbreeze Search App Designer
Flexibility: Moderate | Development Effort: None to Moderate

The Mindbreeze Search App Designer provides a drag-and-drop interface for creating modular, mobile-friendly search applications. Some of the most popular modules include filters, maps, charts, and galleries. Many of these features are not enabled on the aforementioned default client, so a search app is the easiest way to use them. This drag-and-drop configuration allows for layout adjustments, widget selection, and basic configurations without coding or technical knowledge. To further customize search apps, users can modify the mustache templates that control the rendering of each search widget within the search app. Common modifications include conditionally adjusting visible metadata, removing actions, or adding custom callouts or icons for certain result types.

A key feature is the ability to export the code needed to embed a search app into a website or application from the Search Apps page in the Mindbreeze Management Center. That code can then be placed directly in a div or iframe on the target website, eliminating the need to redirect users to the appliance. Custom CSS files may be used to style the results to match the rest of the website. Although you can add a search box directly to a search app, webpages usually have their own search box in the header. You can utilize query terms from an existing search box by passing them as a URL parameter where they will be picked up by the embedded search app.

Did you know? This website uses a search app for Mindbreeze-powered website search. For a deep-dive look at that integration, check out our blog post on How We Integrated this Website with Mindbreeze InSpire.

Option 3: Custom Mindbreeze Web Applications
Flexibility: High | Development Effort: Low to Moderate

The default client mentioned in Option 1 can also be copied to create a new custom version of a Mindbreeze Web Application. The most common alteration is to add a reference to a custom CSS file which modifies the look and feel of the search results without changing the underlying data or DOM structure. This modification is easy and low risk. It is also very easy to isolate issues related to such a change, as you can always attempt to reproduce an issue using the default client without your custom CSS.

More substantial modifications to the application's index.html or JavaScript files can also be made to significantly customize and alter the behavior of the search experience. Examples include adding custom business logic to manipulate search constraints or applying dynamic boosting to alter relevancy at search time. Other Mindbreeze UI elements can also be added to customized web apps using Mindbreeze HTML building blocks; this includes many of the elements exposed through the search app Designer, such as graphs, maps, and timelines. While these types of alterations require deeper technical knowledge than simply adding custom CSS, they are still often less effort than building a custom UI from scratch (as described in Option 4). These changes may require refactoring to remain compatible with future versions or to integrate new features over time, so this should be considered when implementing your results page.

Option 4: The Mindbreeze REST API
Flexibility: High | Development Effort: Moderate to High

For customers seeking a more customized integration, the Mindbreeze REST API allows search results to be returned as JSON, giving you full control over their presentation. Custom search pages also allow for dynamic alterations to the query, constraints, or other parameters based on custom business logic. Filters, spelling suggestions, preview URLs, and other Mindbreeze features are all available in the JSON response, but it is up to the front-end developers to determine which features to render on the page, how to arrange them, and what styling to use. This approach allows for the most control and tightest integration with the containing site, but it is also the most effort. That said, just because custom search pages generally require the greatest effort does not mean selecting this option will always result in a lengthy deployment. In fact, one of our clients used the Mindbreeze API to power their custom search page and went from racking to go-live in 37 days.

Mindbreeze offers an excellent combination of built-in features with tools for extending capabilities when necessary. If you have any questions about our experience with Mindbreeze or would like to know more, please contact us.

The post Four Options for Creating Mindbreeze Search Interfaces appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Fishbowl Solutions Leverages Oracle WebCenter to Create Enterprise Employee Portal Solution for National Insurance Company

An insurance company that specializes in business insurance and risk management services for select industries was struggling to provide their 2,300 employees with an employee portal system that kept users engaged and informed. They desired to provide their employees with a much more modern employee portal that leveraged the latest web technologies while making it easier for business users to contribute content. With the ability for business stakeholders to own and manage content on the site, the company believed the new portal would be updated more frequently, which would make it stickier and keep users coming back.

Business Objective

The company had old intranet infrastructure that included 28 Oracle Site Studio sites. The process for the company’s various business units to contribute content to the site basically involved emailing Word documents to the company’s IT department. IT would then get them checked into their old WebCenter Content system that supported the Site Studio sites. Once the documents were converted to a web-viewable format, they would appear on the site. Since IT did not have a dedicated administrator for the portal, change requests typically took days and sometimes even weeks. With the company’s rapid growth, disseminating information to employees quickly and effectively became a priority. The employee portal was seen as the single place where employees could access company, department, and role-specific information, on their desktop or mobile device. The company needed a new portal solution backed by strong content management capabilities to make this possible. Furthermore, Oracle Site Studio was being sunsetted, so the company needed to move off an old and unsupported system and onto a modern portal platform that had a development roadmap to support their business needs now and into the future. The company chose Oracle WebCenter Content and Portal 12c as this new system.

The company’s goals for the new employee portal were:

  • Expand what the business can do without IT involvement
  • Better engage and inform employees
  • Less manual, more dynamic portal content
  • Improve overall portal usability
  • Smart navigation – filter menus by department and role
  • Mobile access

Because of several differentiators and its experience, the insurance company chose Fishbowl Solutions to help them meet their goals. The company really liked that Fishbowl offered a packaged solution that they felt would enable them to go to market faster with their new portal. Effectively, the company was looking for a portal framework that included the majority of what they needed – navigation, page templates, taskflows, etc. – and that could be achieved with less coding and more configuration. This solution is called Portal Solution Accelerator.

Oracle WebCenter Paired with Fishbowl’s Portal Solution Accelerator

After working together to evaluate the problems, goals, strategy, and timeline, Fishbowl created a plan to help them build their desired portal. Fishbowl offered software and services for rapid deployment and portal setup, user experience design, and content integration. Fishbowl upgraded the company’s portal from Site Studio to Oracle WebCenter Portal and Content 12c. Fishbowl’s Portal Solution Accelerator includes portal and content bundles consisting of a collection of code, pages, assets, content, and templates. PSA also offers content integration, single-page application (SPA) taskflows, and built-in personalization. These foundational benefits resulted in reduced time-to-market, better speed and performance, and a developer-friendly design.

Results

After implementing the new portal and various changes, content publishing time was reduced by 90 percent, as changes and updates now occur in hours instead of days or weeks, which encourages users to publish content. The new framework allows new portals to be created with little work from IT. Additionally, the in-place editor makes it easy for business users to edit their content and see changes in real time. Role-based contribution and system-managed workflows streamline content governance. The new mega-menu provided by the SPA provides faster, more intuitive navigation to intranet content. This navigation is overlaid with Google Search integration, further ensuring that users can find the content they need. Most of the components used in the intranet are reusable and easy to modify for unique cases; therefore, the company can stay up-to-date with minimal effort. Finally, the portal has phone, tablet, and desktop support, making the intranet more accessible and ensuring repeat visits.

Overall, the national insurance company has seen an immense change in content publishing time reduction, ease of editing content, and managing and governing the portal since working with Fishbowl. The solutions that Fishbowl created and implemented helped decrease weekly internal support calls from twenty to one.

The post Fishbowl Solutions Leverages Oracle WebCenter to Create Enterprise Employee Portal Solution for National Insurance Company appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Fishbowl Solutions’ ControlCenter Drives Down Project-Related Costs by $400,000 for National Builder and Real Estate Management Company

A large builder, developer, and real estate management company with over 600 employees and $1 billion in revenue reached out to Fishbowl Solutions for help with their employee onboarding and project-related content management processes.

Approximately eight percent of their employees were projected to retire by 2020, 43 percent of new hires were under the age of 35, and 41 percent of the workforce had less than two years tenure. The salary cost to onboard a new hire nearly tripled after an employee left. Additionally, the average cost of reinventing projects was $16,000 while the average cost of estimating with incorrect rates was $25,000. All these factors were causing the company’s project initiation/startup costs to be much higher than expected, leading to less profit.

Business Objective

The company was in search of a solution to streamline their onboarding process and provide easier access to content overall. Their documents were scattered across file shares, Oracle WebCenter Content, and SharePoint, not including the multiple regional offices and departments with their own copies of documents. Therefore, there were multiple versions of content, outdated content, and no reviews or ownership of content across the company. Due to the unorganized content management, the company had no connection between project-specific content and the master documents. The manuals were tedious to assemble and constantly out of date.

Inevitably, the company decided to change their content management system to accommodate the changing corporate landscape. They wanted to achieve financial value with their content management processes, as well as support strategic goals such as mitigating risk, creating a competitive advantage, and improving customer focus.

The desired capabilities of their new content management system were:

  • Browse menu for structured navigation
  • Google search for keyword searches
  • Structure search pages for advanced search
  • Visual navigation based on their lifecycle framework
Oracle WebCenter Paired with Fishbowl’s ControlCenter

The real estate company chose to work with Fishbowl Solutions because of their WebCenter Content consulting services experience and expertise. WebCenter was an existing technology investment for the company, but it also provided advantages for their content management goals. WebCenter boasts flexible metadata and document profiles, as well as robust version control. Additionally, there is tiered security, a flexible workflow engine, and integration with other applications such as JD Edwards. The other reason the company chose Fishbowl was because of its ControlCenter product. ControlCenter is an all-in-one solution for ensuring compliance with regulatory standards and automating document control. It has a dedicated user interface and extends the functionality of WebCenter with document control, knowledge management, compliance, and auditability.

After consulting the company on their existing situation and goals, Fishbowl implemented ControlCenter with Oracle WebCenter Content for the employees to manage, maintain, and share corporate knowledge, assembled manuals, and real estate archives. ControlCenter provided a modern, mobile-ready interface which included search, retrieval, and document control capabilities. It also offers a role-aware interface, including a dashboard for documents requiring attention, driven by workflow review notifications. Additionally, the interface enabled relevant content to be located based on the phase of a project (e.g., preconstruction) and the project member role (e.g., developer); therefore, employees have the relevant, necessary information for their positions. The system was also integrated with Oracle JD Edwards to sync lease information as lease agreements were kicked off in ControlCenter.

Results

With the implementation in place, the new ControlCenter platform made it easier to train the general user (approximately five to 10 minutes) and cut onboarding cost per employee in half because of better process documentation, knowledge sharing, and mobile access. With that, less time is now spent on training and onboarding. ControlCenter also ensures compliance through a scheduled review process and the highest degree of content and metadata accuracy. Content owners are now aware of out-of-date information, and users can easily find the content anytime and anywhere on both desktop and mobile devices. In addition to creating a more effective workspace, Fishbowl helped reduce project startup costs by $400,000 through the reduction of duplicated work.

The post Fishbowl Solutions’ ControlCenter Drives Down Project-Related Costs by $400,000 for National Builder and Real Estate Management Company appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Chatbot Tech Round Table: Three Real Life Use Cases on How Oracle Chatbots Can Be Integrated into Cloud Applications

Watch this video to gather more insight into chatbot capabilities.

Chatbots are increasingly becoming an excellent tool for organizations to consider when developing their user experience. They provide a fast and engaging way for users to access information more efficiently. In this video, you will learn just a few of the ways chatbots can be used by being integrated into cloud applications. Fishbowl’s John Sim, an Oracle ACE, demonstrates three different scenarios in which chatbots can improve the user experience for an account manager.

What You’ll See:

  • A day in the life of an account manager onsite with a customer using a chatbot
  • How chatbots make onboarding more efficient by providing new sales reps with interactive training
  • How chatbots enhance an account manager's ability to engage with a customer with knowledge from the Oracle mobile cloud

To get even more information about chatbots and how you can better utilize their capabilities, please contact us directly at info@fishbowlsolutions.com or visit our chatbot consulting page.

The post Chatbot Tech Round Table: Three Real Life Use Cases on How Oracle Chatbots Can Be Integrated into Cloud Applications appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Fishbowl Solutions Helps Global Communications Company Leverage Oracle WebCenter to Create a Consumer-Grade Portal Experience for its Employees

An international communications, media, and automotive company based in the United States, with over $18 billion in revenue and 60,000 employees globally, wanted to implement a new, consumer-grade portal to provide a digital workplace where employees can access company-wide information, as well as share tools and resources.

Business Objective

The company was challenged with having to manage and maintain four different portals across their divisions. Each of these had its own set of features, including separate collaboration systems, designs that did not comply with the company’s current branding and style guidelines, and ten-year-old portal technology that was no longer supported.

Overall, the company envisioned a single portal to engage employees as ambassadors and customers by surfacing news about products and key initiatives, and additionally to provide employees with a broader knowledge of the entire company beyond their divisions. In considering this vision, the company outlined their objectives:

  • Connect and engage employees by providing them with easy access to company, department news, resources and tools
  • Improve user experience (UX) design and content restructuring of employee information systems
  • Enhance and increase employee collaboration
“To create a connected, consumer-like digital experience that promotes collaboration, sparks innovation, and helps employees get their jobs done, any place, any time, on any device”

Company Mission Statement for the New Portal

Portal Solution Accelerator Implementation

After evaluating several enterprise portal platforms, the company chose Oracle WebCenter as the system they would use to build their employee digital workplace portal. Oracle WebCenter includes content management and portal components. It was chosen in particular for its scalability and performance (backed by Oracle Database), its ability to target and personalize content based on metadata, its flexibility to provide integrations with third-party collaboration systems, and its ability to integrate with Oracle applications including E-Business Suite, PeopleSoft, and Taleo Cloud Service.

The company had limited Oracle WebCenter development and implementation experience and resources, so they sought out partners to help with their new portal implementation. Fishbowl Solutions was chosen based on their vast Oracle WebCenter experience and expertise. Additionally, Fishbowl offered a portal jumpstart framework called Portal Solution Accelerator (PSA) that provided additional software capabilities to drive better user experience and overall performance. This includes integrating content to be consumed on the portal using single-page applications (SPA) instead of Oracle Application Development Framework (ADF) taskflows. SPA taskflows are more lightweight and can therefore be more easily consumed on the portal without impacting performance. SPA taskflows also enable the use of other front-end design frameworks, such as Oracle JET (JavaScript Extension Toolkit), enabling web designers and marketers to develop their own components with basic JavaScript, HTML, and CSS knowledge.

Fishbowl Solutions leveraged its PSA to address seven critical capabilities the company wanted from the new portal:

  • Hybrid Content Integration – Ability to make quick updates/edits to content on the portal via a web inline editor, while having new content be checked in via profiles using Oracle WebCenter Content
  • Personalization – Content targeted to individuals based on such user attributes as Division, Department, Company, Location, Management, and Employee Type
  • Security – Leveraged roles and groups from Oracle Unified Directory to drive security. Fishbowl’s Advanced User Security Mapping (AUSM) software was used to ease user management because it enables rules to be created to map LDAP attributes to WebCenter roles (participant, contributor, administrator, etc.)
  • Collaboration – Integrated 3rd-party collaboration system, Jive, into the portal user experience so that users can see activity stream and collaborate with others in-context of the portal
  • Application Integration – Integrated with PeopleSoft Human Capital Management to pull additional employee data onto the portal. This was needed for upper management to be able to quickly view HR-related tasks on mobile devices.
  • Content & People Search – Content indexed by the Google Search Appliance is made available by searching on the portal where secure results are returned
  • Optimal Portal Performance – Leveraged local Oracle Coherence cache available per node in WebCenter Portal, while Redis was used as a means to create a central publishing model for updated content to the cache
Results

The company officially launched the new employee portal in July of 2017. Since then, user feedback has been very positive. The value-add capabilities of Fishbowl PSA – standard portal page templates and layouts, mega-menu navigation, role-based content contribution using Oracle WebCenter Content – meant the company could focus on implementation and not custom development. This reduced time-to-market by 25 percent. Typically, the company has around 1,500 concurrent users on the home page, which loads in about 4.5 seconds. Secondary page visits take around 2.5 seconds to load. This performance is easily tracked, as the company sees around 40,000 active users each week with minimal complaints or issues reported. The load times have exceeded expectations.

It has been reported that 92 percent of user sessions occur from the desktop, 5 percent from smartphones, and 2 percent from tablets. The most popular portal page is the Home page, followed by Time Reporting, Jobs, My Pay, and Employee Discounts.

Overall, the new portal has provided employees with a broader knowledge of the entire company beyond their position, division, and department, while bringing together one unified message and brand.

“This is a very well put together site. I will definitely use it more than the old portal.”

Field Service Representative

The post Fishbowl Solutions Helps Global Communications Company Leverage Oracle WebCenter to Create a Consumer-Grade Portal Experience for its Employees appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Using CFDOT (CF Diego Operator Toolkit) on Pivotal Cloud Foundry

Pas Apicella - Tue, 2018-06-19 22:12
I decided to use CFDOT (CF Diego Operator Toolkit) on my PCF 2.1 vSphere env today. Setting it up isn't required, as it's installed out of the box on the Bosh-managed Diego Cells, as shown below. It gives nice detailed information around Cell capacity and other useful metrics.

1. SSH into Ops Manager VM

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-165$ ssh ubuntu@opsmgr.haas-165.mydns.com
Unauthorized use is strictly prohibited. All access and activity
is subject to logging and monitoring.
ubuntu@opsmgr.haas-165.mydns.com's password:
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-124-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

...

ubuntu@bosh-stemcell:~$

At this point you will need to log into the Bosh Director as described below


2. Issue a command as follows once logged in to get all VMs. We just need the name of one of the Diego Cell VMs

ubuntu@bosh-stemcell:~$ bosh -e vmware vms --column=Instance --column="Process State"
Using environment '1.1.1.1' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.read, bosh.admin)

Task 12086
Task 12087
Task 12086 done

Task 12087 done

Deployment 'cf-edc48fe108f1e5581fba'

Instance                                                            Process State
backup-prepare/eff97a4b-15a2-425c-8333-1dbaaefbb5ff                 running
clock_global/d77c485f-7d7c-43ae-b9de-584411ffa0bd                   running
cloud_controller/874dd06c-b76e-427a-943e-dea66f0345b6               running
cloud_controller/bba1819e-b7f4-4a34-897a-c78f6189667c               running
cloud_controller_worker/803bfb3f-653b-4311-b831-9b76e602714e        running
cloud_controller_worker/f5956edb-9510-4d99-a0f7-8545831b45ec        running
consul_server/3bfdc6bd-2f1d-4607-8564-148fadd4bc3d                  running
consul_server/4927cc4b-4531-429b-b379-83e283b779ba                  running
consul_server/69c1c5ee-8288-49bd-9112-afe05fe536f4                  running
diego_brain/01d3914c-2ab1-4b75-ada7-2267f34faee6                    running
diego_brain/564cf558-c2dc-4045-a4d1-54f633633dd6                    running
diego_brain/a22c2621-4278-4a83-94ee-34287deb9310                    running
diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf                     running
diego_cell/9452a3b4-d40c-49f1-9dbf-8d74202f7dff                     running
diego_cell/dfc8e214-2e59-4050-9312-1113662ce79f                     running

...

3. SSH into a BOSH-managed Diego Cell VM. Use the correct name for one of your Diego Cells and the deployment name for CF itself.

ubuntu@bosh-stemcell:~$ bosh -e vmware -d cf-edc48fe108f1e5581fba ssh diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf
Using environment '1.1.1.1' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.read, bosh.admin)

Using deployment 'cf-edc48fe108f1e5581fba'

....

4. Switch to the root user with "sudo su -"

diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf:~$ sudo su -

5. Verify the cfdot CLI is installed by running "cfdot"

diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf:~# cfdot
A command-line tool to interact with a Cloud Foundry Diego deployment

Usage:
  cfdot [command]

Available Commands:
  actual-lrp-groups            List actual LRP groups
  actual-lrp-groups-for-guid   List actual LRP groups for a process guid
  cancel-task                  Cancel task
  cell                         Show the specified cell presence
  cell-state                   Show the specified cell state
  cell-states                  Show cell states for all cells
  cells                        List registered cell presences
  claim-lock                   Claim Locket lock
  claim-presence               Claim Locket presence
  create-desired-lrp           Create a desired LRP
  create-task                  Create a Task
  delete-desired-lrp           Delete a desired LRP
  delete-task                  Delete a Task
  desired-lrp                  Show the specified desired LRP
  desired-lrp-scheduling-infos List desired LRP scheduling infos
  desired-lrps                 List desired LRPs
  domains                      List domains
  help                         Get help on [command]
  locks                        List Locket locks
  lrp-events                   Subscribe to BBS LRP events
  presences                    List Locket presences
  release-lock                 Release Locket lock
  retire-actual-lrp            Retire actual LRP by index and process guid
  set-domain                   Set domain
  task                         Display task
  task-events                  Subscribe to BBS Task events
  tasks                        List tasks in BBS
  update-desired-lrp           Update a desired LRP

Flags:
  -h, --help   help for cfdot

Use "cfdot [command] --help" for more information about a command.

6. Let's see the total capacity of each Diego Cell

diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf:~# cfdot cells | jq .
{
  "cell_id": "7ca12f7d-737f-47fb-a8bc-91d73e4791cf",
  "rep_address": "http://10.193.229.62:1800",
  "zone": "RP01",
  "capacity": {
    "memory_mb": 16047,
    "disk_mb": 103549,
    "containers": 249
  },
  "rootfs_provider_list": [
    {
      "name": "preloaded",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "preloaded+layer",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "docker"
    }
  ],
  "rep_url": "https://7ca12f7d-737f-47fb-a8bc-91d73e4791cf.cell.service.cf.internal:1801"
}
{
  "cell_id": "9452a3b4-d40c-49f1-9dbf-8d74202f7dff",
  "rep_address": "http://10.193.229.61:1800",
  "zone": "RP01",
  "capacity": {
    "memory_mb": 16047,
    "disk_mb": 103549,
    "containers": 249
  },
  "rootfs_provider_list": [
    {
      "name": "preloaded",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "preloaded+layer",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "docker"
    }
  ],
  "rep_url": "https://9452a3b4-d40c-49f1-9dbf-8d74202f7dff.cell.service.cf.internal:1801"
}
{
  "cell_id": "dfc8e214-2e59-4050-9312-1113662ce79f",
  "rep_address": "http://10.193.229.63:1800",
  "zone": "RP01",
  "capacity": {
    "memory_mb": 16047,
    "disk_mb": 103549,
    "containers": 249
  },
  "rootfs_provider_list": [
    {
      "name": "preloaded",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "preloaded+layer",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "docker"
    }
  ],
  "rep_url": "https://dfc8e214-2e59-4050-9312-1113662ce79f.cell.service.cf.internal:1801"
}

7. Finally, let's see what resources remain available on each Diego Cell

diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf:~# cfdot cell-states | jq -r '"Cell Id -> \(.cell_id): LRPs -> \(.LRPs | length), Available Resources [MemoryMB] -> \(.AvailableResources.MemoryMB), Available Resources [DiskMB] -> \(.AvailableResources.DiskMB), Available Resources [Containers] -> \(.AvailableResources.Containers)"'

Cell Id -> 7ca12f7d-737f-47fb-a8bc-91d73e4791cf: LRPs -> 17, Available Resources [MemoryMB] -> 6843, Available Resources [DiskMB] -> 86141, Available Resources [Containers] -> 232
Cell Id -> 9452a3b4-d40c-49f1-9dbf-8d74202f7dff: LRPs -> 14, Available Resources [MemoryMB] -> 5371, Available Resources [DiskMB] -> 89213, Available Resources [Containers] -> 235
Cell Id -> dfc8e214-2e59-4050-9312-1113662ce79f: LRPs -> 14, Available Resources [MemoryMB] -> 4015, Available Resources [DiskMB] -> 89213, Available Resources [Containers] -> 235
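
Because cfdot emits one JSON document per cell, the same stream is easy to aggregate with jq. A small sketch of my own (not part of the original output; jq is already present on the cell) that totals the remaining headroom across all cells:

# Slurp the per-cell JSON documents into an array and sum the free memory
cfdot cell-states | jq -s 'map(.AvailableResources.MemoryMB) | add'

# Same idea for free container slots
cfdot cell-states | jq -s 'map(.AvailableResources.Containers) | add'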

More Information

https://github.com/cloudfoundry/cfdot


Categories: Fusion Middleware

Top 10 Albums Meme

Greg Pavlik - Fri, 2018-05-25 21:27

I’ve been hit by a barrage of social media posts on people’s top 10 albums, so I thought I would take a look at what I have listened to the most in the last 5 years or so. I’m not claiming these are my favorites or “the best” albums recorded (in fact there are many better albums I enjoy). But I was somewhat surprised to find that I do return to the same albums over and over, so here’s the top 10, in no particular order.

1) Alina, Arvo Part

If you were going to stereotype and box in Part’s work, this would be a good album to use. It’s also amazing enough that it could run on a continuous loop forever and I’d be pretty happy with that.

2) Benedicta: Marian Chants from Norcia, Monks of Norcia

Yes, the music hasn’t changed much since the Middle Ages. And yes, these are actually monks singing, who somehow managed to top the Billboard charts. The term to use is sublime – this is quintessentially music of peace, and another album that bears repetition with ease.

3) Mi Sueno, Ibrahim Ferrer

I know the whole Buena Vista Social Club thing was trendy, but this music – Cuban bolero to be precise – is full of passion, charm, and romance: it is music for human beings (which is harder and harder to find these days). This is at once a work of art and a testament to real life.

4) Dream River, Bill Callahan

I don’t even know what to categorize this music as: it’s not popular music, rock, easy listening, country or folk. But it has elements of most of those. Callahan’s baritone voice sounds like someone is speaking to you rather than singing. This album just gets better with the years of listening and it’s by far his best.

5) The Harrow and the Harvest, Gillian Welch

Appalachian roots, contemporary musical twists – I don’t know what they call this: alt-bluegrass? In any case, it’s Welch’s best album and a solid, if somewhat dark, listen.

6) In the Spur of the Moment, Steve Turre

Turre does his jazz trombone (no conch shells on this album – which I am happy about) along with Ray Charles on piano for the first third or so, later trending toward more Afro-Cuban jazz style. I know the complaint on this one is that it feels a bit passionless in parts, but it’s a hard mix not to feel good about.

7) Treasury of Russian Gypsy Songs, Marusia Georgevskaya and Sergei Krotkoff

I’ll admit that it sounds like Georgevskaya has smoked more than a few cigarettes. But this is timeless music, a timeless voice, from a timeless culture. Sophie Milman’s Ochi Chernye is sultry and seductive (she is really fantastic), but somehow I like Marusia’s better.

9) Skeleton Tree, Nick Cave

Nick Cave is uneven at best and often mediocre, but this album is distilled pain in poetic form and a major work of art. For some reason I listen to this end to end semi-regularly on my morning commute.

10) Old Crow Medicine Show, Old Crow Medicine Show

End to end, it just hits the right notes over and over again. From introspective to political to just plain fun, these guys made real music for real people at their peak. Things fell apart after Willie Watson left, but this is an almost perfect collection of authentic songs.

Deploying a Spring Boot Application on a Pivotal Container Service (PKS) Cluster on GCP

Pas Apicella - Wed, 2018-05-09 00:31
I have been "cf pushing" for as long as I can remember, so with Pivotal Container Service (PKS) let's walk through the process of deploying a basic Spring Boot application to a PKS cluster running on GCP.

A few assumptions:

1. PKS is already installed as shown by my Operations Manager UI below



2. A PKS Cluster already exists as shown by the command below

pasapicella@pas-macbook:~$ pks list-clusters

Name        Plan Name  UUID                                  Status     Action
my-cluster  small      1230fafb-b5a5-4f9f-9327-55f0b8254906  succeeded  CREATE

Example:

We will be using this Spring Boot application at the following GitHub URL

  https://github.com/papicella/springboot-actuator-2-demo


1. In this example my Spring Boot application has what is required in its Maven pom.xml to build a Docker image, as shown below
  
<!-- tag::plugin[] -->
<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.3.6</version>
    <configuration>
        <repository>${docker.image.prefix}/${project.artifactId}</repository>
        <buildArgs>
            <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
    </configuration>
</plugin>
<!-- end::plugin[] -->

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
        <execution>
            <id>unpack</id>
            <phase>package</phase>
            <goals>
                <goal>unpack</goal>
            </goals>
            <configuration>
                <artifactItems>
                    <artifactItem>
                        <groupId>${project.groupId}</groupId>
                        <artifactId>${project.artifactId}</artifactId>
                        <version>${project.version}</version>
                    </artifactItem>
                </artifactItems>
            </configuration>
        </execution>
    </executions>
</plugin>

2. Once the Docker image was built, I pushed it to Docker Hub as shown below
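
With the plugin in place, building and publishing the image is a short sequence. A sketch, assuming the Docker daemon is running and that ${docker.image.prefix} resolves to your Docker Hub account (the "pasapples" prefix below is a placeholder):

# Build the jar, then the Docker image; dockerfile:build is a goal of the
# com.spotify dockerfile-maven-plugin configured above
mvn clean package dockerfile:build

# Push the image to Docker Hub under your account prefix
docker push pasapples/springboot-actuator-2-demo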



3. Now we will need a PKS cluster as shown below before we can continue

pasapicella@pas-macbook:~$ pks cluster my-cluster

Name:                     my-cluster
Plan Name:                small
UUID:                     1230fafb-b5a5-4f9f-9327-55f0b8254906
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   cluster1.pks.pas-apples.online
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  192.168.20.10

4. Now we want to wire up "kubectl" to our new cluster using the following command

pasapicella@pas-macbook:~$ pks get-credentials my-cluster

Fetching credentials for cluster my-cluster.
Context set for cluster my-cluster.

You can now switch between clusters by using:
$kubectl config use-context

pasapicella@pas-macbook:~$ kubectl cluster-info
Kubernetes master is running at https://cluster1.pks.pas-apples.online:8443
Heapster is running at https://cluster1.pks.pas-apples.online:8443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://cluster1.pks.pas-apples.online:8443/api/v1/namespaces/kube-system/services/kube-dns/proxy
monitoring-influxdb is running at https://cluster1.pks.pas-apples.online:8443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

5. Now we are ready to deploy a Spring Boot workload to our cluster. To do that, let's download the YAML file below

https://github.com/papicella/springboot-actuator-2-demo/blob/master/lb-withspringboot.yml
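
For reference, the manifest pairs a LoadBalancer Service with a single-replica Deployment. Here is a minimal sketch of an equivalent manifest applied inline; the Service and Deployment names match the kubectl output below, while the image name, labels, and API versions are my assumptions rather than the file's exact contents:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-service
spec:
  type: LoadBalancer
  ports:
  - port: 8080        # port exposed by the GCP load balancer
    targetPort: 8080  # port the Spring Boot app listens on
  selector:
    app: spring-boot
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-boot
  template:
    metadata:
      labels:
        app: spring-boot
    spec:
      containers:
      - name: spring-boot
        image: pasapicella/springboot-actuator-2-demo:latest  # assumed image name
        ports:
        - containerPort: 8080
EOF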

Once downloaded, create the deployment as follows

$ kubectl create -f lb-withspringboot.yml

pasapicella@pas-macbook:~$ kubectl create -f lb-withspringboot.yml
service "spring-boot-service" created
deployment "spring-boot-deployment" created

6. Now let’s verify our deployment using some kubectl commands as follows

$ kubectl get deployment spring-boot-deployment
$ kubectl get pods
$ kubectl get svc

pasapicella@pas-macbook:~$ kubectl get deployment spring-boot-deployment
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
spring-boot-deployment   1         1         1            1           1m

pasapicella@pas-macbook:~$ kubectl get pods
NAME                                     READY     STATUS    RESTARTS   AGE
spring-boot-deployment-ccd947455-6clwv   1/1       Running   0          2m

pasapicella@pas-macbook:~$ kubectl get svc
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
kubernetes            ClusterIP      10.100.200.1     <none>          443/TCP          23m
spring-boot-service   LoadBalancer   10.100.200.137   35.197.187.43   8080:31408/TCP   2m

7. Using the external IP address that GCP exposed for us, we can access our Spring Boot application on port 8080, as shown below. In this example

http://35.197.187.43:8080/



RESTful Endpoint

pasapicella@pas-macbook:~$ http http://35.197.187.43:8080/employees/1
HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Wed, 09 May 2018 05:26:19 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "employee": {
            "href": "http://35.197.187.43:8080/employees/1"
        },
        "self": {
            "href": "http://35.197.187.43:8080/employees/1"
        }
    },
    "name": "pas"
}

More Information

Using PKS
https://docs.pivotal.io/runtimes/pks/1-0/using.html

Categories: Fusion Middleware

A Simple, Straightforward Method to Update Content on WebCenter-Based Portal Pages

In our experience working with numerous WebCenter Portal customers, almost all of whom suffered from failed portal/intranet implementations, the difficulty of updating and quickly editing page content always led to stagnant content throughout the portal. This stagnant content made the portal less sticky, and therefore the organization didn’t realize widespread adoption.

The difficulty of adding and updating content was magnified by the fact that, in most cases, portal page updates were performed by system administrators. As you can imagine, especially in a large organization, relying on a few admins to make page updates across human resources, finance, marketing, and IT departments would cause bottlenecks, and it would be days or weeks before the various business groups saw their new content on the portal. Because the business groups couldn’t really take ownership of the content on the portal, fewer and fewer changes or updates were requested.

To make it easier for customers to update their portals and ultimately realize distributed content authoring, Fishbowl Solutions released its Portal Solution Accelerator (PSA) framework in 2012, which included a profile-driven process for making page updates. Today, one of the most desirable and usable features of Fishbowl’s PSA is the inline editor. This feature enables portal users with the appropriate permissions to edit content directly on portal pages. Before I provide more detail on this feature, it is important to provide some context on how Fishbowl Solutions has made it progressively easier for business users to edit content.

With previous versions of PSA, the process involved the user going to the content server to find the content item. They would then check out the item and use a WYSIWYG-style editor to edit the content. After checking the content item back in, they would return to the portal, refresh the page, and hopefully see the changes they made. To see what I’ve outlined above, watch this video starting at 44:48.

With the current version of PSA, the inline editor was built with the business user in mind. Fishbowl wanted to ensure that anyone with the appropriate permissions could edit page content and that the process could be done directly on the page itself. This ensures that more users across more departments can be involved in keeping portal content fresh, helping companies get more value through higher adoption. An overview of the process is as follows:

  1. Users with the appropriate permissions go to the page they want to update. Hover effects highlight the sections that can be updated.
  2. They then click on the edit icon (pencil on paper) that appears in the bottom right corner of the editable section.
  3. Once clicked, a stylized version of a Content Server profile page appears. Within this profile form, the user can make any changes to page content. At this point, the page (content item) is checked out from the content server.
  4. Once finished, the user clicks out of the profile form. This checks the item back in, which can trigger a workflow process for page reviews. The user who did the editing can then see the page updates in near real time.

To illustrate how easy it is to edit portal page content with Fishbowl’s PSA inline editor, this video shows some examples of editing content, approving those items through workflow, and then seeing the updates.

As you can see, the inline editor feature of Fishbowl’s PSA makes it easy for anyone to update content on the portal. This feature alone ensures that more people within an organization, across departments and roles, can participate in content creation. With content being created more frequently, employees should be more engaged and better informed, leading to higher rates of portal adoption.

The post A Simple, Straightforward Method to Update Content on WebCenter-Based Portal Pages appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Using/Verifying the Autoscale service from Apps Manager UI in 5 minutes

Pas Apicella - Fri, 2018-04-20 04:59
Recently at a customer site I was asked to show how the Autoscale service, shipped by default with Pivotal Cloud Foundry, would work. Here is how we demoed that in less than 5 minutes.

1. Select an application to Autoscale and click on the "Autoscaling" radio option.


2. Select "Manage Autoscaling" link as shown below.


3. Set the maximum instance limit to "4" and click Save as shown below. You can also set the minimum to 1 instance if you want, which makes the scaling easier to verify since a single instance can easily be put under pressure.


4. Now let's set a "Scaling Rule" by clicking on the "Edit" link as shown below.


5. Now let's add a CPU rule by clicking on the "Add" link as shown below.


6. Now define a CPU rule as shown below and click Save. Don't forget to make it active using the radio option. In this example we use very low thresholds, but it would be better to increase them to something more realistic like 30% and 60% respectively.
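
The same limits and rule can also be defined from the command line with the Autoscaler CLI plugin covered at the end of this post. A sketch, assuming that plugin is installed (the 8% upper threshold matches the scaling events shown later; the 2% lower threshold is my assumption):

# Keep the app between 2 and 4 instances
cf update-autoscaling-limits springboot-actuator-appsmanager 2 4

# Scale on CPU: add an instance above 8% CPU, remove one below 2%
cf create-autoscaling-rule springboot-actuator-appsmanager cpu 2 8

# Turn autoscaling on for the app
cf enable-autoscaling springboot-actuator-appsmanager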




At this point we are ready to test the Autoscale service, but to do that we have to create some load. There are many ways to do that, but "ab" (ApacheBench) on my Mac was the fastest.

8. Create some load on an endpoint for your application to force CPU utilization to increase as shown below

pasapicella@pas-macbook:~$ ab -n 10000 -c 25 http://springboot-actuator-appsmanager-delightful-jaguar.cfapps.io/employees
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking springboot-actuator-appsmanager-delightful-jaguar.cfapps.io (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests

....

9. If you return to the Apps Manager UI soon enough, you will see that the Autoscale service has fired events to add more instances, as per the screenshots below.




It's worth noting that the CF CLI plugin for Autoscale can also show us what we have defined, as shown below. More information on this plugin is available here:

https://docs.run.pivotal.io/appsman-services/autoscaler/using-autoscaler-cli.html#install

View which applications are using the Autoscaler service:

pasapicella@pas-macbook:~$ cf autoscaling-apps
Presenting autoscaler apps in org apples-pivotal-org / space development as papicella@pivotal.io
OK
Name                              Guid                                   Enabled   Min Instances   Max Instances
springboot-actuator-appsmanager   6c137fea-6a99-4069-8031-a2aa3978804c   true      2               4
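
The rule we defined can be listed as well; a sketch using the same plugin:

pasapicella@pas-macbook:~$ cf autoscaling-rules springboot-actuator-appsmanager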

View events for an application that has Autoscaler service bound to it:

pasapicella@pas-macbook:~$ cf autoscaling-events springboot-actuator-appsmanager
Presenting autoscaler events for app springboot-actuator-appsmanager for org apples-pivotal-org / space development as papicella@pivotal.io
OK
Time                   Description
2018-04-20T09:56:30Z   Scaled down from 3 to 2 instances. All metrics are currently below minimum thresholds.
2018-04-20T09:55:56Z   Scaled down from 4 to 3 instances. All metrics are currently below minimum thresholds.
2018-04-20T09:54:46Z   Can not scale up. At max limit of 4 instances. Current CPU of 20.75% is above upper threshold of 8.00%.
2018-04-20T09:54:11Z   Can not scale up. At max limit of 4 instances. Current CPU of 30.53% is above upper threshold of 8.00%.
2018-04-20T09:53:36Z   Can not scale up. At max limit of 4 instances. Current CPU of 32.14% is above upper threshold of 8.00%.
2018-04-20T09:53:02Z   Can not scale up. At max limit of 4 instances. Current CPU of 31.51% is above upper threshold of 8.00%.
2018-04-20T09:52:27Z   Scaled up from 3 to 4 instances. Current CPU of 19.59% is above upper threshold of 8.00%.
2018-04-20T09:51:51Z   Scaled up from 2 to 3 instances. Current CPU of 8.99% is above upper threshold of 8.00%.
2018-04-20T09:13:24Z   Scaling from 1 to 2 instances: app below minimum instance limit
2018-04-20T09:13:23Z   Enabled autoscaling.

More Information

https://docs.run.pivotal.io/appsman-services/autoscaler/using-autoscaler-cli.html#install

https://docs.run.pivotal.io/appsman-services/autoscaler/using-autoscaler.html

Categories: Fusion Middleware

The Intelligent Chatbot to Customer Service Agent Hand-Off within Zendesk

Chatbots are on the rise. By 2020, over 80% of businesses are expected to implement some type of chatbot automation (Business Insider, 2016). This type of automation is inevitable given the amount of time and money chatbots can save a business. However, especially in the early days of the chatbot revolution, a bot will not be able to solve all the problems a human can. One specific use case for chatbots that we have examined is customer support. Customer support bots can greatly reduce the workload of support staff, but some customers will not find the support they need with a bot. Wouldn’t it be great if a customer could seamlessly go from talking to a bot to a live person in the same interface? That is exactly what we created at Fishbowl, as you can see in this video.

A conversation with this bot begins in Oracle’s chatbot framework, a feature of Oracle Mobile Cloud Service, much like the rest of our bots. It can do all the integrations our other bots have with systems such as Salesforce, Oracle Engagement Cloud, and the Zendesk Software and Support ticketing system. However, this bot also has the ability to connect to Zendesk’s live chat service for more personal support from a live agent. The bot collects information to pass to the live agent, so the agent knows what was already asked and wastes no time helping the customer.

To move from a bot conversation to a live chat conversation and back again, customizations had to be made to our web client. Since the live chat feature in Oracle’s bot framework is still a work in progress, the best solution was to stop sending messages to the bot after the user goes through the “connect to a live agent” chat flow. Instead, the web client sends messages directly to Zendesk and receives them in turn. Once the conversation has concluded, the web client returns to normal and talks to the bot framework once again.

Customer service is a critical component of the overall customer experience, and getting customers answers to common questions can go a long way toward ensuring brand loyalty. Some stats suggest that 80% of routine questions can be answered by a chatbot, but when an agent is needed, it is important to provide a seamless handoff while giving the agent the context to immediately begin servicing the customer. If integrated correctly, chatbots and customer service/support representatives (agents) can together improve the customer service experience.

You can see more of the intelligent chatbots Fishbowl has created using Oracle Mobile Cloud here: https://www.fishbowlsolutions.com/oracle-intelligent-chatbot-cloud-service-consulting/

The post The Intelligent Chatbot to Customer Service Agent Hand-Off within Zendesk appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Collaborate Preview #2: Consider your Options for Moving Oracle WebCenter to the Cloud

By now, most people have heard about the benefits of cloud computing. To summarize, the cloud promises more agility and scalability, with less cost and administration. However, for legacy customers using on-premise software, getting to the cloud isn't always simple and straightforward. In fact, confusion over deployment options, pricing, customer-managed versus vendor-managed offerings, and security often delays cloud strategies. This is definitely the case for Oracle WebCenter Content customers, who have a myriad of options to move their documents, images, and other enterprise content to the cloud.

Fortunately for Oracle WebCenter customers, Oracle offers the most complete set of cloud services spanning Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). With this set of cloud services, Oracle WebCenter Content customers have industry-leading options to deploy their WebCenter instances to the cloud. Here is a summary of those options:

Oracle Bare Metal Cloud Service (IaaS)
  • Public cloud with granular control of security
  • Computing, block storage, networking services
  • Customer administered
  • Bring WebCenter licenses
  • Administration: High, user-owned
Oracle Compute Cloud Service (IaaS)
  • Computing, block storage, networking services
  • Bring WebCenter licenses
  • Administration: High, user-owned
Oracle Java Cloud Service (PaaS)
  • Full operating environment including WebLogic
  • Complete control and customization
  • Bring WebCenter licenses
  • Administration: Moderate to low
Oracle WebCenter Portal Cloud Service (PaaS)
  • WebCenter Portal in the Cloud
  • Metered or non-metered licenses
  • Administration: Moderate to low

You might be surprised that Oracle WebCenter Portal Cloud Service is listed above as one of the options for Oracle WebCenter Content, but it does present a viable solution. The user experience has always been one of the biggest complaints about WebCenter Content. Moving your content to the cloud and using WebCenter Portal Cloud to create intranets, extranets, composite applications, and self-service portals could deliver a better user experience overall and drive more adoption going forward. It provides users a more secure and efficient means to consume information while also interacting with applications, processes, and other users. The added benefit is that it comes with Oracle WebCenter Content.

We will discuss the options WebCenter Content and Portal customers have for moving their on-premise instances to the Oracle Cloud at Collaborate 2018 during this session: Options and Considerations for Moving Oracle WebCenter Content & Portal to the Cloud, which takes place on Monday, April 23rd from 11:00 AM to 12:00 PM. In this session, Fishbowl’s Director of Solutions, Jerry Aber, will go into more detail about the Oracle Cloud options listed above, as well as what to expect from a pricing perspective. Come hear about considerations for hybrid cloud environments as well, and what that means from an Oracle Cloud architecture perspective.

For more information on all of Fishbowl’s activities at Collaborate 2018, please visit this page: https://www.fishbowlsolutions.com/about/news/collaborate/

The post Collaborate Preview #2: Consider your Options for Moving Oracle WebCenter to the Cloud appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other
