Mathias Magnusson

Changing the world, one line of code at a time

Improving data move on EXADATA III

Wed, 2013-05-29 07:00

Moving to history tables

In the last post I talked about how we made the writing of all those log records much faster. To date it has been so fast that not a single problem report has been filed. You can find that post here.

Once the data was written to the log tables, it had to be moved to the history tables. This was a process that took around 16 hours, though it was never allowed to run for that long as it had to be stopped before the business day started.

This move was done from an instance on EXADATA to a database on an old server. Yes, I can hear you all think “AHA! It must be the slow database link.” That was the leading thought when I started looking at it. And yes, it sure was slow, but no, it did not have a big impact. The other area that had been tuned and tweaked over and over and over and … was the insert over the database link to the history tables. Sure enough, it was taking a long time. However, measuring it showed that it accounted for only 20% of the total time. Reducing that would let us save over three hours. While that would be good, where did the rest of the time go?

It went to a place no one had suspected, tweaked, or even measured. What else is new?

It was the part of the process that was EXADATA only. It was the delete of the rows that had been moved. Huh? How could this be? Well, it turns out that deleting data based on an in-clause was not as fast as one would think (or at least want). The process was based on selecting a set of primary key values and putting them into a temporary table; this table was then used to identify the rows to insert into the history table and the rows to delete.
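
To make the pattern concrete, the nightly batch boiled down to something like the following sketch. All table, column and database link names here are made up for illustration; the actual code is not shown in this post.

  -- Hypothetical sketch of the original move, not the actual production code.
  -- 1. Collect the primary keys of the rows to be moved.
  INSERT INTO tmp_move_keys (log_id)
    SELECT log_id
      FROM log_events
     WHERE log_ts < TRUNC(SYSDATE);

  -- 2. Copy the rows to the history table, here over a database link.
  INSERT INTO log_history@hist_db (log_id, log_ts, payload)
    SELECT e.log_id, e.log_ts, e.payload
      FROM log_events e
     WHERE e.log_id IN (SELECT k.log_id FROM tmp_move_keys k);

  -- 3. Delete the moved rows, the step that turned out to dominate the runtime.
  DELETE FROM log_events e
   WHERE e.log_id IN (SELECT k.log_id FROM tmp_move_keys k);

  COMMIT;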

Yes, there are quite a few things in this process that one could attempt to optimise. However, no matter what, the speed would probably not be fast enough. And even if it ended up being, would it handle the projected growth of the business? Is there really no better way than essentially row-by-row processing?

Sure there is. Sometimes SQL is not the best or only tool at our disposal.

Everything doesn’t have to be done with SQL. ™

We had already removed the indexes, so the delete should now be faster. It was, but just barely fast enough. Thus, with just that change we had squeezed into the seemingly unattainable window for this process. But the business is growing, and within weeks we would be back to tuning and tweaking.

Back to the idea of not using SQL for everything. But first, let’s revisit the approach that led to success with the write speed. What assumptions are being made that we can question? Well… why are we writing this data over a database link to the slowest database we have in our production environment? It has always been that way, and yes, we were worried about the impact of doing this on the EXADATA: both the impact of letting ad-hoc searches run there and the impact of storing all this data on the EXADATA. The storage concern is well founded, as the log data takes up close to 1.5 TB and the volume of logs written is increasing.

However, when we questioned this we all agreed that these were assumed problems and assumed solutions to those problems. Based on that, a PoC was produced to show what would happen if we kept the historic log data in the same database instance on the EXADATA.

With the historic tables in the same database, we get a whole new set of tools to use. I built a PoC showing how data can be moved from the operational tables (the ones the logs are written to) to the historic ones in under a second for a whole day’s volume. To achieve this I range-partitioned the table with the time the log record was inserted as the partition key. The next part is to use a database feature called partition exchange.

When exchanging a partition, no data is actually moved. The partition with today’s data is, via partition exchange, moved from table A to table B. However, this move only updates metadata in the database; that is to say, the only change is which table the partition belongs to. The rows in the partition remain in exactly the same physical location on disk as they were from the beginning. They are not changed, not read, and not touched in any way.

This is what makes such a move so fast. Even better, it is transactionally safe. If a query started while the partition belonged to table A, those rows will still be read even though the partition was moved to another table in the middle of that query. Queries on table A that start after the move will of course not see the moved data at all.
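
As a sketch of what the PoC boils down to: assume a hypothetical operational table LOG_EVENTS, a history table LOG_HISTORY with the same columns and partitioning scheme, and a non-partitioned staging table LOG_STAGE used as the vehicle for the exchange. The real DDL is not shown in this post, so treat the names and the daily partition layout as assumptions.

  -- Hypothetical sketch; table and partition names are made up for illustration.
  CREATE TABLE log_events (
    log_id   NUMBER,
    log_ts   DATE,
    payload  VARCHAR2(4000)
  )
  PARTITION BY RANGE (log_ts) (
    PARTITION p20130528 VALUES LESS THAN (DATE '2013-05-29'),
    PARTITION p20130529 VALUES LESS THAN (DATE '2013-05-30')
  );

  -- History table with the same columns and the same partitioning scheme.
  CREATE TABLE log_history (
    log_id   NUMBER,
    log_ts   DATE,
    payload  VARCHAR2(4000)
  )
  PARTITION BY RANGE (log_ts) (
    PARTITION p20130528 VALUES LESS THAN (DATE '2013-05-29'),
    PARTITION p20130529 VALUES LESS THAN (DATE '2013-05-30')
  );

  -- Non-partitioned staging table matching the structure of the partitioned tables.
  CREATE TABLE log_stage AS SELECT * FROM log_events WHERE 1 = 0;

  -- Move yesterday's partition: two metadata-only operations, no rows are read or copied.
  ALTER TABLE log_events  EXCHANGE PARTITION p20130528 WITH TABLE log_stage WITHOUT VALIDATION;
  ALTER TABLE log_history EXCHANGE PARTITION p20130528 WITH TABLE log_stage WITHOUT VALIDATION;

In a real implementation the daily partitions would of course be created and named automatically, for example with interval partitioning, but that is beyond the scope of this sketch.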

Moving millions or billions of rows in under a second is something that cannot be done with plain insert-and-delete SQL, no matter how much one tunes it. So again, SQL is not the only tool at your disposal.

With this we proved that the process can be fast enough. I have not discussed it here, but during this work we also showed that the ad-hoc searches were of no concern either. EXADATA smart scan handles the ad-hoc queries very well, and most of them get sub-second response times even with no indexes. This is for tables with over a billion rows. Smart scan is one part of it and storage indexes are another. I will not discuss those in these posts, but take my word for it: when the query times were presented, the concern was immediately forgotten.

In the next post in this series, I will discuss how we dealt with the concern over the amount of disk space we would use now and in the future if we let the historic data stay on the EXADATA.


Improving data move on EXADATA II

Wed, 2013-05-22 07:00

Writing log records

The last post in this series introduced the problem briefly. You can find that post here.

In this post I’ll talk about the changes made to make the writing of log records fast enough. There were around 50 million records written per day, pretty much each of them in its own transaction. Of course the commit activity caused problems, as did log buffer issues. Some of this could be somewhat remedied with configuration.
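
As an illustration of what such configuration can look like, the parameters below are the usual suspects for heavy commit activity. They are standard Oracle parameters, but the post does not say which settings were actually changed, so treat the choice and the values purely as an example.

  -- Hypothetical example only; the post does not state which parameters were changed.
  -- Let commits be group-written and not wait for the redo write
  -- (trades a small durability window on crash for less log file sync waiting).
  ALTER SYSTEM SET commit_logging = 'BATCH'  SCOPE = BOTH;
  ALTER SYSTEM SET commit_wait    = 'NOWAIT' SCOPE = BOTH;

  -- A larger log buffer can help absorb bursts of many small transactions (requires a restart).
  ALTER SYSTEM SET log_buffer = 268435456 SCOPE = SPFILE;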

The big issue, though, was that the writes themselves took too much time, and too often many sessions ended up in long contention chains. Yes, it would have been great to have the luxury of redesigning the whole logging setup from the ground up. But, as is often the case, the connecting systems were implemented in such a way that redesigning was not an option. Fixing the performance had to be done without requiring code changes to the systems performing the logging. Oh, joy.

So what caused the problem then? For the inserts it was pretty straightforward: too many transactions each making an insert and a commit. This caused the indexes to become hotspots where all processes wanted to write at the same spot. Hash partitioning had been introduced, and that had led to less contention but slower performance: as the partitions were located on different parts of the disks, the write head had to be moved constantly, and that caused slower service times.

What could we do to make a big improvement while not touching the code? We’re not talking about just a 10–20% improvement in some area in this case, and even more important was to make the performance stable. That is, the most important thing was to ensure that there were no spikes where an insert suddenly took 20 times longer than usual. The contention chains that were occurring made performance spike such that the whole system became unusable.

The solution here turned out to be something as far from advanced technology as questioning assumptions. The first time I asked “why do we have these indexes”, most people in the room thought I was just joking around. Eventually they realised that I was serious. After an amusing period of silence where I could see them thinking “Do we need to inform him that indexes are needed to enforce uniqueness and to support referential integrity?”, someone went ahead and did just that. OK, now we were on to a productive discussion, as of course that wasn’t what I meant. The follow-up discussion about why we needed referential integrity and uniqueness for this set of data was very enlightening for everyone. To make a long story short, they were not needed at all. They were there because they had always been there and nobody had questioned the need before.

How come we didn’t need the data to be unique? Well, this is log data; that is, it tells us what actions have been performed by the system. If some activity were reported twice, it really wouldn’t be the end of the world. The opposite problem, that some activity isn’t logged at all, cannot be handled by defining unique constraints. That is pure system design and nothing I could improve or worsen by removing some indexes.

Thus, the indexes were removed together with the foreign keys (referential integrity).
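
For illustration, a change like this boils down to DDL along the following lines. The table, constraint and index names are made up, as the actual schema is not shown in these posts.

  -- Hypothetical names; the real schema is not shown in the post.
  -- Drop the foreign key so inserts no longer have to validate the parent lookup.
  ALTER TABLE log_events DROP CONSTRAINT fk_log_events_session;

  -- Drop the primary key and its supporting index, since uniqueness is not needed here.
  ALTER TABLE log_events DROP CONSTRAINT pk_log_events DROP INDEX;

  -- Drop any remaining secondary index that was only there because it had always been there.
  DROP INDEX ix_log_events_ts;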

Sounds simple enough, but did it help? Did it ever! In the month since making the change, there has not been a single report of a transaction coming anywhere close to taking too long. This simple solution made the logging so fast that it is no longer a concern.

The next post in this series will discuss the solution for moving data to the history tables. This process took around 16 hours, and it had to become at least three times as fast. As you’ll see, moving all these rows can be done much faster than that.


Faster data move on EXADATA I

Wed, 2013-05-15 07:00

Introduction

In my work I, among other things, tune and tweak solutions for EXADATA. Today I’ll write about a big improvement we achieved in a process that moves data from the operational tables to the ones where the history is stored.

This will not be a technical post. While I talk about using advanced technologies, I will not discuss code or deep details of them in this post.

And yes, when I say post, I mean a series of posts. This will be too long to be a single post. I’ll break it up into an introduction and then a post on each area of improvement.

Let’s first discuss the before situation. This set of tables is logged to during the day. These log records are needed both to investigate how transactions were executed and to satisfy legal requirements. It is a highly regulated industry, and for good reason, as mistakes could put someone’s life in danger.

In this situation the solutions were writing around 50 million log records per day to five tables. These tables all had a primary key based on a sequence, and referential integrity was also set up. This meant that for the indexes, all processes were writing to the same place on disk, and the lookups for the referential integrity checks hit the same place. An attempt to remedy some of this had been made by hash partitioning the tables. The write activity was intense enough during the day that most of the logging had to be turned off, as the solution was otherwise too slow. This of course has legal as well as diagnostic implications.
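
As a rough illustration only (this is, after all, not a technical post), the setup described above corresponds to something like the sketch below, with all names made up:

  -- Hypothetical sketch of the kind of schema described; names are made up.
  CREATE SEQUENCE log_events_seq;

  CREATE TABLE sessions (
    session_id NUMBER PRIMARY KEY
  );

  CREATE TABLE log_events (
    log_id     NUMBER NOT NULL,
    session_id NUMBER NOT NULL,
    log_ts     DATE   NOT NULL,
    payload    VARCHAR2(4000),
    CONSTRAINT pk_log_events PRIMARY KEY (log_id),
    CONSTRAINT fk_log_events_session
      FOREIGN KEY (session_id) REFERENCES sessions (session_id)
  )
  -- Hash partitioning added in an attempt to spread out the insert hotspot.
  PARTITION BY HASH (log_id) PARTITIONS 16;

Since the primary key values come from a sequence, every insert wants to write to the right-hand edge of the primary key index, which is where the hotspot comes from.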

What’s worse is that once all that data was written, it had to be moved to another database where the history is kept. This process was even slower, and the estimate for moving one day’s worth of data was 16 hours. It never did run for that long, as it was not allowed to run during the day; it had to start after midnight and finish before 7 am. As a result the backlog built up every night until logging was turned off for a while, and the move then caught up a little every night.

This series will have the following parts:

  1. Introduction (this post)
  2. Writing log records
  3. Moving to history tables
  4. Reducing storage requirements
  5. Wrap-up and summary

The plan is to publish one part each week. Hopefully I’ll have time to publish some more technical posts between the posts in this series.

