Big Endian and Little Endian operating systems

Last update: 3rd June 2019

Have you ever wondered, or needed to know, which systems use big endian and which ones use little endian? Look no further, you can query it right out of the Oracle database:

SQL> SELECT platform_name, endian_format
FROM V$TRANSPORTABLE_PLATFORM
ORDER BY PLATFORM_NAME;

 

PLATFORM_NAME                       ENDIAN_FORMAT
----------------------------------- -------------
AIX-Based Systems (64-bit)          Big
Apple Mac OS                        Big
Apple Mac OS (x86-64)               Little
HP IA Open VMS                      Little
HP Open VMS                         Little
HP Tru64 UNIX                       Little
HP-UX (64-bit)                      Big
HP-UX IA (64-bit)                   Big
IBM Power Based Linux               Little / Big
IBM zSeries Based Linux             Big
Linux IA (32-bit)                   Little
Linux IA (64-bit)                   Little
Linux OS (S64)                      Big
Linux x86 64-bit                    Little
Microsoft Windows IA (32-bit)       Little
Microsoft Windows IA (64-bit)       Little
Microsoft Windows x86 64-bit        Little
Solaris Operating System (x86)      Little
Solaris Operating System (x86-64)   Little
Solaris[tm] OE (32-bit)             Big
Solaris[tm] OE (64-bit)             Big
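
If you need the same information from application code rather than from SQL*Plus, a minimal JDBC sketch could look like this (the connection URL, user and password are placeholders; you also need SELECT access to V$TRANSPORTABLE_PLATFORM):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EndianCheck
{
  public static void main(String[] args) throws Exception
  {
    // Placeholder connection details - adjust them to your environment
    try (Connection conn = DriverManager.getConnection(
           "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
           "SELECT platform_name, endian_format " +
           "FROM v$transportable_platform ORDER BY platform_name"))
    {
      while (rs.next())
      {
        // Print each platform together with its endian format
        System.out.println(rs.getString(1) + " -> " + rs.getString(2));
      }
    }
  }
}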

Loading data fast – Batch inserts and errors (DML error logging)

In the last post of my series "Loading data fast" I showed how batch inserts can make a huge difference in insert performance. Now one question remains: What happens when an error occurs, like a unique key violation? The answer is: The insert statement will fail with an error and stop. If you don't catch the exception, it will be raised and the commit will never be issued, which means you lose all previously inserted data as well. However, if you catch the exception and make sure that you issue a commit afterwards, you will at least keep the previously inserted data in the table, if that is suitable for the business logic – basically meaning that you know where you've stopped. Let's have a look:

In Java you will have to run the PreparedStatement.executeBatch() call inside a try block and execute Connection.commit() in the finally block, like this:


PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch insert */ INTO testloadtable (id, text) VALUES (?,?)");

for (int i=0;i<ROWS;i++)
{
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}

try
{
  stmt.executeBatch();
}
catch (BatchUpdateException e)
{
  System.out.println("Error during my batch insert example");
  e.printStackTrace();
}
finally
{
  conn.commit();
}
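
If you also want to know how far the batch got before the error, java.sql.BatchUpdateException carries the update counts of the statements processed so far. A small, optional extension of the catch block above could look like this (note that, depending on the driver and batching mode, the array may contain Statement.SUCCESS_NO_INFO or Statement.EXECUTE_FAILED markers instead of real row counts):

catch (BatchUpdateException e)
{
  System.out.println("Error during my batch insert example");
  e.printStackTrace();

  // Update counts for the statements that were processed before the failure
  // (semantics are driver-dependent, see the note above)
  int[] counts = e.getUpdateCounts();
  System.out.println("Batch entries processed before the error: " + counts.length);
}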

In PL/SQL you will have to run your FORALL statement in a new block, like this:


DECLARE
  TYPE tab is TABLE OF testloadtable%ROWTYPE;
  myvals tab;
BEGIN
  SELECT rownum, 'x'
    BULK COLLECT INTO myvals
      FROM dual
        CONNECT BY LEVEL <= 10000;
  BEGIN
    FORALL i IN myvals.FIRST..myvals.LAST
      INSERT INTO testloadtable (id, text)
        VALUES (myvals(i).id, myvals(i).text);
  EXCEPTION
    WHEN DUP_VAL_ON_INDEX THEN
      DBMS_OUTPUT.PUT_LINE('Unique key violation!');
  END;
  COMMIT;
END;

By doing this, you will at least keep all previously inserted data. However, you will still have to fix the error and run the rest of the batch again, and the latter can be rather difficult, especially in batch load scenarios. If you catch the exception, you have to find the right position in your batch again in order to continue, and in some cases that could mean rebuilding the entire batch. If you don't catch the exception, because your batch is one logical unit that either succeeds or gets rolled back, then you have to start from scratch again. So the conclusion is: Neither of the two options is optimal!
Fortunately, Oracle provides a feature called "DML error logging"! This feature allows you to simply log all errors into an error table and continue the execution of your DML statement without any error being raised. It works with all DML statements: INSERT, UPDATE, MERGE and DELETE. I will, however, focus only on insert statements in this post.

First let's prove what I just said. All I do is create a primary key on the table and insert a row before I execute my batch.


CREATE TABLE TESTLOADTABLE (id NUMBER, text VARCHAR2(255));
table TESTLOADTABLE created.

CREATE UNIQUE INDEX TESTLOADTABLE_PK ON TESTLOADTABLE(id);
unique index TESTLOADTABLE_PK created.

ALTER TABLE TESTLOADTABLE ADD PRIMARY KEY (id) USING INDEX TESTLOADTABLE_PK;
table TESTLOADTABLE altered.

Now that I have my primary key created, I insert one row with the id 3000 before I execute my batch. The batch contains 10k rows with an id range of 0 – 9999. Once it reaches the row with the id 3000 (the 3001st row, as I start with id 0), I'll receive an exception that the row already exists. Note that all other rows would be fine and valid except this single one with the id 3000. Nevertheless, as I do not make sure that a commit happens, I will lose all of them:

conn.prepareStatement("INSERT INTO testloadtable (id, text) VALUES (3000,'x')").execute();
conn.commit();

stmt = conn.prepareStatement("INSERT /* addBatch insert */ INTO testloadtable (id, text) VALUES (?,?)");

for (int i=0;i<ROWS;i++)
{
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}
stmt.executeBatch();
conn.commit();

As expected, I get the following exception back:

Exception in thread "main" java.sql.BatchUpdateException: ORA-00001: unique constraint (TEST.SYS_C0010986) violated

And when I do a count on the table, I have only my previously inserted row in there:


SELECT COUNT(*) FROM testloadtable
COUNT(*)
--------
1

Now let’s see what I get when I do catch the exception and make sure to issue a commit:

conn.prepareStatement("INSERT INTO testloadtable (id, text) VALUES (3000,'x')").execute();
conn.commit();

PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch insert */ INTO testloadtable (id, text) VALUES (?,?)");

for (int i=0;i<ROWS;i++)
{
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}

try
{
  stmt.executeBatch();
}
catch (BatchUpdateException e)
{
  System.out.println("Error during my batch insert example");
  e.printStackTrace();
}
finally
{
  conn.commit();
}

The output shows that I caught the exception successfully:


Error during my batch insert example
java.sql.BatchUpdateException: ORA-00001: unique constraint (TEST.SYS_C0010986) violated

And a count on the table now shows that I have 3001 rows in it; my previously inserted row with id 3000 and the first 3000 rows from the batch with id 0 – 2999:

SELECT COUNT(*) FROM testloadtable
COUNT(*)
--------
3001

Now, as said before, the great thing about DML error logging is that it offers you the ability to load your entire batch without receiving an error at all. Instead, the error(s) will be logged into a separate error table, which can then be queried and appropriate actions taken afterwards. All you need to do is modify your DML statement to include the LOG ERRORS clause. You will also have to create an error logging table via the DBMS_ERRLOG package first. Optionally, you can:

  • Include a tag that gets added to the error log to help identify the statement that caused the errors
  • Include the REJECT LIMIT subclause, which allows you to define an upper limit for the number of errors that can be encountered before the statement fails. If you have a batch of 10k rows where 7k rows are failing, it probably doesn't make much sense to continue, as there seems to be a bigger problem. This clause offers you the functionality to still raise an error once a certain threshold is reached. I guess in order to make sure that nobody accidentally adds the error logging clause and thereby suppresses all errors, the default value is 0, which means that if you omit the REJECT LIMIT clause, the first error will be logged into the error logging table but the statement will still be terminated. If you want to log all errors but raise none, you have to specify REJECT LIMIT UNLIMITED (a small sketch of both options follows right after this list)
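
As a small sketch of how both options could look combined (the tag text and the limit of 10 are arbitrary illustration values; the error logging table ERR_TESTLOADTABLE is created further below):

PreparedStatement stmt = conn.prepareStatement(
  "INSERT /* addBatch with tag and reject limit */ INTO testloadtable (id, text) VALUES (?,?) " +
  "LOG ERRORS INTO ERR_TESTLOADTABLE ('nightly batch load') REJECT LIMIT 10");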

The error table itself is a copy of the target table with a few additional columns, telling you things like the error number, the error message and so forth. The important part is that it contains all the columns of your target table, and those columns contain the values from the failing row of the insert. This means that you will not lose any information from your batch and do not need to re-execute the entire batch!
Let’s see how it works! First I create an error logging table called ERR_TESTLOADTABLE:


BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG('TESTLOADTABLE','ERR_TESTLOADTABLE');
END;
anonymous block completed
DESC err_testloadtable
Name            Null Type
--------------- ---- --------------
ORA_ERR_NUMBER$      NUMBER
ORA_ERR_MESG$        VARCHAR2(2000)
ORA_ERR_ROWID$       UROWID()
ORA_ERR_OPTYP$       VARCHAR2(2)
ORA_ERR_TAG$         VARCHAR2(2000)
ID                   VARCHAR2(4000)
TEXT                 VARCHAR2(4000)

All I then have to do is modify my original code so that the insert statement includes a "LOG ERRORS INTO ERR_TESTLOADTABLE REJECT LIMIT UNLIMITED" clause at the end:

conn.prepareStatement("INSERT INTO testloadtable (id, text) VALUES (3000,'x')").execute();
conn.commit();

PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch with logging errors */ INTO testloadtable (id, text) VALUES (?,?) LOG ERRORS INTO ERR_TESTLOADTABLE REJECT LIMIT UNLIMITED");

for (int i=0;i<ROWS;i++)
{
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}
stmt.executeBatch();
conn.commit();

Although I inserted the row with the id 3000 first, no error was reported during the execution. However, if I have a look at my ERR_TESTLOADTABLE, I see the following:


SELECT * FROM ERR_TESTLOADTABLE
ORA_ERR_NUMBER$ ORA_ERR_MESG$                                             ORA_ERR_ROWID$  ORA_ERR_OPTYP$ ORA_ERR_TAG$ ID    TEXT
--------------- --------------------------------------------------------- --------------- -------------- ------------ ----- --------
1               ORA-00001: unique constraint (TEST.SYS_C0010987) violated                 I                           3000  test3000

The error logging table shows me:

  1. The error number “1” which means “ORA-00001”
  2. The error message
  3. RowId: In this case null as it was an insert and the row didn’t exist before
  4. The operation type “I” indicating Insert
  5. An empty error tag, as I have not defined one
  6. ID column value from the failing row
  7. Text column value from the failing row
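
To act on those logged rows, here is a minimal sketch of how the failed values could be read back out of the error table and reprocessed (column names as created by DBMS_ERRLOG above):

Statement qry = conn.createStatement();
ResultSet rs = qry.executeQuery(
  "SELECT ora_err_number$, ora_err_mesg$, id, text FROM err_testloadtable");
while (rs.next())
{
  // Each row contains the error details plus the original column values,
  // so the failed rows can be corrected and re-inserted individually
  System.out.println("ORA-" + rs.getInt(1) + ": " + rs.getString(2)
      + " -> id=" + rs.getString(3) + ", text=" + rs.getString(4));
}
rs.close();
qry.close();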

Conclusion: DML error logging allows you to use batch DML statements that won't fail on errors. This gives you maximum flexibility when you perform batch operations!
In my next post I will take a look at the performance impact of DML error logging.

Loading data fast – regular insert vs. bulk insert

In my last post I talked about how persisting data can become the bottleneck on large, high-scale systems nowadays. I also talked about how more and more people tend to think that databases are simply slow, seeing them as just big I/O systems, and about how lots of applications still insert data the way they did years ago rather than using bulk inserts.

In this post I will show you how bulk inserts can actually boost your inserts and therefore your systems. I will use a simple example showing the difference between:

  • Single row insert with commit
  • Single row insert with only one final commit
  • Bulk insert with final commit

Let's assume you have a Java program that needs to load some data from a file into a single table. Each line in the file represents a row in the database. I won't go into how to read from the file and assemble the data; that is out of scope for this post and not relevant for showing the benefit of bulk inserts over regular ones. First let's build a simple two-column table with an "id" column as NUMBER and a "text" column as VARCHAR2:

CREATE TABLE TESTLOADTABLE (id NUMBER, text VARCHAR2(255));

table TESTLOADTABLE created.

For each test I truncate the table first, just to make sure that I always load the same amount of data into the same empty table. I also add a comment to the statements so that I can separate them out later in the trace file.

The first example loads 10,000 rows into the table. It simply inserts an incrementing counter and a string into the table, with each insert followed by a commit.

conn.prepareStatement("TRUNCATE TABLE testloadtable").execute();
conn.prepareStatement("ALTER SESSION SET SQL_TRACE=TRUE").execute();

PreparedStatement stmt = conn.prepareStatement("INSERT /* conventional insert with commit */ INTO testloadtable (id, text) VALUES (?,?)");

for (int i=0;i<10000;i++)
{
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.execute();
  conn.commit();
}

conn.prepareStatement("ALTER SESSION SET SQL_TRACE=FALSE").execute();

Looking at the trace file, it took the program 2.21 seconds to load these 10,000 rows. You can also see that the statement actually got executed 10,000 times. The commits unfortunately don't show up in the formatted trace file, but they would be listed in the raw trace file.

 SQL ID: 337xy5qc84nsq Plan Hash: 0

INSERT /* conventional insert with commit */ INTO testloadtable (id, text)
 VALUES
 (:1 ,:2 )

call     count       cpu    elapsed       disk      query    current        rows
 ------- ------  -------- ---------- ---------- ---------- ----------  ----------
 Parse        1      0.00       0.00          0          0          0           0
 Execute  10000      2.06       2.21          2         90      20527       10000
 Fetch        0      0.00       0.00          0          0          0           0
 ------- ------  -------- ---------- ---------- ---------- ----------  ----------
 total    10001      2.06       2.21          2         90      20527       10000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 111
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
 ---------- ---------- ----------  ---------------------------------------------------
 0          0          0  LOAD TABLE CONVENTIONAL  (cr=5 pr=2 pw=0 time=1637 us)

The next test puts the commit outside of the loop. So I still add the data row by row to the table, but the commit itself happens only once, after all rows were loaded:

conn.prepareStatement("TRUNCATE TABLE testloadtable").execute();
conn.prepareStatement("ALTER SESSION SET SQL_TRACE=TRUE").execute();

PreparedStatement stmt = conn.prepareStatement("INSERT /* conventional insert */ INTO testloadtable (id, text) VALUES (?,?)");

for (int i=0;i<10000;i++)
{
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.execute();
}
conn.commit();

conn.prepareStatement("ALTER SESSION SET SQL_TRACE=FALSE").execute();

As the results show, the statement was still executed 10,000 times. However, this time it took only 1.19 seconds to insert all the data. So just by moving the commit to the end, after all inserts were done, the elapsed time dropped from 2.21 to 1.19 seconds, almost half. Although commits seem lightweight and don't appear to do much, the database still has some work to do to make sure that your transaction is saved and visible. And of course, instead of a single transaction, the first test produced 10,000 of them.

 SQL ID: drsv4dw4037zj Plan Hash: 0

INSERT /* conventional insert */ INTO testloadtable (id, text)
 VALUES
 (:1 ,:2 )

call     count       cpu    elapsed       disk      query    current        rows
 ------- ------  -------- ---------- ---------- ---------- ----------  ----------
 Parse        1      0.00       0.00          0          0          0           0
 Execute  10000      1.09       1.19          2        114      10562       10000
 Fetch        0      0.00       0.00          0          0          0           0
 ------- ------  -------- ---------- ---------- ---------- ----------  ----------
 total    10001      1.09       1.19          2        114      10562       10000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 111
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
 ---------- ---------- ----------  ---------------------------------------------------
 0          0          0  LOAD TABLE CONVENTIONAL  (cr=5 pr=2 pw=0 time=1507 us)

The Oracle JDBC driver also supports bulk inserts: you add all your data to a "batch" and then execute the entire batch. This gives you the advantage that only one INSERT statement is executed, which inserts all your data into the table at once. So instead of 10,000 round-trips, I have only one, sending all the data over in one big chunk:

conn.prepareStatement("TRUNCATE TABLE testloadtable").execute();
conn.prepareStatement("ALTER SESSION SET SQL_TRACE=TRUE").execute();

PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch insert */ INTO testloadtable (id, text) VALUES (?,?)");

for (int i=0;i<10000;i++)
{
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}
stmt.executeBatch();
conn.commit();

conn.prepareStatement("ALTER SESSION SET SQL_TRACE=FALSE").execute();

Now these results are pretty amazing! There was only one execution of that insert statement, and it loaded the entire batch in only 0.06 seconds!

 SQL ID: gfkg1d43va20y Plan Hash: 0

INSERT /* addBatch insert */ INTO testloadtable (id, text)
 VALUES
 (:1 ,:2 )

call     count       cpu    elapsed       disk      query    current        rows
 ------- ------  -------- ---------- ---------- ---------- ----------  ----------
 Parse        1      0.00       0.00          0          0          0           0
 Execute      1      0.05       0.06          0        129        394       10000
 Fetch        0      0.00       0.00          0          0          0           0
 ------- ------  -------- ---------- ---------- ---------- ----------  ----------
 total        2      0.05       0.06          0        129        394       10000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 111
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
 ---------- ---------- ----------  ---------------------------------------------------
 0          0          0  LOAD TABLE CONVENTIONAL  (cr=139 pr=0 pw=0 time=51971 us)

As you can see, bulk inserting can give your system a huge performance boost! In this example we are no longer talking about an improvement of a few percent, but of whole factors! Imagine what this could gain you when you have even more data!

This post shows that persisting data into a database is not simply slow; it also matters a lot how you actually insert that data!

Loading data fast

Today more and more applications have the requirement to load data fast. While this requirement isn't new in IT, nowadays it mainly means persisting data fast into the database. In times of cheap memory and lots of processing power, systems have developed from slim single-threaded applications into huge multi-threaded clusters using lots of memory, which provides them with low latency. As a result, working with data inside the application became cheap, in some cases even bringing data into the system, e.g. over some in-memory queuing system. However, the persisting stage still needs to do I/O at some point, and although faster disks are emerging these days, I/O still cannot compete with the low latency that memory provides.

Based on that conclusion there are more and more programmers out there who think that I/O is a bad thing, and as memory became cheap they would rather dump everything into memory and tell the client to buy more if needed, instead of thinking clearly about what needs to be cached and what not. They also tend to see the database as just a big I/O system and therefore automatically conclude that it is simply slow and should be avoided. But that's far from the truth! Databases also use caching mechanisms, which are usually highly sophisticated, with years and years of brainpower in their algorithms. Just because you insert data into the database does not mean at all that this data goes straight to disk. If you insert data into an Oracle database, your data is placed into the buffer cache of the SGA memory structure. And more importantly, assuming the buffer cache is large enough, it will stay there! Only later will the DB writer background process(es) actually take that data and persist it onto disk – completely decoupled from your transaction.

So if the database also uses memory for caching, how come inserts are still so much slower? In my experience programmers forget to ask one important question: How does the data get inserted? There are tons and tons of applications out there that haven't changed the way they insert data in years. But just because something was the best way to do it years ago doesn't necessarily mean that it still is! Or, in other scenarios, the programmer does not even know how the data is inserted, because the application is using an object-relational mapping (ORM) tool.

There are several ways to load data into a table. The most commonly used are the following:

  • SQL*Loader – An Oracle tool for loading data fast from external files into tables
  • CREATE TABLE … AS SELECT (CTAS) statement – This SQL statement allows you to create a table and populate it with data selected from another existing table which can also be an external table.
  • MERGE statement – This SQL statement enables you to either insert into or update rows of a table by selecting them from another existing table. If a row in the new data corresponds to an already existing row in the table, an UPDATE is performed instead of an INSERT.
  • INSERT statement – The traditional INSERT statement for inserting one or more rows into a table

In this post I will focus only on the INSERT statement as this is used within most applications.

How often have you seen code like this:

 while (!file.isEOF())
 {
   line = file.getNextLine();
   INSERT INTO table (col1, col2, col3) VALUES (line.val1, line.val2, line.val3);
   COMMIT;
 }
 

What this piece of code does is read the file line by line and, for each line, insert the data into the table. At first sight this seems pretty much the best way of doing it, and years ago it probably was. However, what happens here is that for each line in the file an INSERT statement with a set of data is sent to the database, followed by a COMMIT. So for each line one DML statement gets executed, including a round-trip from your application to the database. But that's not all. Once the INSERT statement arrives at the database, the database has to parse the statement and determine whether it is already known and an execution plan exists for it. Only after all those steps are done can it execute the statement and finally load the data into the table. After the row is in the table, the application executes a COMMIT, which triggers another round-trip to the database and causes the database to make sure that the row is visible to everybody and transactional integrity is guaranteed. For a file that contains only 10 lines all these steps might not be much effort and are done pretty quickly, but imagine a file with 10,000 lines! Suddenly 10,000 round-trips for the INSERT statement, another 10,000 round-trips for the COMMIT, 10,000 (soft) parses of the INSERT statement and 10,000 transactional integrity checks become very expensive. And as you execute the same statement over and over again, some of the steps performed by the database become redundant: the database doesn't have to check each time whether the statement is already known. It also does not make sense to perform 20,000 round-trips for only 10,000 rows of data. And you might not even want the user to be able to see any of the data before the file is completely loaded. So ideally what you want is something like this:

 while (!file.isEOF())
 {
   lines += file.getNextLine();
 }
 INSERT INTO table (col1, col2, col3) VALUES (lines);
 COMMIT;
 

First you read all the data from the file and keep it in memory. Then you insert all that data into the database at once. The data is sent over to the database only once, the parsing of the statement happens only once, and the transactional integrity checks and the commit happen only once. Instead of 20,000 round-trips you have only 2!
Now the good news: databases nowadays do support this kind of loading operation! It is referred to as bulking or bulk inserts. While in the past there were various reasons that kept you from bulking – either the mechanism didn't exist back then or there wasn't enough memory to hold more than one line at a time anyway – there are no such reasons anymore these days. But the point is: you have to change the way you insert data into your database. Sometimes this can be changed rather quickly, but sometimes there can be major effort involved.
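
To make the idea a bit more concrete, here is a minimal, hedged sketch of what such a bulked file load could look like in JDBC (the file name, table name, column names and the simple comma-separated format are made up for illustration; the next post shows real measurements):

// Read a comma-separated file line by line, add every line to one JDBC batch
// and send the whole batch plus a single commit to the database.
// File, table and column names are illustration values only.
try (BufferedReader reader = new BufferedReader(new FileReader("data.csv"));
     PreparedStatement stmt = conn.prepareStatement(
       "INSERT INTO mytable (col1, col2, col3) VALUES (?,?,?)"))
{
  String line;
  while ((line = reader.readLine()) != null)
  {
    String[] vals = line.split(",");
    stmt.setString(1, vals[0]);
    stmt.setString(2, vals[1]);
    stmt.setString(3, vals[2]);
    stmt.addBatch();
  }
  stmt.executeBatch();   // one round-trip for all rows
  conn.commit();         // one commit for the whole file
}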

In my next post I will show some examples how to use bulk inserts and what difference they can make!

Indexing strategies with Jonathan Lewis

Today I was part of an NYOUG seminar about "Indexing Strategies" held by Mr. Jonathan Lewis himself. If somebody doesn't know Jonathan Lewis: he is one of the top Oracle gurus on the entire planet, with more than 25 years of Oracle experience. He probably used Oracle before I was even born (1985)! The interesting fact, however, is that he never actually worked for Oracle but still has all the deep insider knowledge. Usually such gurus came out of Oracle or are still working for them. I guess the many years of dealing with Oracle helped him there.

However, Jonathan Lewis is also one of my top favorites because he shares his knowledge on his personal blog: http://jonathanlewis.wordpress.com In fact, Jonathan's blog is the only one I can never catch up with, because he posts quicker than I find time to read his fabulous posts… which reminds me that I should update my blogroll.

The seminar itself was simply great. It was not free, but those 230 bucks were totally worth it. Not only is Jonathan Lewis a great speaker – it was the first time that I actually saw him live – he also gave great, detailed insights into indexes: how they work, what is possible with them, why Oracle actually uses an index and, much more importantly, when it does not. Furthermore, he combined his sessions with some real-world examples that he himself dealt with in the past, which makes the whole thing much less theoretical.

Of course I won't cover the whole seminar here, but I'll end with some interesting things that I took out of it – some of which I was just reminded of again, some of which were new to me:

  • Indexing means: Getting to data quickly
  • Indexes focus/compact data
  • There is always a Trade-Off between loading and querying performance
  • Index maintenance is expensive – it introduces costs
  • It’s all about precision
  • Non-unique indexes include the rowid automatically – for Oracle there are no non-unique indexes
  • Index compression can save a lot of space on repetitive columns
  • Index compression means elimination of duplicates
  • Primary key constraints don’t necessarily need unique key indexes but only indexes with the columns in the right order
  • Reverse indexes scatter entries across the index; the clustering factor can end up terrible
  • Don’t duplicate indexes – e.g. FK constraints can share the first columns of another index
  • Difference between tables and indexes: Index blocks have to be at the right place
  • Don’t trust Oracle – test everything!

 

Why my job is safe

I spent the last couple of days coding a new registration module for a private website that I administer. The requirements were the very basic ones:

  • Make an HTML form
  • Validate the inputs
  • Send a confirmation email with a link
  • On link activation: Retrieve the data and write it into a table in the database

As those requirements are common to so many registration forms all over the web, I thought: before coding something again, just have a look at Google and see if you can find something useful to embed. And I found stuff, lots of stuff, but surprisingly every example I found did the following:

  • HTML form
  • Write data into the table with an “active” column set to false
  • Send a confirmation email with a link containing a unique identifier for the inserted row (most of the time an MD5 hash value)
  • On link activation: Retrieve the unique identifier
  • Update the row in the database with active flag set to true

And at least some examples also took care of the housekeeping and cleaned out the inactive rows from time to time.

Well, this does the job of course, but the way it does it is simply dirty – very dirty!

From the functional aspect there are following issues:

  • Every SQL statement for retrieving the active data has to make sure to include "AND active = TRUE" in the WHERE clause
  • If a user is unlucky and the cleanup process doesn't implement a time range, his data gets removed before he can click the activation link

From the performance side however:

All I want to do is:

  • Validate the data from the form
  • Save it temporarily until the link is clicked or the data gets expired
  • Write it into the database

With those solutions found on the internet however I would have done following:

  • Validate the data
  • Write it into the database (with an extra column for an active flag)
  • Retrieve the data again and update it to set it active
  • Create a cleanup job to purge inactive data
  • Extend all my SQL statements against that table with an "AND active = TRUE" predicate in the WHERE clause

So I have more steps, more unnecessary hops to the database, and probably another index to maintain over time for the cleanup job. Sometimes I wonder why programmers don't think before they code. Of course many roads lead to Rome, but that doesn't mean you should just take a random one. But as long as there are programmers out there who don't think, or don't think hard enough, there will always be performance problems and people like me needed to solve them. So I guess I shouldn't complain, but my dream of the perfect IT world is still alive…

Well, the webpage in my case is made in PHP, which has one nice feature called sessions. That does the job perfectly fine and, most importantly, cleanly! Sessions are enabled by default, allow you to store data within a session and also include expiry functionality, which is enabled by default as well. PHP supports two ways of storing the session id: either within a cookie in the browser (enabled by default), or you can pass the session id over POST and GET requests. As I use a confirmation email to activate the registration, I just had to make sure to deactivate the cookie storage feature first, since I pass the session id over a link/GET request. From a functional point of view I wouldn't have to deactivate the cookie storage, as the cookie would simply never be read again and would expire at some point, but from the security and the "do it right" aspect it has to be deactivated. So all I did was the following:

// Disable cookie storage
ini_set("session.use_cookies",0);
// Start the session
session_start();

// Save the data within the session
$_SESSION["Title"] = $Title;
$_SESSION["FirstName"] = $FirstName;
$_SESSION["LastName"] = $LastName;

//Send the email with the activation link
if (mail($Email, "Please activate your registration", "http://www.mywebsite.com/activate.php?sid=".htmlspecialchars(session_id())))
{
 // If sending the mail was successful, store all session data and close the session
 session_write_close();
}
else
{
 // If sending the mail failed, report an error, delete the session data and destroy the session!
 // Error reporting....
 session_unset();
 session_destroy();
}

And all the activate.php file does is:


// Disable cookie storage
ini_set("session.use_cookies",0);
// Set the session id from GET request
session_id($_GET["sid"]);
// Start session
session_start();
// Get variables
$Title = $_SESSION["Title"];
$FirstName = $_SESSION["FirstName"];
$LastName = $_SESSION["LastName"];

// Connect to database
$hDB = @mysql_connect("localhost", "user","password");
mysql_select_db("myDB", $hDB);

// Insert data into database
if (mysql_query("INSERT INTO table (Title, FirstName, LastName) VALUES ('".mysql_real_escape_string($Title)."','".mysql_real_escape_string($FirstName)."','".mysql_real_escape_string($LastName)."')"))
{
 // Destroy session
 session_unset();
 session_destroy();
}

The big picture

A couple of weeks ago a customer had some trouble with overall performance. They complained that everything was slow and (of course) nothing had changed in the last few weeks. They also immediately blamed the database on their side as the issue. The usual stuff; I'm sure that sounds all too familiar to you.

So, as they had already "identified" the database as the issue, my team requested some AWR reports, and when we got them I noticed a strange but all too common behavior. My teammates took the AWR reports, went to the SQL Statistics sections "SQL ordered by Elapsed Time" and "SQL ordered by CPU Time", and immediately identified a materialized view refresh as the cause of the problem. That looked like this:

SQL ordered by Elapsed Time

  • Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.
  • % Total DB Time is the Elapsed Time of the SQL statement divided into the Total Database Time multiplied by 100
Elapsed Time (s)  CPU Time (s)  Executions  Elap per Exec (s)  % Total DB Time  SQL Id         SQL Module                 SQL Text
           1,343           679           1            1342.85            33.00  f20ccnxhvbk65                             DECLARE job BINARY_INTEGER := …
             770           323           1             770.24            18.93  1usnr4gmcq60d                             /* MV_REFRESH (DEL) */ delete …
             571           355           1             571.23            14.04  gz04689vd55db                             /* MV_REFRESH (INS) */INSERT /…
             311           311           1             310.91             7.64  0vhmfumrjchnh  SQL*Plus                   BEGIN dm_incr_symbols_post_pro…
             288             9           0                                7.09  bb3f2gjndvjss  oracle@crptd1 (TNS V1-V3)  SELECT /*+ OPAQUE_TRANSFORM */…
             258           258           1             257.76             6.34  75vtwb7j4jzdm  SQL*Plus                   INSERT INTO SYMB_EXTRACTT SELE…
             149           121           1             148.80             3.66  90wtn50vy6af6                             DECLARE job BINARY_INTEGER := …
             114           103           1             113.75             2.80  9993mp6h7kqkp                             INSERT /*+ BYPASS_RECURSIVE_CH…
              48            38           2              24.00             1.18  3nkcg1h5ysqss                             DECLARE job BINARY_INTEGER := …
              48            38           2              23.98             1.18  fvb5prrr7b0c3                             MERGE INTO FT_E_UPS1 UPS1 USIN…


SQL ordered by CPU Time

  • Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.
  • % Total DB Time is the Elapsed Time of the SQL statement divided into the Total Database Time multiplied by 100
CPU Time (s)  Elapsed Time (s)  Executions  CPU per Exec (s)  % Total DB Time  SQL Id         SQL Module                 SQL Text
         679             1,343           1            679.06            33.00  f20ccnxhvbk65                             DECLARE job BINARY_INTEGER := …
         355               571           1            355.50            14.04  gz04689vd55db                             /* MV_REFRESH (INS) */INSERT /…
         323               770           1            323.18            18.93  1usnr4gmcq60d                             /* MV_REFRESH (DEL) */ delete …
         311               311           1            310.89             7.64  0vhmfumrjchnh  SQL*Plus                   BEGIN dm_incr_symbols_post_pro…
         258               258           1            257.74             6.34  75vtwb7j4jzdm  SQL*Plus                   INSERT INTO SYMB_EXTRACTT SELE…
         121               149           1            121.05             3.66  90wtn50vy6af6                             DECLARE job BINARY_INTEGER := …
         103               114           1            102.58             2.80  9993mp6h7kqkp                             INSERT /*+ BYPASS_RECURSIVE_CH…
          38                48           2             19.20             1.18  3nkcg1h5ysqss                             DECLARE job BINARY_INTEGER := …
          38                48           2             19.20             1.18  fvb5prrr7b0c3                             MERGE INTO FT_E_UPS1 UPS1 USIN…
          29                29          12              2.38             0.70  68930z34bm3db  SQL*Plus                   select 'file[' || substr(trim(…
           9               288           0                               7.09  bb3f2gjndvjss  oracle@dftg1 (TNS V1-V3)   SELECT /*+ OPAQUE_TRANSFORM */…

So here we have a PL/SQL job which does the materialized view refresh (first line), and the statements for the refresh itself as the second and third lines. For completeness, here is the first statement:


DECLARE
  job BINARY_INTEGER := :job;
  next_date DATE := :mydate;
  broken BOOLEAN := FALSE;
BEGIN
  dbms_refresh.refresh('"ABTGHS"."DB_CREAT_STST_PWOR"');
  :mydate := next_date;
  IF broken THEN
    :b := 1;
  ELSE
    :b := 0;
  END IF;
END;

So all they did was go there, look at the first SQL with high elapsed and CPU time, and they nearly reported back that this was the problem and had to be solved. I call this the "lucky shot method". Sometimes, when you are lucky, the reason for the issue on the DB really is a bad SQL or a bunch of bad SQLs which max out the DB on CPU power, I/O or whatever. In such a case you just go to those sections, identify the SQLs, fix them and everything is good again – you were lucky. This works sometimes, and you are a hero because it took you just 5 minutes to fix the issue, but sometimes it doesn't, and you blame it on some weird constellation of OS, network and something else, so that nobody recognizes that you are just a fool who didn't look at the big picture. Thinking about that, I noticed that this is just far too common in IT. Tech admins, DBAs, developers – in every area you have people like this: looking into the issue for 5 minutes and then telling you that this and that is the problem. You go ahead and fix it, but still no change. So the next round trip, and the next, and the next. All of it could have been prevented if the person had just taken one look at the big picture…

So what is the big picture, what do I mean by that?

I'm a fan of knowing what happens and why. Always analyze all the information you have, request more if you think that some vital information is missing, and draw your conclusions from that. Stop the trial-and-error method, the lucky shot method. The AWR report I got from the customer is an all too good example. There were two simple lines which made me curious:

             Snap Id  Snap Time            Sessions  Cursors/Session
Begin Snap:    17934  05-May-10 09:00:59        129            144.1
End Snap:      17935  05-May-10 10:00:12        130            143.0
Elapsed:               59.21 (mins)
DB Time:               67.81 (mins)

I'm talking about the last two. The first tells me that the snapshots being compared span a time range of 59.21 minutes. The DB time, i.e. the time the database was actually working, was 67.81 minutes. So something looks strange here. In a single-core environment I couldn't be over my 59.21 minutes. In a dual-core environment the database would have been working just 57% of the time (59.21 x 2 cores = 118.42 minutes – DB time scales with the number of cores on which work can be performed in parallel). Next I had a look at the locking going on: with heavy locking the database can look mostly idle while the throughput is still low. But it turned out that this was not an issue either. The next piece of information then confirmed that the DB wasn't the problem at all:
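
Just to make the arithmetic explicit, here is the same back-of-the-envelope check as a small Java snippet (the numbers are the ones from the report above; the core counts are the ones assumed and later confirmed in the text):

// Back-of-the-envelope DB time utilization: DB time / (elapsed time x cores).
// Numbers taken from the AWR report above.
double elapsedMinutes = 59.21;
double dbTimeMinutes  = 67.81;

int assumedCores = 2;                     // the dual-core assumption from above
double utilization = dbTimeMinutes / (elapsedMinutes * assumedCores);
System.out.println(Math.round(utilization * 100) + "%");   // roughly 57%

int actualCores = 14;                     // the real core count of the system
utilization = dbTimeMinutes / (elapsedMinutes * actualCores);
System.out.println(Math.round(utilization * 100) + "%");   // roughly 8% - pretty much idle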

Operating System Statistics

Statistic          Total
AVG_BUSY_TIME     33,088
AVG_IDLE_TIME    321,941

The average idle time statistic is much higher than the busy time. So I asked for the number of cores on that system, and it turned out that there are 14 available. A fully busy database would have a DB time of up to 828.94 minutes. And no locking going on either. I followed up with them and it turned out as I thought: the database wasn't the problem at all. In fact, it was pretty much idle the whole time. The issue was on the application side: a Java application whose memory filled up so that it had to do full garbage collections all the time in order to continue processing.

Instead of going for the lucky shot, we saved ourselves and them a lot of headache by looking at the big picture.

Oracle 10g enqueue waits

Just found a useful list of Oracle 10g enqueue waits on the web which I don't want to withhold from you:

Enqueue Type Description
enq: AD – allocate AU Synchronizes accesses to a specific OSM (Oracle Software Manager) disk AU
enq: AD – deallocate AU Synchronizes accesses to a specific OSM disk AU
enq: AF – task serialization Serializes access to an advisor task
enq: AG – contention Synchronizes generation use of a particular workspace
enq: AO – contention Synchronizes access to objects and scalar variables
enq: AS – contention Synchronizes new service activation
enq: AT – contention Serializes alter tablespace operations
enq: AW – AW$ table lock Allows global access synchronization to the AW$ table (analytical workplace tables used in OLAP option)
enq: AW – AW generation lock Gives in-use generation state for a particular workspace
enq: AW – user access for AW Synchronizes user accesses to a particular workspace
enq: AW – AW state lock Row lock synchronization for the AW$ table
enq: BR – file shrink Lock held to prevent file from decreasing in physical size during RMAN backup
enq: BR – proxy-copy Lock held to allow cleanup from backup mode during an RMAN proxy-copy backup
enq: CF – contention Synchronizes accesses to the controlfile
enq: CI – contention Coordinates cross-instance function invocations
enq: CL – drop label Synchronizes accesses to label cache when dropping a label
enq: CL – compare labels Synchronizes accesses to label cache for label comparison
enq: CM – gate Serializes access to instance enqueue
enq: CM – instance Indicates OSM disk group is mounted
enq: CT – global space management Lock held during change tracking space management operations that affect the entire change tracking file
enq: CT – state Lock held while enabling or disabling change tracking to ensure that it is enabled or disabled by only one user at a time
enq: CT – state change gate 2 Lock held while enabling or disabling change tracking in RAC
enq: CT – reading Lock held to ensure that change tracking data remains in existence until a reader is done with it
enq: CT – CTWR process start/stop Lock held to ensure that only one CTWR (Change Tracking Writer, which tracks block changes and is initiated by the alter database enable block change tracking command) process is started in a single instance
enq: CT – state change gate 1 Lock held while enabling or disabling change tracking in RAC
enq: CT – change stream ownership Lock held by one instance while change tracking is enabled to guarantee access to thread-specific resources
enq: CT – local space management Lock held during change tracking space management operations that affect just the data for one thread
enq: CU – contention Recovers cursors in case of death while compiling
enq: DB – contention Synchronizes modification of database wide supplemental logging attributes
enq: DD – contention Synchronizes local accesses to ASM (Automatic Storage Management) disk groups
enq: DF – contention Enqueue held by foreground or DBWR when a datafile is brought online in RAC
enq: DG – contention Synchronizes accesses to ASM disk groups
enq: DL – contention Lock to prevent index DDL during direct load
enq: DM – contention Enqueue held by foreground or DBWR to synchronize database mount/open with other operations
enq: DN – contention Serializes group number generations
enq: DP – contention Synchronizes access to LDAP parameters
enq: DR – contention Serializes the active distributed recovery operation
enq: DS – contention Prevents a database suspend during LMON reconfiguration
enq: DT – contention Serializes changing the default temporary table space and user creation
enq: DV – contention Synchronizes access to lower-version Diana (PL/SQL intermediate representation)
enq: DX – contention Serializes tightly coupled distributed transaction branches
enq: FA – access file Synchronizes accesses to open ASM files
enq: FB – contention Ensures that only one process can format data blocks in auto segment space managed tablespaces
enq: FC – open an ACD thread LGWR opens an ACD thread
enq: FC – recover an ACD thread SMON recovers an ACD thread
enq: FD – Marker generation Synchronization
enq: FD – Flashback coordinator Synchronization
enq: FD – Tablespace flashback on/off Synchronization
enq: FD – Flashback on/off Synchronization
enq: FG – serialize ACD relocate Only 1 process in the cluster may do ACD relocation in a disk group
enq: FG – LGWR redo generation enq race Resolves race condition to acquire Disk Group Redo Generation Enqueue
enq: FG – FG redo generation enq race Resolves race condition to acquire Disk Group Redo Generation Enqueue
enq: FL – Flashback database log Synchronizes access to Flashback database log
enq: FL – Flashback db command Synchronizes Flashback Database and deletion of flashback logs
enq: FM – contention Synchronizes access to global file mapping state
enq: FR – contention Begins recovery of disk group
enq: FS – contention Synchronizes recovery and file operations or synchronizes dictionary check
enq: FT – allow LGWR writes Allows LGWR to generate redo in this thread
enq: FT – disable LGWR writes Prevents LGWR from generating redo in this thread
enq: FU – contention Serializes the capture of the DB feature, usage, and high watermark statistics
enq: HD – contention Serializes accesses to ASM SGA data structures
enq: HP – contention Synchronizes accesses to queue pages
enq: HQ – contention Synchronizes the creation of new queue IDs
enq: HV – contention Lock used to broker the high watermark during parallel inserts
enq: HW – contention Lock used to broker the high watermark during parallel inserts
enq: IA – contention Information not available
enq: ID – contention Lock held to prevent other processes from performing controlfile transaction while NID is running
enq: IL – contention Synchronizes accesses to internal label data structures
enq: IM – contention for blr Serializes block recovery for IMU txn
enq: IR – contention Synchronizes instance recovery
enq: IR – contention2 Synchronizes parallel instance recovery and shutdown immediate
enq: IS – contention Synchronizes instance state changes
enq: IT – contention Synchronizes accesses to a temp object’s metadata
enq: JD – contention Synchronizes dates between job queue coordinator and slave processes
enq: JI – contention Lock held during materialized view operations (such as refresh, alter) to prevent concurrent operations on the same materialized view
enq: JQ – contention Lock to prevent multiple instances from running a single job
enq: JS – contention Synchronizes accesses to the job cache
enq: JS – coord post lock Lock for coordinator posting
enq: JS – global wdw lock Lock acquired when doing wdw ddl
enq: JS – job chain evaluate lock Lock when job chain evaluated for steps to create
enq: JS – q mem clnup lck Lock obtained when cleaning up q memory
enq: JS – slave enq get lock2 Gets run info locks before slv objget
enq: JS – slave enq get lock1 Slave locks exec pre to sess strt
enq: JS – running job cnt lock3 Lock to set running job count epost
enq: JS – running job cnt lock2 Lock to set running job count epre
enq: JS – running job cnt lock Lock to get running job count
enq: JS – coord rcv lock Lock when coord receives msg
enq: JS – queue lock Lock on internal scheduler queue
enq: JS – job run lock – synchronize Lock to prevent job from running elsewhere
enq: JS – job recov lock Lock to recover jobs running on crashed RAC inst
enq: KK – context Lock held by open redo thread, used by other instances to force a log switch
enq: KM – contention Synchronizes various Resource Manager operations
enq: KP – contention Synchronizes kupp process startup
enq: KT – contention Synchronizes accesses to the current Resource Manager plan
enq: MD – contention Lock held during materialized view log DDL statements
enq: MH – contention Lock used for recovery when setting Mail Host for AQ e-mail notifications
enq: ML – contention Lock used for recovery when setting Mail Port for AQ e-mail notifications
enq: MN – contention Synchronizes updates to the LogMiner dictionary and prevents multiple instances from preparing the same LogMiner session
enq: MR – contention Lock used to coordinate media recovery with other uses of datafiles
enq: MS – contention Lock held during materialized view refresh to set up MV log
enq: MW – contention Serializes the calibration of the manageability schedules with the Maintenance Window
enq: OC – contention Synchronizes write accesses to the outline cache
enq: OL – contention Synchronizes accesses to a particular outline name
enq: OQ – xsoqhiAlloc Synchronizes access to olapi history allocation
enq: OQ – xsoqhiClose Synchronizes access to olapi history closing
enq: OQ – xsoqhistrecb Synchronizes access to olapi history globals
enq: OQ – xsoqhiFlush Synchronizes access to olapi history flushing
enq: OQ – xsoq*histrecb Synchronizes access to olapi history parameter CB
enq: PD – contention Prevents others from updating the same property
enq: PE – contention Synchronizes system parameter updates
enq: PF – contention Synchronizes accesses to the password file
enq: PG – contention Synchronizes global system parameter updates
enq: PH – contention Lock used for recovery when setting proxy for AQ HTTP notifications
enq: PI – contention Communicates remote Parallel Execution Server Process creation status
enq: PL – contention Coordinates plug-in operation of transportable tablespaces
enq: PR – contention Synchronizes process startup
enq: PS – contention Parallel Execution Server Process reservation and synchronization
enq: PT – contention Synchronizes access to ASM PST metadata
enq: PV – syncstart Synchronizes slave start_shutdown
enq: PV – syncshut Synchronizes instance shutdown_slvstart
enq: PW – prewarm status in dbw0 DBWR0 holds this enqueue indicating pre-warmed buffers present in cache
enq: PW – flush prewarm buffers Direct Load needs to flush prewarmed buffers if DBWR0 holds this enqueue
enq: RB – contention Serializes OSM rollback recovery operations
enq: RF – synch: per-SGA Broker metadata Ensures r/w atomicity of DG configuration metadata per unique SGA
enq: RF – synchronization: critical ai Synchronizes critical apply instance among primary instances
enq: RF – new AI Synchronizes selection of the new apply instance
enq: RF – synchronization: chief Anoints 1 instance’s DMON (Data Guard Broker Monitor) as chief to other instance’s DMONs
enq: RF – synchronization: HC master Anoints 1 instance’s DMON as health check master
enq: RF – synchronization: aifo master Synchronizes critical apply instance failure detection and failover operation
enq: RF – atomicity Ensures atomicity of log transport setup
enq: RN – contention Coordinates nab computations of online logs during recovery
enq: RO – contention Coordinates flushing of multiple objects
enq: RO – fast object reuse Coordinates fast object reuse
enq: RP – contention Enqueue held when resilvering is needed or when data block is repaired from mirror
enq: RS – file delete Lock held to prevent file from accessing during space reclamation
enq: RS – persist alert level Lock held to make alert level persistent
enq: RS – write alert level Lock held to write alert level
enq: RS – read alert level Lock held to read alert level
enq: RS – prevent aging list update Lock held to prevent aging list update
enq: RS – record reuse Lock held to prevent file from accessing while reusing circular record
enq: RS – prevent file delete Lock held to prevent deleting file to reclaim space
enq: RT – contention Thread locks held by LGWR, DBW0, and RVWR (Recovery Writer, used in Flashback Database operations) to indicate mounted or open status
enq: SB – contention Synchronizes logical standby metadata operations
enq: SF – contention Lock held for recovery when setting sender for AQ e-mail notifications
enq: SH – contention Enqueue always acquired in no-wait mode – should seldom see this contention
enq: SI – contention Prevents multiple streams table instantiations
enq: SK – contention Serialize shrink of a segment
enq: SQ – contention Lock to ensure that only one process can replenish the sequence cache
enq: SR – contention Coordinates replication / streams operations
enq: SS – contention Ensures that sort segments created during parallel DML operations aren’t prematurely cleaned up
enq: ST – contention Synchronizes space management activities in dictionary-managed tablespaces
enq: SU – contention Serializes access to SaveUndo Segment
enq: SW – contention Coordinates the ‘alter system suspend’ operation
enq: TA – contention Serializes operations on undo segments and undo tablespaces
enq: TB – SQL Tuning Base Cache Update Synchronizes writes to the SQL Tuning Base Existence Cache
enq: TB – SQL Tuning Base Cache Load Synchronizes writes to the SQL Tuning Base Existence Cache
enq: TC – contention Lock held to guarantee uniqueness of a tablespace checkpoint
enq: TC – contention2 Lock during setup of a unique tablespace checkpoint in null mode
enq: TD – KTF dump entries KTF dumping time/scn mappings in SMON_SCN_TIME table
enq: TE – KTF broadcast KTF broadcasting
enq: TF – contention Serializes dropping of a temporary file
enq: TL – contention Serializes threshold log table read and update
enq: TM – contention Synchronizes accesses to an object
enq: TO – contention Synchronizes DDL and DML operations on a temp object
enq: TQ – TM contention TM access to the queue table
enq: TQ – DDL contention DDL access to the queue table
enq: TQ – INI contention TM access to the queue table
enq: TS – contention Serializes accesses to temp segments
enq: TT – contention Serializes DDL operations on tablespaces
enq: TW – contention Lock held by one instance to wait for transactions on all instances to finish
enq: TX – contention Lock held by a transaction to allow other transactions to wait for it
enq: TX – row lock contention Lock held on a particular row by a transaction to prevent other transactions from modifying it
enq: TX – allocate ITL entry Allocating an ITL entry in order to begin a transaction
enq: TX – index contention Lock held on an index during a split to prevent other operations on it
enq: UL – contention Lock held used by user applications
enq: US – contention Lock held to perform DDL on the undo segment
enq: WA – contention Lock used for recovery when setting watermark for memory usage in AQ notifications
enq: WF – contention Enqueue used to serialize the flushing of snapshots
enq: WL – contention Coordinates access to redo log files and archive logs
enq: WP – contention Enqueue to handle concurrency between purging and baselines
enq: XH – contention Lock used for recovery when setting No Proxy Domains for AQ HTTP notifications
enq: XR – quiesce database Lock held during database quiesce
enq: XR – database force logging Lock held during database force logging mode
enq: XY – contention Lock used by Oracle Corporation for internal testing

Of course I don't want to steal anyone's content here:

Found at: http://oracle-dox.net/

Direct link: http://oracle-dox.net/McGraw.Hill-Oracle.Wait.Interf/8174final/LiB0063.html

Many thanks to Oracle-Dox for providing this table!

11g and the new SecureFILE LOB data type (simple quick test)

As I had to run some comparison tests between Oracle 10g and 11g, I also did a little test with the new and, as Oracle promises, fast SecureFile LOB data type. The test is really small, as there wasn't any time to test the feature out fully, but even this result is impressive. Of course, all tests ran on the same box with the same settings:

First the 10g test:

The first step is to create a table with a CLOB data type:

SQL> create table test1 (created_tms date, data clob) tablespace data1 lob (data) store as (cache);
Table created.
SQL>

I used the CACHE option to get as many bytes as possible into the buffer cache; if you need performance on LOBs, this is the first thing you would activate. As the next step I wrote a little PL/SQL program which inserts 100,000 rows of 4,402 bytes each (that's roughly 440MB all together):

SQL> set serveroutput on;
SQL> declare
2    thedata CLOB := TO_CLOB (
3  'somedata somedata somedata somedata somedata somedata somedata somedata somedata somedata
4  somedata somedata somedata somedata somedata somedata somedata somedata somedata somedata
……
49  somedata somedata somedata somedata somedata somedata somedata somedata somedata somedata
50  somedata somedata somedata somedata somedata somedata somedata somedata somedata somedata ');
51  begin
52    dbms_output.put_line('Begin: ' || systimestamp);
53    for n in 1..100000 loop
54      insert into test1 values (sysdate,thedata);
55    end loop;
56    dbms_output.put_line('End: ' || systimestamp);
57  end;
58  /
Begin: 11-JAN-08 04.30.50.203325000 PM +01:00
End: 11-JAN-08 04.32.50.422759000 PM +01:00

PL/SQL procedure successfully completed.

Checking the timestamps you can see that with 10g the inserts took nearly exactly 2 minutes (on this last run; I executed it several times). You can also see that I inserted the current sysdate into a column, so now we can compute an average number of inserts per second:

SQL> select count(*)/((max(created_tms)-min(created_tms))*24*60*60) from test1;

COUNT(*)/((MAX(CREATED_TMS)-MIN(CREATED_TMS))*24*60*60)
--------------------------------------------------------
833.333333

The last 4 runs produced the following averages (inserts/second):

840.336134
826.446281
800
833.333333

So now we move on to 11g. First step: we need a table with the new SecureFile LOB type:

SQL>
SQL> create table test1 (created_tms date, data clob) tablespace data1 lob (data) store as securefile cache;

Table created.

Next, I executed exactly the same PL/SQL block:

SQL> set serveroutput on;
SQL> declare
2    thedata CLOB := TO_CLOB (
3  'somedata somedata somedata somedata somedata somedata somedata somedata somedata somedata
……
49  somedata somedata somedata somedata somedata somedata somedata somedata somedata somedata
50  somedata somedata somedata somedata somedata somedata somedata somedata somedata somedata ');
51  begin
52    dbms_output.put_line('Begin: ' || systimestamp);
53    for n in 1..100000 loop
54      insert into test1 values (sysdate,thedata);
55    end loop;
56    dbms_output.put_line('End: ' || systimestamp);
57  end;
58  /
Begin: 11-JAN-08 04.19.06.534822000 PM +01:00
End: 11-JAN-08 04.20.01.703784000 PM +01:00

PL/SQL procedure successfully completed.

Looking at the timestamps, the run took less than a minute! Remember, 10g needed around 2 minutes. So let's look at the average inserts per second:

SQL> select count(*)/((max(created_tms)-min(created_tms))*24*60*60) from test1;

COUNT(*)/((MAX(CREATED_TMS)-MIN(CREATED_TMS))*24*60*60)
--------------------------------------------------------
1818.18182

We can see that the program made an average of 1,818 inserts per second. Compared to 833 with 10g, that's more than 100% faster! And that's not bad without any fine-tuning, I would say!

The last 4 runs made:

1960.78431
1818.18182
1754.38596
1818.18182

I hope I'll have some time left to provide a proper test case, but even this quick one showed that the SecureFile data type really is sped up compared to conventional LOBs, and that's what Oracle promised! 😉

Prepared statement in batch mode vs. FORALL in PL/SQL

A few days ago I had a nice discussion with one of our developers here in the office. The subject: what's faster, prepared statements in batch mode or calling a PL/SQL procedure which uses FORALL?

So I made a simple test: insert 1 million rows with two columns (a number and a string column) into a table. The first part builds the values in Java, adds them to the batch and executes the insert in batch mode (this JDBC feature keeps the values in memory until you execute the batch; JDBC then executes the DML and loads the whole array at once, very similar to FORALL). The second part also builds the values in Java, but creates two collections and calls a procedure with those collections. The procedure itself just performs a FORALL over the insert.

So the first part of the java code looks like this:

long l = System.currentTimeMillis();
PreparedStatement pstmt = conn1.prepareStatement("INSERT INTO TEST VALUES(?,?)");
System.out.println("PrepareStatement took: " + (System.currentTimeMillis() - l));
long l1 = System.currentTimeMillis();
for (int i=0;i<1000000;++i)
{
  pstmt.setInt(1, i);
  pstmt.setString(2, "testvalue"+i);
  pstmt.addBatch();
  if (i%16961 == 0)
    pstmt.executeBatch();
}

pstmt.executeBatch();
System.out.println("ExecuteBatch took: " + (System.currentTimeMillis() - l1));

Don't worry about the if with the modulo function. It turned out that JDBC has a bug with addBatch: it can only execute around 16,000 rows at once, and if you add more rows it just ignores them! So I had to call executeBatch more often. But we will see if this slows down the test.
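
A slightly cleaner way to express the same chunked flushing, just as a sketch (the chunk size of 10,000 is an arbitrary value; the point is simply to call executeBatch() periodically instead of relying on one gigantic batch):

// Flush the JDBC batch every batchSize rows instead of keeping everything
// in one huge batch. The chunk size of 10000 is arbitrary.
int batchSize = 10000;
int pending = 0;
for (int i = 0; i < 1000000; ++i)
{
  pstmt.setInt(1, i);
  pstmt.setString(2, "testvalue" + i);
  pstmt.addBatch();
  if (++pending == batchSize)
  {
    pstmt.executeBatch();  // send the current chunk to the database
    pending = 0;
  }
}
if (pending > 0)
{
  pstmt.executeBatch();    // flush the remaining rows
}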

The second part of the java code looks like this:

l = System.currentTimeMillis();

ArrayDescriptor integer = ArrayDescriptor.createDescriptor("INTARRAY", conn1);
ArrayDescriptor varchar = ArrayDescriptor.createDescriptor("VARARRAY", conn1);

ARRAY pliArray = new ARRAY(integer, conn1, iArray);
ARRAY plsArray = new ARRAY(varchar, conn1, sString);
System.out.println("Creating the Array took: " + (System.currentTimeMillis() - l));

l = System.currentTimeMillis();
CallableStatement callStatement = (CallableStatement)conn1.prepareCall("{call insertvalues(?,?)}");
System.out.println("Preparing Call took: " + (System.currentTimeMillis() - l));

callStatement.setArray(1, pliArray);
callStatement.setArray(2, plsArray);

l1 = System.currentTimeMillis();
callStatement.execute();
System.out.println("Executing PL/SQL took: " + (System.currentTimeMillis() - l1));

The PL/SQL code:

CREATE OR REPLACE TYPE TEST.intArray IS VARRAY(1000000) OF NUMBER;
/
show errors;
CREATE OR REPLACE TYPE TEST.varArray IS VARRAY(1000000) OF VARCHAR2(18);
/
show errors;

create or replace procedure insertvalues (nValue1 IN intArray, sValue2 IN varArray) is
begin
  forall nvalue in nValue1.first..nValue1.last
    insert into test values (nValue1(nvalue), sValue2(nvalue));
end;
/
show errors;

Well, the result surprised me a little bit: Java was much faster. Just look at the trace (trimmed!):

INSERT INTO TEST
VALUES
(:1,:2)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute     60      1.38       2.08          0       7646      34579     1000000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       61      1.38       2.08          0       7646      34579     1000000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 1159

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      60        0.00          0.00
  SQL*Net message from client                    60        0.09          1.64
  SQL*Net more data from client               10726        0.00          0.72
********************************************************************************

BEGIN insertvalues(:1,:2); END;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1      7.77       9.01          0       9277      34886           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      7.78       9.01          0       9277      34886           1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 1159

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net more data from client               10399        0.00          1.17
  SQL*Net message to client                       1        0.00          0.00
  log file sync                                   1        0.21          0.21
  SQL*Net message from client                     1        0.00          0.00
********************************************************************************

As you can see, the execution count of the procedure is just 1, while from JDBC it is 60 (because of the bug with the ~16,000 rows). The elapsed time with the prepared statement was just 2.08 seconds, but with FORALL it was 9.01 seconds, and the CPU time is of course also lower than in PL/SQL. To be fair: those 9.01 seconds also include the call of the procedure plus loading the collections. As it turned out, just creating the collections in Java took around 2 seconds!

In summary: you can see that a prepared statement in batch mode is much faster than FORALL. But to be fair: FORALL was developed to provide some kind of bulking in PL/SQL within cursors, to decrease the calls between the SQL and PL/SQL engines. But if you are using Java and JDBC anyway, work with prepared statements and with batch mode if possible. And don't worry about the SQL*Net wait events; these are idle events, which means that your application isn't waiting on them!

BTW: That JDBC bug is fixed in 11g and 10.2.0.3!