11g and the new SecureFiles LOB data type (a simple quick test)

As I had to run some comparison tests between Oracle 10g and 11g, I also did a little test with the new and fast (as Oracle promises) SecureFiles LOB data type. The test is really small as there wasn't any time to test the feature out thoroughly, but even this result is impressive. Of course, all tests ran on the same box with the same settings.

First the 10g test:

The first step is to create a table with a CLOB data type:

SQL> create table test1 (created_tms date, data clob) tablespace data1 lob (data) store as (cache);
Table created.
SQL>

I used CACHE to get as many bytes as possible into the buffer cache; if you need performance on LOBs, enabling caching is the first thing anyone would do. As the next step I wrote a little PL/SQL program which inserts 100,000 rows of 4,402 bytes each (that's about 440 MB all together!):

SQL> set serveroutput on;
SQL> declare
2    thedata CLOB := TO_CLOB (
3  'somedata somedata somedata somedata somedata somedata somedata somedata somedata somedata
4  somedata somedata somedata somedata somedata somedata somedata somedata somedata somedata
……
49  somedata somedata somedata somedata somedata somedata somedata somedata somedata somedata
50  somedata somedata somedata somedata somedata somedata somedata somedata somedata somedata ');
51  begin
52    dbms_output.put_line('Begin: ' || systimestamp);
53    for n in 1..100000 loop
54      insert into test1 values (sysdate,thedata);
55    end loop;
56    dbms_output.put_line('End: ' || systimestamp);
57  end;
58  /
Begin: 11-JAN-08 04.30.50.203325000 PM +01:00
End: 11-JAN-08 04.32.50.422759000 PM +01:00

PL/SQL procedure successfully completed.
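
By the way, if you want to double-check how much space the inserted LOB data actually takes, USER_SEGMENTS is a quick way to do it. This is just a hedged side check; the LOB segment and index names are system-generated and will differ on your system:

SQL> select segment_name, segment_type, round(bytes/1024/1024) mb
  2  from user_segments
  3  where segment_type in ('LOBSEGMENT', 'LOBINDEX');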

Checking the time stamps you can see that on 10g the inserts took almost exactly 2 minutes (this was the last of several runs). You can also see that I inserted the current SYSDATE into a column as well, so we can compute the average number of inserts per second (100,000 rows in ~120 seconds ≈ 833 inserts/second):

SQL> select count(*)/((max(created_tms)-min(created_tms))*24*60*60) from test1;

COUNT(*)/((MAX(CREATED_TMS)-MIN(CREATED_TMS))*24*60*60)
--------------------------------------------------------
833.333333

The last 4 runs yielded these averages (inserts/second):

840.336134
826.446281
800
833.333333

So now let's move on to 11g. First step: we need a table with the new SecureFiles LOB type:

SQL>
SQL> create table test1 (created_tms date, data clob) tablespace data1 lob (data) store as securefile (cache);

Table created.
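
Just to verify that the LOB column really was created as a SecureFile, you can query USER_LOBS (a quick sanity check; the SECUREFILE column of that view is new in 11g):

SQL> select column_name, securefile, cache from user_lobs where table_name = 'TEST1';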

Next, I executed exactly the same procedure:

SQL> set serveroutput on;
SQL> declare
2    thedata CLOB := TO_CLOB (
3  'somedata somedata somedata somedata somedata somedata somedata somedata somedata somedata
……
49  somedata somedata somedata somedata somedata somedata somedata somedata somedata somedata
50  somedata somedata somedata somedata somedata somedata somedata somedata somedata somedata ');
51  begin
52    dbms_output.put_line('Begin: ' || systimestamp);
53    for n in 1..100000 loop
54      insert into test1 values (sysdate,thedata);
55    end loop;
56    dbms_output.put_line('End: ' || systimestamp);
57  end;
58  /
Begin: 11-JAN-08 04.19.06.534822000 PM +01:00
End: 11-JAN-08 04.20.01.703784000 PM +01:00

PL/SQL procedure successfully completed.

Looking at the time stamps, the run took less than a minute! Remember, 10g needed around 2 minutes. So let's look at the average inserts per second:

SQL> select count(*)/((max(created_tms)-min(created_tms))*24*60*60) from test1;

COUNT(*)/((MAX(CREATED_TMS)-MIN(CREATED_TMS))*24*60*60)
--------------------------------------------------------
1818.18182

We can see that the program averaged 1,818 inserts per second. Compared to 833 on 10g, that's more than 100% faster! Not bad without any fine-tuning, I would say!

The last 4 runs yielded:

1960.78431
1818.18182
1754.38596
1818.18182

I hope I'll have some time left to provide a proper test case, but even this quick one showed that the SecureFiles data type really is sped up compared to conventional LOBs, and that's exactly what Oracle promised! 😉

When something doesn’t work…

Do you know the feeling? You want to test something and it just won't work. You fix this, you fix that, but it seems to never end. You can go crazy over this. I've now spent nearly the whole afternoon on such a problem. But now there is light at the end of the tunnel! 🙂 I'm sure you know such situations too; they happen everywhere!

Prepared statement in batch mode vs. FORALL in PL/SQL

A few days ago I had a nice discussion with one of our developers here in the office. The subject: what's faster? Prepared statements in batch mode, or calling a PL/SQL procedure which uses FORALL?

So I made a simple test: insert 1 million rows with two columns (a number and a string column) into a table. The first part builds the values in Java, adds them to the batch array, and executes the insert in batch mode. (This JDBC feature keeps the values in memory until you execute the batch; JDBC then executes the DML and binds the whole array at once, very similar to FORALL.) The second part also builds the values in Java, creates two collections, and calls a procedure with those collections. The procedure itself just performs a FORALL over the insert.

So the first part of the java code looks like this:

long l = System.currentTimeMillis();
PreparedStatement pstmt = conn1.prepareStatement("INSERT INTO TEST VALUES(?,?)");
System.out.println("PrepareStatement took: " + (System.currentTimeMillis() - l));
long l1 = System.currentTimeMillis();
for (int i = 0; i < 1000000; ++i)
{
    pstmt.setInt(1, i);
    pstmt.setString(2, "testvalue" + i);
    pstmt.addBatch();
    if (i % 16961 == 0)
        pstmt.executeBatch();
}

pstmt.executeBatch();
System.out.println("ExecuteBatch took: " + (System.currentTimeMillis() - l1));

Don't worry about the if with the modulo operation. It turned out that the JDBC driver has a bug with addBatch: it can only execute around 16,000 rows at once, and if you add more rows it simply ignores them! So I had to call executeBatch more often. We will see whether this slows down the test.

The second part of the java code looks like this:

l = System.currentTimeMillis();

ArrayDescriptor integer = ArrayDescriptor.createDescriptor("INTARRAY", conn1);
ArrayDescriptor varchar = ArrayDescriptor.createDescriptor("VARARRAY", conn1);

ARRAY pliArray = new ARRAY(integer, conn1, iArray);
ARRAY plsArray = new ARRAY(varchar, conn1, sString);
System.out.println("Creating the Array took: " + (System.currentTimeMillis() - l));

l = System.currentTimeMillis();
CallableStatement callStatement = (CallableStatement)conn1.prepareCall("{call insertvalues(?,?)}");
System.out.println("Preparing Call took: " + (System.currentTimeMillis() - l));

callStatement.setArray(1, pliArray);
callStatement.setArray(2, plsArray);

l1 = System.currentTimeMillis();
callStatement.execute();
System.out.println("Executing PL/SQL took: " + (System.currentTimeMillis() - l1));

The PL/SQL code:

CREATE OR REPLACE TYPE TEST.intArray IS VARRAY(1000000) OF NUMBER;
/
show errors;
CREATE OR REPLACE TYPE TEST.varArray IS VARRAY(1000000) OF VARCHAR2(18);
/
show errors;

create or replace procedure insertvalues (nValue1 IN intArray, sValue2 IN varArray) is
begin
  forall nvalue in nValue1.first..nValue1.last
    insert into test values (nValue1(nvalue), sValue2(nvalue));
end;
/
show errors;
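
As a side note, the procedure can also be smoke-tested directly from SQL*Plus with small literal collections via the VARRAY constructors (a minimal hypothetical example, not part of the measured test):

SQL> begin
  2    insertvalues(intArray(1, 2, 3), varArray('one', 'two', 'three'));
  3    commit;
  4  end;
  5  /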

Well, the result surprised me a little bit: Java was much faster. Look at the trace (trimmed!):

INSERT INTO TEST
VALUES
(:1,:2)

call     count    cpu  elapsed  disk  query  current     rows
-------  -----  -----  -------  ----  -----  -------  -------
Parse        1   0.00     0.00     0      0        0        0
Execute     60   1.38     2.08     0   7646    34579  1000000
Fetch        0   0.00     0.00     0      0        0        0
-------  -----  -----  -------  ----  -----  -------  -------
total       61   1.38     2.08     0   7646    34579  1000000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 1159

Elapsed times include waiting on following events:
Event waited on                          Times Waited  Max. Wait  Total Waited
---------------------------------------  ------------  ---------  ------------
SQL*Net message to client                          60       0.00          0.00
SQL*Net message from client                        60       0.09          1.64
SQL*Net more data from client                   10726       0.00          0.72
********************************************************************************

BEGIN insertvalues(:1,:2); END;

call     count    cpu  elapsed  disk  query  current     rows
-------  -----  -----  -------  ----  -----  -------  -------
Parse        1   0.01     0.00     0      0        0        0
Execute      1   7.77     9.01     0   9277    34886        1
Fetch        0   0.00     0.00     0      0        0        0
-------  -----  -----  -------  ----  -----  -------  -------
total        2   7.78     9.01     0   9277    34886        1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 1159

Elapsed times include waiting on following events:
Event waited on                          Times Waited  Max. Wait  Total Waited
---------------------------------------  ------------  ---------  ------------
SQL*Net more data from client                   10399       0.00          1.17
SQL*Net message to client                           1       0.00          0.00
log file sync                                       1       0.21          0.21
SQL*Net message from client                         1       0.00          0.00
********************************************************************************
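
For reference, a trace like the one above can be produced with extended SQL trace and formatted with tkprof. This is just a sketch of the approach; the trace file name below is a placeholder for whatever appears in your user_dump_dest:

SQL> alter session set timed_statistics = true;
SQL> alter session set events '10046 trace name context forever, level 8';
-- run the test in this session, then format the resulting trace file:
$ tkprof orcl_ora_12345.trc trace_report.txt sys=no
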
As you can see, the execute count for the procedure is just 1, while from JDBC it is 60 (because of the bug with the 16,000 rows). The elapsed time with the prepared statement was just 2.08 seconds, but with FORALL it took 9.01 seconds. And the CPU time is, of course, also lower than in PL/SQL. To be fair: those 9.01 seconds also include the call of the procedure plus loading the collections. As it turned out, just creating the collections in Java took around 2 seconds!

In summary: you can see that the prepared statement is much faster than FORALL. But to be fair: FORALL was developed to provide bulking within PL/SQL cursors, to reduce the round trips between the SQL and PL/SQL engines. But if you are using Java and JDBC anyway, work with prepared statements and batch mode where possible. And don't mind the SQL*Net wait events; these are idle events, which means your application isn't actively waiting on them!

BTW: That JDBC bug is fixed in 11g and 10.2.0.3!

BULK COLLECT & FORALL vs. CURSOR & FOR LOOP

After reading more and more about BULK COLLECT and FORALL and their performance improvements, I decided to take a closer look myself to see how powerful they really are. So I built a little test case which inserts all entries from the ALL_OBJECTS view into another table. The inserts happen in three different ways:
The first way is a simple cursor over the view and an insert in a loop, with a FETCH into local variables. This way also shows how slow the opening of the cursor itself is.
The second way is a simple FOR-IN loop with an insert of the cursor variables.
And, of course, the third way bulk-collects the rows and inserts them with FORALL. So let's see.

So the target table looks like this (three columns are enough for these tests):

SQL> create table temp (owner varchar2(30), name varchar2(30), type varchar2(19));

Table created.

And the three different procedures look like this:

CREATE OR REPLACE PROCEDURE CURSOR_FOR_OPEN_QUERY
IS
  l_sOwner VARCHAR2(30);
  l_sName  VARCHAR2(30);
  l_sType  VARCHAR2(19);
  CURSOR cur IS SELECT owner, object_name name, object_type type FROM all_objects;
BEGIN
  dbms_output.put_line('Before CURSOR OPEN: ' || systimestamp);
  OPEN cur;
  dbms_output.put_line('Before LOOP: ' || systimestamp);
  LOOP
    FETCH cur INTO l_sOwner, l_sName, l_sType;
    IF cur%NOTFOUND THEN
      EXIT;
    END IF;
    INSERT INTO temp VALUES (l_sOwner, l_sName, l_sType);
  END LOOP;
  CLOSE cur;
  dbms_output.put_line('After CURSOR CLOSE: ' || systimestamp);
  COMMIT;
END;
/

CREATE OR REPLACE PROCEDURE CURSOR_FOR_QUERY
IS
BEGIN
  dbms_output.put_line('Before CURSOR: ' || systimestamp);
  FOR cur IN (SELECT owner, object_name name, object_type type FROM all_objects) LOOP
    INSERT INTO temp VALUES (cur.owner, cur.name, cur.type);
  END LOOP;
  dbms_output.put_line('After CURSOR: ' || systimestamp);
  COMMIT;
END;
/

CREATE OR REPLACE PROCEDURE BULK_COLLECT_QUERY
IS
  TYPE sOwner IS TABLE OF VARCHAR2(30);
  TYPE sName  IS TABLE OF VARCHAR2(30);
  TYPE sType  IS TABLE OF VARCHAR2(19);
  l_sOwner sOwner;
  l_sName  sName;
  l_sType  sType;
BEGIN
  dbms_output.put_line('Before Bulk Collect: ' || systimestamp);
  SELECT owner, object_name, object_type
    BULK COLLECT INTO l_sOwner, l_sName, l_sType
    FROM all_objects;
  dbms_output.put_line('After Bulk Collect: ' || systimestamp);
  --
  FORALL indx IN l_sName.FIRST..l_sName.LAST
    INSERT INTO temp VALUES (l_sOwner(indx), l_sName(indx), l_sType(indx));
  --
  dbms_output.put_line('After FORALL: ' || systimestamp);
  COMMIT;
END;
/

OK, then I bounced the database so that no buffers, caches, etc. were populated.

So, the first execution:

SQL> exec cursor_for_open_query
Before CURSOR OPEN: 27-SEP-07 10.56.30.699401000 AM +02:00
Before LOOP: 27-SEP-07 10.56.30.922366000 AM +02:00
After CURSOR CLOSE: 27-SEP-07 10.57.07.699791000 AM +02:00

Looking just at the seconds, it took 37 seconds, while opening the cursor took nearly no time at all! But how many rows were inserted?

SQL> select count(*) from temp;

COUNT(*)
----------
49424

Truncate the table (truncate to free the extents!), bounce the database again, and now the second run:

SQL> exec cursor_for_query
Before CURSOR: 27-SEP-07 10.59.47.848249000 AM +02:00
After CURSOR: 27-SEP-07 11.00.09.072525000 AM +02:00

The whole loop took 22 seconds; that already looks better. And again, were all rows inserted?

SQL> select count(*) from temp;

COUNT(*)
----------
49424

But now (after truncating and bouncing) the BULK COLLECT run:

SQL> exec bulk_collect_query
Before Bulk Collect: 27-SEP-07 11.01.33.553224000 AM +02:00
After Bulk Collect: 27-SEP-07 11.01.41.874054000 AM +02:00
After FORALL: 27-SEP-07 11.01.42.065753000 AM +02:00

Look at this: bulk-collecting all the rows into the collections took just 8 seconds (for 49,424 rows), and the inserts took less than a second! Unbelievable; all together we did everything in 9 seconds, where the other approaches took over 20 seconds!

Well, now let's execute the bulk load first, then truncate the table again, but without bouncing the database, so that the buffers and caches are still filled:

SQL> exec bulk_collect_query
Before Bulk Collect: 27-SEP-07 11.02.31.257498000 AM +02:00
After Bulk Collect: 27-SEP-07 11.02.41.614205000 AM +02:00
After FORALL: 27-SEP-07 11.02.41.818092000 AM +02:00

PL/SQL procedure successfully completed.

SQL> select count(*) from temp;

COUNT(*)
----------
49423

SQL> truncate table temp;

Table truncated.

SQL> exec cursor_for_query
Before CURSOR: 27-SEP-07 11.04.04.960254000 AM +02:00
After CURSOR: 27-SEP-07 11.04.25.749038000 AM +02:00

OK, so now we need 10 seconds for the bulk run, but we still need 21 seconds for the cursor! So no real improvement from the caches. OK, a final test on a big system with over 268 thousand rows:

Before Bulk Collect: 27-SEP-07 11.24.17.034732000 AM +02:00
After Bulk Collect: 27-SEP-07 11.24.25.111020000 AM +02:00
After FORALL: 27-SEP-07 11.24.26.129826000 AM +02:00
PL/SQL procedure successfully completed.

COUNT(*)
----------
267985

Table truncated.

Before CURSOR: 27-SEP-07 11.24.29.629354000 AM +02:00
After CURSOR: 27-SEP-07 11.25.02.244549000 AM +02:00
PL/SQL procedure successfully completed.

COUNT(*)
----------
268056

And again, bulking took 8 seconds and the inserts just 1 second, while the run with the cursor took 33 seconds!

So this was just a short test, but it definitely shows that BULK COLLECT and FORALL are much faster than a cursor within a FOR loop! The only disadvantage of FORALL, as you may already have guessed from the code: you can only perform a single DML statement; there is no "FORALL ... END" clause! Also keep in mind that an unbounded BULK COLLECT loads the entire result set into memory at once; a variation with the LIMIT clause is sketched below. But anyway, bulking is a very high-performance feature of Oracle! So if you have to run through data collections, use BULK COLLECT!
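
For completeness, here is the LIMIT variation mentioned above. It is only a sketch derived from BULK_COLLECT_QUERY, under the assumption that you want to bound memory usage; the chunk size of 10,000 is an arbitrary choice:

CREATE OR REPLACE PROCEDURE BULK_COLLECT_LIMIT_QUERY
IS
  TYPE sOwner IS TABLE OF VARCHAR2(30);
  TYPE sName  IS TABLE OF VARCHAR2(30);
  TYPE sType  IS TABLE OF VARCHAR2(19);
  l_sOwner sOwner;
  l_sName  sName;
  l_sType  sType;
  CURSOR cur IS SELECT owner, object_name, object_type FROM all_objects;
BEGIN
  OPEN cur;
  LOOP
    -- fetch at most 10,000 rows per round into the collections
    FETCH cur BULK COLLECT INTO l_sOwner, l_sName, l_sType LIMIT 10000;
    EXIT WHEN l_sName.COUNT = 0;
    -- one FORALL insert per fetched chunk
    FORALL indx IN 1..l_sName.COUNT
      INSERT INTO temp VALUES (l_sOwner(indx), l_sName(indx), l_sType(indx));
  END LOOP;
  CLOSE cur;
  COMMIT;
END;
/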