Loading data fast – regular insert vs. bulk insert

In my last post I talked about how persisting data can become the bottleneck on large, high-scale systems nowadays. I also talked about how more and more people tend to think that databases are simply slow, seeing them as just big I/O systems, and about how lots of applications still insert data the way they did years ago rather than using bulk inserts.

In this post I will show you how bulk inserts can actually boost your inserts and therefore your systems. I will use a simple example showing the difference between:

  • Single row inserts with a commit after each row
  • Single row inserts with only one final commit
  • A bulk insert with one final commit

Let’s assume you have a Java program that needs to load some data from a file into a single table. Each line in the file represents a row in the database. I won’t go into how to read from the file and assemble the data; that is out of scope for this post and not relevant for showing the benefit of bulk inserts over regular ones. First, let’s build a simple two-column table with an “id” column as NUMBER and a “text” column as VARCHAR2:

CREATE TABLE TESTLOADTABLE (id NUMBER, text VARCHAR2(255));

table TESTLOADTABLE created.

For each test I truncate the table first, just to make sure that I always load the same amount of data into the same empty table. I also add a comment to each statement so that I can tell them apart later on in the trace file.

The first example loads 10,000 rows into the table. It simply inserts an incrementing counter and a string into the table, following each insert with a commit.

conn.prepareStatement("TRUNCATE TABLE testloadtable").execute();
conn.prepareStatement("ALTER SESSION SET SQL_TRACE=TRUE").execute();

PreparedStatement stmt = conn.prepareStatement("INSERT /* conventional insert with commit */ INTO testloadtable (id, text) VALUES (?,?)");

for (int i=0;i<10000;i++)
{
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.execute();
  conn.commit();   // commit every single row
}

conn.prepareStatement("ALTER SESSION SET SQL_TRACE=FALSE").execute();

Looking at the trace file, it took the program 2.21 seconds to load these 10,000 rows. You can also see that the statement was actually executed 10,000 times. The commits unfortunately don't show up in the formatted trace file, but they would be listed in the raw trace file.
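The formatted output shown here comes from running the raw trace file through the tkprof utility; the invocation looks roughly like this (the file names are just placeholders):

tkprof mydb_ora_12345.trc trace_report.txt sys=no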

 SQL ID: 337xy5qc84nsq Plan Hash: 0

INSERT /* conventional insert with commit */ INTO testloadtable (id, text)
 VALUES
 (:1 ,:2 )

call     count       cpu    elapsed       disk      query    current        rows
 ------- ------  -------- ---------- ---------- ---------- ----------  ----------
 Parse        1      0.00       0.00          0          0          0           0
 Execute  10000      2.06       2.21          2         90      20527       10000
 Fetch        0      0.00       0.00          0          0          0           0
 ------- ------  -------- ---------- ---------- ---------- ----------  ----------
 total    10001      2.06       2.21          2         90      20527       10000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 111
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
 ---------- ---------- ----------  ---------------------------------------------------
 0          0          0  LOAD TABLE CONVENTIONAL  (cr=5 pr=2 pw=0 time=1637 us)

The next test moves the commit outside of the loop. I still add the data row by row, but the commit itself happens only once, after all rows were loaded:

conn.prepareStatement("TRUNCATE TABLE testloadtable").execute();
conn.prepareStatement("ALTER SESSION SET SQL_TRACE=TRUE").execute();

PreparedStatement stmt = conn.prepareStatement("INSERT /* conventional insert */ INTO testloadtable (id, text) VALUES (?,?)");

for (int i=0;i<10000;i++)
{
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.execute();
}
conn.commit();   // one single commit after all rows are inserted

conn.prepareStatement("ALTER SESSION SET SQL_TRACE=FALSE").execute();

As the results show, the statement was still executed 10,000 times. However, this time it took only 1.19 seconds to insert all the data. Just by moving the commit to the end, after all inserts were done, the elapsed time dropped from 2.21 to 1.19 seconds, cutting it almost in half! Although commits seem lightweight, the database still has work to do to make sure that your transaction is saved and visible. And of course, instead of only one transaction, the first test produced 10,000 of them.

 SQL ID: drsv4dw4037zj Plan Hash: 0

INSERT /* conventional insert */ INTO testloadtable (id, text)
 VALUES
 (:1 ,:2 )

call     count       cpu    elapsed       disk      query    current        rows
 ------- ------  -------- ---------- ---------- ---------- ----------  ----------
 Parse        1      0.00       0.00          0          0          0           0
 Execute  10000      1.09       1.19          2        114      10562       10000
 Fetch        0      0.00       0.00          0          0          0           0
 ------- ------  -------- ---------- ---------- ---------- ----------  ----------
 total    10001      1.09       1.19          2        114      10562       10000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 111
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
 ---------- ---------- ----------  ---------------------------------------------------
 0          0          0  LOAD TABLE CONVENTIONAL  (cr=5 pr=2 pw=0 time=1507 us)

The Oracle JDBC driver also supports bulk inserts: you add all your data to a “batch” and then execute the entire batch. The advantage is that only one INSERT statement is executed, which inserts all your data into the table at once. So instead of 10,000 round-trips, I only have one, sending all the data over in one big chunk:

conn.prepareStatement("TRUNCATE TABLE testloadtable").execute();
conn.prepareStatement("ALTER SESSION SET SQL_TRACE=TRUE").execute();

PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch insert */ INTO testloadtable (id, text) VALUES (?,?)");

for (int i=0;i<10000;i++)
{
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();     // queue the row on the client side
}
stmt.executeBatch();   // one execution sends all 10,000 rows
conn.commit();

conn.prepareStatement("ALTER SESSION SET SQL_TRACE=FALSE").execute();

Now these results are pretty amazing! There was only one execution of the insert statement, and it loaded the entire batch in just 0.06 seconds!

 SQL ID: gfkg1d43va20y Plan Hash: 0

INSERT /* addBatch insert */ INTO testloadtable (id, text)
 VALUES
 (:1 ,:2 )

call     count       cpu    elapsed       disk      query    current        rows
 ------- ------  -------- ---------- ---------- ---------- ----------  ----------
 Parse        1      0.00       0.00          0          0          0           0
 Execute      1      0.05       0.06          0        129        394       10000
 Fetch        0      0.00       0.00          0          0          0           0
 ------- ------  -------- ---------- ---------- ---------- ----------  ----------
 total        2      0.05       0.06          0        129        394       10000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 111
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
 ---------- ---------- ----------  ---------------------------------------------------
 0          0          0  LOAD TABLE CONVENTIONAL  (cr=139 pr=0 pw=0 time=51971 us)
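One practical note on very large loads: all batched rows are kept in client memory until executeBatch() is called, so it is common to flush the batch in chunks. A minimal sketch of that pattern, based on the example above (the chunk size of 1,000 is an arbitrary choice, not something I measured here):

PreparedStatement stmt = conn.prepareStatement("INSERT INTO testloadtable (id, text) VALUES (?,?)");

int batchSize = 1000;   // arbitrary chunk size
for (int i=0;i<10000;i++)
{
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
  if ((i+1) % batchSize == 0)
    stmt.executeBatch();   // send the current chunk to the database
}
stmt.executeBatch();       // flush the remaining rows
conn.commit();             // still only one final commit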

As you can see, bulk inserting can give your system a huge performance boost! In this example we are no longer talking about an improvement of a few percent, but about factors: 0.06 seconds versus the original 2.21 seconds is roughly 37 times faster. Imagine what this could gain you when you have even more data!

This post shows that persisting data in a database is not simply slow per se; how you actually insert that data matters a lot!

Easy way to write SYS.XMLTYPE into a file using Java

It has been a while since my last post and I’m very sorry for that, but I’m currently very busy both at work and in my private life.

Yesterday I struggled a little bit with generating XML from a SELECT and writing it into a plain text file. I needed a fast way to put the XML into a file for a temporary workaround, so I didn’t really care about doing it in a clean way. The first part (generating the XML from a SELECT) was pretty easy, but I wasted far too much time on the second part of this simple task. Unfortunately Google wasn’t really helpful either, so I decided to put the few lines of code into my blog so that they’re documented for everyone else out there.

First part – Generate the XML using SQL/XML:

PreparedStatement stmt1 = Conn1.prepareStatement("SELECT XMLELEMENT(\"TestSuite\"," +
  "(SELECT XMLAGG(XMLELEMENT(\"TestCase\", XMLELEMENT(\"RuntimeInstanceId\", instance_id)))" +
  " FROM MYTESTS WHERE test_id = ?)).getclobval() as Result FROM dual");
stmt1.setString(1, this.sTestId);
ResultSet result = stmt1.executeQuery();

The important thing here is the .getclobval() after the column in the SELECT clause, but let’s first look at the second part.

Second part – Writing the XML to a plain text file:

result.next();
Writer output = new BufferedWriter(new FileWriter(new File(this.sFile)));
output.write(result.getString(1));
output.close();

That part also looks pretty easy, because you can just use result.getString(1) here. But this only works because we called getclobval() in the SELECT. getclobval() is a method of the XMLTYPE result object and casts the result from XMLTYPE to CLOB. Without it you will be in for a surprise, because result.getSQLXML(1) will simply not work. So if you need a fast way to write XML to a plain text file, just use the getclobval() method of the XMLTYPE object.
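Putting both parts together, a self-contained version could look like the following sketch (the MYTESTS table and the query are taken from above; the class and method names are mine, and try-with-resources requires Java 7 or later):

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.Writer;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class XmlToFile
{
  // Writes the XML for one test id into a plain text file.
  public static void writeXml(Connection conn, String testId, String fileName) throws Exception
  {
    String sql = "SELECT XMLELEMENT(\"TestSuite\"," +
        "(SELECT XMLAGG(XMLELEMENT(\"TestCase\", XMLELEMENT(\"RuntimeInstanceId\", instance_id)))" +
        " FROM MYTESTS WHERE test_id = ?)).getclobval() as Result FROM dual";
    try (PreparedStatement stmt = conn.prepareStatement(sql))
    {
      stmt.setString(1, testId);
      try (ResultSet result = stmt.executeQuery();
           Writer output = new BufferedWriter(new FileWriter(fileName)))
      {
        if (result.next())
        {
          // getString() works only because getclobval() cast the XMLTYPE to a CLOB
          output.write(result.getString(1));
        }
      }
    }
  }
}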

Prepared statement in batch mode vs. FORALL in PL/SQL

A few days ago I had a nice discussion with one of our developers here in the office. The subject: what’s faster, a prepared statement in batch mode or a call to a PL/SQL procedure that uses FORALL?

So I made a simple test: insert 1 million rows with two columns (a number and a string column) into a table. The first part builds the values in Java, adds them to the batch, and executes the insert in batch mode. (This JDBC feature stores the values in memory until you execute the batch; JDBC then executes the DML and binds the whole array at once, very similar to FORALL.) The second part also builds the values in Java, creates two collections, and calls a procedure with those collections. The procedure itself just performs the insert with a FORALL.

So the first part of the java code looks like this:

long l = System.currentTimeMillis();
PreparedStatement pstmt = conn1.prepareStatement("INSERT INTO TEST VALUES(?,?)");
System.out.println("PrepareStatement took: " + (System.currentTimeMillis() - l));
long l1 = System.currentTimeMillis();
for (int i=0;i<1000000;++i)
{
  pstmt.setInt(1, i);
  pstmt.setString(2, "testvalue" + i);
  pstmt.addBatch();
  if (i%16961 == 0)
    pstmt.executeBatch();   // work around the addBatch limit explained below
}

pstmt.executeBatch();
System.out.println("ExecuteBatch took: " + (System.currentTimeMillis() - l1));

Don’t worry about the if with the modulo function. It turned out that the JDBC driver had a bug with addBatch: it could only execute around 16,000 rows at once, and if you added more rows it simply ignored them! So I had to call executeBatch more often. We will see whether this slows down the test.

The second part of the java code looks like this:

l = System.currentTimeMillis();

// ArrayDescriptor and ARRAY are Oracle JDBC extensions from the oracle.sql package
ArrayDescriptor integer = ArrayDescriptor.createDescriptor("INTARRAY", conn1);
ArrayDescriptor varchar = ArrayDescriptor.createDescriptor("VARARRAY", conn1);

ARRAY pliArray = new ARRAY(integer, conn1, iArray);
ARRAY plsArray = new ARRAY(varchar, conn1, sString);
System.out.println("Creating the Array took: " + (System.currentTimeMillis() - l));

l = System.currentTimeMillis();
CallableStatement callStatement = (CallableStatement)conn1.prepareCall("{call insertvalues(?,?)}");
System.out.println("Preparing Call took: " + (System.currentTimeMillis() - l));

callStatement.setArray(1, pliArray);
callStatement.setArray(2, plsArray);

l1 = System.currentTimeMillis();
callStatement.execute();
System.out.println("Executing PL/SQL took: " + (System.currentTimeMillis() - l1));
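For completeness: the snippet above uses iArray and sString without showing how they are filled. A minimal sketch of what I mean (the values mirror the first part of the test):

int[] iArray = new int[1000000];
String[] sString = new String[1000000];
for (int i=0;i<1000000;i++)
{
  iArray[i] = i;
  sString[i] = "testvalue" + i;
}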

The PL/SQL code:

CREATE OR REPLACE TYPE TEST.intArray IS VARRAY(1000000) OF NUMBER;
/
show errors;
CREATE OR REPLACE TYPE TEST.varArray IS VARRAY(1000000) OF VARCHAR2(18);
/
show errors;

create or replace procedure insertvalues (nValue1 IN intArray, sValue2 IN varArray) is
begin
  forall nvalue in nValue1.first..nValue1.last
    insert into test values (nValue1(nvalue), sValue2(nvalue));
end;
/
show errors;

Well, the result surprised me a little bit: Java was much faster! Look at the trace alone (trimmed):

INSERT INTO TEST
VALUES
(:1,:2)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute     60      1.38       2.08          0       7646      34579     1000000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       61      1.38       2.08          0       7646      34579     1000000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 1159

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       60        0.00          0.00
  SQL*Net message from client                     60        0.09          1.64
  SQL*Net more data from client                10726        0.00          0.72
********************************************************************************

BEGIN insertvalues(:1,:2); END;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1      7.77       9.01          0       9277      34886           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      7.78       9.01          0       9277      34886           1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 1159

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net more data from client                10399        0.00          1.17
  SQL*Net message to client                        1        0.00          0.00
  log file sync                                    1        0.21          0.21
  SQL*Net message from client                      1        0.00          0.00
********************************************************************************

As you can see, the execute count of the procedure is just 1, while from JDBC it is 60 (because of the bug with the ~16,000 rows). The prepared statement insert took just 2.08 seconds elapsed, while the FORALL took 9.01 seconds, and the CPU time is of course also lower on the JDBC side. To be fair: those 9.01 seconds also include the call of the procedure plus the load of the collections, and as it turned out, just creating the collections in Java took around 2 seconds!

In summary: you can see that the prepared statement is much faster than FORALL. To be fair, though: FORALL was developed to provide a kind of bulking inside PL/SQL cursor processing, in order to reduce the context switches between the SQL and PL/SQL engines. But if you are using Java and JDBC anyway, work with prepared statements and batch mode wherever possible. And don’t worry about the SQL*Net wait events: these are idle events, which means your application isn’t actually waiting on them!

BTW: That JDBC bug is fixed in 11g and 10.2.0.3!

The beauty of Java stored procedures

Since 8i you can also write stored procedures in Java. But what is the advantage of writing a procedure in Java? Well, the first advantage is that you’re now able to write complex solutions and use them from PL/SQL: your own package for file I/O, email reading/sending, OS watching, an API for ZIP files, and many more. You have all the possibilities of Java available in PL/SQL! Another advantage: the code runs in the Oracle JVM, which means it has access to the huge SGA and can perform DB operations with better performance than it could from outside. And it’s easy to write. As an example, here is a little Java stored procedure which shows you the properties of the Oracle JVM.

The first step is to write the Java source, which starts like this:

create or replace and compile java source named "JavaProps" as

And now continue with normal Java code:

public class props
{
  public static void GetProps()
  {
    // list all JVM system properties on System.out
    System.getProperties().list(System.out);
  }
}
/

This creates the Java source inside the database. Now we create a stored procedure on top of it so that we can use it from PL/SQL:

create or replace procedure JavaProps
as language java name 'props.GetProps()';
/

And that’s it! Now run the procedure in an anonymous PL/SQL block:

set serveroutput on size 10000;
begin
  dbms_java.set_output(10000);   -- redirect the Java System.out into the DBMS_OUTPUT buffer
  JavaProps;
end;
/

So you see, writing Java stored procedures isn’t difficult, and it allows you to do powerful operations from within PL/SQL!
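And because it is a regular stored procedure from the database’s point of view, you can of course also call it from JDBC; a minimal sketch, assuming an open connection conn:

// Call the Java stored procedure like any other PL/SQL procedure.
CallableStatement call = conn.prepareCall("{call JavaProps}");
call.execute();
call.close();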