Oracle has ported DTrace to Oracle Linux. DTrace is a very powerful performance analysis and troubleshooting tool that allows you to instrument all software. Its name is short for Dynamic Tracing. I’m not a DTrace expert, but some say it is so powerful that it allows you to reverse engineer any software…
Loading data fast – DML error logging performance
In the last post of my “Loading data fast” series I showed how DML error logging can be used to prevent batch loads from failing when one or more rows can’t be inserted. Now the question is how much of a performance impact errors have on this mechanism. What I want to prove is that a developer should not just blindly use DML error logging instead of thinking about whether the data he intends to insert is valid in the first place. So let’s have a look:
First I create the same simple table with a primary key on it that I used before:
CREATE TABLE TESTLOADTABLE (id NUMBER, text VARCHAR2(255));

table TESTLOADTABLE created.

CREATE UNIQUE INDEX TESTLOADTABLE_PK ON TESTLOADTABLE(id);

unique index TESTLOADTABLE_PK created.

ALTER TABLE TESTLOADTABLE ADD PRIMARY KEY (id) USING INDEX TESTLOADTABLE_PK;

table TESTLOADTABLE altered.
Then I create the error logging table:
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG('TESTLOADTABLE','ERR_TESTLOADTABLE');
END;

anonymous block completed
So far so good, now let’s execute following tests:
1) Load 10k rows with no errors
2) Load 10k rows with 100 (1%) rows failing
3) Load 10k rows with 1000 (10%) rows failing
4) Load 10k rows with 2500 (25%) rows failing
5) Load 10k rows with 5000 (50%) rows failing
6) Load 10k rows with 7500 (75%) rows failing
And just out of curiosity, let’s see how long a regular insert with no error logging takes:
7) Load 10k rows with no errors and no error logging
Test 1 – no errors:
long startOverall = System.currentTimeMillis();
PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch with logging errors */ INTO testloadtable (id, text) VALUES (?,?) LOG ERRORS INTO ERR_TESTLOADTABLE REJECT LIMIT UNLIMITED");
for (int i=0;i<10000;i++) {
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}
long startExecute = System.currentTimeMillis();
stmt.executeBatch();
long endExecute = System.currentTimeMillis();
conn.commit();
long endOverall = System.currentTimeMillis();

System.out.println("Executing batch took: " + (endExecute-startExecute));
System.out.println("Executing overall took: " + (endOverall-startOverall));
System.out.println("Preparation took: " + ((endOverall-startOverall)-(endExecute-startExecute)));

Executing batch took: 861
Executing overall took: 1020
Preparation took: 159
So overall it took 1020 milliseconds, of which the job spent 861 milliseconds in the execution of the insert.
Test 2 – 100 errors (1%):
PreparedStatement stmt0 = conn.prepareStatement("INSERT /* generate some rows before */ INTO testloadtable (id, text) VALUES(?,?)");
for (int i=0;i<10000;i++) {
  if (i%100 == 0) {
    stmt0.setInt(1, i);
    stmt0.setString(2, "test" + i);
    stmt0.addBatch();
  }
}
stmt0.executeBatch();
conn.commit();

long startOverall = System.currentTimeMillis();
PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch with logging errors */ INTO testloadtable (id, text) VALUES (?,?) LOG ERRORS INTO ERR_TESTLOADTABLE REJECT LIMIT UNLIMITED");
for (int i=0;i<10000;i++) {
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}
long startExecute = System.currentTimeMillis();
stmt.executeBatch();
long endExecute = System.currentTimeMillis();
conn.commit();
long endOverall = System.currentTimeMillis();

System.out.println("Executing batch took: " + (endExecute-startExecute));
System.out.println("Executing overall took: " + (endOverall-startOverall));
System.out.println("Preparation took: " + ((endOverall-startOverall)-(endExecute-startExecute)));

Executing batch took: 1017
Executing overall took: 1069
Preparation took: 52
This time it took quite a bit longer to execute the batch: 1017 milliseconds. The reason for this is obvious: Oracle now not only has to insert 10k rows but also has to reject rows and insert them into another table.
Test 3 – 1000 (10%) errors:
PreparedStatement stmt0 = conn.prepareStatement("INSERT /* generate some rows before */ INTO testloadtable (id, text) VALUES(?,?)");
for (int i=0;i<10000;i++) {
  if (i%10 == 0) {
    stmt0.setInt(1, i);
    stmt0.setString(2, "test" + i);
    stmt0.addBatch();
  }
}
stmt0.executeBatch();
conn.commit();

long startOverall = System.currentTimeMillis();
PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch with logging errors */ INTO testloadtable (id, text) VALUES (?,?) LOG ERRORS INTO ERR_TESTLOADTABLE REJECT LIMIT UNLIMITED");
for (int i=0;i<10000;i++) {
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}
long startExecute = System.currentTimeMillis();
stmt.executeBatch();
long endExecute = System.currentTimeMillis();
conn.commit();
long endOverall = System.currentTimeMillis();

System.out.println("Executing batch took: " + (endExecute-startExecute));
System.out.println("Executing overall took: " + (endOverall-startOverall));
System.out.println("Preparation took: " + ((endOverall-startOverall)-(endExecute-startExecute)));

Executing batch took: 1420
Executing overall took: 1470
Preparation took: 50
And as you can see, the more rows fail, the longer the insert takes. So let’s have a look at the rest of the tests and see how bad it gets:
Test 4 – 2500 (25%) errors:
PreparedStatement stmt0 = conn.prepareStatement("INSERT /* generate some rows before */ INTO testloadtable (id, text) VALUES(?,?)");
for (int i=0;i<10000;i++) {
  if (i%4 == 0) {
    stmt0.setInt(1, i);
    stmt0.setString(2, "test" + i);
    stmt0.addBatch();
  }
}
stmt0.executeBatch();
conn.commit();

long startOverall = System.currentTimeMillis();
PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch with logging errors */ INTO testloadtable (id, text) VALUES (?,?) LOG ERRORS INTO ERR_TESTLOADTABLE REJECT LIMIT UNLIMITED");
for (int i=0;i<10000;i++) {
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}
long startExecute = System.currentTimeMillis();
stmt.executeBatch();
long endExecute = System.currentTimeMillis();
conn.commit();
long endOverall = System.currentTimeMillis();

System.out.println("Executing batch took: " + (endExecute-startExecute));
System.out.println("Executing overall took: " + (endOverall-startOverall));
System.out.println("Preparation took: " + ((endOverall-startOverall)-(endExecute-startExecute)));

Executing batch took: 1877
Executing overall took: 1961
Preparation took: 84
Test 5 – 5000 (50%) errors:
PreparedStatement stmt0 = conn.prepareStatement("INSERT /* generate some rows before */ INTO testloadtable (id, text) VALUES(?,?)");
for (int i=0;i<10000;i++) {
  if (i%2 == 0) {
    stmt0.setInt(1, i);
    stmt0.setString(2, "test" + i);
    stmt0.addBatch();
  }
}
stmt0.executeBatch();
conn.commit();

long startOverall = System.currentTimeMillis();
PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch with logging errors */ INTO testloadtable (id, text) VALUES (?,?) LOG ERRORS INTO ERR_TESTLOADTABLE REJECT LIMIT UNLIMITED");
for (int i=0;i<10000;i++) {
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}
long startExecute = System.currentTimeMillis();
stmt.executeBatch();
long endExecute = System.currentTimeMillis();
conn.commit();
long endOverall = System.currentTimeMillis();

System.out.println("Executing batch took: " + (endExecute-startExecute));
System.out.println("Executing overall took: " + (endOverall-startOverall));
System.out.println("Preparation took: " + ((endOverall-startOverall)-(endExecute-startExecute)));

Executing batch took: 2680
Executing overall took: 2765
Preparation took: 85
Test 6 – 7500 (75%) errors:
PreparedStatement stmt0 = conn.prepareStatement("INSERT /* generate some rows before */ INTO testloadtable (id, text) VALUES(?,?)");
for (int i=0;i<10000;i++) {
  if (i<=7500) {
    stmt0.setInt(1, i);
    stmt0.setString(2, "test" + i);
    stmt0.addBatch();
  }
}
stmt0.executeBatch();
conn.commit();

long startOverall = System.currentTimeMillis();
PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch with logging errors */ INTO testloadtable (id, text) VALUES (?,?) LOG ERRORS INTO ERR_TESTLOADTABLE REJECT LIMIT UNLIMITED");
for (int i=0;i<10000;i++) {
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}
long startExecute = System.currentTimeMillis();
stmt.executeBatch();
long endExecute = System.currentTimeMillis();
conn.commit();
long endOverall = System.currentTimeMillis();

System.out.println("Executing batch took: " + (endExecute-startExecute));
System.out.println("Executing overall took: " + (endOverall-startOverall));
System.out.println("Preparation took: " + ((endOverall-startOverall)-(endExecute-startExecute)));

Executing batch took: 3349
Executing overall took: 3412
Preparation took: 63
So as you can see, the more errors you get, the longer your batch needs to execute. This is only logical, of course: the more errors occur, the more rows have to be rejected and written to the error table. The main takeaway is that your insert does not fail and you can simply query the error logging table afterwards and act appropriately. Just for comparison:
- Insert no errors: 861ms
- Insert 1% errors: 1017ms
- Insert 10% errors: 1420ms
- Insert 25% errors: 1877ms
- Insert 50% errors: 2680ms
- Insert 75% errors: 3349ms
Now let’s do one more test and see how fast a regular insert without DML error logging is. You would think it’s the same, but during my experiments I found an interesting fact:
Test 7 – no errors, no error logging:
long startOverall = System.currentTimeMillis();
PreparedStatement stmt = conn.prepareStatement("INSERT /* regular bulk insert */ INTO testloadtable (id, text) VALUES (?,?)");
for (int i=0;i<10000;i++) {
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}
long startExecute = System.currentTimeMillis();
stmt.executeBatch();
long endExecute = System.currentTimeMillis();
conn.commit();
long endOverall = System.currentTimeMillis();

System.out.println("Executing batch took: " + (endExecute-startExecute));
System.out.println("Executing overall took: " + (endOverall-startOverall));
System.out.println("Preparation took: " + ((endOverall-startOverall)-(endExecute-startExecute)));

Executing batch took: 212
Executing overall took: 372
Preparation took: 160
So a regular insert takes only 212ms while a DML error logging insert takes 861ms. That’s 4 times longer!
The reason for this is the unpublished bug 11865420 (My Oracle Support Doc ID 11865420.8). Once you download the patch and apply it to your system, the insert with DML error logging is just as fast as the one without:
Regular batch insert:
long startOverall = System.currentTimeMillis();
PreparedStatement stmt = conn.prepareStatement("INSERT /* regular bulk insert */ INTO testloadtable (id, text) VALUES (?,?)");
for (int i=0;i<10000;i++) {
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}
long startExecute = System.currentTimeMillis();
stmt.executeBatch();
long endExecute = System.currentTimeMillis();
conn.commit();
long endOverall = System.currentTimeMillis();

System.out.println("Executing batch took: " + (endExecute-startExecute));
System.out.println("Executing overall took: " + (endOverall-startOverall));
System.out.println("Preparation took: " + ((endOverall-startOverall)-(endExecute-startExecute)));

Executing batch took: 248
Executing overall took: 399
Preparation took: 151
DML error logging insert:
long startOverall = System.currentTimeMillis();
PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch with logging errors */ INTO testloadtable (id, text) VALUES (?,?) LOG ERRORS INTO ERR_TESTLOADTABLE REJECT LIMIT UNLIMITED");
for (int i=0;i<10000;i++) {
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}
long startExecute = System.currentTimeMillis();
stmt.executeBatch();
long endExecute = System.currentTimeMillis();
conn.commit();
long endOverall = System.currentTimeMillis();

System.out.println("Executing batch took: " + (endExecute-startExecute));
System.out.println("Executing overall took: " + (endOverall-startOverall));
System.out.println("Preparation took: " + ((endOverall-startOverall)-(endExecute-startExecute)));

Executing batch took: 227
Executing overall took: 384
Preparation took: 157
Conclusion: This is the last article of the “Loading data fast” series! In this series I’ve shown not only how bad it is to commit after each row (remember autocommit in JDBC!) but also how much more speed you can get out of your program by doing batch inserts. I’ve also shown how to deal with potential errors during batch inserts and how much performance impact errors produce. The latter I did to make it clear that developers should not just throw DML error logging inserts at every problem, as they would lose the performance benefit that batch inserts provide. So now, go over your code and batch up! 😉
Loading data fast – Batch inserts and errors (DML error logging)
In the last post of my series “Loading data fast” I showed how batch inserts can actually make a huge difference in insert performance. Now one question remains: what happens when an error occurs, like a unique key violation? The answer is: the insert statement fails with an error and stops. If you don’t catch the exception, it is raised and the commit is never issued, which means you also lose all the previously inserted data. However, if you catch the exception and make sure that you issue a commit afterwards, you will at least keep the previously successfully inserted data in the table, if that is suitable for the business logic – basically meaning that you know where you stopped. Let’s have a look:
In Java you will have to run the PreparedStatement.executeBatch() routine in a try block and execute the Connection.commit() routine in the finally block, like this:
PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch insert */ INTO testloadtable (id, text) VALUES (?,?)");
for (int i=0;i<ROWS;i++) {
  stmt.setInt(1, i);
  stmt.setString(2, "test" + i);
  stmt.addBatch();
}

try {
  stmt.executeBatch();
}
catch (BatchUpdateException e) {
  System.out.println("Error during my batch insert example");
  e.printStackTrace();
}
finally {
  conn.commit();
}
In PL/SQL you will have to run your FORALL statement in a new block, like this:
DECLARE
  TYPE tab is TABLE OF testloadtable%ROWTYPE;
  myvals tab;
BEGIN
  SELECT rownum, 'x'
    BULK COLLECT INTO myvals
    FROM dual
    CONNECT BY LEVEL <= 10000;
  BEGIN
    FORALL i IN myvals.FIRST..myvals.LAST
      INSERT INTO testloadtable (id, text)
        VALUES (myvals(i).id, myvals(i).text);
  EXCEPTION
    WHEN DUP_VAL_ON_INDEX THEN
      DBMS_OUTPUT.PUT_LINE('Unique key violation!');
  END;
  COMMIT;
END;
By doing this, you will at least keep all previously inserted data. However, you will still have to fix the error and run the rest of the batch again, and the latter can be rather difficult, especially in batch load scenarios. If you catch the exception, you will have to find the right position in your batch again in order to continue; in some cases that could mean rebuilding the entire batch. If you don’t catch the exception, because your batch is one logical unit that either succeeds or gets rolled back, then you will have to start from scratch again. So the conclusion is: neither of the two options is optimal!
Fortunately, Oracle provides a feature called “DML error logging”! This feature allows you to simply log all errors into an error table and continue the execution of your DML statement with no error being raised! It works with all DML statements: INSERT, UPDATE, MERGE and DELETE. I will, however, focus only on INSERT statements in this post.
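Just to illustrate that the clause is not limited to inserts, here is a minimal syntax sketch of the same idea on an UPDATE and a DELETE. It uses the TESTLOADTABLE and the ERR_TESTLOADTABLE error table created further down; treat it as an illustration of where the clause goes, not as part of the original test:

-- Hypothetical examples: the error logging clause simply goes at the end of the statement
UPDATE testloadtable
   SET text = UPPER(text)
 LOG ERRORS INTO err_testloadtable ('update run') REJECT LIMIT UNLIMITED;

DELETE FROM testloadtable
 WHERE id < 100
 LOG ERRORS INTO err_testloadtable ('delete run') REJECT LIMIT UNLIMITED;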
First let’s prove what I just said. All I do is create a primary key on the table and insert a row before I execute my batch.
CREATE TABLE TESTLOADTABLE (id NUMBER, text VARCHAR2(255));

table TESTLOADTABLE created.

CREATE UNIQUE INDEX TESTLOADTABLE_PK ON TESTLOADTABLE(id);

unique index TESTLOADTABLE_PK created.

ALTER TABLE TESTLOADTABLE ADD PRIMARY KEY (id) USING INDEX TESTLOADTABLE_PK;

table TESTLOADTABLE altered.
Now that I have my primary key created, I insert one row with the id 3000 before I execute my batch. The batch contains 10k rows with an id range of 0 – 9999. Once it reaches the row with the id 3000 (the 3001st row, as I start with id 0), I’ll receive an exception that the row already exists. Note that all other rows would be fine and valid except this single one with the id 3000. Nevertheless, as I do not make sure that a commit happens, I will lose all of them:
conn.prepareStatement("INSERT INTO testloadtable (id, text) VALUES (3000,'x')").execute(); conn.commit(); stmt = conn.prepareStatement("INSERT /* addBatch insert */ INTO testloadtable (id, text) VALUES (?,?)"); for (int i=0;i<ROWS;i++) { stmt.setInt(1, i); stmt.setString(2, "test" + i); stmt.addBatch(); } stmt.executeBatch(); conn.commit();
As expected, I get following exception back:
Exception in thread "main" java.sql.BatchUpdateException: ORA-00001: unique constraint (TEST.SYS_C0010986) violated
And when I do a count on the table, I have only my previously inserted row in there:
SELECT COUNT(*) FROM testloadtable

COUNT(*)
--------
       1
Now let’s see what I get when I do catch the exception and make sure to issue a commit:
conn.prepareStatement("INSERT INTO testloadtable (id, text) VALUES (3000,'x')").execute(); conn.commit(); PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch insert */ INTO testloadtable (id, text) VALUES (?,?)"); for (int i=0;i<ROWS;i++) { stmt.setInt(1, i); stmt.setString(2, "test" + i); stmt.addBatch(); } try { stmt.executeBatch(); } catch (BatchUpdateException e) { System.out.println("Error during my batch insert example"); e.printStackTrace(); } finally { conn.commit(); }
The output shows that I caught the exception successfully:
Error during my batch insert example
java.sql.BatchUpdateException: ORA-00001: unique constraint (TEST.SYS_C0010986) violated
And a count on the table now shows that I have 3001 rows in it; my previously inserted row with id 3000 and the first 3000 rows from the batch with id 0 – 2999:
SELECT COUNT(*) FROM testloadtable

COUNT(*)
--------
    3001
Now, as said before, the great thing about DML error logging is that it offers you the ability to load your entire batch without receiving an error at all. Instead, the error(s) will be logged into a separate error table which can then be queried and appropriate actions taken afterwards. All you need to do is modify your DML statement to include the LOG ERRORS clause. You will also have to create an error logging table via the DBMS_ERRLOG package first. Optionally, you can:
- Include a tag that gets added to the error log to help identify the statement that caused errors
- Include the REJECT LIMIT subclause, which allows you to define an upper limit for the number of errors that can be encountered before the statement fails. If you have a batch of 10k rows where 7k rows are failing, it probably doesn’t make much sense to continue, as there seems to be a bigger problem. This subclause lets you still raise the error once a certain threshold is reached. Presumably to make sure that nobody accidentally adds the error logging clause and thereby suppresses all errors, the default value is 0: if you omit the REJECT LIMIT clause, an error will be logged into the error logging table but the statement will still be terminated. If you want to log all errors and raise none, you have to specify UNLIMITED (see the sketch below).
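To make both options concrete, here is a small sketch that uses a tag and a numeric REJECT LIMIT. It is not part of the original test program; STAGE_TESTLOADTABLE is just a hypothetical source table:

-- Hypothetical example: tag the run and abort once more than 50 rows have failed
INSERT INTO testloadtable (id, text)
  SELECT id, text
    FROM stage_testloadtable
  LOG ERRORS INTO err_testloadtable ('nightly load')
  REJECT LIMIT 50;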
The error table itself is a copy of the target table with a few extra columns, telling you things like the error number, the error message and so forth. But the important part is that it contains all the columns of your target table, and those columns contain the values from the failing row of the insert. This means that you do not lose any information from your batch and do not need to re-execute the entire batch!
Let’s see how it works! First I create an error logging table called ERR_TESTLOADTABLE:
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG('TESTLOADTABLE','ERR_TESTLOADTABLE');
END;

anonymous block completed

DESC err_testloadtable
Name            Null Type
--------------- ---- --------------
ORA_ERR_NUMBER$      NUMBER
ORA_ERR_MESG$        VARCHAR2(2000)
ORA_ERR_ROWID$       UROWID()
ORA_ERR_OPTYP$       VARCHAR2(2)
ORA_ERR_TAG$         VARCHAR2(2000)
ID                   VARCHAR2(4000)
TEXT                 VARCHAR2(4000)
All I then have to do is modify my original code so that the insert statement includes a “LOG ERRORS INTO ERR_TESTLOADTABLE REJECT LIMIT UNLIMITED” clause at the end of it:
conn.prepareStatement("INSERT INTO testloadtable (id, text) VALUES (3000,'x')").execute(); conn.commit(); PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch with logging errors */ INTO testloadtable (id, text) VALUES (?,?) LOG ERRORS INTO ERR_TESTLOADTABLE REJECT LIMIT UNLIMITED"); for (int i=0;i<ROWS;i++) { stmt.setInt(1, i); stmt.setString(2, "test" + i); stmt.addBatch(); } stmt.executeBatch(); conn.commit();
Although I inserted the row with the id 3000 first, no error was reported during the execution this time. However, if I have a look at my ERR_TESTLOADTABLE, I see the following:
SELECT * FROM ERR_TESTLOADTABLE

ORA_ERR_NUMBER$ ORA_ERR_MESG$                                              ORA_ERR_ROWID$ ORA_ERR_OPTYP$ ORA_ERR_TAG$ ID    TEXT
--------------- ---------------------------------------------------------- -------------- -------------- ------------ ----- --------
              1 ORA-00001: unique constraint (TEST.SYS_C0010987) violated                  I                           3000  test3000
The error logging table shows me:
- The error number “1” which means “ORA-00001”
- The error message
- RowId: In this case null as it was an insert and the row didn’t exist before
- The operation type “I” indicating Insert
- An empty error tag, as I have not defined one
- ID column value from the failing row
- Text column value from the failing row
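Since the failing row’s column values are preserved, you can re-process just the rejected rows once the underlying problem is fixed, instead of rerunning the whole batch. A possible sketch (note that DBMS_ERRLOG creates the mirrored columns as VARCHAR2, so the id has to be converted back; resolving the key conflict first is assumed):

-- Re-insert the rejected rows after the conflict has been resolved, then clear the log
INSERT INTO testloadtable (id, text)
  SELECT TO_NUMBER(id), text
    FROM err_testloadtable
   WHERE ora_err_optyp$ = 'I';

DELETE FROM err_testloadtable;
COMMIT;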
Conclusion: DML error logging allows you to use batch DML statements that won’t fail on errors. This gives you maximum flexibility when you perform batch operations!
In my next post I will take a look at the performance impact of DML error logging.
Loading data fast – regular insert vs. bulk insert
In my last post I talked about how persisting data can become the bottleneck on large, high-scale systems nowadays. I also talked about how more and more people tend to think that databases are simply slow, seeing them as just big I/O systems, and about how lots of applications still insert data the way they did years ago rather than using bulk inserts.
In this post I will show you how bulk inserts can actually boost your inserts and therefore your systems. I will use a simple example showing the difference between:
- Single row insert with commit
- Single row insert with only one final commit
- Bulk insert with final commit
Let’s assume you have a java program that needs to load some data from a file into a single table. Each line in the file represents a row in the database. I won’t go into how to read from the file and build your data together. This is out of scope for this post and not relevant to show the benefit of bulk inserts over regular ones. First let’s build a simple two column table, including an “id” column as NUMBER and a “text” column as VARCHAR2:
CREATE TABLE TESTLOADTABLE (id NUMBER, text VARCHAR2(255));

table TESTLOADTABLE created.
For each test I truncate the table first just to make sure that I always load the same amount of data into the same empty table. I add a comment in the statements, so that I can separate them out later on in the trace file.
The first example loads 10,000 rows into the table. It will simply insert an incrementing counter and a string into the table followed by a commit.
conn.prepareStatement("TRUNCATE TABLE testloadtable").execute(); conn.prepareStatement("ALTER SESSION SET SQL_TRACE=TRUE").execute(); PreparedStatement stmt = conn.prepareStatement("INSERT /* conventional insert with commit */ INTO testloadtable (id, text) VALUES (?,?)"); for (int i=0;i<10000;i++) { stmt.setInt(1, i); stmt.setString(2, "test" + i); stmt.execute(); conn.commit(); } conn.prepareStatement("ALTER SESSION SET SQL_TRACE=FALSE").execute();
Looking at the trace file, it took the program 2.21 seconds to load these 10,000 rows. You can also see that the statement was actually executed 10,000 times – the commits unfortunately don’t show up in the formatted trace file, but they would be listed in the raw trace file.
SQL ID: 337xy5qc84nsq Plan Hash: 0

INSERT /* conventional insert with commit */ INTO testloadtable (id, text)
VALUES (:1 ,:2 )

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  10000      2.06       2.21          2         90      20527       10000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10001      2.06       2.21          2         90      20527       10000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 111
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  LOAD TABLE CONVENTIONAL  (cr=5 pr=2 pw=0 time=1637 us)
The next test puts the commit outside of the loop. So I still add the data row by row to the table but the commit itself happens only once after all rows were loaded:
conn.prepareStatement("TRUNCATE TABLE testloadtable").execute(); conn.prepareStatement("ALTER SESSION SET SQL_TRACE=TRUE").execute(); PreparedStatement stmt = conn.prepareStatement("INSERT /* conventional insert */ INTO testloadtable (id, text) VALUES (?,?)"); for (int i=0;i<10000;i++) { stmt.setInt(1, i); stmt.setString(2, "test" + i); stmt.execute(); } conn.commit(); conn.prepareStatement("ALTER SESSION SET SQL_TRACE=FALSE").execute();
As the results show, the statement was still executed 10,000 times. However, this time it took only 1.19 seconds to insert all the data. So just by moving the commit to the end, after all inserts were done, the load ran in almost half the time! Although commits seem lightweight and don’t appear to do much, the database still has some tasks to accomplish to make sure that your transaction is saved and visible. And of course, instead of having only one transaction, the first test produced 10,000 of them.
SQL ID: drsv4dw4037zj Plan Hash: 0

INSERT /* conventional insert */ INTO testloadtable (id, text)
VALUES (:1 ,:2 )

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  10000      1.09       1.19          2        114      10562       10000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10001      1.09       1.19          2        114      10562       10000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 111
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  LOAD TABLE CONVENTIONAL  (cr=5 pr=2 pw=0 time=1507 us)
The Oracle JDBC driver does support bulk inserts. What you do is add all your data to a “batch” and then execute the entire batch. This gives you the advantage that there is only one INSERT statement executed, which inserts all your data into the table at once! So instead of 10,000 round-trips, I only have one, sending all the data over in one big chunk:
conn.prepareStatement("TRUNCATE TABLE testloadtable").execute(); conn.prepareStatement("ALTER SESSION SET SQL_TRACE=TRUE").execute(); PreparedStatement stmt = conn.prepareStatement("INSERT /* addBatch insert */ INTO testloadtable (id, text) VALUES (?,?)"); for (int i=0;i<10000;i++) { stmt.setInt(1, i); stmt.setString(2, "test" + i); stmt.addBatch(); } stmt.executeBatch(); conn.commit(); conn.prepareStatement("ALTER SESSION SET SQL_TRACE=FALSE").execute();
Now these results are pretty amazing! There was only 1 execution of that insert statement and that loaded the entire batch in only 0.06 seconds!
SQL ID: gfkg1d43va20y Plan Hash: 0

INSERT /* addBatch insert */ INTO testloadtable (id, text)
VALUES (:1 ,:2 )

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.05       0.06          0        129        394       10000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.05       0.06          0        129        394       10000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 111
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  LOAD TABLE CONVENTIONAL  (cr=139 pr=0 pw=0 time=51971 us)
As you can see, bulk inserting can give your system a huge performance boost! In this example here we are not talking about an improvement of percents anymore, but about factors! Imagine what this could gain you when you have even more data!
This post shows that persisting data into a database is not simply slow; it also matters a lot how you actually insert that data!
Loading data fast
Today more and more applications have the requirement to load data fast. While this requirement isn’t new in IT, nowadays it mainly means persisting data fast into the database. In times of cheap memory and lots of processing power, systems have developed from slim single-threaded applications into huge multi-threaded clusters using lots of memory, which provides them with low latency. As a result, working with data inside the application became cheap, and in some cases even getting data into the system did, e.g. via in-memory queuing systems. However, the persisting stage still needs to do I/O at some point, and although faster disks are emerging these days, I/O still cannot compete with the low latency that memory provides.

Based on that conclusion, there are more and more programmers out there who think that I/O is a bad thing and, as memory became cheap, would rather dump everything into memory and tell the client to buy more if needed, instead of thinking clearly about what needs to be cached and what doesn’t. They also tend to see the database as just a big I/O system and therefore automatically assume that it is simply slow and should be avoided. But that’s far away from the truth! Databases also use caching mechanisms, which are usually highly sophisticated, with years and years of brainpower in their algorithms. Just because you insert data into the database does not mean at all that this data goes straight to disk. If you insert data into the Oracle database, your data will be placed into the buffer cache of the SGA memory structure. And more importantly, assuming the buffer cache is large enough, it will stay there! Only later will the DB writer background process(es) actually take that data and persist it onto disk – completely decoupled from your transaction.

But if the database also uses memory for caching, how come my inserts are still so much slower? In my experience programmers forget to ask one important question: how does the data get inserted? There are tons and tons of applications out there that haven’t changed the way they insert data in years. But just because it was the best way to do so years ago doesn’t necessarily mean that it still is! Or, in other scenarios, the programmer does not even know about it, because the application is using an object-relational mapping (ORM) tool.
There are several ways to load data into a table. The most commonly used are the following:
- SQL*Loader – An Oracle tool for loading data fast from external files into tables
- CREATE TABLE … AS SELECT (CTAS) statement – This SQL statement allows you to create a table and populate it with data selected from another existing table which can also be an external table.
- MERGE statement – This SQL statement enables you to either insert into or update rows of a table by selecting them from another existing table. If a row in the new data corresponds to an already existing row in the table, an UPDATE is performed instead of an INSERT (a short sketch of CTAS and MERGE follows after this list).
- INSERT statement – The traditional INSERT statement for inserting one or more rows into a table
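For completeness, here is a minimal sketch of the CTAS and MERGE variants mentioned above. The table names (TESTLOADTABLE_COPY, STAGE_TESTLOADTABLE) are made up purely for illustration:

-- CTAS: create and populate a table in one statement
CREATE TABLE testloadtable_copy AS
  SELECT id, text FROM testloadtable;

-- MERGE: insert new rows, update already existing ones
MERGE INTO testloadtable t
USING stage_testloadtable s
   ON (t.id = s.id)
 WHEN MATCHED THEN UPDATE SET t.text = s.text
 WHEN NOT MATCHED THEN INSERT (id, text) VALUES (s.id, s.text);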
In this post I will focus only on the INSERT statement as this is used within most applications.
How often have you seen code like this:
while (!file.isEOF()) {
  line = file.getNextLine();
  INSERT INTO table (col1, col2, col3) VALUES (line.val1, line.val2, line.val3);
  COMMIT;
}
What this piece of code does is read line by line from a file, and for each line it inserts the data into a table. At first sight this seems pretty much the best way of doing it, and years ago it probably was. However, what happens here is that for each line in the file an INSERT statement with a set of data is sent to the database, followed by a COMMIT. So for each line one DML statement gets executed, including a round-trip from your application to the database. But that’s not all. Once the INSERT statement arrives at the database, the database has to parse the statement and determine whether it is already known and an execution plan exists for it. Only after all those steps are done can it execute the statement and finally load the data into the table. After the row got into the table, the application executes a COMMIT, which triggers another round-trip to the database and causes the database to make sure that the row is visible to everybody and its transactional integrity given.

For a file that contains only 10 lines all these steps might not be a lot of effort and are done pretty quickly, but imagine a file with 10,000 lines! Suddenly having 10,000 round-trips for the INSERT statement, another 10,000 round-trips for the COMMIT, 10,000 (soft) parses of the INSERT statement and 10,000 transactional integrity checks becomes very expensive. And as you execute the same statement over and over again, some of the steps performed by the database become redundant: the database doesn’t have to check each time whether the statement is already known. It also does not make sense to perform 20,000 round-trips for only 10,000 rows of data. And you might not even want the user to be able to see the data as long as the file is not completely loaded. So ideally what you want is something like this:
while (!file.isEOF()) {
  lines += file.getNextLine();
}
INSERT INTO table (col1, col2, col3) VALUES (lines);
COMMIT;
First you read all the data from the file and keep it in memory. Then you insert all that data into the database at once. The data is sent over to the database only once, the parsing of the statement happens only once and the transactional integrity check/commit happens only once. Instead of 20,000 round-trips you have only 2!
Now the good news: databases nowadays do support this kind of loading operation! It is referred to as bulking or bulk inserts. While in the past there were various reasons that didn’t allow you to use bulking – either the mechanism didn’t exist back then or there wasn’t sufficient memory to hold more than one line at once anyway – those reasons are gone these days. But the point is: you have to change the way you insert data into your database. Sometimes this can be changed rather quickly, but sometimes major efforts are involved.
In my next post I will show some examples how to use bulk inserts and what difference they can make!
Indexing strategies with Jonathan Lewis
Today I attended a NYOUG seminar about “Indexing Strategies” held by Mr. Jonathan Lewis himself. If somebody doesn’t know Jonathan Lewis: he is one of the top Oracle gurus on the entire planet, with more than 25 years of Oracle experience. He probably used Oracle before I was even born (1985)! The very interesting fact, however, is that he never actually worked for Oracle but still has all the deep inside knowledge. Usually those gurus came out of Oracle or are still working for them. I guess the great number of years dealing with Oracle helped him there.
However, Jonathan Lewis is also one of my top favorites because he shares his knowledge on his personal blog: http://jonathanlewis.wordpress.com In fact, Jonathan’s blog is the only one I can never catch up with, because he posts literally quicker than I have time to read his fabulous posts… which reminds me that I should update my blogroll.
The seminar itself was simply great. It was not free, but those 230 bucks were totally worth it. Not only is Jonathan Lewis a great speaker – it was the first time that I actually saw him – he also gave detailed insights into indexes: how they work, what is possible with them, why Oracle actually uses an index and, much more importantly, when it does not. Furthermore, he combined his sessions with some real-world examples that he himself dealt with in the past, which makes the whole thing much less theoretical.
Of course I won’t cover the whole seminar here, but I’ll end with some interesting things that I took out of it – some of which I was reminded of again, some of which were new to me:
- Indexing means: Getting to data quickly
- Indexes focus/compact data
- There is always a Trade-Off between loading and querying performance
- Index maintenance is expensive – it introduces costs
- It’s all about precision
- Non-unique indexes include the rowid automatically – for Oracle there are no non-unique indexes
- Index compression can save a lot of space on repetitive columns (see the small sketch after this list)
- Index compression means elimination of duplicates
- Primary key constraints don’t necessarily need unique key indexes but only indexes with the columns in the right order
- Reverse key indexes scatter entries across the index, and the clustering factor can end up terrible
- Don’t duplicate indexes – e.g. FK constraints can share the first columns of another index
- Difference between tables and indexes: Index blocks have to be at the right place
- Don’t trust Oracle – test everything!
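As a small illustration of the compression point above, key compression is just an option on the CREATE INDEX statement. A sketch, reusing the TESTLOADTABLE from the posts above purely as an example and assuming the leading column is highly repetitive:

-- Store each distinct value of the leading (repetitive) column only once per leaf block
CREATE INDEX testloadtable_text_ix ON testloadtable (text, id) COMPRESS 1;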
Exadata, Exadata, Exadata
Log file sync to death…
Oracle 11g R2 lock types
This page has moved to Oracle Database Lock Types.
Please visit that page.
Oracle 11g R2 enqueue waits
Over a year and a half ago I posted the enqueue waits for Oracle 10g. Well, a lot has changed in 11g, including the enqueue waits.
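If you want to pull the raw list from your own instance, a small query sketch against the dictionary views works (assuming you have access to V$EVENT_NAME and V$LOCK_TYPE; the exact output depends on your version and patch level):

-- Enqueue wait events and lock type descriptions as seen on your own instance
SELECT name
  FROM v$event_name
 WHERE name LIKE 'enq:%'
 ORDER BY name;

SELECT type, name, description
  FROM v$lock_type
 ORDER BY type;

And this is the new list of enqueue waits in 11gR2: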
Group | Enqueue Type | Description |
---|---|---|
Auto BMR | enq: AB – ABMR process initialized | Lock held to ensure that ABMR process is initialized |
Auto BMR | enq: AB – ABMR process start/stop | Lock held to ensure that only one ABMR is started in the cluster |
ASM Disk AU Lock | enq: AD – allocate AU | Synchronizes accesses to a specific ASM disk AU |
ASM Disk AU Lock | enq: AD – deallocate AU | Synchronizes accesses to a specific ASM disk AU |
ASM Disk AU Lock | enq: AD – relocate AU | Synchronizes accesses to a specific ASM disk AU |
Edition Lock | enq: AE – lock | Prevent Dropping an edition in use |
Advisor Framework | enq: AF – task serialization | This enqueue is used to serialize access to an advisor task |
Analytic Workspace Generation | enq: AG – contention | Synchronizes generation use of a particular workspace |
ASM Enqueue | enq: AM – ASM ACD Relocation | Block ASM cache freeze |
ASM Enqueue | enq: AM – ASM Amdu Dump | Allow only one AMDU dump when block read failure |
ASM Enqueue | enq: AM – ASM File Destroy | Prevent same file deletion race |
ASM Enqueue | enq: AM – ASM Password File Update | Allow one ASM password file update per cluster at a time |
ASM Enqueue | enq: AM – ASM User | Prevents a user from being dropped if it owns any open files |
ASM Enqueue | enq: AM – ASM cache freeze | Start ASM cache freeze |
ASM Enqueue | enq: AM – PST split check | Synchronizes check for Storage (PST) split in disk groups |
ASM Enqueue | enq: AM – background COD reservation | Reserve a background COD entry |
ASM Enqueue | enq: AM – client registration | Registers DB instance to ASM client state object hash |
ASM Enqueue | enq: AM – disk offline | Synchronizes disk offlines |
ASM Enqueue | enq: AM – group block | ASM group block |
ASM Enqueue | enq: AM – group use | Client group use |
ASM Enqueue | enq: AM – rollback COD reservation | Reserve a rollback COD entry |
ASM Enqueue | enq: AM – shutdown | Prevent DB instance registration during ASM instance shutdown |
MultiWriter Object Access | enq: AO – contention | Synchronizes access to objects and scalar variables |
Service Operations | enq: AS – service activation | Synchronizes new service activation |
Alter Tablespace | enq: AT – contention | Serializes ‘alter tablespace’ operations |
Audit index file | enq: AU – audit index file | lock held to operate on the audit index file |
ASM volume locks | enq: AV – AVD client registration | Serialize inst reg and first DG use |
ASM volume locks | enq: AV – add/enable first volume in DG | Serialize taking the AVD DG enqueue |
ASM volume locks | enq: AV – persistent DG number | prevent DG number collisions |
ASM volume locks | enq: AV – volume relocate | Serialize relocating volume extents |
Analytic Workspace | enq: AW – AW generation lock | In-use generation state for a particular workspace |
Analytic Workspace | enq: AW – AW state lock | Row lock synchronization for the AW$ table |
Analytic Workspace | enq: AW – AW$ table lock | Global access synchronization to the AW$ table |
Analytic Workspace | enq: AW – user access for AW | Synchronizes user accesses to a particular workspace |
KSXA Test Affinity Dictionary | enq: AY – contention | Affinity Dictionary test affinity synchronization |
Global Transaction Branch | enq: BB – 2PC across RAC instances | 2PC distributed transaction branch across RAC instances |
BLOOM FILTER | enq: BF – PMON Join Filter cleanup | PMON bloom filter recovery |
BLOOM FILTER | enq: BF – allocation contention | Allocate a bloom filter in a parallel statement |
Backup/Restore | enq: BR – file shrink | Lock held to prevent file from decreasing in physical size during RMAN backup |
Backup/Restore | enq: BR – multi-section restore header | Lock held to serialize file header access during multi-section restore |
Backup/Restore | enq: BR – multi-section restore section | Lock held to serialize section access during multi-section restore |
Backup/Restore | enq: BR – perform autobackup | Lock held to perform a new controlfile autobackup |
Backup/Restore | enq: BR – proxy-copy | Lock held to allow cleanup from backup mode during an RMAN proxy-copy backup |
Backup/Restore | enq: BR – request autobackup | Lock held to request controlfile autobackups |
Backup/Restore | enq: BR – space info datafile hdr update | Lock held to prevent multiple process to update the headers at the same time |
Calibration | enq: CA – contention | Synchronizes various IO calibration runs |
Controlfile Transaction | enq: CF – contention | Synchronizes accesses to the controlfile |
Cross-Instance Call Invocation | enq: CI – contention | Coordinates cross-instance function invocations |
Label Security cache | enq: CL – compare labels | Synchronizes accesses to label cache for label comparison |
Label Security cache | enq: CL – drop label | Synchronizes accesses to label cache when dropping a label |
ASM Instance Enqueue | enq: CM – gate | serialize access to instance enqueue |
ASM Instance Enqueue | enq: CM – instance | indicate ASM diskgroup is mounted |
KTCN REG enq | enq: CN – race with init | during descriptor initialization |
KTCN REG enq | enq: CN – race with reg | during transaction commit to see concurrent registrations |
KTCN REG enq | enq: CN – race with txn | during registration |
KTUCLO Master Slave enq | enq: CO – master slave det | enqueue held be Master in Cleanout Optim |
Cleanup querycache registrations | enq: CQ – contention | Serializes access to cleanup client query cache registrations |
Reuse Block Range | enq: CR – block range reuse ckpt | Coordinates fast block range reuse ckpt |
Block Change Tracking | enq: CT – CTWR process start/stop | Lock held to ensure that only one CTWR process is started in a single instance |
Block Change Tracking | enq: CT – change stream ownership | Lock held by one instance while change tracking is enabled, to guarantee access to thread-specific resources |
Block Change Tracking | enq: CT – global space management | Lock held during change tracking space management operations that affect the entire change tracking file |
Block Change Tracking | enq: CT – local space management | Lock held during change tracking space management operations that affect just the data for one thread |
Block Change Tracking | enq: CT – reading | Lock held to ensure that change tracking data remains in existence until a reader is done with it |
Block Change Tracking | enq: CT – state | Lock held while enabling or disabling change tracking, to ensure that it is only enabled or disabled by one user at a time |
Block Change Tracking | enq: CT – state change gate 1 | Lock held while enabling or disabling change tracking in RAC |
Block Change Tracking | enq: CT – state change gate 2 | Lock held while enabling or disabling change tracking in RAC |
Cursor | enq: CU – contention | Recovers cursors in case of death while compiling |
DbsDriver | enq: DB – contention | Synchronizes modification of database-wide supplemental logging attributes |
ASM Local Disk Group | enq: DD – contention | Synchronizes local accesses to ASM disk groups |
Datafile Online in RAC | enq: DF – contention | Enqueue held by foreground or DBWR when a datafile is brought online in RAC |
ASM Disk Group Modification | enq: DG – contention | Synchronizes accesses to ASM disk groups |
Direct Loader Index Creation | enq: DL – contention | Lock to prevent index DDL during direct load |
Database Mount/Open | enq: DM – contention | Enqueue held by foreground or DBWR to synchronize database mount/open with other operations |
Diskgroup number generator | enq: DN – contention | Serializes group number generations |
ASM Disk Online Lock | enq: DO – Staleness Registry create | Synchronizes Staleness Registry creation |
ASM Disk Online Lock | enq: DO – disk online | Synchronizes disk onlines and their recovery |
ASM Disk Online Lock | enq: DO – disk online operation | Represents an active disk online operation |
ASM Disk Online Lock | enq: DO – disk online recovery | Synchronizes disk onlines and their recovery |
ASM Disk Online Lock | enq: DO – startup of MARK process | Synchronizes startup of MARK process |
LDAP Parameter | enq: DP – contention | Synchronizes access to LDAP parameters |
Distributed Recovery | enq: DR – contention | Serializes the active distributed recovery operation |
Database Suspend | enq: DS – contention | Prevents a database suspend during LMON reconfiguration |
Default Temporary Tablespace | enq: DT – contention | Serializes changing the default temporary tablespace and user creation |
Diana Versioning | enq: DV – contention | Synchronizes access to lower-version Diana (PL/SQL intermediate representation) |
In memory Dispenser | enq: DW – contention | Serialize in memory dispenser operations |
Distributed Transaction | enq: DX – contention | Serializes tightly coupled distributed transaction branches |
ASM File Access Lock | enq: FA – access file | Synchronizes accesses to open ASM files |
Format Block | enq: FB – contention | Ensures that only one process can format data blocks in auto segment space managed tablespaces |
Disk Group Chunk Mount | enq: FC – open an ACD thread | LGWR opens an ACD thread |
Disk Group Chunk Mount | enq: FC – recover an ACD thread | SMON recovers an ACD thread |
Flashback Database | enq: FD – Flashback coordinator | Synchronization |
Flashback Database | enq: FD – Flashback logical operations | Synchronization |
Flashback Database | enq: FD – Flashback on/off | Synchronization |
Flashback Database | enq: FD – Marker generation | Synchronization |
Flashback Database | enq: FD – Restore point create/drop | Synchronization |
Flashback Database | enq: FD – Tablespace flashback on/off | Synchronization |
KTFA Recovery | enq: FE – contention | Serializes flashback archive recovery |
ACD Relocation Gate Enqueue | enq: FG – FG redo generation enq race | resolve race condition to acquire Disk Group Redo Generation Enqueue |
ACD Relocation Gate Enqueue | enq: FG – LGWR redo generation enq race | resolve race condition to acquire Disk Group Redo Generation Enqueue |
ACD Relocation Gate Enqueue | enq: FG – serialize ACD relocate | only 1 process in the cluster may do ACD relocation in a disk group |
Flashback database log | enq: FL – Flashback database log | Synchronization |
Flashback database log | enq: FL – Flashback db command | Enqueue used to synchronize Flashback Database and deletion of flashback logs. |
File Mapping | enq: FM – contention | Synchronizes access to global file mapping state |
File Object | enq: FP – global fob contention | Synchronizes various File Object(FOB) operations |
Disk Group Recovery | enq: FR – contention | begin recovery of disk group |
Disk Group Recovery | enq: FR – recover the thread | wait for lock domain detach |
Disk Group Recovery | enq: FR – use the thread | indicate this ACD thread is alive |
File Set / Dictionary Check | enq: FS – contention | Enqueue used to synchronize recovery and file operations or synchronize dictionary check |
Disk Group Redo Generation | enq: FT – allow LGWR writes | allow LGWR to generate redo in this thread |
Disk Group Redo Generation | enq: FT – disable LGWR writes | prevent LGWR from generating redo in this thread |
DBFUS | enq: FU – contention | This enqueue is used to serialize the capture of the DB Feature Usage and High Water Mark Statistics |
ACD Xtnt Info CIC | enq: FX – issue ACD Xtnt Relocation CIC | ARB relocates ACD extent |
ASM Disk Header | enq: HD – contention | Serializes accesses to ASM SGA data structures |
Queue Page | enq: HP – contention | Synchronizes accesses to queue pages |
Hash Queue | enq: HQ – contention | Synchronizes the creation of new queue IDs |
Direct Loader High Water Mark | enq: HV – contention | Lock used to broker the high water mark during parallel inserts |
Segment High Water Mark | enq: HW – contention | Lock used to broker the high water mark during parallel inserts |
Internal | enq: IA – contention | |
NID | enq: ID – contention | Lock held to prevent other processes from performing controlfile transaction while NID is running |
Label Security | enq: IL – contention | Synchronizes accesses to internal label data structures |
Kti blr lock | enq: IM – contention for blr | Serializes block recovery for IMU txn |
Instance Recovery | enq: IR – contention | Synchronizes instance recovery |
Instance Recovery | enq: IR – contention2 | Synchronizes parallel instance recovery and shutdown immediate |
Instance State | enq: IS – contention | Enqueue used to synchronize instance state changes |
In-Mem Temp Table Meta Creation | enq: IT – contention | Synchronizes accesses to a temp object’s metadata |
Job Queue Date | enq: JD – contention | Synchronizes dates between job queue coordinator and slave processes |
Materialized View | enq: JI – contention | Lock held during materialized view operations (like refresh, alter) to prevent concurrent operations on the same materialized view |
Job Queue | enq: JQ – contention | Lock to prevent multiple instances from running a single job |
Job Scheduler | enq: JS – aq sync | Scheduler evt code and AQ sync |
Job Scheduler | enq: JS – contention | Synchronizes accesses to the job cache |
Job Scheduler | enq: JS – evt notify | Lock got during event notification |
Job Scheduler | enq: JS – evtsub add | Lock got when adding subscriber to event q |
Job Scheduler | enq: JS – evtsub drop | Lock got when dropping subscriber to event q |
Job Scheduler | enq: JS – job recov lock | Lock to recover jobs running on crashed RAC inst |
Job Scheduler | enq: JS – job run lock – synchronize | Lock to prevent job from running elsewhere |
Job Scheduler | enq: JS – q mem clnup lck | Lock obtained when cleaning up q memory |
Job Scheduler | enq: JS – queue lock | Lock on internal scheduler queue |
Job Scheduler | enq: JS – sch locl enqs | Scheduler non-global enqueues |
Job Scheduler | enq: JS – wdw op | Lock got when doing window open/close |
SQL STATEMENT QUEUE | enq: JX – SQL statement queue | statement |
SQL STATEMENT QUEUE | enq: JX – cleanup of queue | release SQL statement resources |
Scheduler Master DBRM | enq: KD – determine DBRM master | Determine DBRM master |
Scheduler | enq: KM – contention | Synchronizes various Resource Manager operations |
Multiple Object Checkpoint | enq: KO – fast object checkpoint | Coordinates fast object checkpoint |
Kupp Process Startup | enq: KP – contention | Synchronizes kupp process startup |
ASM Attributes Enque | enq: KQ – access ASM attribute | Synchronization of ASM cached attributes |
Scheduler Plan | enq: KT – contention | Synchronizes accesses to the current Resource Manager plan |
Materialized View Log DDL | enq: MD – contention | Lock held during materialized view log DDL statements |
AQ Notification Mail Host | enq: MH – contention | Lock used for recovery when setting Mail Host for AQ e-mail notifications |
Master Key | enq: MK – contention | changing values in enc$ |
AQ Notification Mail Port | enq: ML – contention | Lock used for recovery when setting Mail Port for AQ e-mail notifications |
LogMiner | enq: MN – contention | Synchronizes updates to the LogMiner dictionary and prevents multiple instances from preparing the same LogMiner session |
MMON restricted session | enq: MO – contention | Serialize MMON operations for restricted sessions |
Media Recovery | enq: MR – contention | Lock used to coordinate media recovery with other uses of datafiles |
Media Recovery | enq: MR – standby role transition | Lock used to disallow concurrent standby role transition attempt |
Materialized View Refresh Log | enq: MS – contention | Lock held during materialized view refresh to setup MV log |
Online Datafile Move | enq: MV – datafile move | Held during online datafile move operation or cleanup |
MWIN Schedule | enq: MW – contention | This enqueue is used to serialize the calibration of the manageability schedules with the Maintenance Window |
ksz synch | enq: MX – sync storage server info | Lock held to generate a response to the storage server information request when an instance is starting up |
Outline Cache | enq: OC – contention | Synchronizes write accesses to the outline cache |
Online DDLs | enq: OD – Serializing DDLs | Lock to prevent concurrent online DDLs |
Outline Name | enq: OL – contention | Synchronizes accesses to a particular outline name |
OLAPI Histories | enq: OQ – xsoq*histrecb | Synchronizes access to olapi history parameter CB |
OLAPI Histories | enq: OQ – xsoqhiAlloc | Synchronizes access to olapi history allocation |
OLAPI Histories | enq: OQ – xsoqhiClose | Synchronizes access to olapi history closing |
OLAPI Histories | enq: OQ – xsoqhiFlush | Synchronizes access to olapi history flushing |
OLAPI Histories | enq: OQ – xsoqhistrecb | Synchronizes access to olapi history globals |
Encryption Wallet | enq: OW – initialization | initializing the wallet context |
Encryption Wallet | enq: OW – termination | terminate the wallet context |
Property Lock | enq: PD – contention | Prevents others from updating the same property |
Parameter | enq: PE – contention | Synchronizes system parameter updates |
Password File | enq: PF – contention | Synchronizes accesses to the password file |
Global Parameter | enq: PG – contention | Synchronizes global system parameter updates |
AQ Notification Proxy | enq: PH – contention | Lock used for recovery when setting Proxy for AQ HTTP notifications |
Remote PX Process Spawn Status | enq: PI – contention | Communicates remote Parallel Execution Server Process creation status |
Transportable Tablespace | enq: PL – contention | Coordinates plug-in operation of transportable tablespaces |
Process Startup | enq: PR – contention | Synchronizes process startup |
PX Process Reservation | enq: PS – contention | Parallel Execution Server Process reservation and synchronization |
ASM Partnership and Status Table | enq: PT – contention | Synchronizes access to ASM PST metadata |
KSV slave startup | enq: PV – syncshut | Synchronizes instance shutdown_slvstart |
KSV slave startup | enq: PV – syncstart | Synchronizes slave start_shutdown |
Buffer Cache PreWarm | enq: PW – flush prewarm buffers | Direct Load needs to flush prewarmed buffers if DBWR 0 holds enqueue |
Buffer Cache PreWarm | enq: PW – perwarm status in dbw0 | DBWR 0 holds enqueue indicating prewarmed buffers present in cache |
ASM Rollback Recovery | enq: RB – contention | Serializes ASM rollback recovery operations |
Result Cache: Enqueue | enq: RC – Result Cache: Contention | Coordinates access to a result-set |
RAC Load | enq: RD – RAC load | update RAC load info |
Block Repair/Resilvering | enq: RE – block repair contention | Synchronize block repair/resilvering operations |
Data Guard Broker | enq: RF – DG Broker Current File ID | Identifies which configuration metadata file is current |
Data Guard Broker | enq: RF – FSFO Observer Heartbeat | Captures recent Fast-Start Failover Observer heartbeat information |
Data Guard Broker | enq: RF – RF – Database Automatic Disable | Means for detecting when database is being automatically disabled |
Data Guard Broker | enq: RF – atomicity | Ensures atomicity of log transport setup |
Data Guard Broker | enq: RF – new AI | Synchronizes selection of the new apply instance |
Data Guard Broker | enq: RF – synch: DG Broker metadata | Ensures r/w atomicity of DG configuration metadata |
Data Guard Broker | enq: RF – synchronization: aifo master | Synchronizes apply instance failure detection and failover operation |
Data Guard Broker | enq: RF – synchronization: critical ai | Synchronizes critical apply instance among primary instances |
wallet_set_mkey | enq: RK – set key | wallet master key rekey |
RAC Encryption Wallet Lock | enq: RL – RAC wallet lock | RAC wallet lock |
Redo Log Nab Computation | enq: RN – contention | Coordinates nab computations of online logs during recovery |
Multiple Object Reuse | enq: RO – contention | Coordinates flushing of multiple objects |
Multiple Object Reuse | enq: RO – fast object reuse | Coordinates fast object reuse |
Resilver / Repair | enq: RP – contention | Enqueue held when resilvering is needed or when datablock is repaired from mirror |
Workload Capture and Replay | enq: RR – contention | Concurrent invocation of DBMS_WORKLOAD_* package API |
Reclaimable Space | enq: RS – file delete | Lock held to prevent a file from being accessed during space reclamation
Reclaimable Space | enq: RS – persist alert level | Lock held to make alert level persistent |
Reclaimable Space | enq: RS – prevent aging list update | Lock held to prevent aging list update |
Reclaimable Space | enq: RS – prevent file delete | Lock held to prevent deleting file to reclaim space |
Reclaimable Space | enq: RS – read alert level | Lock held to read alert level |
Reclaimable Space | enq: RS – record reuse | Lock held to prevent a file from being accessed while reusing a circular record
Reclaimable Space | enq: RS – write alert level | Lock held to write alert level |
Redo Thread | enq: RT – contention | Thread locks held by LGWR, DBW0, and RVWR to indicate mounted or open status |
Redo Thread | enq: RT – thread internal enable/disable | Thread locks held by CKPT to synchronize thread enable and disable |
Rolling Migration | enq: RU – contention | Serializes rolling migration operations |
Rolling Migration | enq: RU – waiting | Results of rolling migration CIC |
Materialized View Flags | enq: RW – MV metadata contention | Lock held by CREATE/ALTER/DROP materialized view while updating materialized view flags in detail tables
ASM Extent Relocation Lock | enq: RX – relocate extent | Synchronizes relocating ASM extents |
LogicalStandby | enq: SB – contention | Synchronizes Logical Standby metadata operations |
Session Migration | enq: SE – contention | Synchronizes transparent session migration operations |
AQ Notification Sender | enq: SF – contention | Lock used for recovery when setting Sender for AQ e-mail notifications |
Active Session History Flushing | enq: SH – contention | Should seldom see this contention as this Enqueue is always acquired in no-wait mode |
Streams Table Instantiation | enq: SI – contention | Prevents multiple streams table instantiations
KTSJ Slave Task Cancel | enq: SJ – Slave Task Cancel | Serializes cancelling task executed by slave process |
Shrink Segment | enq: SK – contention | Serialize shrink of a segment |
Serialize Lock request | enq: SL – escalate lock | sending lock escalate to LCK0 |
Serialize Lock request | enq: SL – get lock | sending lock req to LCK0 |
Serialize Lock request | enq: SL – get lock for undo | sending lock req for undo to LCK0 |
Shared Object | enq: SO – contention | Synchronizes access to Shared Object (PL/SQL Shared Object Manager) |
Spare Enqueue | enq: SP – contention | (1) due to one-off patch |
Spare Enqueue | enq: SP – contention 2 | (2) due to one-off patch |
Spare Enqueue | enq: SP – contention 3 | (3) due to one-off patch |
Spare Enqueue | enq: SP – contention 4 | (4) due to one-off patch |
Sequence Cache | enq: SQ – contention | Lock to ensure that only one process can replenish the sequence cache |
Synchronized Replication | enq: SR – contention | Coordinates replication / streams operations |
Sort Segment | enq: SS – contention | Ensures that sort segments created during parallel DML operations aren’t prematurely cleaned up |
Space Transaction | enq: ST – contention | Synchronizes space management activities in dictionary-managed tablespaces |
SaveUndo Segment | enq: SU – contention | Serializes access to SaveUndo Segment |
Suspend Writes | enq: SW – contention | Coordinates the ‘alter system suspend’ operation |
Instance Undo | enq: TA – contention | Serializes operations on undo segments and undo tablespaces |
SQL Tuning Base Existence Cache | enq: TB – SQL Tuning Base Cache Load | Synchronizes writes to the SQL Tuning Base Existence Cache |
SQL Tuning Base Existence Cache | enq: TB – SQL Tuning Base Cache Update | Synchronizes writes to the SQL Tuning Base Existence Cache |
Tablespace Checkpoint | enq: TC – contention | Lock held to guarantee uniqueness of a tablespace checkpoint |
Tablespace Checkpoint | enq: TC – contention2 | Lock held during setup of a unique tablespace checkpoint in null mode
KTF map table enqueue | enq: TD – KTF dump entries | KTF dumping time/scn mappings in SMON_SCN_TIME table |
KTF broadcast | enq: TE – KTF broadcast | KTF broadcasting |
Temporary File | enq: TF – contention | Serializes dropping of a temporary file |
Threshold Chain | enq: TH – metric threshold evaluation | Serializes threshold in-memory chain access |
Auto Task Serialization | enq: TK – Auto Task Serialization | Lock held by MMON to prevent other MMON spawning of Autotask Slave |
Auto Task Serialization | enq: TK – Auto Task Slave Lockout | Serializes spawned Autotask Slaves |
Log Lock | enq: TL – contention | Serializes threshold log table read and update |
DML | enq: TM – contention | Synchronizes accesses to an object |
Temp Object | enq: TO – contention | Synchronizes DDL and DML operations on a temp object |
Runtime Fixed Table Purge | enq: TP – contention | Lock held during purge and dynamic reconfiguration of fixed tables. |
Queue table enqueue | enq: TQ – DDL contention | TM access to the queue table |
Queue table enqueue | enq: TQ – DDL-INI contention | Streams DDL on queue table |
Queue table enqueue | enq: TQ – INI contention | TM access to the queue table |
Queue table enqueue | enq: TQ – TM contention | TM access to the queue table |
Temporary Segment | enq: TS – contention | Serializes accesses to temp segments |
Tablespace | enq: TT – contention | Serializes DDL operations on tablespaces |
Cross-Instance Transaction | enq: TW – contention | Lock held by one instance to wait for transactions on all instances to finish |
Transaction | enq: TX – allocate ITL entry | Allocating an ITL entry in order to begin a transaction |
Transaction | enq: TX – contention | Lock held by a transaction to allow other transactions to wait for it |
Transaction | enq: TX – index contention | Lock held on an index during a split to prevent other operations on it |
Transaction | enq: TX – row lock contention | Lock held on a particular row by a transaction to prevent other transactions from modifying it |
User-defined | enq: UL – contention | Lock used by user applications |
Undo Segment | enq: US – contention | Lock held to perform DDL on the undo segment |
AQ Notification Watermark | enq: WA – contention | Lock used for recovery when setting Watermark for memory usage in AQ notifications |
AWR Flush | enq: WF – contention | This enqueue is used to serialize the flushing of snapshots |
Write gather local enqueue | enq: WG – delete fso | acquire lobid local enqueue when deleting fso |
Write gather local enqueue | enq: WG – lock fso | acquire lobid local enqueue when locking fso |
Being Written Redo Log | enq: WL – RAC-wide SGA contention | Serialize access to RAC-wide SGA |
Being Written Redo Log | enq: WL – RFS global state contention | Serialize access to RFS global state |
Being Written Redo Log | enq: WL – Test access/locking | Testing redo transport access/locking |
Being Written Redo Log | enq: WL – contention | Coordinates access to redo log files and archive logs |
WLM Plan Operations | enq: WM – WLM Plan activation | Synchronizes new WLM Plan activation |
AWR Purge | enq: WP – contention | This enqueue handles concurrency between purging and baselines |
LNS archiving log | enq: WR – contention | Coordinates access to logs by Async LNS and ARCH/FG |
XDB Configuration | enq: XC – XDB Configuration | Lock obtained when incrementing XDB configuration version number |
Auto Online Exadata disks | enq: XD – ASM disk drop/add | Serialize Auto Drop/Add Exadata disk operations |
AQ Notification No-Proxy | enq: XH – contention | Lock used for recovery when setting No Proxy Domains for AQ HTTP notifications |
ASM Extent Fault Lock | enq: XL – fault extent map | Keep multiple processes from faulting in the same extent chunk |
ASM Extent Relocation Enqueue | enq: XQ – purification | wait for relocation before doing block purification |
ASM Extent Relocation Enqueue | enq: XQ – recovery | prevent relocation during _recovery_asserts checking |
ASM Extent Relocation Enqueue | enq: XQ – relocation | wait for recovery before doing relocation |
Quiesce / Force Logging | enq: XR – database force logging | Lock held during database force logging mode |
Quiesce / Force Logging | enq: XR – quiesce database | Lock held during database quiesce |
Internal Test | enq: XY – contention | Lock used for internal testing |
Audit Partition | enq: ZA – add std audit table partition | lock held to add partition to std audit table |
FGA Partition | enq: ZF – add fga audit table partition | lock held to add partition to fga audit table |
File Group | enq: ZG – contention | Coordinates file group operations |
Compression Analyzer | enq: ZH – compression analysis | Synchronizes analysis and insert into compression$, prevents multiple threads analyzing the same table during a load |
Global Context Action | enq: ZZ – update hash tables | lock held for updating global context hash tables |
For those who made it all the way to the bottom, here is a little extra:
While I only found the enqueue waits for 10g on the internet, I managed to find out how to retrieve that information directly from the database. And believe it or not, it's available via a simple query:
SELECT eq_name "Group", ev.name "Enqueue Type", eq.req_description "Description" FROM v$enqueue_statistics eq, v$event_name ev WHERE eq.event#=ev.event# ORDER BY ev.name;
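If you already know which wait event you are hitting, you can filter the same join down to that single event. Here is a minimal sketch, assuming you have access to the v$ views; the TX row lock event is just an example, substitute whichever enqueue wait you are actually seeing (note that the real event names in v$event_name use a plain hyphen):
-- Look up the description of one specific enqueue wait event
SELECT eq.eq_name          "Group",
       ev.name             "Enqueue Type",
       eq.req_description  "Description"
  FROM v$enqueue_statistics eq,
       v$event_name         ev
 WHERE eq.event# = ev.event#
   AND ev.name   = 'enq: TX - row lock contention';
The same pattern works for any of the enqueue types listed above, so you can look up the meaning of whatever enqueue wait shows up in your session waits or AWR report without leaving SQL*Plus.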