DTrace is now also available for Oracle Linux

Oracle has ported DTrace to Oracle Linux. DTrace is a very powerful performance analysis and troubleshooting tool that allows you to instrument virtually any software. Its name is short for Dynamic Tracing. I’m not a DTrace expert, but some say it is so powerful that it allows you to reverse engineer any software…

New job – Welcome Oracle!

It’s been a long time since my last post on this blog, and there were very good reasons for this. Over the last six and a half months I not only moved from the US to the UK but also switched from my previous company to Oracle. Of course it took a while to settle into the new job, but also to get approval for continuing this blog. You will notice a disclaimer on the right-hand side saying that this blog is my private blog and has nothing to do with Oracle. So obviously nothing that I post here reflects any official view or statement of Oracle or its affiliates! That being said: time to catch up on some posts!

Oracle LogMiner

Oracle provides a powerful out-of-the-box tool called LogMiner within the Oracle database. LogMiner enables you to query online and archived redo log files through simple SQL. The log files of the database contain all changes made to user data or to the database dictionary for recovery purposes. Because of that, LogMiner can serve as a powerful data auditing tool as well as a sophisticated data analysis tool. So here is a small example of how to get an inside look into the redo data of your database (for detailed coverage of Oracle LogMiner have a look at Chapter 19 of the Oracle Database Utilities guide).

First of all you need to tell LogMiner which log files you want to examine. For this example I’m just going to use the three default redo log files that come with a default installation:


[oracle@localhost ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Wed Jan 26 09:30:09 2011

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> desc v$logfile;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 GROUP#                                             NUMBER
 STATUS                                             VARCHAR2(7)
 TYPE                                               VARCHAR2(7)
 MEMBER                                             VARCHAR2(513)
 IS_RECOVERY_DEST_FILE                              VARCHAR2(3)

SQL> SELECT group#, status, member FROM v$LOGFILE;

 GROUP# STATUS
---------- -------
MEMBER
--------------------------------------------------------------------------------
 3
/home/oracle/app/oracle/oradata/orcl/redo03.log

 2
/home/oracle/app/oracle/oradata/orcl/redo02.log

 1
/home/oracle/app/oracle/oradata/orcl/redo01.log

Next I tell LogMiner to use those three redo log files:


SQL> exec DBMS_LOGMNR.ADD_LOGFILE('/home/oracle/app/oracle/oradata/orcl/redo03.log');

PL/SQL procedure successfully completed.

SQL> exec DBMS_LOGMNR.ADD_LOGFILE('/home/oracle/app/oracle/oradata/orcl/redo02.log');

PL/SQL procedure successfully completed.

SQL> exec DBMS_LOGMNR.ADD_LOGFILE('/home/oracle/app/oracle/oradata/orcl/redo01.log');

PL/SQL procedure successfully completed.
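
As a side note, ADD_LOGFILE also takes an OPTIONS parameter if you want to be explicit about building the file list; a small sketch of that form (NEW starts a fresh list, ADDFILE appends to it – omitting the option, as above, works too):


SQL> exec DBMS_LOGMNR.ADD_LOGFILE('/home/oracle/app/oracle/oradata/orcl/redo03.log', options => DBMS_LOGMNR.NEW);

SQL> exec DBMS_LOGMNR.ADD_LOGFILE('/home/oracle/app/oracle/oradata/orcl/redo02.log', options => DBMS_LOGMNR.ADDFILE);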

To verify that it worked, I’ll query the V$LOGMNR_LOGS view, which contains all log files that LogMiner will examine:


SQL> SELECT * FROM V$LOGMNR_LOGS;

 LOG_ID
----------
FILENAME
--------------------------------------------------------------------------------
LOW_TIME  HIGH_TIME      DB_ID DB_NAME   RESET_SCN RESET_SCN COMPATIBLE
--------- --------- ---------- -------- ---------- --------- -----------------
 THREAD_ID THREAD_SQN    LOW_SCN   NEXT_SCN DIC DIC TYPE     BLOCKSIZE
---------- ---------- ---------- ---------- --- --- ------- ----------
 FILESIZE INFO                                 STATUS
---------- -------------------------------- ----------
 32
/home/oracle/app/oracle/oradata/orcl/redo02.log
07-DEC-10 07-DEC-10 1229390655 ORCL         754488 30-OCT-09 11.2.0.0.0

 LOG_ID
----------
FILENAME
--------------------------------------------------------------------------------
LOW_TIME  HIGH_TIME      DB_ID DB_NAME   RESET_SCN RESET_SCN COMPATIBLE
--------- --------- ---------- -------- ---------- --------- -----------------
 THREAD_ID THREAD_SQN    LOW_SCN   NEXT_SCN DIC DIC TYPE     BLOCKSIZE
---------- ---------- ---------- ---------- --- --- ------- ----------
 FILESIZE INFO                                 STATUS
---------- -------------------------------- ----------
 1         32    2056780    2231224 NO  NO  ONLINE         512
 41424384                                           0

 LOG_ID
----------
FILENAME
--------------------------------------------------------------------------------
LOW_TIME  HIGH_TIME      DB_ID DB_NAME   RESET_SCN RESET_SCN COMPATIBLE
--------- --------- ---------- -------- ---------- --------- -----------------
 THREAD_ID THREAD_SQN    LOW_SCN   NEXT_SCN DIC DIC TYPE     BLOCKSIZE
---------- ---------- ---------- ---------- --- --- ------- ----------
 FILESIZE INFO                                 STATUS
---------- -------------------------------- ----------
 33
/home/oracle/app/oracle/oradata/orcl/redo03.log
07-DEC-10 26-JAN-11 1229390655 ORCL         754488 30-OCT-09 11.2.0.0.0

 LOG_ID
----------
FILENAME
--------------------------------------------------------------------------------
LOW_TIME  HIGH_TIME      DB_ID DB_NAME   RESET_SCN RESET_SCN COMPATIBLE
--------- --------- ---------- -------- ---------- --------- -----------------
 THREAD_ID THREAD_SQN    LOW_SCN   NEXT_SCN DIC DIC TYPE     BLOCKSIZE
---------- ---------- ---------- ---------- --- --- ------- ----------
 FILESIZE INFO                                 STATUS
---------- -------------------------------- ----------
 1         33    2231224    2252561 NO  NO  ONLINE         512
 7395840                                           0

 LOG_ID
----------
FILENAME
--------------------------------------------------------------------------------
LOW_TIME  HIGH_TIME      DB_ID DB_NAME   RESET_SCN RESET_SCN COMPATIBLE
--------- --------- ---------- -------- ---------- --------- -----------------
 THREAD_ID THREAD_SQN    LOW_SCN   NEXT_SCN DIC DIC TYPE     BLOCKSIZE
---------- ---------- ---------- ---------- --- --- ------- ----------
 FILESIZE INFO                                 STATUS
---------- -------------------------------- ----------
 34
/home/oracle/app/oracle/oradata/orcl/redo01.log
26-JAN-11 01-JAN-88 1229390655 ORCL         754488 30-OCT-09 11.2.0.0.0

 LOG_ID
----------
FILENAME
--------------------------------------------------------------------------------
LOW_TIME  HIGH_TIME      DB_ID DB_NAME   RESET_SCN RESET_SCN COMPATIBLE
--------- --------- ---------- -------- ---------- --------- -----------------
 THREAD_ID THREAD_SQN    LOW_SCN   NEXT_SCN DIC DIC TYPE     BLOCKSIZE
---------- ---------- ---------- ---------- --- --- ------- ----------
 FILESIZE INFO                                 STATUS
---------- -------------------------------- ----------
 1         34    2252561 2.8147E+14 NO  NO  ONLINE         512
 0                                           0

Now, after telling LogMiner which log files I want to examine, I’m all set to start the mining process.


SQL> exec DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

PL/SQL procedure successfully completed.

The option DICT_FROM_ONLINE_CATALOG tells the LogMiner process to read the object information directly from the database dictionary. If I didn’t specify this option, Oracle wouldn’t translate the object IDs into their given names. So instead of INSERT INTO TEST (id, text) VALUES… I would see something like insert into "UNKNOWN"."OBJ# 96768"("COL 1","COL 2") values… That of course is not very useful if you want to see what actually happened, so I’m going to use the dictionary to translate the IDs into the given names.
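
START_LOGMNR also accepts several options combined with +. For example, if you only care about changes from committed transactions, you could combine the dictionary option with COMMITTED_DATA_ONLY – a small sketch, assuming the same setup as above:


SQL> exec DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);

PL/SQL procedure successfully completed.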

So now let’s see how it works:


SQL> CREATE TABLE TEST (id NUMBER, text VARCHAR2(25))
TABLESPACE USERS
LOGGING;

Table created.

SQL> INSERT INTO TEST VALUES (1,'Test1');
INSERT INTO TEST VALUES (2,'Test2');
COMMIT;

1 row created.

SQL>
1 row created.

SQL>
Commit complete.

Note that I specified the LOGGING clause to make sure that logging is enabled for that table. Redo log files are primarily used for instance and/or media recovery. The data needed for such operations is automatically recorded in the redo log files. However, the recorded data might not be sufficient to be meaningful to a user. Specifying the LOGGING clause records a little more information, which is enough for this scenario. For other scenarios it might also be required to turn on supplemental logging.
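
Turning on supplemental logging is a one-liner, either database-wide or per table; a minimal sketch (the table-level variant assumes the TEST table created above):


SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

SQL> ALTER TABLE TEST ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;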

All information that LogMiner has collected is visible in the V$LOGMNR_CONTENTS view. The memory structure backing V$LOGMNR_CONTENTS is held in the session’s PGA memory, not in the SGA. Therefore only the session that started LogMiner can retrieve data from V$LOGMNR_CONTENTS:


SQL> SELECT timestamp, scn, start_scn, operation, seg_owner, seg_name, sql_redo
 FROM V$LOGMNR_CONTENTS
 ORDER BY timestamp, scn;

TIMESTAMP                     SCN  START_SCN OPERATION
---------------------- ---------- ---------- --------------------------------
SEG_OWNER
--------------------------------
SEG_NAME
--------------------------------------------------------------------------------
SQL_REDO
--------------------------------------------------------------------------------
27-JAN-2011 09:56:50    125419448            DDL
SYS
TEST
CREATE TABLE TEST (id NUMBER, text VARCHAR2(25))
TABLESPACE USERS
LOGGING ;

27-JAN-2011 09:56:50    125419451            UNSUPPORTED
SYS
SEG$
Unsupported

27-JAN-2011 09:56:50    125419451            UNSUPPORTED
SYS
SEG$
Unsupported

27-JAN-2011 09:56:50    125419452  125419441 COMMIT

TIMESTAMP                     SCN  START_SCN OPERATION
---------------------- ---------- ---------- --------------------------------
SEG_OWNER
--------------------------------
SEG_NAME
--------------------------------------------------------------------------------
SQL_REDO
--------------------------------------------------------------------------------
commit;

27-JAN-2011 09:56:50    125419453            START

set transaction read write;

27-JAN-2011 09:56:50    125419453            INTERNAL

27-JAN-2011 09:56:50    125419454  125419453 COMMIT

TIMESTAMP                     SCN  START_SCN OPERATION
---------------------- ---------- ---------- --------------------------------
SEG_OWNER
--------------------------------
SEG_NAME
--------------------------------------------------------------------------------
SQL_REDO
--------------------------------------------------------------------------------
commit;

27-JAN-2011 09:56:50    125419456            START

set transaction read write;

27-JAN-2011 09:56:50    125419456            INSERT
SYS
TEST
insert into "SYS"."TEST"("ID","TEXT") values ('1','Test1');

27-JAN-2011 09:56:50    125419456            INSERT
SYS
TEST

TIMESTAMP                     SCN  START_SCN OPERATION
---------------------- ---------- ---------- --------------------------------
SEG_OWNER
--------------------------------
SEG_NAME
--------------------------------------------------------------------------------
SQL_REDO
--------------------------------------------------------------------------------
insert into "SYS"."TEST"("ID","TEXT") values ('2','Test2');

27-JAN-2011 09:56:50    125419457  125419456 COMMIT

commit;

As you can see, the creation of the table as well as the INSERT statements can be retrieved from the redo log files. You can also see that the CREATE TABLE DDL issues two COMMITs internally. After that you see the two INSERT statements I executed. And that’s not everything: if I look a little further up, I can also see what the database executed internally in order to create the table:


TIMESTAMP                     SCN  START_SCN OPERATION
---------------------- ---------- ---------- --------------------------------
SEG_OWNER
--------------------------------
SEG_NAME
--------------------------------------------------------------------------------
SQL_REDO
--------------------------------------------------------------------------------
27-JAN-2011 09:56:50    125419441            START

set transaction read write;

27-JAN-2011 09:56:50    125419441            INSERT
SYS
OBJ$
insert into "SYS"."OBJ$"("OBJ#","DATAOBJ#","OWNER#","NAME","NAMESPACE","SUBNAME"
,"TYPE#","CTIME","MTIME","STIME","STATUS","REMOTEOWNER","LINKNAME","FLAGS","OID$
","SPARE1","SPARE2","SPARE3","SPARE4","SPARE5","SPARE6") values ('171359','17135
9','0','TEST','1',NULL,'2',TO_DATE('27-JAN-2011 09:56:50', 'DD-MON-YYYY HH24:MI:
SS'),TO_DATE('27-JAN-2011 09:56:50', 'DD-MON-YYYY HH24:MI:SS'),TO_DATE('27-JAN-2
011 09:56:50', 'DD-MON-YYYY HH24:MI:SS'),'1',NULL,NULL,'0',NULL,'6','1','0',NULL
,NULL,NULL);

TIMESTAMP                     SCN  START_SCN OPERATION
---------------------- ---------- ---------- --------------------------------
SEG_OWNER
--------------------------------
SEG_NAME
--------------------------------------------------------------------------------
SQL_REDO
--------------------------------------------------------------------------------
27-JAN-2011 09:56:50    125419441            INTERNAL

27-JAN-2011 09:56:50    125419441            START

set transaction read write;

27-JAN-2011 09:56:50    125419441            INTERNAL

TIMESTAMP                     SCN  START_SCN OPERATION
---------------------- ---------- ---------- --------------------------------
SEG_OWNER
--------------------------------
SEG_NAME
--------------------------------------------------------------------------------
SQL_REDO
--------------------------------------------------------------------------------
27-JAN-2011 09:56:50    125419444            START

set transaction read write;

27-JAN-2011 09:56:50    125419444            UNSUPPORTED
SYS
SEG$
Unsupported

27-JAN-2011 09:56:50    125419445  125419444 COMMIT

commit;

TIMESTAMP                     SCN  START_SCN OPERATION
---------------------- ---------- ---------- --------------------------------
SEG_OWNER
--------------------------------
SEG_NAME
--------------------------------------------------------------------------------
SQL_REDO
--------------------------------------------------------------------------------
27-JAN-2011 09:56:50    125419445            UNSUPPORTED
SYS
SEG$
Unsupported

27-JAN-2011 09:56:50    125419446            START

set transaction read write;

27-JAN-2011 09:56:50    125419446  125419441 COMMIT

commit;

TIMESTAMP                     SCN  START_SCN OPERATION
---------------------- ---------- ---------- --------------------------------
SEG_OWNER
--------------------------------
SEG_NAME
--------------------------------------------------------------------------------
SQL_REDO
--------------------------------------------------------------------------------
27-JAN-2011 09:56:50    125419446            UNSUPPORTED
SYS
TAB$
Unsupported

27-JAN-2011 09:56:50    125419447            UNSUPPORTED
SYS
TAB$
Unsupported

27-JAN-2011 09:56:50    125419447  125419446 COMMIT

commit;

TIMESTAMP                     SCN  START_SCN OPERATION
---------------------- ---------- ---------- --------------------------------
SEG_OWNER
--------------------------------
SEG_NAME
--------------------------------------------------------------------------------
SQL_REDO
--------------------------------------------------------------------------------
27-JAN-2011 09:56:50    125419447            UNSUPPORTED
SYS
COL$
Unsupported

27-JAN-2011 09:56:50    125419447            UNSUPPORTED
SYS
COL$
Unsupported

27-JAN-2011 09:56:50    125419448            DDL
SYS
TEST
CREATE TABLE TEST (id NUMBER, text VARCHAR2(25))
TABLESPACE USERS

As you can see, even the INSERT into the internal OBJ$ table got recorded, as well as some unsupported operations against SEG$, TAB$ and COL$.

Important to note again is that LogMiner runs only in the current session. The required memory structure lives within the session’s PGA. That means no other session can retrieve the information from V$LOGMNR_CONTENTS. If you try it from another session you will get: ORA-01306: dbms_logmnr.start_logmnr() must be invoked before selecting from v$logmnr_contents
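
Within the mining session itself, V$LOGMNR_CONTENTS can be filtered like any other view. It also exposes a SQL_UNDO column next to SQL_REDO, so the same session could, for example, pull the statements needed to reverse the inserts from above – a small sketch, with the filter values matching this example:


SQL> SELECT scn, sql_undo
 FROM V$LOGMNR_CONTENTS
 WHERE seg_owner = 'SYS'
 AND seg_name = 'TEST'
 AND operation = 'INSERT';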

Also important: you should always call DBMS_LOGMNR.END_LOGMNR(); before you log off. Otherwise you will silently get an ORA-600 [723]… as the bulletin article “The LogMiner utility” (Doc ID 62508.1) on MOS says:

If you do not call end_logmnr, you will silently get ORA-00600 [723] …
on logoff. This OERI is triggered because the PGA is bigger at logoff than it was at logon, which is considered a space leak. The main problem from a support perspective is that it is silent, i.e. not signalled back to the user screen, because by then they have logged off.

The way to spot LogMiner leaks is that the trace file produced by the OERI 723 will have A PGA heap dumped with many Chunks of type ‘Freeable’ with a description of “KRVD:alh”
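
So the clean way to finish is to end the LogMiner session explicitly before logging off:


SQL> exec DBMS_LOGMNR.END_LOGMNR;

PL/SQL procedure successfully completed.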

Ask Tom search engine

Today my boss asked me if I knew how to fix the Firefox search engine for AskTom.oracle.com, since Tom Kyte has moved the site to APEX. Actually I didn’t even have the search engine installed anymore (I hope that means I’m already so good with Oracle myself 😉 ) but I thought it would be nice to have it again.

Therefore I quickly built the search engine for the new Ask Tom site.

To install it, download this file by right-clicking on it and choosing “Save Link As”:

SearchEngineInstaller.png

Unfortunately – or rather fortunately – WordPress allows neither JavaScript within posts nor XML files as media. Therefore I have to work with faked png files here.
After you download the file, rename it to "SearchEngineInstaller.html" and open it in your browser.
Then click on "Install Ask Tom search engine", install the engine, and "Mission accomplished"!

If you are interested in the "code", you can download the search engine file here: AskTomSearchEngine.png
Again, right-click and choose "Save Link As", then rename it to "AskTomSearchEngine.xml".

A few facts about TimesTen

  • TimesTen is usually deployed in the middle tier – the same location where the application runs
  • TimesTen is persistent and can run as the only database on a system
  • TimesTen persists to disk via a background process
  • TimesTen supports high availability
  • TimesTen can be used as a cache between the application and the Oracle database
  • TimesTen can be linked into applications directly – calls to TimesTen happen as normal subroutine calls in the code, which require no context switches on the system
  • You can cache either a group of related tables or even a subset of specific rows (see the sketch after this list)
  • TimesTen supports read-only and updatable caches
  • TimesTen supports “DynamicLoad”, which loads data from Oracle when you need it
  • You can have as many read-only subscribers as you want
  • TimesTen supports SQL and PL/SQL
  • TimesTen is certified with all major application servers and O/R mapping tools like Hibernate
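
To give the caching point some shape, here is a minimal sketch of a TimesTen read-only cache group; the schema, table and column names are made up for illustration, and the exact options depend on your setup:

-- Cache the Oracle table sales.customers in TimesTen and
-- refresh it automatically from Oracle every 5 seconds
CREATE READONLY CACHE GROUP customers_cg
AUTOREFRESH INTERVAL 5 SECONDS
FROM sales.customers (
  cust_id NUMBER NOT NULL,
  name    VARCHAR2(50),
  PRIMARY KEY (cust_id)
);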

The big picture

A couple of weeks ago a customer had some trouble with overall performance. They complained that everything was slow and that (of course) nothing had changed in the last few weeks. They also immediately blamed the database on their side as the issue. The usual stuff – I’m sure that sounds all too familiar to you.

As they had already “identified” the database as the issue, my team requested some AWR reports, and when we got them I noticed a strange but all too common behavior. My teammates took the AWR reports, went to the SQL Statistics sections “SQL ordered by Elapsed Time” and “SQL ordered by CPU Time”, and immediately identified a materialized view rebuild as the cause of the problem. That looked like this:

SQL ordered by Elapsed Time

  • Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.
  • % Total DB Time is the Elapsed Time of the SQL statement divided into the Total Database Time multiplied by 100
Elapsed Time (s) CPU Time (s) Executions Elap per Exec (s) % Total DB Time SQL Id SQL Module SQL Text
1,343 679 1 1342.85 33.00 f20ccnxhvbk65 DECLARE job BINARY_INTEGER := …
770 323 1 770.24 18.93 1usnr4gmcq60d /* MV_REFRESH (DEL) */ delete …
571 355 1 571.23 14.04 gz04689vd55db /* MV_REFRESH (INS) */INSERT /…
311 311 1 310.91 7.64 0vhmfumrjchnh SQL*Plus BEGIN dm_incr_symbols_post_pro…
288 9 0 7.09 bb3f2gjndvjss oracle@crptd1 (TNS V1-V3) SELECT /*+ OPAQUE_TRANSFORM */…
258 258 1 257.76 6.34 75vtwb7j4jzdm SQL*Plus INSERT INTO SYMB_EXTRACTT SELE…
149 121 1 148.80 3.66 90wtn50vy6af6 DECLARE job BINARY_INTEGER := …
114 103 1 113.75 2.80 9993mp6h7kqkp INSERT /*+ BYPASS_RECURSIVE_CH…
48 38 2 24.00 1.18 3nkcg1h5ysqss DECLARE job BINARY_INTEGER := …
48 38 2 23.98 1.18 fvb5prrr7b0c3 MERGE INTO FT_E_UPS1 UPS1 USIN…


SQL ordered by CPU Time

  • Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.
  • % Total DB Time is the Elapsed Time of the SQL statement divided into the Total Database Time multiplied by 100
CPU Time (s) Elapsed Time (s) Executions CPU per Exec (s) % Total DB Time SQL Id SQL Module SQL Text
679 1,343 1 679.06 33.00 f20ccnxhvbk65 DECLARE job BINARY_INTEGER := …
355 571 1 355.50 14.04 gz04689vd55db /* MV_REFRESH (INS) */INSERT /…
323 770 1 323.18 18.93 1usnr4gmcq60d /* MV_REFRESH (DEL) */ delete …
311 311 1 310.89 7.64 0vhmfumrjchnh SQL*Plus BEGIN dm_incr_symbols_post_pro…
258 258 1 257.74 6.34 75vtwb7j4jzdm SQL*Plus INSERT INTO SYMB_EXTRACTT SELE…
121 149 1 121.05 3.66 90wtn50vy6af6 DECLARE job BINARY_INTEGER := …
103 114 1 102.58 2.80 9993mp6h7kqkp INSERT /*+ BYPASS_RECURSIVE_CH…
38 48 2 19.20 1.18 3nkcg1h5ysqss DECLARE job BINARY_INTEGER := …
38 48 2 19.20 1.18 fvb5prrr7b0c3 MERGE INTO FT_E_UPS1 UPS1 USIN…
29 29 12 2.38 0.70 68930z34bm3db SQL*Plus select 'file[' || substr(trim(…
9 288 0 7.09 bb3f2gjndvjss oracle@dftg1 (TNS V1-V3) SELECT /*+ OPAQUE_TRANSFORM */…

So here we have a PL/SQL job which performs the materialized view refresh (first line) and the statements for the refresh itself as the second and third lines. For completeness, here is the first statement:


DECLARE
  job BINARY_INTEGER := :job;
  next_date DATE := :mydate;
  broken BOOLEAN := FALSE;
BEGIN
  dbms_refresh.refresh('"ABTGHS"."DB_CREAT_STST_PWOR"');
  :mydate := next_date;
  IF broken THEN
    :b := 1;
  ELSE
    :b := 0;
  END IF;
END;

So all they did was go there, look at the first SQL with high elapsed and CPU time, and nearly report back that this was the problem and the customer had to solve it. I call this the “lucky shot method”. Sometimes, when you are lucky, the reason for the issue on the DB really is a bad SQL or a bunch of bad SQLs which max out the DB on CPU, I/O or whatever. In such a case you just go to those sections, identify the SQLs, fix them and everything is good again – you were lucky.

This works sometimes, and you are a hero because it took you just 5 minutes to fix the issue. But sometimes it doesn’t, and you blame it on some weird constellation of OS, network and something else, so that nobody notices you just didn’t look at the big picture. Thinking about that, I realized this is far too common in IT. Tech admins, DBAs, developers – in every area you have people like this: they look 5 minutes into an issue and then tell you that this and that is the problem. You go ahead and fix it, but still no change. So next round trip, and next, and next, and next. All of it could have been prevented if the person had just once looked at the big picture…

So what is the big picture, what do I mean by that?

I’m a fan of knowing what happens and why. Always analyze all the information you have, request more if you think some vital information is missing, and draw your conclusions from that. Stop the trial-and-error method, the lucky shot method. The AWR report I got from the customer is an all too good example. There were two simple lines which made me curious:

Snap Id Snap Time Sessions Cursors/Session
Begin Snap: 17934 05-May-10 09:00:59 129 144.1
End Snap: 17935 05-May-10 10:00:12 130 143.0
Elapsed: 59.21 (mins)
DB Time: 67.81 (mins)

I’m talking about the last two. The first tells me that the snapshots being compared cover a time range of 59.21 minutes. The DB time, i.e. the time the database was actually working, was 67.81 minutes. So something looks strange here: on a single-core system the DB time could never exceed the 59.21 minutes of elapsed time. On a dual-core system the database would have been working only 57% of the time (59.21 x 2 cores = 118.42 minutes – DB time scales with the number of cores on which work can be performed in parallel). Next I had a look at the locking going on: with heavy locking the DB can look mostly idle while throughput is low. But it turned out that this was no issue either. The next piece of information then confirmed that the DB wasn’t the problem at all:

Operating System Statistics

Statistic Total
AVG_BUSY_TIME 33,088
AVG_IDLE_TIME 321,941

The average idle time statistic is much higher than the busy time. So I asked for the number of cores on that system, and it turned out there were 14 available. A fully busy database would have had a DB time of up to 828.94 minutes (59.21 x 14). No locking going on. I followed up with them and it turned out as I thought: the database wasn’t the problem at all. In fact, it was pretty much idle the whole time. The issue was on the application side: a Java application whose memory had filled up, so it had to run full garbage collections constantly to continue processing.
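
By the way, you don’t have to wait for an AWR report to see those numbers; a quick query against V$OSSTAT shows the core count and CPU usage directly (the statistic names below are the documented ones, though availability varies by platform):


SQL> SELECT stat_name, value
 FROM V$OSSTAT
 WHERE stat_name IN ('NUM_CPUS', 'BUSY_TIME', 'IDLE_TIME');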

Instead of going for the lucky shot, we saved ourselves and them a lot of headache by looking at the big picture.

Bitmap indexes and cardinality

In my last two posts I talked a little about bitmap indexes, how they work and why they lock. I also mentioned that they should be used for low-cardinality columns, but that there is no cut-and-dried answer about what low cardinality is. I’m not going to start talking about cardinality on bitmap indexes here; Richard Foote did a much better job on that, so if you are interested, check out the following blog posts from him:

Oracle Seminar

Last Wednesday I had the chance to attend an Oracle seminar at the Marriott hotel in New York City. I was really happy, since it had been a long time since I last attended an Oracle event (except the Oracle User Group meeting, but that’s not Oracle itself). There were four tracks to join: Database developer, .NET developer, Java developer and APEX developer. All of them sounded interesting, but as you had to bring your own laptop and install a specific image for each track, I had to decide on one. Well, it wasn’t too hard for me, and I took the Database developer track with hands-on labs on SQL Developer, SQL Developer Data Modeler, TimesTen and, last but not least, the XML database. I have to say that I was really excited about the sessions. I have used SQL Developer and the Data Modeler part for quite a long time already, so there wasn’t much new for me, but still something to learn (there is always something to learn!). The TimesTen and XML database parts were totally new to me, and I was surprised what Oracle can already do with all the other products they own. The funniest thing was that I won a book on SQL Developer 2.1, which is brand-new and still fresh from the press! 😉 This was because I was the first to finish all the SQL Developer hands-on labs – I wasn’t aware that I could win a prize for this, but hey, a book is always good! 🙂

So the best things out of this seminar were:

  • SQL Developer can debug PL/SQL (not really new)
  • SQL Developer can build, maintain and execute unit tests for PL/SQL
  • SQL Developer can connect to TimesTen since version 1.6
  • SQL Developer has a new query executor which makes executing statements against different databases much more flexible
  • TimesTen is really fast if you use it right (my own tests on my laptop gave me 8.6 times more performance)

Well, to wrap up, it’s time for some pics:

And here the prize I won:

SQL CASE expression

How often have you been in a situation where you wanted to do an IF/ELSE in a SQL query? How often have you then written “SELECT … DECODE(expression, value, return, value, return, default_return) FROM …” – or, even worse, selected the values and did the IF/ELSE in PL/SQL?

The good thing in Oracle is that you can do a real IF/ELSE in the SQL query using the CASE expression. The documentation says the following:

CASE expressions let you use IF … THEN … ELSE logic in SQL statements without having to invoke procedures. The syntax is:

CASE { simple_case_expression
     | searched_case_expression
     }
     [ else_clause ]
     END

OK, first a short example of the “simple_case_expression”:

SQL> SELECT name,
  2         CASE salery
  3              WHEN 1000 THEN 'Low'
  4              WHEN 2000 THEN 'Medium'
  5              WHEN 3000 THEN 'Medium'
  6              ELSE 'High'
  7         END
  8    FROM employee;

NAME                      CASESA
------------------------- ------
John                      Low
Andreas                   Medium
Harry                     Medium
Mike                      High

So here you can see a simple IF/ELSIF/ELSE based on a column value. But what if you have some more complex logic, like comparing two values with an AND? That works as well, with the “searched_case_expression”:

SQL> select name,
  2         CASE
  3              WHEN salery = 1000 THEN 'Low'
  4              WHEN salery = 2000 OR salery = 3000 THEN 'Medium'
  5              ELSE 'High'
  6         END
  7    FROM employee;

NAME                      CASEWH
------------------------- ------
John                      Low
Andreas                   Medium
Harry                     Medium
Mike                      High

Here you can see that I’m using an OR clause, just like in an ELSIF. That also works with bind variables, of course!

So this should help you in the future when you have a SELECT and find yourself reaching for DECODE or PL/SQL IF/ELSE!