Oracle LogMiner

Oracle ships a powerful out-of-the-box tool called LogMiner as part of the database. LogMiner lets you query online and archived redo log files through simple SQL. Because the redo log files record all changes made to user data and to the database dictionary for recovery purposes, LogMiner can serve both as a data auditing tool and as a sophisticated data analysis tool. Here is a small example of how to take a look inside the redo data of your database (for detailed coverage of Oracle LogMiner see Chapter 19 of the Oracle Database Utilities guide).

First of all you need to tell LogMiner which log files you want to examine. For this example I'm just going to use the three redo log files that come with a default installation:


[oracle@localhost ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Wed Jan 26 09:30:09 2011

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> desc v$logfile;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 GROUP#                                             NUMBER
 STATUS                                             VARCHAR2(7)
 TYPE                                               VARCHAR2(7)
 MEMBER                                             VARCHAR2(513)
 IS_RECOVERY_DEST_FILE                              VARCHAR2(3)

SQL> SELECT group#, status, member FROM v$LOGFILE;

 GROUP# STATUS
---------- -------
MEMBER
--------------------------------------------------------------------------------
 3
/home/oracle/app/oracle/oradata/orcl/redo03.log

 2
/home/oracle/app/oracle/oradata/orcl/redo02.log

 1
/home/oracle/app/oracle/oradata/orcl/redo01.log

Next I tell LogMiner to use those three redo log files:


SQL> exec DBMS_LOGMNR.ADD_LOGFILE('/home/oracle/app/oracle/oradata/orcl/redo03.log');

PL/SQL procedure successfully completed.

SQL> exec DBMS_LOGMNR.ADD_LOGFILE('/home/oracle/app/oracle/oradata/orcl/redo02.log');

PL/SQL procedure successfully completed.

SQL> exec DBMS_LOGMNR.ADD_LOGFILE('/home/oracle/app/oracle/oradata/orcl/redo01.log');

PL/SQL procedure successfully completed.
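
By the way, instead of adding every file by hand, the same calls can be generated from V$LOGFILE. A small sketch (just a convenience; the explicit calls above are all this example needs):


-- sketch: register every online redo log member with LogMiner
BEGIN
  FOR rec IN (SELECT member FROM v$logfile ORDER BY group#) LOOP
    DBMS_LOGMNR.ADD_LOGFILE(logfilename => rec.member);
  END LOOP;
END;
/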

To verify that it worked I’ll query the V$LOGMNR_LOGS view which contains all log files that LogMiner will examine:


SQL> SELECT * FROM V$LOGMNR_LOGS;

 LOG_ID
----------
FILENAME
--------------------------------------------------------------------------------
LOW_TIME  HIGH_TIME      DB_ID DB_NAME   RESET_SCN RESET_SCN COMPATIBLE
--------- --------- ---------- -------- ---------- --------- -----------------
 THREAD_ID THREAD_SQN    LOW_SCN   NEXT_SCN DIC DIC TYPE     BLOCKSIZE
---------- ---------- ---------- ---------- --- --- ------- ----------
 FILESIZE INFO                                 STATUS
---------- -------------------------------- ----------
 32
/home/oracle/app/oracle/oradata/orcl/redo02.log
07-DEC-10 07-DEC-10 1229390655 ORCL         754488 30-OCT-09 11.2.0.0.0
 1         32    2056780    2231224 NO  NO  ONLINE         512
 41424384                                           0

 33
/home/oracle/app/oracle/oradata/orcl/redo03.log
07-DEC-10 26-JAN-11 1229390655 ORCL         754488 30-OCT-09 11.2.0.0.0
 1         33    2231224    2252561 NO  NO  ONLINE         512
 7395840                                           0

 34
/home/oracle/app/oracle/oradata/orcl/redo01.log
26-JAN-11 01-JAN-88 1229390655 ORCL         754488 30-OCT-09 11.2.0.0.0
 1         34    2252561 2.8147E+14 NO  NO  ONLINE         512
 0                                           0

Now, after telling LogMiner which log files I want to examine, I'm all set to start the mining process.


SQL> exec DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

PL/SQL procedure successfully completed.

The option DICT_FROM_ONLINE_CATALOG tells LogMiner to read the object information directly from the database dictionary. Without this option Oracle would not translate object IDs into their names, so instead of INSERT INTO TEST (id, text) VALUES… I would see something like INSERT INTO "UNKNOWN"."OBJ# 96768"("COL 1","COL 2") VALUES… That is of course not very useful if you want to see what actually happened, so I'm going to use the dictionary to translate the IDs into the actual names.
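
A side note on the options parameter: the DBMS_LOGMNR constants can simply be added together. I'm not using any additional options in this example, but a combined call would look roughly like this (COMMITTED_DATA_ONLY limits the output to committed transactions, PRINT_PRETTY_SQL formats the reconstructed statements):


-- sketch: combine several LogMiner options in one call
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
               DBMS_LOGMNR.COMMITTED_DATA_ONLY +
               DBMS_LOGMNR.PRINT_PRETTY_SQL);
END;
/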

So now let’s see how it works:


SQL> CREATE TABLE TEST (id NUMBER, text VARCHAR2(25))
TABLESPACE USERS
LOGGING;

Table created.

SQL> INSERT INTO TEST VALUES (1,'Test1');

1 row created.

SQL> INSERT INTO TEST VALUES (2,'Test2');

1 row created.

SQL> COMMIT;

Commit complete.

Note that I specified the LOGGING clause to make sure that logging is enabled for this table. Redo log files exist primarily for instance and media recovery, and the data recorded for those purposes is not necessarily in a form that is meaningful to a user. With LOGGING enabled, enough information is recorded for this simple scenario; for other scenarios you may additionally have to turn on supplemental logging.
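
For reference, this is how supplemental logging would be switched on; it is not needed for the simple example here. The first statement enables minimal database-wide supplemental logging, the second one logs all columns of the test table:


-- shown for reference only, not required for this example
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER TABLE test ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;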

All information that LogMiner has collected is visible in the V$LOGMNR_CONTENTS view. The memory structures behind V$LOGMNR_CONTENTS live in the session's PGA, not in the SGA, so only the session that started LogMiner can retrieve data from V$LOGMNR_CONTENTS:


SQL> SELECT timestamp, scn, start_scn, operation, seg_owner, seg_name, sql_redo
 FROM V$LOGMNR_CONTENTS
 ORDER BY timestamp, scn;

TIMESTAMP                     SCN  START_SCN OPERATION
---------------------- ---------- ---------- --------------------------------
SEG_OWNER
--------------------------------
SEG_NAME
--------------------------------------------------------------------------------
SQL_REDO
--------------------------------------------------------------------------------
27-JAN-2011 09:56:50    125419448            DDL
SYS
TEST
CREATE TABLE TEST (id NUMBER, text VARCHAR2(25))
TABLESPACE USERS
LOGGING ;

27-JAN-2011 09:56:50    125419451            UNSUPPORTED
SYS
SEG$
Unsupported

27-JAN-2011 09:56:50    125419451            UNSUPPORTED
SYS
SEG$
Unsupported

27-JAN-2011 09:56:50    125419452  125419441 COMMIT

commit;

27-JAN-2011 09:56:50    125419453            START

set transaction read write;

27-JAN-2011 09:56:50    125419453            INTERNAL

27-JAN-2011 09:56:50    125419454  125419453 COMMIT

commit;

27-JAN-2011 09:56:50    125419456            START

set transaction read write;

27-JAN-2011 09:56:50    125419456            INSERT
SYS
TEST
insert into "SYS"."TEST"("ID","TEXT") values ('1','Test1');

27-JAN-2011 09:56:50    125419456            INSERT
SYS
TEST
insert into "SYS"."TEST"("ID","TEXT") values ('2','Test2');

27-JAN-2011 09:56:50    125419457  125419456 COMMIT

commit;

As you can see, both the creation of the table and the INSERT statements can be retrieved from the redo log files. You can also see that the CREATE TABLE DDL issues two COMMITs internally, followed by the two INSERT statements I executed. And that's not all: looking a little further up in the output I can also see what the database executed internally in order to create the table:


TIMESTAMP                     SCN  START_SCN OPERATION
---------------------- ---------- ---------- --------------------------------
SEG_OWNER
--------------------------------
SEG_NAME
--------------------------------------------------------------------------------
SQL_REDO
--------------------------------------------------------------------------------
27-JAN-2011 09:56:50    125419441            START

set transaction read write;

27-JAN-2011 09:56:50    125419441            INSERT
SYS
OBJ$
insert into "SYS"."OBJ$"("OBJ#","DATAOBJ#","OWNER#","NAME","NAMESPACE","SUBNAME",
"TYPE#","CTIME","MTIME","STIME","STATUS","REMOTEOWNER","LINKNAME","FLAGS","OID$",
"SPARE1","SPARE2","SPARE3","SPARE4","SPARE5","SPARE6") values ('171359','171359',
'0','TEST','1',NULL,'2',TO_DATE('27-JAN-2011 09:56:50', 'DD-MON-YYYY HH24:MI:SS'),
TO_DATE('27-JAN-2011 09:56:50', 'DD-MON-YYYY HH24:MI:SS'),
TO_DATE('27-JAN-2011 09:56:50', 'DD-MON-YYYY HH24:MI:SS'),
'1',NULL,NULL,'0',NULL,'6','1','0',NULL,NULL,NULL);

27-JAN-2011 09:56:50    125419441            INTERNAL

27-JAN-2011 09:56:50    125419441            START

set transaction read write;

27-JAN-2011 09:56:50    125419441            INTERNAL

27-JAN-2011 09:56:50    125419444            START

set transaction read write;

27-JAN-2011 09:56:50    125419444            UNSUPPORTED
SYS
SEG$
Unsupported

27-JAN-2011 09:56:50    125419445  125419444 COMMIT

commit;

27-JAN-2011 09:56:50    125419445            UNSUPPORTED
SYS
SEG$
Unsupported

27-JAN-2011 09:56:50    125419446            START

set transaction read write;

27-JAN-2011 09:56:50    125419446  125419441 COMMIT

commit;

27-JAN-2011 09:56:50    125419446            UNSUPPORTED
SYS
TAB$
Unsupported

27-JAN-2011 09:56:50    125419447            UNSUPPORTED
SYS
TAB$
Unsupported

27-JAN-2011 09:56:50    125419447  125419446 COMMIT

commit;

27-JAN-2011 09:56:50    125419447            UNSUPPORTED
SYS
COL$
Unsupported

27-JAN-2011 09:56:50    125419447            UNSUPPORTED
SYS
COL$
Unsupported

27-JAN-2011 09:56:50    125419448            DDL
SYS
TEST
CREATE TABLE TEST (id NUMBER, text VARCHAR2(25))
TABLESPACE USERS

As you can see, even the INSERT into the internal OBJ$ table was recorded, along with some operations against SEG$, TAB$ and COL$ that LogMiner reports as unsupported.
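
Since V$LOGMNR_CONTENTS is just a view you can of course narrow the output down with a normal WHERE clause. A query along these lines (run in the same session while LogMiner is still started) would show only the two INSERTs on my TEST table, together with the SQL_UNDO column, which holds the statements needed to roll those changes back:


-- restrict the LogMiner output to the INSERTs on the TEST table
SELECT scn, operation, sql_redo, sql_undo
  FROM v$logmnr_contents
 WHERE seg_owner = 'SYS'
   AND seg_name  = 'TEST'
   AND operation = 'INSERT';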

It is important to note again that LogMiner runs only in the current session: the required memory structures live in the session's PGA, so no other session can retrieve the information from V$LOGMNR_CONTENTS. If you try it from another session you will get: ORA-01306: dbms_logmnr.start_logmnr() must be invoked before selecting from v$logmnr_contents.
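
If other sessions need to work with the results, one common workaround is to materialize them into an ordinary table from the mining session before ending LogMiner (the table name below is just an example):


-- persist the mining results so that other sessions can query them
CREATE TABLE logmnr_results AS
  SELECT timestamp, scn, start_scn, operation, seg_owner, seg_name, sql_redo, sql_undo
    FROM v$logmnr_contents;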

It is also important to always call DBMS_LOGMNR.END_LOGMNR before you log off. Otherwise you will get a silent ORA-600 [723], as the bulletin article "The LogMiner utility" (Doc ID 62508.1) on MOS explains:

If you do not call end_logmnr, you will silently get ORA-00600 [723] …
on logoff. This OERI is triggered because the PGA is bigger at logoff than it was at logon, which is considered a space leak. The main problem from a support perspective is that it is silent, i.e. not signalled back to the user screen, because by then they have logged off.

The way to spot LogMiner leaks is that the trace file produced by the OERI 723 will have A PGA heap dumped with many Chunks of type ‘Freeable’ with a description of “KRVD:alh”
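
So the clean way to finish is to end the LogMiner session explicitly before disconnecting:


-- release the LogMiner memory in the session's PGA before logging off
exec DBMS_LOGMNR.END_LOGMNR();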

Comments

    1. Hi Tim,

      Actually I wasn't even aware that there is a UI for LogMiner in Enterprise Manager!
      But thanks for pointing that out, I'll have a look!

      Thx,

  1. I notice that there is an “id Number”.

    Is this a case identifier that will allow different steps in a process to be linked together?
    For example; user A creates a purchase requisition at time X, then user B converts that same purchase requisition into a purchase order at time Y, etc – these would have the same unique case id so that the sequence of events can be traced.

    I am looking to create an audit trail with the format (case id, event ('create po' or oracle event), event type (start/complete etc), time-stamp (date & time) and user id + appropriate attributes (e.g. customer number & sales etc).

    Is this possible? I am not a technical person but a business analyst looking into mining of Oracle logs.

    Regards,

    George

    1. George,

      If I understand your requirements correctly I think CDC (Change Data Capture) is the way to go.
      Like LogMiner, CDC also reads from the redo logs and applies all changes to your monitored table(s) into a separate change table.
      From there you can then query the change tables for the required information.
      The change table will include the columns of the source table, so by using the primary key you can easily join the transactions back together.

      Check it out: http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/cdc.htm#DWHSG016

      Thx,

  2. Thanks for the reply. I understand GoldenGate is the application to use now.

    One thing I want to clarify is this – is the primary key you mention the same as the transaction code?
    In other words is the transaction code/primary key the case id as described in my example?

    Thanks

    1. Oh, there you go! 🙂

      No, the primary key is one field in the table that uniquely identifies a record in the table.
      The transaction id/code identifies only the transaction itself. But within the transaction there can be one data manipulation or a thousand.
      In your example you will have two different transactions, as you have two different users and therefore two different sessions modifying the purchase.
      However, your purchase record in the table should have a primary key field which gets generated during the initial insert and never changes.
      With that id you can identify your record and find it again in the second modification.

      Thx,

  3. Hi Venzi,

    thank you for this detailed and very clear explanation of LogMiner. Helps a lot for daily business.

    Regards
    Stephan
