Oracle LogMiner, a component of Oracle Database, provides a SQL interface for querying both live and archived redo log files.

The history of activity on a database is stored in redo log files.

What is Oracle LogMiner?

  • Oracle LogMiner is a utility provided by Oracle Corporation to Oracle database purchasers.
  • It allows users to query logged changes to an Oracle database, primarily using SQL commands that reference data in Oracle redo logs.
  • The Oracle Enterprise Manager product includes a graphical user interface for the functionality.
  • Oracle LogMiner applies the concept and practices of Data Mining to the database’s internal processes.

Oracle LogMiner can be used by database administrators to:

  • Determine the time of a database event 
  • Isolate user-initiated transactions 
  • Determine the steps required for data recovery from inadvertent changes
  • Collect actual usage data for performance tuning and capacity planning
  • Audit the commands run against the database

Key Features of Oracle LogMiner

The Oracle redo log files record all changes made to user data or the database dictionary, allowing database recovery operations to be performed.

  1. Identifying When a Logical Corruption of a Database Began: Detecting the onset of a database’s logical corruption, such as application-level errors.
    • These could be errors where the incorrect rows were deleted due to incorrect WHERE clause values, rows were updated with incorrect values, the wrong index was dropped, and so on.
    • For example, a user application could accidentally update a database to give all employees 100% raises instead of 10% raises, or a database administrator (DBA) could delete a critical system table by mistake.
    • It’s critical to know exactly when an error occurred so that you can start time-based or change-based recovery. This allows you to restore the database to its previous state before corruption.
  2. Identifying the Actions Required to Perform Fine-Grained Recovery at the Transaction Level: If you fully understand and account for existing dependencies, you may be able to perform a table-specific undo operation that returns the table to its original state. This is accomplished by applying Oracle LogMiner’s table-specific reconstructed SQL statements in the reverse order in which they were originally issued.
  3. Trend Analysis: Used for performance tuning and capacity planning. You can determine which tables get the most updates and inserts by looking at their statistics. This data also gives you a historical view of disk access statistics, which you can use to tune your system.
  4. Conducting Post Audits: Oracle LogMiner can track any Data Manipulation Language (DML) and Data Definition Language (DDL) statements executed on the database, the order in which they were executed, and who executed them. (However, to use Oracle LogMiner for this purpose, you must know when the event occurred so that you can specify the appropriate logs for analysis; otherwise, you may be forced to mine a large number of redo log files, which can take a long time.) Consider using Oracle LogMiner as a supplement to database usage auditing.

Understanding the Salient Aspects of Oracle LogMiner

Oracle LogMiner Configuration

  • The source database is the one that generates all of the redo log files that Oracle LogMiner will analyze.
  • When Oracle LogMiner performs the analysis, it uses the mining database.
  • The LogMiner dictionary lets Oracle LogMiner present the redo log data you request using table and column names instead of internal object IDs.

Oracle LogMiner uses the dictionary to translate internal object identifiers and datatypes into object names and external data formats. Without a dictionary, Oracle LogMiner returns internal object IDs and presents the data as binary.

Consider the following SQL statement as an example:

INSERT INTO HR.JOBS(JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY)  VALUES('IT_WT','Technical Writer', 4000, 11000);

Oracle LogMiner will display the following without the dictionary:

insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values
(HEXTORAW('45465f4748'),HEXTORAW('546563686e6963616c20577269746572'),
HEXTORAW('c229'),HEXTORAW('c3020b'));
  • The changes made to the database or to the dictionary are recorded in the redo log files.

Oracle LogMiner Configuration Example
  • This example demonstrates how to generate redo logs on one Oracle Database release in one location and send them to a different Oracle Database release in a different location.
  • Consider an Oracle LogMiner configuration in which the Source database is in Boston and the Target database is in San Francisco.
  • The Boston source database generates redo log files, which are archived and shipped to the San Francisco database. A LogMiner dictionary is extracted from these redo log files. San Francisco houses the mining database, where Oracle LogMiner analyzes the redo log files. The Boston and San Francisco databases run Oracle Database 12c and Oracle Database 19c, respectively.
  • Only one valid Oracle LogMiner configuration is shown in this example. Other valid configurations include those that use the same database for both the source and mining databases, or those that provide the data dictionary using a different method.
Oracle LogMiner Requirements

The following objects are required by Oracle LogMiner:

  • A Source database and a Mining database, both with the following features:
    • Both the Source and Mining databases must be installed on the same hardware platform.
    • The Mining database can be the same as the Source database or completely different.
    • The Mining database must run on the same version of Oracle Database software as the Source database, or a later version.
    • The Mining database must use the same character set as the source database (or a superset of the character set).
  • Oracle LogMiner Dictionary
    • The dictionary must be generated by the same Source database that produces the redo log files that Oracle LogMiner will examine.
  • All redo log files that meet the criteria:
    • The source database for the redo log files must be the same.
    • The redo log files must be associated with the same database RESETLOGS SCN.
    • Oracle Database version 8.0 or later is required for the redo log files. However, several Oracle LogMiner features introduced in version 9.0.1 only work with Oracle9i or later redo log files.
    • The names of the tables or columns to be mined cannot be longer than 30 characters.

Directing Oracle LogMiner Operations and Retrieving Data of Interest

The PL/SQL packages DBMS_LOGMNR and DBMS_LOGMNR_D are used to direct Oracle LogMiner operations, and the V$LOGMNR_CONTENTS view is used to retrieve data of interest, as shown below:

  • Step 1: Specify an Oracle LogMiner dictionary.
    • Depending on the type of dictionary you plan to use, call the DBMS_LOGMNR_D.BUILD procedure, specify the dictionary when you start LogMiner (in Step 3), or both.
  • Step 2: Specify a list of redo log files for analysis.
    • Use the DBMS_LOGMNR.ADD_LOGFILE procedure, or tell LogMiner to create the list of log files automatically when you start it (in Step 3).
  • Step 3: Start Oracle LogMiner.
    • Use the DBMS_LOGMNR.START_LOGMNR procedure.
  • Step 4: Request the redo data of interest.
    • Query the V$LOGMNR_CONTENTS view. (To query this view, you must have the SELECT ANY TRANSACTION privilege.)
  • Step 5: End the Oracle LogMiner session.
    • Use the DBMS_LOGMNR.END_LOGMNR procedure. (A minimal end-to-end sketch follows this list.)
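
Putting the five steps together, a minimal session might look like the following sketch (the archived log path is hypothetical, and the online catalog is assumed as the dictionary):

-- Step 2: add one archived redo log file (path is hypothetical).
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
   LOGFILENAME => '/u01/arch/arch_1_100.arc', -
   OPTIONS => DBMS_LOGMNR.NEW);
-- Steps 1 and 3: start LogMiner, using the online catalog as the dictionary.
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
-- Step 4: query the redo data of interest.
SELECT OPERATION, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS;
-- Step 5: end the session.
EXECUTE DBMS_LOGMNR.END_LOGMNR();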

Oracle LogMiner Dictionary Files and Redo Log Files

It’s crucial to understand how Oracle LogMiner works with the LogMiner Dictionary file (or files) and redo log files before you start using it. This will assist you in obtaining accurate results and planning the utilization of your system’s resources.

Oracle LogMiner Dictionary Options

When LogMiner returns redo data to you, it requires a dictionary to convert object IDs to object names.

LogMiner provides three options for supplying the dictionary:

  • Using the Online Catalog: When you have access to the source database from which the redo log files were generated and no changes to the column definitions in the tables of interest are expected, Oracle recommends that you use this option. This is the most effective and straightforward method.
  • Extracting a LogMiner Dictionary to the Redo Log Files: When you don’t expect to have access to the source database from which the redo log files were created, or if you expect to make changes to the column definitions in the tables of interest, Oracle recommends using this option.
  • Extracting the LogMiner Dictionary to a Flat File: This option is maintained for backward compatibility with earlier releases. It does not guarantee transactional consistency; Oracle recommends using the online catalog or extracting the dictionary to the redo log files instead. (The sketch below shows how each option is specified.)
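
The calls below are a minimal sketch of how each option is typically specified. The flat-file example assumes an illustrative file name and server directory (dictionary.ora in /oracle/database/), and the flat-file directory must be accessible to the database server (for example, via the UTL_FILE_DIR initialization parameter in older releases):

-- Online catalog: specified when starting LogMiner.
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
   DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Extract the dictionary to the redo log files.
EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => -
   DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

-- Extract the dictionary to a flat file.
EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/oracle/database/', -
   DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);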

Starting Oracle LogMiner

To start LogMiner, use the DBMS_LOGMNR.START_LOGMNR procedure.

You must call DBMS_LOGMNR.START_LOGMNR before querying the V$LOGMNR_CONTENTS view, because the options available with the DBMS_LOGMNR.START_LOGMNR procedure control what is returned to the V$LOGMNR_CONTENTS view.

When Oracle LogMiner is launched, you can:

  • Define how LogMiner should filter the information it returns (for example, by starting and ending time or SCN value)
  • Set formatting options for the data returned by LogMiner.
  • Set the LogMiner dictionary that will be used.

The OPTIONS parameter to DBMS_LOGMNR.START_LOGMNR can be used to specify the LogMiner settings listed below.

  • DICT_FROM_ONLINE_CATALOG
  • DICT_FROM_REDO_LOGS
  • COMMITTED_DATA_ONLY
  • SKIP_CORRUPTION
  • NO_SQL_DELIMITER
  • PRINT_PRETTY_SQL
  • NO_ROWID_IN_STMT
  • DDL_DICT_TRACKING

When you call the DBMS_LOGMNR.START_LOGMNR procedure, LogMiner verifies that the combination of options and parameters you specified is valid and that the dictionary and redo log files you specified are available. The V$LOGMNR_CONTENTS view, however, is not populated until you query it.

Note that parameters and options are not saved between DBMS_LOGMNR.START_LOGMNR calls. Each time you call DBMS_LOGMNR.START_LOGMNR, you must specify all desired parameters and options (including SCN and time ranges).
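
Multiple options can be combined by adding the DBMS_LOGMNR constants together in the OPTIONS parameter. The following call is a sketch (it assumes the online catalog is available on the mining database) that requests the online-catalog dictionary, committed transactions only, and pretty-printed SQL in one session:

EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
   DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
   DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
   DBMS_LOGMNR.PRINT_PRETTY_SQL);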

Querying V$LOGMNR_CONTENTS for Redo Data of Interest

The V$LOGMNR_CONTENTS view is used to find the redo data you are looking for.

(Note that to query V$LOGMNR_CONTENTS, you must have the SYSDBA or LOGMINING privilege.) This view displays historical information about database changes, including (but not limited to):

  • The type of database change: INSERT, UPDATE, DELETE, or DDL (OPERATION column).
  • The SCN at which the change was made (SCN column).
  • The SCN at which the change was committed (COMMIT_SCN column).
  • The transaction to which the change belongs (XIDUSN, XIDSLT, and XIDSQN columns).
  • The table and schema name of the modified object (SEG_NAME and SEG_OWNER columns).
  • The name of the user who made the change by issuing a DDL or DML statement (USERNAME column).
  • If the change was caused by a SQL DML statement, the reconstructed SQL statement that is equivalent (but not necessarily identical) to the SQL DML that generated the redo records (SQL_REDO column).
  • If a password is part of the statement in a SQL_REDO column, the password is encrypted. SQL_REDO column values for DDL statements, however, are always identical to the SQL DDL that generated the redo records.
  • If the change was caused by a SQL DML statement, the reconstructed SQL statement needed to undo the change (SQL_UNDO column).
    • SQL_UNDO columns related to DDL statements are always NULL. The SQL_UNDO column may also be NULL for some datatypes and for rolled-back operations.

Example of Querying V$LOGMNR_CONTENTS

Issue a SQL query like this to find any delete operations performed on the oe.orders table by a user named Ron:

SELECT OPERATION, SQL_REDO, SQL_UNDO
   FROM V$LOGMNR_CONTENTS
   WHERE SEG_OWNER = 'OE' AND SEG_NAME = 'ORDERS' AND
   OPERATION = 'DELETE' AND USERNAME = 'RON';

The query returns the following results. The formatting on your screen may differ from what is shown here.

OPERATION   SQL_REDO                        SQL_UNDO

DELETE      delete from "OE"."ORDERS"       insert into "OE"."ORDERS"        
            where "ORDER_ID" = '2413'       ("ORDER_ID","ORDER_MODE",
            and "ORDER_MODE" = 'direct'      "CUSTOMER_ID","ORDER_STATUS",
            and "CUSTOMER_ID" = '101'        "ORDER_TOTAL","SALES_REP_ID",
            and "ORDER_STATUS" = '5'         "PROMOTION_ID")
            and "ORDER_TOTAL" = '48552'      values ('2413','direct','101',
            and "SALES_REP_ID" = '161'       '5','48552','161',NULL);     
            and "PROMOTION_ID" IS NULL  
            and ROWID = 'AAAHTCAABAAAZAPAAN';

DELETE      delete from "OE"."ORDERS"        insert into "OE"."ORDERS"
            where "ORDER_ID" = '2430'        ("ORDER_ID","ORDER_MODE",
            and "ORDER_MODE" = 'direct'       "CUSTOMER_ID","ORDER_STATUS",
            and "CUSTOMER_ID" = '101'         "ORDER_TOTAL","SALES_REP_ID",
            and "ORDER_STATUS" = '8'          "PROMOTION_ID")
            and "ORDER_TOTAL" = '29669.9'     values('2430','direct','101',
            and "SALES_REP_ID" = '159'        '8','29669.9','159',NULL);
            and "PROMOTION_ID" IS NULL 
            and ROWID = 'AAAHTCAABAAAZAPAAe';

User Ron deleted two rows from the oe.orders table, according to this output. The reconstructed SQL statements are similar to, but not necessarily identical to, Ron’s original statement. Oracle LogMiner can only show deleted (or updated or inserted) rows individually because the original WHERE clause is not logged in the redo log files.

As a result, even if both rows were deleted by a single DELETE statement, the output in V$LOGMNR_CONTENTS does not reflect this.

The actual DELETE statement could have been DELETE FROM OE.ORDERS WHERE CUSTOMER_ID = '101' or DELETE FROM OE.ORDERS WHERE PROMOTION_ID IS NULL.

Filtering and Formatting Data Returned to V$LOGMNR_CONTENTS

Oracle LogMiner can handle large amounts of data. You can control how much information is returned to the V$LOGMNR_CONTENTS view and how quickly it is returned.

The sections that follow show how to set these limits and how they affect the data returned when you query V$LOGMNR CONTENTS.

  • Showing Only Committed Transactions
  • Skipping Redo Corruptions
  • Filtering Data by Time
  • Filtering Data by SCN

In addition, LogMiner has formatting features for the data returned to V$LOGMNR_CONTENTS, described in the following sections:

  • Formatting Reconstructed SQL Statements for Re-execution
  • Formatting the Appearance of Returned Data for Readability

Showing Only Committed Transactions

When you specify the COMMITTED_DATA_ONLY option to DBMS_LOGMNR.START_LOGMNR, only rows belonging to committed transactions appear in the V$LOGMNR_CONTENTS view. This lets you filter out rolled-back transactions, transactions that are in progress, and internal operations.

When you start Oracle LogMiner, type the following command to enable this option:

EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  DBMS_LOGMNR.COMMITTED_DATA_ONLY);

When the COMMITTED_DATA_ONLY option is specified, LogMiner groups together all DML operations that belong to the same transaction. Transactions are returned in the order in which they were committed.

By default, LogMiner displays rows for all transactions and returns them in the order in which they appear in the redo log files.

Assume you start Oracle LogMiner without specifying the COMMITTED_DATA_ONLY option and issue the following query:

SELECT (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, 
   USERNAME, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE USERNAME != 'SYS' 
   AND SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM');

The results are as follows: rows from different transactions are interleaved, and both committed and uncommitted transactions are returned.

XID         USERNAME  SQL_REDO

1.15.3045   RON       set transaction read write;
1.15.3045   RON       insert into "HR"."JOBS"("JOB_ID","JOB_TITLE",
                      "MIN_SALARY","MAX_SALARY") values ('9782',
                      'HR_ENTRY',NULL,NULL);
1.18.3046   JANE      set transaction read write;
1.18.3046   JANE      insert into "OE"."CUSTOMERS"("CUSTOMER_ID",
                      "CUST_FIRST_NAME","CUST_LAST_NAME",
                      "CUST_ADDRESS","PHONE_NUMBERS","NLS_LANGUAGE",
                      "NLS_TERRITORY","CREDIT_LIMIT","CUST_EMAIL",
                      "ACCOUNT_MGR_ID") values ('9839','Edgar',
                      'Cummings',NULL,NULL,NULL,NULL,
                       NULL,NULL,NULL);
1.9.3041    RAJIV      set transaction read write;
1.9.3041    RAJIV      insert into "OE"."CUSTOMERS"("CUSTOMER_ID",
                       "CUST_FIRST_NAME","CUST_LAST_NAME","CUST_ADDRESS",
                       "PHONE_NUMBERS","NLS_LANGUAGE","NLS_TERRITORY",
                       "CREDIT_LIMIT","CUST_EMAIL","ACCOUNT_MGR_ID") 
                       values ('9499','Rodney','Emerson',NULL,NULL,NULL,NULL,
                       NULL,NULL,NULL);
1.15.3045    RON       commit;
1.8.3054     RON       set transaction read write;
1.8.3054     RON       insert into "HR"."JOBS"("JOB_ID","JOB_TITLE",
                       "MIN_SALARY","MAX_SALARY") values ('9566',
                       'FI_ENTRY',NULL,NULL);
1.18.3046    JANE      commit;
1.11.3047    JANE      set transaction read write;
1.11.3047    JANE      insert into "OE"."CUSTOMERS"("CUSTOMER_ID",
                       "CUST_FIRST_NAME","CUST_LAST_NAME",
                       "CUST_ADDRESS","PHONE_NUMBERS","NLS_LANGUAGE",
                       "NLS_TERRITORY","CREDIT_LIMIT","CUST_EMAIL",
                       "ACCOUNT_MGR_ID") values ('8933','Ronald',
                       'Frost',NULL,NULL,NULL,NULL,NULL,NULL,NULL);
1.11.3047    JANE      commit;
1.8.3054     RON       commit;

Now consider running LogMiner with the COMMITTED_DATA_ONLY option. If you issue the previous query again, you get the following results:

1.15.3045   RON       set transaction read write;
1.15.3045   RON       insert into "HR"."JOBS"("JOB_ID","JOB_TITLE",
                      "MIN_SALARY","MAX_SALARY") values ('9782',
                      'HR_ENTRY',NULL,NULL);
1.15.3045    RON       commit;
1.18.3046   JANE      set transaction read write;
1.18.3046   JANE      insert into "OE"."CUSTOMERS"("CUSTOMER_ID",
                      "CUST_FIRST_NAME","CUST_LAST_NAME",
                      "CUST_ADDRESS","PHONE_NUMBERS","NLS_LANGUAGE",
                      "NLS_TERRITORY","CREDIT_LIMIT","CUST_EMAIL",
                      "ACCOUNT_MGR_ID") values ('9839','Edgar',
                      'Cummings',NULL,NULL,NULL,NULL,
                       NULL,NULL,NULL);
1.18.3046    JANE      commit;
1.11.3047    JANE      set transaction read write;
1.11.3047    JANE      insert into "OE"."CUSTOMERS"("CUSTOMER_ID",
                       "CUST_FIRST_NAME","CUST_LAST_NAME",
                       "CUST_ADDRESS","PHONE_NUMBERS","NLS_LANGUAGE",
                       "NLS_TERRITORY","CREDIT_LIMIT","CUST_EMAIL",
                       "ACCOUNT_MGR_ID") values ('8933','Ronald',
                       'Frost',NULL,NULL,NULL,NULL,NULL,NULL,NULL);
1.11.3047    JANE      commit;
1.8.3054     RON       set transaction read write;
1.8.3054     RON       insert into "HR"."JOBS"("JOB_ID","JOB_TITLE",
                       "MIN_SALARY","MAX_SALARY") values ('9566',
                       'FI_ENTRY',NULL,NULL);
1.8.3054     RON       commit;

The entire 1.15.3045 transaction is returned first because its COMMIT statement was issued before the COMMIT statement for the 1.18.3046 transaction; commit order, not the order in which the transactions started, determines the ordering. Because a COMMIT statement was never issued for the 1.9.3041 transaction, none of it is returned.

Skipping Redo Corruptions

When you specify the SKIP_CORRUPTION option to DBMS_LOGMNR.START_LOGMNR, any corruptions in the redo log files are skipped during select operations from the V$LOGMNR_CONTENTS view.

For each corrupt redo record encountered, a row is returned with CORRUPTED_BLOCKS in the OPERATION column, 1343 in the STATUS column, and the number of blocks skipped in the INFO column.

Be aware that the skipped records may include changes to ongoing transactions in the corrupted blocks; these changes will not be reflected in the data returned from the V$LOGMNR_CONTENTS view.

By default, the select operation terminates at the first corruption it encounters in the redo log file.

The SQL example below demonstrates how this option works:

  • Add redo log files of interest.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(-
   logfilename => '/usr/oracle/data/db1arch_1_16_482701534.log', -
   options => DBMS_LOGMNR.NEW);
  • Start Oracle LogMiner
EXECUTE DBMS_LOGMNR.START_LOGMNR();
  • Select from the V$LOGMNR_CONTENTS view. In this example, the redo log files contain corruptions.
SELECT rbasqn, rbablk, rbabyte, operation, status, info 
   FROM V$LOGMNR_CONTENTS;

ERROR at line 3:
ORA-00368: checksum error in redo log block 
ORA-00353: log corruption near block 6 change 73528 time 11/06/2011 11:30:23 
ORA-00334: archived log: /usr/oracle/data/dbarch1_16_482701534.log
  • Restart LogMiner. This time, specify the SKIP_CORRUPTION option.
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
   options => DBMS_LOGMNR.SKIP_CORRUPTION);
  • Select from the V$LOGMNR_CONTENTS view again. The output indicates that corrupted blocks were skipped: CORRUPTED_BLOCKS is in the OPERATION column, 1343 is in the STATUS column, and the number of corrupt blocks skipped is in the INFO column.
SELECT rbasqn, rbablk, rbabyte, operation, status, info 
   FROM V$LOGMNR_CONTENTS;

RBASQN  RBABLK RBABYTE  OPERATION        STATUS  INFO
13      2        76     START              0
13      2        76     DELETE             0
13      3       100     INTERNAL           0
13      3       380     DELETE             0
13      0         0     CORRUPTED_BLOCKS   1343  corrupt blocks 4 to 19 skipped
13      20      116     UPDATE             0

Filtering Data by Time

To filter data by time, set the STARTTIME and ENDTIME parameters in the DBMS_LOGMNR.START_LOGMNR procedure.

To avoid having to specify the date format in the call to the PL/SQL DBMS_LOGMNR.START_LOGMNR procedure, you can first issue the SQL ALTER SESSION SET NLS_DATE_FORMAT statement, as shown in the following example.

ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   DICTFILENAME => '/oracle/database/dictionary.ora', -
   STARTTIME => '01-Jan-2019 08:30:00', -
   ENDTIME => '01-Jan-2019 08:45:00');

Do not infer the order of redo records from timestamps; use the SCN to determine the order of redo records.
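
For example, a query along the lines of the following sketch orders the returned redo records by SCN rather than by timestamp:

SELECT SCN, TIMESTAMP, OPERATION, SQL_REDO
   FROM V$LOGMNR_CONTENTS
   ORDER BY SCN;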

Filtering Data by SCN

To filter data by SCN (system change number), use the STARTSCN and ENDSCN parameters of the PL/SQL DBMS_LOGMNR.START_LOGMNR procedure.

For example:

EXECUTE DBMS_LOGMNR.START_LOGMNR(-
    STARTSCN => 621047, -
    ENDSCN   => 625695, -
    OPTIONS  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

When all four parameters are specified, the STARTSCN and ENDSCN parameters take precedence over the STARTTIME and ENDTIME parameters.

Formatting Reconstructed SQL Statements for Re-execution

By default, a ROWID clause is included in the reconstructed SQL_REDO and SQL_UNDO statements, and the statements are terminated with a semicolon.

You can, however, change these defaults:

  • Specify the NO_ROWID_IN_STMT option when starting LogMiner.
    • The ROWID clause is then not included in the reconstructed statements. Because row IDs differ between databases, specify the NO_ROWID_IN_STMT option if you intend to re-execute the SQL_REDO or SQL_UNDO statements against a database other than the one on which they were originally executed.
  • Specify the NO_SQL_DELIMITER option when starting LogMiner.
    • The semicolon is then omitted from the reconstructed statements. This is useful for programs that open a cursor and then execute the reconstructed statements.

If the STATUS field of the V$LOGMNR_CONTENTS view contains the value 2 (invalid SQL), the associated SQL statement cannot be executed.
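
For example, the following sketch starts LogMiner with both formatting options, so that the reconstructed statements can be re-executed against another database through a cursor (the online catalog is assumed as the dictionary):

EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
   DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
   DBMS_LOGMNR.NO_ROWID_IN_STMT + -
   DBMS_LOGMNR.NO_SQL_DELIMITER);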

Formatting the Appearance of Returned Data for Readability

LogMiner’s PRINT_PRETTY_SQL option lets you format the returned data to make it easier to read.

A query can sometimes return reconstructed SQL statements that span many columns, which can be visually overwhelming and difficult to read. To address this, LogMiner provides the PRINT_PRETTY_SQL option. Specifying the PRINT_PRETTY_SQL option to the DBMS_LOGMNR.START_LOGMNR procedure formats the reconstructed SQL statements as follows, making them easier to read:

insert into "HR"."JOBS"
 values
    "JOB_ID" = '9782',
    "JOB_TITLE" = 'HR_ENTRY',
    "MIN_SALARY" IS NULL,
    "MAX_SALARY" IS NULL;

update "HR"."JOBS"
  set
    "JOB_TITLE" = 'FI_ENTRY'
  where
    "JOB_TITLE" = 'HR_ENTRY' and
    ROWID = 'AAAHSeAABAAAY+CAAX';

delete from "HR"."JOBS"
 where
    "JOB_ID" = '9782' and
    "JOB_TITLE" = 'FI_ENTRY' and
    "MIN_SALARY" IS NULL and
    "MAX_SALARY" IS NULL and
    ROWID = 'AAAHSeAABAAAY+CAAX';

When the PRINT_PRETTY_SQL option is enabled, the reconstructed SQL statements do not use standard SQL syntax and are therefore not executable.

Reapplying DDL Statements Returned to V$LOGMNR_CONTENTS

  • Some DDL statements that you issue cause Oracle to internally execute one or more additional DDL statements.
  • When reapplying SQL DDL from the SQL_REDO or SQL_UNDO columns of the V$LOGMNR_CONTENTS view, do not execute statements that Oracle executed internally.
  • To distinguish DDL statements issued by a user from those issued internally by Oracle, query the INFO column of V$LOGMNR_CONTENTS. The INFO column indicates whether the DDL was executed by a user or by Oracle.
  • Re-execute the SQL DDL contained in the SQL_REDO or SQL_UNDO column of V$LOGMNR_CONTENTS only if the INFO column contains the value USER_DDL (see the example query below).
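
A query along these lines (a sketch, relying on the OPERATION and INFO values described above) returns only user-issued DDL:

SELECT USERNAME, SQL_REDO
   FROM V$LOGMNR_CONTENTS
   WHERE OPERATION = 'DDL' AND INFO = 'USER_DDL';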

Calling DBMS_LOGMNR.START_LOGMNR Multiple Times

Even after you have successfully called DBMS_LOGMNR.START_LOGMNR and selected from the V$LOGMNR_CONTENTS view, you can call DBMS_LOGMNR.START_LOGMNR again without ending the current LogMiner session, specifying different options and time or SCN ranges.

You might want to do this for the following reasons:

  • You want LogMiner to analyze as little redo data as possible.
  • You need to specify different options. For example, you might specify the PRINT_PRETTY_SQL option, or the COMMITTED_DATA_ONLY option if you want to see only committed transactions.
  • You want to change the analysis time or SCN range.

The following examples show scenarios in which calling DBMS_LOGMNR.START_LOGMNR multiple times is useful.

Example 1: Mining Only a Subset of the Data in the Redo Log Files

Assume you have given LogMiner a list of redo log files to mine that spans a week, but you only want to analyze what happened between 12:00 and 1:00 each day. The most efficient way to do this is as follows:

  • Call DBMS_LOGMNR.START_LOGMNR with this time range for Monday.
  • Select from the V$LOGMNR_CONTENTS view.
  • Repeat the previous two steps for each day of the week (a sketch for one day follows this list).

If the total amount of redo data for the week is large, this method speeds up the analysis considerably, because LogMiner reads only a small portion of each redo log file in the list.
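
For Monday, the pair of calls might look like the following sketch (the date and the dictionary option are illustrative):

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   STARTTIME => TO_DATE('05-Aug-2019 12:00:00', 'DD-MON-YYYY HH24:MI:SS'), -
   ENDTIME   => TO_DATE('05-Aug-2019 13:00:00', 'DD-MON-YYYY HH24:MI:SS'), -
   OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SELECT SQL_REDO FROM V$LOGMNR_CONTENTS;
-- Repeat with Tuesday's 12:00-13:00 window, and so on for the rest of the week.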

Example 2: Adjusting the Time Range or SCN Range

Suppose you start LogMiner with a redo log file list and a time (or SCN) range, and a query against the V$LOGMNR_CONTENTS view shows that only part of the data you are looking for falls within that range. You can call DBMS_LOGMNR.START_LOGMNR again, extending the time range by an hour (or adjusting the SCN range).

Example 3: Analyzing Redo Log Files As They Arrive at a Remote Database

Assume you have created an application to analyze changes, or to replicate changes from one database to another. The source database’s redo log files are shipped to the mining database and placed in an operating system directory. The application then periodically does the following:

  • Adds all redo log files currently in the directory to the redo log file list.
  • Calls DBMS_LOGMNR.START_LOGMNR with the appropriate settings and selects from the V$LOGMNR_CONTENTS view.
  • Adds the new redo log files that have arrived in the directory (a sketch follows this list).
  • Repeats the previous two steps indefinitely.
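
Adding newly arrived files reuses DBMS_LOGMNR.ADD_LOGFILE with the ADDFILE option; the file name below is hypothetical:

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
   LOGFILENAME => '/usr/oracle/data/db1arch_1_17_482701534.log', -
   OPTIONS => DBMS_LOGMNR.ADDFILE);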

Supplemental Logging

Redo log files are commonly used for instance recovery and media recovery. The information required for such operations is automatically recorded in the redo log files. However, a redo-based application may require that additional columns be logged in the redo log files. The process of logging these additional columns is called supplemental logging.

By default, Oracle Database does not provide any supplemental logging, and without it LogMiner is not usable. You must therefore enable at least minimal supplemental logging before generating the log files that LogMiner will analyze, as shown below.
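
Minimal supplemental logging is enabled on the source database with the following statement:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;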

Additional columns may be required in the following situations:

  • An application that applies reconstructed SQL statements to a different database must identify the rows it updates by a set of columns that uniquely identifies each row (for example, a primary key), rather than by the ROWID shown in the reconstructed SQL returned by the V$LOGMNR_CONTENTS view, because the ROWID of one database will be different, and therefore meaningless, in another database.
  • To track row changes more efficiently, an application may require logging the before-image of the entire row, not just the modified columns.

A supplemental log group is the set of additional columns to be logged when supplemental logging is enabled. The type of supplemental log group determines when the columns in the group are logged:

  • Unconditional Supplemental Log Groups: The before-images of the specified columns are logged any time a row is updated, regardless of whether the update affected any of the specified columns. This is sometimes called an ALWAYS log group.
  • Conditional Supplemental Log Groups: The before-images of all specified columns are logged only if at least one of the columns in the log group is updated (see the sketch after this list).
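
The following sketch creates one log group of each kind on a hypothetical HR.EMPLOYEES table; the group and column names are purely illustrative:

-- Unconditional (ALWAYS) log group: before-images of EMPLOYEE_ID and LAST_NAME
-- are logged on every update to a row.
ALTER TABLE HR.EMPLOYEES
   ADD SUPPLEMENTAL LOG GROUP emp_always (EMPLOYEE_ID, LAST_NAME) ALWAYS;

-- Conditional log group: before-images are logged only when SALARY or
-- DEPARTMENT_ID is updated.
ALTER TABLE HR.EMPLOYEES
   ADD SUPPLEMENTAL LOG GROUP emp_cond (SALARY, DEPARTMENT_ID);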

Accessing Oracle LogMiner Operational Information in Views

The following views contain LogMiner operational information (rather than redo data). You can query them like any other view by using SQL.

  • V$LOGMNR_DICTIONARY: Shows information about a LogMiner dictionary file created with the STORE_IN_FLAT_FILE option to DBMS_LOGMNR_D.BUILD. The information displayed includes details about the database from which the LogMiner dictionary was created.
  • V$LOGMNR_LOGS: Shows information about the specified redo log files.
  • V$LOGMNR_PARAMETERS: Shows information about LogMiner’s optional parameters, including starting and ending system change numbers (SCNs) and times.
  • V$DATABASE, DBA_LOG_GROUPS, ALL_LOG_GROUPS, USER_LOG_GROUPS, DBA_LOG_GROUP_COLUMNS, ALL_LOG_GROUP_COLUMNS, USER_LOG_GROUP_COLUMNS: Show details about the current supplemental logging settings (see the query below).
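
For example, to check whether minimal supplemental logging is enabled, you can query V$DATABASE as in this sketch:

SELECT SUPPLEMENTAL_LOG_DATA_MIN
   FROM V$DATABASE;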

Steps in a Typical Oracle LogMiner Session

The following are the typical tasks in extracting and mining data with LogMiner.

  • Understanding How to Run LogMiner Sessions: Running LogMiner sessions on-premises and in Oracle Autonomous Cloud Platform Services is similar, but different users are required.
  • Typical LogMiner Session Task 1: Enable Supplemental Logging: You must enable supplemental logging before you can use LogMiner on redo log files.
  • Typical LogMiner Session Task 2: Extract a LogMiner Dictionary: To use LogMiner, you must provide it with a database dictionary.
  • Typical LogMiner Session Task 3: Specify Redo Log Files for Analysis: Before starting LogMiner, you must specify the redo log files you want to analyze with DBMS_LOGMNR.ADD_LOGFILE.
  • Typical LogMiner Session Task 4: Start LogMiner: Start LogMiner, choosing the options for analyzing redo log files, filter criteria, and other session characteristics.
  • Typical LogMiner Session Task 5: Query V$LOGMNR_CONTENTS: After starting LogMiner, query the Oracle Database V$LOGMNR_CONTENTS view.
  • Typical LogMiner Session Task 6: End the LogMiner Session: End the LogMiner session with DBMS_LOGMNR.END_LOGMNR.

Example Using Oracle LogMiner

The following example shows how LogMiner can be used for data mining: finding all of the database changes made by a single user over a specific period of time.

Assume you want to find all of the changes made by user john over a specific time period. You can use LogMiner to complete this task as follows:

  • Establish a connection to the database.
  • Create a LogMiner dictionary file.
    • You must either create the LogMiner dictionary file before any table definition changes are made to the tables that john uses, or use the online catalog at LogMiner startup. This example uses a LogMiner dictionary that has been extracted to a flat file (orcldict.ora, specified later with DICTFILENAME).
  • Add redo log files.
    • Assume that john has made some changes to the database. You can now specify the names of the redo log files that you want to analyze, as follows:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
   LOGFILENAME => 'log1orc1.ora', -
   OPTIONS => DBMS_LOGMNR.NEW);
  • Add additional redo log files as needed, as follows:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
   LOGFILENAME => 'log2orc1.ora', -
   OPTIONS => DBMS_LOGMNR.ADDFILE);
  • Start LogMiner and specify a time range for the search:
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   DICTFILENAME => 'orcldict.ora', -
   STARTTIME => TO_DATE('01-Jan-1998 08:30:00','DD-MON-YYYY HH:MI:SS'), -
   ENDTIME => TO_DATE('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));
  • Query V$LOGMNR_CONTENTS.
    • The V$LOGMNR_CONTENTS view is now queryable. You decide to find all of the changes made by user john to the salary table. Execute the following SELECT statement:
SELECT SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS 
   WHERE USERNAME = 'john' AND SEG_NAME = 'salary';

Two rows are returned, with both SQL_REDO and SQL_UNDO columns populated (the format of the data display will differ on your screen). You discover that john performed two operations: he deleted his old salary and then inserted a new, higher salary. You now have all the information you need to undo this operation.

SQL_REDO                              SQL_UNDO
--------                              --------
delete from SALARY                    insert into SALARY(NAME, EMPNO, SAL)
where EMPNO = 12345                   values ('JOHN', 12345, 500);
and NAME = 'JOHN'
and SAL = 500;

insert into SALARY(NAME, EMPNO, SAL)  delete from SALARY
values ('JOHN', 12345, 2500);         where EMPNO = 12345
                                      and NAME = 'JOHN'
                                      and SAL = 2500;

2 rows selected
  • End the LogMiner session. Use the DBMS_LOGMNR.END_LOGMNR procedure to properly end the LogMiner session:
EXECUTE DBMS_LOGMNR.END_LOGMNR();

Supported Datatypes, Table Storage Attributes, and Database and Redo Log File Versions

The data types and table storage attributes supported by LogMiner are listed below. Some data types are only supported in specific releases, as described in Compatibility Requirements.

Data Types:

  • BINARY_DOUBLE
  • BINARY_FLOAT
  • BLOB
  • CHAR
  • CLOB and NCLOB
  • DATE
  • INTERVAL YEAR TO MONTH
  • INTERVAL DAY TO SECOND
  • LOBs stored as SecureFiles (requires that the database be run at a compatibility of 11.2 or higher)
  • LONG
  • LONG RAW
  • NCHAR
  • NUMBER
  • NVARCHAR2
  • Objects stored as VARRAYs
  • Objects (Simple and Nested ADTs without Collections)

For object support (including Oracle-supplied types such as SDO_GEOMETRY, ORDIMAGE, and so on), the database must be running Oracle Database 12c Release 1 (12.1) or higher, with a redo compatibility setting of 12.0.0.0 or higher. The SQL_REDO column for XML data-related operations never contains valid SQL or PL/SQL.

  • Oracle Text
  • RAW
  • TIMESTAMP
  • TIMESTAMP WITH TIMEZONE
  • TIMESTAMP WITH LOCAL TIMEZONE
  • VARCHAR and VARCHAR2
  • XDB
  • XMLType data for all storage models, provided the following primary database compatibility requirements are met:
    • XMLType stored as CLOB is deprecated as of Oracle Database 12c Release 1 (12.1).
    • XMLType stored in object-relational format or as binary XML requires Oracle Database 11g Release 2 (11.2.0.3) or higher, with a redo compatibility setting of 11.2.0.3 or higher. The SQL_REDO column for XML data-related operations never contains valid SQL or PL/SQL.

Table Storage Types

  • Table clusters (including index clusters and heap clusters).
  • Index-organized tables (IOTs) (partitioned and nonpartitioned, including overflow segments).
  • Ordinary (heap-organized) tables (partitioned and nonpartitioned).
  • Advanced row compression and basic table compression. Both options require a database compatibility setting of 11.1.0 or higher.
  • Tables containing LOB columns stored as SecureFiles, when database compatibility is set to 11.2 or higher.
  • Tables using Hybrid Columnar Compression, when database compatibility is set to 11.2.0.2 or higher.


Supported Databases and Redo Log File Versions

This section describes the database releases and redo log file versions that LogMiner supports.

LogMiner is only compatible with databases running release 8.1 or later, but you can use it to analyze redo log files from databases running release 8.0.

However, the information that LogMiner can extract from a redo log file is determined by the log’s version, not the database’s release.

When supplemental logging is enabled, for example, redo log files for Oracle9i can be augmented to capture additional information.

This enables you to take full advantage of LogMiner’s capabilities. Because redo log files created with older Oracle releases lack that additional data, LogMiner may be limited in the operations and data types it can support.


Frequently Asked Questions

1. What is the use of LogMiner in Oracle?

Oracle LogMiner is a tool that allows users to query and analyze the contents of Oracle redo log files. Redo log files store all changes made to the database, and LogMiner enables you to mine this historical data for various purposes.

2. Is LogMiner deprecated in Oracle 19c?

No, LogMiner is not deprecated in Oracle 19c.

3. How do I start LogMiner in Oracle?

Starting LogMiner involves preparing the database, creating a directory for dictionary files, populating the dictionary, starting LogMiner with specific redo log files, and querying the data.
