Active Data Guard in Action for Block Corruptions

Active Data Guard allows a standby database to be open read only while redo apply is running. This way you get a near real-time (depending on the gap with your primary) read-only copy of your primary system that still has all the functionality of a normal Data Guard standby. One of the cool features of Active Data Guard is Automatic Block Repair, and in this post I will demonstrate it.

A data block can be lost during the write phase, zeroed out, or otherwise corrupted. The first time that block is read again, an error about the corruption would normally be returned to the end user. If Active Data Guard is enabled, however, the corrupt block on the primary is automatically restored from one of the active standbys and no error is returned to the user. Everything happens in the background, transparently to the users.

I assume the following points:
* the block corruption has NOT been propagated to the standby
* all the redo logs and archive logs needed to recover that block are accessible (not many archive logs are required if your lag is small)

Be careful: you need an extra licence to enable Active Data Guard.
http://docs.oracle.com/cd/E11882_01/license.112/e47877/options.htm#DBLIC141

Start up the standby as an Active Data Guard standby.

SQL> startup mount ;     
Database mounted.

SQL> alter database open read only ;
Database altered.

SQL> select open_mode from v$database ;

OPEN_MODE
--------------------
READ ONLY

SQL> alter database recover managed standby database using current logfile disconnect ;
Database altered.

SQL> select open_mode from v$database ;

OPEN_MODE
--------------------
READ ONLY WITH APPLY

Set up the Test Case on Primary Database

SQL> conn mehmeteser
Enter password: 
Connected.
SQL> create table junk_tbl tablespace JUNK_TBS as select * from dba_objects ;
Table created.

SQL> select *
  2    from (select distinct dbms_rowid.rowid_block_number(rowid) block_number
  3            from junk_tbl)
  4   where rownum <= 1;

BLOCK_NUMBER
------------
        1427

SQL> alter system flush buffer_cache;
System altered.

Now the fun part: let's zero out block 1427 of the test table with dd.

[oracle@host1 ~]$ dd of=/u01/datafile/VTYTEST_junk_tbs01  bs=8192 seek=1427 conv=notrunc count=1 if=/dev/zero
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 3.6328e-05 seconds, 226 MB/s
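
The dd command writes exactly one 8 KB block of zeros at byte offset bs*seek = 8192*1427 into the datafile, without truncating it (conv=notrunc). As a sanity check of that offset arithmetic, here is a small Python sketch against a scratch file (the file name and sizes are made up for illustration; this is not meant to be run against a real datafile):

```python
import os

BLOCK_SIZE = 8192   # database block size (bs=8192 in the dd command)
BLOCK_NO = 1427     # target block number (seek=1427)

# Create a scratch "datafile" big enough to contain the target block.
path = "scratch_datafile"
with open(path, "wb") as f:
    f.write(os.urandom(BLOCK_SIZE * (BLOCK_NO + 2)))

# Equivalent of: dd of=<file> bs=8192 seek=1427 conv=notrunc count=1 if=/dev/zero
with open(path, "r+b") as f:
    f.seek(BLOCK_SIZE * BLOCK_NO)    # byte offset = bs * seek
    f.write(b"\x00" * BLOCK_SIZE)    # exactly one block of zeros, no truncation

# Verify: the target block is now all zeros.
with open(path, "rb") as f:
    f.seek(BLOCK_SIZE * BLOCK_NO)
    assert f.read(BLOCK_SIZE) == b"\x00" * BLOCK_SIZE
os.remove(path)
```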

Let's see what happens when we try to read the table.

WITH ACTIVE STANDBY

The block is restored and recovered transparently in the background.

SQL> create table junk_tbl_bck tablespace junk_tbs as select * from junk_tbl;
Table created.

alert log :

Tue Jun 10 16:17:50 2014
Hex dump of (file 5, block 1427) in trace file /u01/app/oracle/diag/rdbms/vtytest/VTYTEST/trace/VTYTEST_ora_115127.trc
Corrupt block relative dba: 0x01400593 (file 5, block 1427)
Completely zero block found during multiblock buffer read
Reading datafile '/u01/datafile/VTYTEST_junk_tbs01' for corruption at rdba: 0x01400593 (file 5, block 1427)
Reread (file 5, block 1427) found same corrupt data (no logical check)
Automatic block media recovery requested for (file# 5, block# 1427)
Tue Jun 10 16:21:27 2014
Thread 1 advanced to log sequence 5211 (LGWR switch)
  Current log# 3 seq# 5211 mem# 0: +DATA/vtytest/onlinelog/group_3.299.839760905
  Current log# 3 seq# 5211 mem# 1: +DATA/vtytest/onlinelog/group_3.297.839760909
Tue Jun 10 16:21:27 2014
Archived Log entry 10399 added for thread 1 sequence 5210 ID 0x8cf991b8 dest 1:
Tue Jun 10 16:21:28 2014
Automatic block media recovery successful for (file# 5, block# 1427)
Tue Jun 10 16:21:28 2014

WITHOUT ACTIVE STANDBY

A manual restore is required. RMAN block recovery can be used.

SQL> create table junk_tbl_bck tablespace junk_tbs as select * from junk_tbl;
create table junk_tbl_bck tablespace junk_tbs as select * from junk_tbl
                                                               *
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 5, block # 1427)
ORA-01110: data file 5: '/u01/datafile/VTYTEST_junk_tbs01'

alert log :

Tue Jun 10 16:29:16 2014
Hex dump of (file 5, block 1427) in trace file /u01/app/oracle/diag/rdbms/vtytest/VTYTEST/trace/VTYTEST_ora_115127.trc
Corrupt block relative dba: 0x01400593 (file 5, block 1427)
Completely zero block found during multiblock buffer read
Reading datafile '/u01/datafile/VTYTEST_junk_tbs01' for corruption at rdba: 0x01400593 (file 5, block 1427)
Reread (file 5, block 1427) found same corrupt data (no logical check)
Errors in file /u01/app/oracle/diag/rdbms/vtytest/VTYTEST/trace/VTYTEST_ora_115127.trc  (incident=59009):
ORA-01578: ORACLE data block corrupted (file # 5, block # 1427)
ORA-01110: data file 5: '/u01/datafile/VTYTEST_junk_tbs01'
Incident details in: /u01/app/oracle/diag/rdbms/vtytest/VTYTEST/incident/incdir_59009/VTYTEST_ora_115127_i59009.trc
Tue Jun 10 16:29:16 2014
Corrupt Block Found
         TSN = 6, TSNAME = JUNK_TBS
         RFN = 5, BLK = 1427, RDBA = 20972947
         OBJN = 24975, OBJD = 24975, OBJECT = JUNK_TBL, SUBOBJECT = 
         SEGMENT OWNER = MEHMETESER, SEGMENT TYPE = Table Segment
Tue Jun 10 16:29:20 2014
Sweep [inc][59009]: completed
Errors in file /u01/app/oracle/diag/rdbms/vtytest/VTYTEST/incident/incdir_59009/VTYTEST_m000_130104_i59009_a.trc:
ORA-19583: conversation terminated due to error
ORA-01122: database file 5 failed verification check
ORA-01110: data file 5: '/u01/datafile/VTYTEST_junk_tbs01'
ORA-01200: actual file size of 131072 is smaller than correct size of 131072 blocks
Tue Jun 10 16:29:20 2014
Dumping diagnostic data in directory=[cdmp_20140610162920], requested by (instance=1, osid=115127), summary=[incident=59009].
Tue Jun 10 16:29:43 2014

Tips on Linux Kernel Upgrade

The kernel upgrade can be applied node by node on Exadata, so there is no service interruption. Kernel upgrades are required when you need new functionality or when you hit bugs in the current kernel version. I recently had to upgrade the kernel of a box, and it was a good experience. The following procedure is based on a kernel upgrade on Oracle Linux 5.8 with Unbreakable Enterprise Kernel [2.6.32] on an Exadata compute node.

PRE-UPGRADE

==> If you have EM12c, the targets on the host will be unavailable for the upgrade period. Put them in blackout state so that no false alarms are generated.

==> Run the upgrade steps from an X session such as VNC. This prevents losing the session if your SSH client disconnects.

==> Disable all NFS mounts on the system. Check /etc/rc.local and /etc/fstab.

==> Check whether any ASM operations are running on the system and wait for them to finish. Make sure no rebalance job is running on the ASM side; check v$asm_operation.

==> Back up the grub startup file /boot/grub/grub.conf. You might need it for a rollback.

==> Shut down the crs and disable crs auto start. Also shut down any databases or listeners that are not registered with the crs.
[root@host1 ~]# /u01/app/11.2.0.3/grid/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@host1 ~]# /u01/app/11.2.0.3/grid/bin/crsctl stop crs -f

==> Make sure crs is not running
[root@host1 ~]# ps -ef | grep d.bin
root 66664 60395 0 09:55 pts/1 00:00:00 grep d.bin

==> Reboot the system and make sure it is able to restart before any kernel changes :)

==> Check the ilom problem page and make sure there are no problems on the server. If there are any, like memory problems, fix them first.

==> Record the current kernel
[root@host1 ~]# uname -r
2.6.32-400.11.1.el5uek

==> Check the server model and make sure the new kernel is intended for this server.
[root@host1 ~]# dmidecode -s system-product-name
SUN FIRE X4170 M3

==> Make sure enough space is available
[root@host1 ~]# df -h

==> Shut down any databases or listeners that are not registered with the crs, and check the crs one last time.
[root@host1 ~]# ps -ef | grep d.bin
root 66664 60395 0 09:55 pts/1 00:00:00 grep d.bin

UPGRADE
==> upgrade the kernel

[root@host1 ~]# rpm -ivh kernel-uek-firmware-2.6.32-400.34.1.el5uek.noarch.rpm kernel-uek-2.6.32-400.34.1.el5uek.x86_64.rpm ofa-2.6.32-400.34.1.el5uek-1.5.1-4.0.58.1.x86_64.rpm
Preparing… ########################################### [100%]
1:kernel-uek-firmware ########################################### [ 33%]
2:kernel-uek ########################################### [ 67%]
3:ofa-2.6.32-400.34.1.el5########################################### [100%]

==> Reboot the system
[root@host1 ~]# reboot

POST-UPGRADE
==> Check ilom for any errors. Check /var/log/messages for any errors.

==> Check the new kernel version
[root@host1 ~]# uname -r
2.6.32-400.34.1.el5uek

==> Start the crs and enable crs auto start
[root@host1 ~]# /u01/app/11.2.0.3/grid/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@host1 ~]# /u01/app/11.2.0.3/grid/bin/crsctl start crs

==> Check if crs is starting

[root@host1 ~]# ps -ef | grep d.bin
root 11852 1 4 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/ohasd.bin reboot
oracle 12013 1 0 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/oraagent.bin
oracle 12025 1 0 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/mdnsd.bin
oracle 12109 1 1 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/gpnpd.bin
root 12119 1 0 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/orarootagent.bin
oracle 12122 1 1 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/gipcd.bin
root 12137 1 1 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/osysmond.bin
root 12150 1 0 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/cssdmonitor
root 12167 1 0 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/cssdagent
oracle 12169 1 1 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/diskmon.bin -d -f
oracle 12187 1 2 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/ocssd.bin
root 12389 10620 0 10:23 pts/0 00:00:00 grep d.bin
[root@host1 ~]#

==> Re-enable the NFS mounts on the system and mount them

==> On EM12c, end the blackout period for the targets.

Now you can move on to the next server in the cluster.

Kdump – Kernel Crashdump

Many people agree that the Linux kernel is pretty stable; it doesn't crash very often. But there are times you hit a bug and it suddenly crashes. In that case kdump is useful for investigating the reason.
Kdump can simply be defined as the Linux kernel crash dump mechanism. It is used to gather diagnostic information during a Linux system crash. When the system crashes, a dump file (a memory image, vmcore) is generated.
This core file can be pretty huge depending on your memory usage. To investigate the root cause you need to generate a stack trace from this file. For more information refer to "How to get the stack trace of kdump in Exadata (Doc ID 1363245.1)".

Example :

crash /u01/crashfiles/127.0.0.1-2012-05-12-02:31:16/crashcore  /usr/lib/debug/lib/modules/"kernel version"/vmlinux 

After a system crash, a Linux machine may take a long time to start when the kdump service is active. You can TEMPORARILY disable the service, but it is NOT advised to keep it disabled all the time, since you may lose valuable diagnostic information.

Disable/Enable on Start

[root@host1 ~]# dcli -g dbs_group  -l root "chkconfig kdump off  "
[root@host1 ~]# dcli -g dbs_group  -l root "chkconfig --list kdump "
host1: kdump              0:off   1:off   2:off   3:off   4:off   5:off   6:off
host2: kdump              0:off   1:off   2:off   3:off   4:off   5:off   6:off

[root@host1 ~]# dcli -g dbs_group  -l root "chkconfig kdump on   "
[root@host1 ~]# dcli -g dbs_group  -l root "chkconfig --list kdump "
host1: kdump              0:off   1:off   2:on    3:on    4:on    5:on    6:off
host2: kdump              0:off   1:off   2:on    3:on    4:on    5:on    6:off

Stop/Start Kdump Service

[root@host1 ~]#  dcli -g dbs_group  -l root " service kdump stop  "
host1: Stopping kdump:[  OK  ]
host2: Stopping kdump:[  OK  ]
[root@host1 ~]#  dcli -g dbs_group  -l root " service kdump status  "
host1: Kdump is not operational
host2: Kdump is not operational

[root@host1 ~]#  dcli -g dbs_group  -l root " service kdump start   "
host1: Starting kdump:[  OK  ]
host2: Starting kdump:[  OK  ]
[root@host1 ~]#  dcli -g dbs_group  -l root " service kdump status  "
host1: Kdump is operational
host2: Kdump is operational

for more info => http://docs.oracle.com/cd/E37670_01/E37355/html/ol_kdump_diag.html

Files Used :

/boot/grub/grub.conf
The crashkernel option is appended to the kernel line here to specify the amount of reserved memory and any offset value.

/etc/kdump.conf
Sets the location where the dump file can be written, the filtering level for the makedumpfile command, and the default behavior to take if the dump fails.
See the comments in the file for information about the supported parameters.
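
For illustration, a minimal /etc/kdump.conf could look like the sketch below. These are standard kdump.conf directives, but the path and filter level are example values, not taken from the systems above:

```
# Directory where the vmcore dump file is written
path /var/crash

# Compress the dump and filter out zero, cache, and free pages (-d 31)
core_collector makedumpfile -c -d 31

# If capturing the dump fails, reboot the machine
default reboot
```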

When is the last time you changed a flash disk or hard disk on Exadata ?

Why is this question important for a DBA? The results are actually pretty surprising when you run the query below and see the insert date of each hard disk and flash disk. Put another way, that information shows you when you last replaced a faulty hard disk or flash disk on the Exadata. If you have deleted the alert history on a cell, this information can still give you a quick idea. It also tells you when the system was first installed :) assuming most of the disks have not been replaced.

[root@testdbadm01 ~]#  dcli -g cellnodes.txt  -l root "cellcli -e list physicaldisk  attributes name,diskType,physicalInsertTime " | sort -k 4
testdb01-priv: 20:0             HardDisk        2013-03-26T11:23:53+02:00       
testdb01-priv: 20:10            HardDisk        2013-03-26T11:23:53+02:00       
testdb01-priv: 20:11            HardDisk        2013-03-26T11:23:53+02:00       
testdb01-priv: 20:1             HardDisk        2013-03-26T11:23:53+02:00       
testdb01-priv: 20:2             HardDisk        2013-03-26T11:23:53+02:00       
testdb01-priv: 20:3             HardDisk        2013-03-26T11:23:53+02:00       
testdb01-priv: 20:4             HardDisk        2013-03-26T11:23:53+02:00       
testdb01-priv: 20:5             HardDisk        2013-03-26T11:23:53+02:00       
testdb01-priv: 20:7             HardDisk        2013-03-26T11:23:53+02:00       
testdb01-priv: 20:8             HardDisk        2013-03-26T11:23:53+02:00       
testdb01-priv: 20:9             HardDisk        2013-03-26T11:23:53+02:00       
testdb01-priv: FLASH_1_0        FlashDisk       2013-03-26T11:23:53+02:00       
testdb01-priv: FLASH_1_1        FlashDisk       2013-03-26T11:23:53+02:00       
testdb01-priv: FLASH_1_2        FlashDisk       2013-03-26T11:23:53+02:00       
. 
. 
. 
testdb07-priv: FLASH_4_0        FlashDisk       2013-03-26T12:05:59+02:00       
testdb07-priv: FLASH_4_1        FlashDisk       2013-03-26T12:05:59+02:00       
testdb07-priv: FLASH_4_2        FlashDisk       2013-03-26T12:05:59+02:00       
testdb07-priv: FLASH_4_3        FlashDisk       2013-03-26T12:05:59+02:00       
testdb07-priv: FLASH_5_0        FlashDisk       2013-09-20T11:10:31+03:00       
testdb07-priv: FLASH_5_1        FlashDisk       2013-09-20T11:10:31+03:00       
testdb07-priv: FLASH_5_2        FlashDisk       2013-09-20T11:10:31+03:00       
testdb07-priv: FLASH_5_3        FlashDisk       2013-09-20T11:10:31+03:00       
testdb06-priv: FLASH_5_0        FlashDisk       2013-10-10T16:30:15+03:00       
testdb06-priv: FLASH_5_1        FlashDisk       2013-10-10T16:30:15+03:00       
testdb06-priv: FLASH_5_2        FlashDisk       2013-10-10T16:30:15+03:00       
testdb06-priv: FLASH_5_3        FlashDisk       2013-10-10T16:30:15+03:00       
testdb01-priv: 20:6             HardDisk        2013-11-18T09:45:08+02:00       
testdb06-priv: FLASH_1_0        FlashDisk       2013-11-22T16:13:33+02:00       
testdb06-priv: FLASH_1_1        FlashDisk       2013-11-22T16:13:33+02:00       
testdb06-priv: FLASH_1_2        FlashDisk       2013-11-22T16:13:33+02:00       
testdb06-priv: FLASH_1_3        FlashDisk       2013-11-22T16:13:33+02:00       

Extended Statistics

Extended statistics were introduced in the Oracle cost-based optimizer in 11g. They are used to gather information on correlated columns.

For example, suppose there is a correlation between the "county" and "city" columns of a table. The number of distinct values over such a column pair cannot be calculated by multiplying the NDVs (numbers of distinct values) of the individual columns. Take the following test table:
NDV of vmod1 is 5.
NDV of vmod2 is 5.
But the NDV of the pair (vmod1, vmod2) is still 5.

Possible values you can get from the test table :

vmod1  vmod2
    0      1
    1      2
    2      3
    3      4
    4      5
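
The correlation is easy to reproduce outside the database. A quick Python sketch, just mimicking the mod(level,5) expressions used in the test table, shows that the column pair still has only 5 distinct combinations:

```python
# Mimic: mod(level,5) vmod1, mod(level,5)+1 vmod2, for level = 1 .. 1e6
rows = [(level % 5, level % 5 + 1) for level in range(1, 1_000_001)]

ndv_vmod1 = len({v1 for v1, _ in rows})   # distinct vmod1 values
ndv_vmod2 = len({v2 for _, v2 in rows})   # distinct vmod2 values
ndv_pair = len(set(rows))                 # distinct (vmod1, vmod2) pairs

print(ndv_vmod1, ndv_vmod2, ndv_pair)     # 5 5 5
```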

Prepare the Table

SQL> drop table eser_sil purge;
SQL> create table eser_sil as 
  2  SELECT LEVEL vrow, mod(level,5) vmod1, mod(level,5)+1 vmod2, 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' vval
  3  FROM dual
  4  CONNECT BY LEVEL <= 1e6;
SQL> create index eser_sil_x1 on eser_sil (vmod1);

WITHOUT EXTENDED STATISTICS

Gather Statistic on the indexed columns.

SQL>  set lines 200
SQL>  set pages 0
SQL>  explain plan for
  2   select count(*)  from eser_sil
  3          where vmod1 = :a
  4          and vmod2 = :b ;
Explained.
SQL>  begin
  2   dbms_stats.gather_table_stats(ownname => 'VTYUSER',
  3                                 tabname => 'ESER_SIL',
  4                                 method_opt => 'for all  indexed columns size auto' ,
  5                                 estimate_percent => 100 ,
  6                                 no_invalidate => false );
  7   end;
  8   /
PL/SQL procedure successfully completed.

Histograms are also created on the indexed columns. But there are no extended statistics created as expected.

SQL> select table_name, column_name, histogram from dba_tab_col_statistics where table_name = 'ESER_SIL';
ESER_SIL                       VMOD1                          FREQUENCY
ESER_SIL                       VMOD2                          FREQUENCY
SQL>  select table_name, extension_name, extension from dba_stat_extensions where table_name = 'ESER_SIL';
no rows selected

When we examine the expected rows in the explain plan of a query that uses both columns as predicates, we can see that the estimate is 40000 rows. How did we get this number? For both columns the number of distinct values is 5, and since the predicates are combined with the AND operator, the selectivities of the two columns are multiplied. See more details in "Cost Based Optimizer (CBO) Overview (Doc ID 10626.1)".

( number of rows ) * ( selectivity of VMOD1 ) 	             * ( selectivity of VMOD2 )                = 
( number of rows ) * ( 1 / No. Of distinct Values of VMOD1 ) * ( 1 / No. Of distinct Values of VMOD2 ) = 
1000000 * (1/5) * (1/5)	=  40000
SQL>  set lines 200
SQL>  set pages 0
SQL>  explain plan for
  2   select count(*)  from eser_sil
  3          where vmod1 = :a
  4          and vmod2 = :b ;

Explained.

SQL> select * from table(dbms_xplan.display);
Plan hash value: 2186261789

---------------------------------------------------------------------------------
| Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |             |     1 |     6 |    96   (0)| 00:00:02 |
|   1 |  SORT AGGREGATE   |             |     1 |     6 |            |          |
|*  2 |   INDEX RANGE SCAN| ESER_SIL_X1 | 40000 |   234K|    96   (0)| 00:00:02 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("VMOD1"=TO_NUMBER(:A) AND "VMOD2"=TO_NUMBER(:B))

14 rows selected.

WITH EXTENDED STATISTICS

Calculate statistics with extended statistics on the columns.

SQL> begin
  2  dbms_stats.gather_table_stats(ownname => 'VTYUSER',
  3                                tabname => 'ESER_SIL',
  4                                method_opt => 'for all indexed columns size auto for columns size 10 (vmod1, vmod2)' ,
  5                                estimate_percent => 100 ,
  6                                no_invalidate => false );
  7  end;
  8  /

PL/SQL procedure successfully completed.

Check if the extended statistics are calculated properly.

SQL> select table_name, column_name, histogram from dba_tab_col_statistics where table_name = 'ESER_SIL';
ESER_SIL                       VMOD1                          FREQUENCY
ESER_SIL                       VMOD2                          FREQUENCY
ESER_SIL                       SYS_STUUKQC85N0T0_U4SUOFY4M29V FREQUENCY

SQL>  select table_name, extension_name, extension from dba_stat_extensions where table_name = 'ESER_SIL';
ESER_SIL                       SYS_STUUKQC85N0T0_U4SUOFY4M29V ("VMOD1","VMOD2")

This time, when we check the explain plan of the query, the correlation between the columns is taken into consideration by the optimizer. It now correctly estimates the number of distinct values for the combination of the two columns: there are only 5 distinct value pairs because of the high correlation.

( number of rows ) * ( selectivity of VMOD1 & VMOD2 ) 		  = 
( number of rows ) * ( 1 / No. Of distinct Values of VMOD1 & VMOD2 )  = 
1000000 * (1/5)  =  200000
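
The two estimates are plain arithmetic and can be checked with a few lines of Python (the numbers are the ones from this example):

```python
num_rows = 1_000_000
ndv_vmod1 = 5   # distinct values of VMOD1
ndv_vmod2 = 5   # distinct values of VMOD2
ndv_pair = 5    # distinct (VMOD1, VMOD2) pairs, thanks to the correlation

# Without extended statistics the selectivities are multiplied independently.
est_independent = num_rows * (1 / ndv_vmod1) * (1 / ndv_vmod2)

# With extended statistics the selectivity of the column group is used directly.
est_column_group = num_rows * (1 / ndv_pair)

print(round(est_independent), round(est_column_group))   # 40000 200000
```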
SQL>  set lines 200
SQL>  set pages 0
SQL>  explain plan for
  2   select count(*)  from eser_sil
  3          where vmod1 = :a
  4          and vmod2 = :b ;

Explained.

SQL> select * from table(dbms_xplan.display);
Plan hash value: 2186261789

---------------------------------------------------------------------------------
| Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |             |     1 |     6 |   472   (1)| 00:00:06 |
|   1 |  SORT AGGREGATE   |             |     1 |     6 |            |          |
|*  2 |   INDEX RANGE SCAN| ESER_SIL_X1 |   200K|  1171K|   472   (1)| 00:00:06 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("VMOD1"=TO_NUMBER(:A) AND "VMOD2"=TO_NUMBER(:B))

METHOD_OPT with AUTO & Histograms

It is always better to know why Oracle chooses a bad plan over a good one, and for that you need to understand the mechanism behind the statistics. Statistics are a representation of the data distribution at a smaller scale. This representation helps Oracle choose the right access paths and join algorithms when generating the execution plan for a query. Oracle builds a cost model on top of these statistics, compares possible execution plans, and tries to choose the plan with the lowest cost. If your statistics do not properly reflect your data distribution, Oracle may start to choose wrong execution plans: in the end it all comes down to mathematics, and if you feed wrong inputs into a formula you cannot expect good results.

There are auto options in the dbms_stats gathering package. With the method_opt option 'FOR ALL [INDEXED|HIDDEN] COLUMNS SIZE AUTO', Oracle decides whether histograms and statistics should be collected for a column. The decision is based on the usage history of that column in query predicates, which you can see in the sys.col_usage$ view. If the column usage is empty, no histogram is created for that column.


SCENARIO 1
Table is newly created and column usage info is empty.
AUTO column stats gathers NO histogram on the column.

Prepare the Table

SQL> drop table eser_sil purge;
SQL> create table eser_sil as 
  2  SELECT LEVEL vrow, mod(level,5) vmod1, mod(level,5)+1 vmod2, 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' vval
  3  FROM dual
  4  CONNECT BY LEVEL <= 1e6;
SQL> create index eser_sil_x1 on eser_sil (vmod1);

Update the monitoring and usage info for table and columns.

SQL> begin 
  2    dbms_stats.flush_database_monitoring_info();
  3  end;
  4  /

Check the sys.col_usage$ table. It will show that none of the columns has been used as a predicate in a query.

SQL> select c.OWNER, c.TABLE_NAME, c.COLUMN_NAME,  cu.equality_preds, cu.timestamp
  2      from sys.col_usage$ cu,
  3         dba_tab_columns c,
  4         dba_objects o
  5      where cu.obj# = o.OBJECT_ID 
  6         and cu.intcol# = c.COLUMN_ID   
  7         and o.OWNER = c.OWNER
  8         and o.OBJECT_NAME = c.TABLE_NAME
  9         and o.OBJECT_NAME ='ESER_SIL'
 10         order by cu.intcol# ;

no rows selected

Gather stats using auto on the indexed columns.

SQL> begin 
  2  dbms_stats.gather_table_stats(ownname => 'VTYUSER',
  3                                tabname => 'ESER_SIL',
  4                                method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');
  5  end;
  6  /

No histogram is created on the column because it has never been used as a predicate.

SQL> select table_name, column_name, histogram from dba_tab_col_statistics where table_name = 'ESER_SIL';

TABLE_NAME                     COLUMN_NAME                    HISTOGRAM
------------------------------ ------------------------------ ---------------
ESER_SIL                       VMOD1                          NONE


SCENARIO 2
Table is newly created. Explain plan used on a query with the column as a predicate.
Column usage info is filled. AUTO column stats gathers histogram on the column.

Prepare the Table

SQL> drop table eser_sil purge;
SQL> create table eser_sil as 
  2  SELECT LEVEL vrow, mod(level,5) vmod1, mod(level,5)+1 vmod2, 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' vval
  3  FROM dual
  4  CONNECT BY LEVEL <= 1e6;
SQL> create index eser_sil_x1 on eser_sil (vmod1);

Explain plan is used to parse the query

SQL> explain plan  for select count(*)  from eser_sil  where vmod1 = :a;

Explained.

Update the monitoring and usage info for table and columns.

SQL>  begin 
  2     dbms_stats.flush_database_monitoring_info();
  3   end;
  4   /

Check the sys.col_usage$ table. It will show that the column has been used as an equality predicate once.

SQL> set lines 200
SQL> select c.TABLE_NAME, c.COLUMN_NAME,  cu.equality_preds, cu.timestamp
  2      from sys.col_usage$ cu,
  3         dba_tab_columns c,
  4         dba_objects o
  5      where cu.obj# = o.OBJECT_ID 
  6         and cu.intcol# = c.COLUMN_ID   
  7         and o.OWNER = c.OWNER
  8         and o.OBJECT_NAME = c.TABLE_NAME
  9         and o.OBJECT_NAME ='ESER_SIL'
 10         order by cu.intcol# ;

TABLE_NAME                     COLUMN_NAME                    EQUALITY_PREDS TIMESTAMP
------------------------------ ------------------------------ -------------- ---------
ESER_SIL                       VMOD1                                       1 30-APR-14
SQL> begin 
  2  dbms_stats.gather_table_stats(ownname => 'VTYUSER',
  3                                tabname => 'ESER_SIL',
  4                                method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');
  5  end;
  6  /

As expected, a histogram is created on that column.

SQL>  select table_name, column_name, histogram from dba_tab_col_statistics where table_name = 'ESER_SIL';

TABLE_NAME                     COLUMN_NAME                    HISTOGRAM
------------------------------ ------------------------------ ---------------
ESER_SIL                       VMOD1                          FREQUENCY


SCENARIO 3
Table is newly created. A query is issued with the column as a predicate.
Column usage info is filled. AUTO column stats gathers histogram on the column.

Prepare the Table

SQL> drop table eser_sil purge;
SQL> create table eser_sil as 
  2  SELECT LEVEL vrow, mod(level,5) vmod1, mod(level,5)+1 vmod2, 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' vval
  3  FROM dual
  4  CONNECT BY LEVEL <= 1e6;
SQL> create index eser_sil_x1 on eser_sil (vmod1);

Run a query using the column as a predicate.

SQL> select count(*)  from eser_sil  where vmod1 = 2;

  COUNT(*)
----------
    200000

Update the monitoring and usage info for table and columns.

SQL>  begin 
  2     dbms_stats.flush_database_monitoring_info();
  3   end;
  4   /

Check the sys.col_usage$ table. It will show that the column has been used as an equality predicate once.

SQL> set lines 200
SQL> select c.TABLE_NAME, c.COLUMN_NAME,  cu.equality_preds, cu.timestamp
  2      from sys.col_usage$ cu,
  3         dba_tab_columns c,
  4         dba_objects o
  5      where cu.obj# = o.OBJECT_ID 
  6         and cu.intcol# = c.COLUMN_ID   
  7         and o.OWNER = c.OWNER
  8         and o.OBJECT_NAME = c.TABLE_NAME
  9         and o.OBJECT_NAME ='ESER_SIL'
 10         order by cu.intcol# ;

TABLE_NAME                     COLUMN_NAME                    EQUALITY_PREDS TIMESTAMP
------------------------------ ------------------------------ -------------- ---------
ESER_SIL                       VMOD1                                       1 30-APR-14
SQL> begin 
  2  dbms_stats.gather_table_stats(ownname => 'VTYUSER',
  3                                tabname => 'ESER_SIL',
  4                                method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');
  5  end;
  6  /

As expected, a histogram is created on that column.

SQL> select table_name, column_name, histogram from dba_tab_col_statistics where table_name = 'ESER_SIL';

TABLE_NAME                     COLUMN_NAME                    HISTOGRAM
------------------------------ ------------------------------ ---------------
ESER_SIL                       VMOD1                          FREQUENCY

Correct SCN on Standby Database

There are many documents that explain how to resolve a gap on a standby database using RMAN incremental backups. This is pretty useful when you do not want to ship and apply all the missing archive logs to refresh the standby. Recently I had such a problem on a new Data Guard setup. The problem was not the size of the missing archive logs I needed to ship and apply; the archive logs were missing on the primary system. Instead of restoring the archive logs from tape, I preferred to use an incremental backup from the primary database.
But it did not work for me at first. Later I realized that I was calculating the wrong SCN on the standby. The restore time of the datafiles on the standby was 4 days earlier than the standby controlfile I was using. When I used "select current_scn from v$database;" to calculate the SCN, it gave me the SCN in the controlfile, which was pretty close to the primary. But the SCN in the datafiles was from 4 days earlier. Once I used the SCN from the datafiles, the incremental backup restored successfully and my problem was resolved.

MOS doc :
Steps to perform for Rolling forward a standby database using RMAN Incremental Backup. (Doc ID 836986.1)

How it works :
1 - Determine the latest SCN on the standby. This must reflect the SCN in the datafile headers, so it is better to use the x$kcvfh table (take the minimum SCN from the queries below).

SQL> select current_scn from v$database;

-- If no READ ONLY Tablespaces/datafiles in database use 
SQL> select min(fhscn) from x$kcvfh;

-- If READ ONLY tablespaces/datafiles in database use
SQL> select min(f.fhscn) from x$kcvfh f, v$datafile d
	  where f.hxfil =d.file#
		and d.enabled != 'READ ONLY' ;

2 - Take incremental backup on the primary database from that scn

RMAN> BACKUP DEVICE TYPE DISK INCREMENTAL FROM SCN 750983 DATABASE  FORMAT '/primary_archive/incr_standby_%U';

3 - Ship the incremental backup to the standby location (scp , ftp or aircargo :) whatever you like )

4 - Catalog the backup on the standby database

RMAN> CATALOG START WITH '/standby_archive/incrback/';

5 - Stop the recovery on the standby database

RMAN> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

6 - Restore the standby database using the cataloged incremental backup

RMAN> RECOVER DATABASE NOREDO;	 