Saturday 7 November 2015

ORA-00600: internal error code, arguments: [4194], [], [], [], [], [], [], [], [], [], [], []

Problem :  While trying to start the database after a rollback from 11.2.0.4 to 11.2.0.3, I found many ORA-600 errors in the alert log, and the instance was restarting automatically.
I was not able to shut it down cleanly; it kept returning "ORA-03113: end-of-file on communication channel".


SQL> shutdown immediate ;
ORA-03113: end-of-file on communication channel
SQL>


In alert log -->
ORA-01595: error freeing extent (44) of rollback segment (1))
ORA-00600: internal error code, arguments: [4194], [], [], [], [], [], [], [], [], [], [], []


Incident 432371 created, dump file: /bobshr/dump01/diag/rdbms/inst_a/inst_a1/incident/incdir_432371/insta1_smon_78761_i432371.trc
ORA-00600: internal error code, arguments: [4194], [], [], [], [], [], [], [], [], [], [], []
Error 600 in redo application callback


The message "Error 600 in redo application callback" indicates that the problem is with the undo tablespace.
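A quick way to confirm that undo segments are the culprit is to check their status. A hedged sketch (with the instance crashing, this may only work in a brief window or after a restricted startup):

```sql
-- Segments stuck in NEEDS RECOVERY or PARTIALLY AVAILABLE
-- point at corrupt undo
SELECT segment_name, tablespace_name, status
FROM   dba_rollback_segs
WHERE  status NOT IN ('ONLINE', 'OFFLINE');
```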


I tried creating a new undo tablespace, but that also errored out:


SQL> CREATE undo tablespace APPS_UNDOTBS02 datafile '+INST_AIDB_VG/inst_a1/datafile/undots1_002.dbf' SIZE 5G
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [4194], [ody "SYS.DBMS_STANDARD"


I was also not able to compile the package SYS.DBMS_STANDARD:


SQL> alter package dbms_standard compile ;
alter package dbms_standard compile
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [4194], [ody "SYS.DBMS_STANDARD"


Solution :


1) Shutdown immediate;


2) Edit the init file (pfile) to set the values below:


*.undo_management='MANUAL'
inst_a1.undo_tablespace='APPS_UNDOTBS01'  <-- the name of the new undo tablespace (created in step 4)


3) startup pfile=init<sid>.ora




4)  Create a new undo tablespace
CREATE undo tablespace APPS_UNDOTBS01 datafile '+INST_A1DB_VG/inst_a1/datafile/undots1_001.dbf' SIZE 5G;




5) Set the current undo tablespace to newly created one and drop the old one.
ALTER SYSTEM SET undo_tablespace='APPS_UNDOTBS01' ;
DROP TABLESPACE APPS_UNDOTS1 INCLUDING CONTENTS AND DATAFILES;


6) Edit the init file to set undo_management back to 'AUTO', then shutdown and restart the database.


This fixed the issue, and "ORA-00600: internal error code, arguments: [4194]" did not appear again.
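After the final restart, it is worth verifying that undo management has been switched back from MANUAL and that the instance points at the new tablespace; from SQL*Plus:

```sql
-- undo_management should be AUTO again (after re-editing the pfile)
-- and undo_tablespace should be the newly created APPS_UNDOTBS01
SHOW PARAMETER undo_management
SHOW PARAMETER undo_tablespace
```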



Tuesday 3 November 2015


Oracle clusterware GI or CRS abbreviations, acronyms and procedures.


Here we will look at the different naming conventions used by Oracle for Oracle Clusterware versions. Oracle Clusterware is Oracle's software for setting up and managing clusters, providing high-availability and load-balancing solutions for business-critical applications.

Pre-11gR2, Oracle Clusterware was known as CRS (Cluster Ready Services). From 11gR2, it is known as GI (Grid Infrastructure), and an Oracle cluster is called a GI cluster.

Pre-11gR2, we had the option to install ASM and CRS in separate Oracle homes, and hence we could have different users owning those homes, called the ASM user and the CRS user. From 11gR2, the ASM and GI Oracle homes are the same and are owned by the same user, known as the GI user.

In 11gR2  --> 

Clusterware user =  CRS user = Grid user = ASM user = Oracle Cluster software owner
Clusterware home = CRS home = Grid home

ORACLE_BASE is the Oracle base for the GI or CRS user.

Oracle Restart = Grid Infrastructure in Standalone mode

OCR: Oracle Cluster Registry. To find out the OCR location, execute: ocrcheck

root@abc.bob.dba.com [+ASM1] /root > ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3432
         Available space (kbytes) :     258688
         ID                       :  159017389
         Device/File Name         : +MyDisk_OCR_VOTE_VG
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

VD: Voting Disk. To find out voting file location, execute: crsctl query css votedisk

root@ABC.bob.dba.com [+ASM1] /root > crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   245ddd0e694e4fccbf8f267fc8210a00 (ORCL:MYDISK_OCR_VOTE01) [MYDISK_OCR_VOTE_VG]
 2. ONLINE   255e8bb6c9df4fa7bfef7113ec13a0c7 (ORCL:MYDISK_OCR_VOTE02) [MYDISK_OCR_VOTE_VG]
 3. ONLINE   4404bf827c3e4f89bf9b0867fa5ea428 (ORCL:MYDISK_OCR_VOTE03) [MYDISK_OCR_VOTE_VG]
Located 3 voting disk(s).

Automatic OCR Backup: the OCR is backed up automatically every four hours in a cluster environment on the OCR master node; the default location is <clusterware-home>/cdata/<clustername>. To find out the backup location, execute: ocrconfig -showbackup


 root@abc.bob.dba.com [+ASM1] /root > ocrconfig -showbackup

node3     2015/11/04 05:51:15     /myerp/oracle/grid/11.2.0.3/cdata/myerp/backup00.ocr

node3     2015/11/04 01:51:13     /myerp/oracle/grid/11.2.0.3/cdata/myerp/backup01.ocr

node3     2015/11/03 21:51:10     /myerp/oracle/grid/11.2.0.3/cdata/myerp/backup02.ocr

node3     2015/11/02 09:50:53     /myerp/oracle/grid/11.2.0.3/cdata/myerp/day.ocr

node3     2015/11/01 01:50:37     /myerp/oracle/grid/11.2.0.3/cdata/myerp/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available



SCR Base: the directory where ocr.loc and olr.loc are located.
Linux:         /etc/oracle
Solaris:     /var/opt/oracle
hp-ux:         /var/opt/oracle
AIX:             /etc/oracle

INITD Location: the directory where ohasd and init.ohasd are located.
Linux:         /etc/init.d
Solaris:     /etc/init.d
hp-ux:         /sbin/init.d
AIX:             /etc

oratab Location: the directory where oratab is located.
Linux:         /etc
Solaris:     /var/opt/oracle
hp-ux:         /etc
AIX:             /etc

CIL: Central Inventory Location. The location is defined by the parameter inventory_loc in /etc/oraInst.loc or /var/opt/oracle/oraInst.loc, depending on the platform.
Example on Linux:

cat /etc/oraInst.loc | grep inventory_loc
inventory_loc=/home/ogrid/app/oraInventory
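The same lookup can be done without cat. A small sketch using a throwaway copy of the file (the path and contents are illustrative, mirroring the example above):

```shell
# Recreate a minimal oraInst.loc in /tmp for illustration
printf 'inventory_loc=/home/ogrid/app/oraInventory\ninst_group=oinstall\n' > /tmp/oraInst.loc

# Print only the inventory path, keyed on the parameter name
awk -F= '$1 == "inventory_loc" { print $2 }' /tmp/oraInst.loc
# -> /home/ogrid/app/oraInventory
```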


Disable CRS/GI: To disable clusterware from auto startup when the node reboots, as root execute "crsctl disable crs". For Oracle Restart, use "crsctl disable has" instead.

DG Compatible: the ASM disk group's compatible.asm setting. To store OCR/VD on ASM, the compatible setting must be at least 11.2.0.0.0; on the other hand, a lower GI version will not work with a higher compatible setting. For example, 11.2.0.1 GI will have issues accessing a DG whose compatible.asm is set to 11.2.0.2.0. When downgrading from a higher GI version to a lower one, if the DG holding OCR/VD has a higher compatible setting, relocating OCR/VD to a DG with a lower compatible setting is necessary.
To find out compatible setting, log on to ASM and query:

SQL> select name||' => '||compatibility from v$asm_diskgroup where name='GI';

NAME||'=>'||COMPATIBILITY
--------------------------------------------------------------------------------
GI => 11.2.0.0.0

In the above example, GI is the name of the disk group of interest.

To relocate OCR from higher compatible DG to lower one:

ocrconfig -add <diskgroup>
ocrconfig -delete <diskgroup>

To relocate VD from higher compatible DG to lower one:

crsctl replace votedisk <diskgroup>
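Putting the two together, a sketch of a full relocation; the disk group names +OCR_HIGH and +OCR_LOW are hypothetical, and the commands are run as root from the GI home:

```shell
# Mirror the OCR onto the lower-compatible disk group first,
# then drop the copy on the higher-compatible one
ocrconfig -add +OCR_LOW
ocrconfig -delete +OCR_HIGH

# Voting files move in a single step
crsctl replace votedisk +OCR_LOW
```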
 
 

grid@abc.bob.dba.com [+ASM1] /orasoft/upgrade_project/common/log > sqlplus "/ as sysasm"

SQL*Plus: Release 11.2.0.3.0 Production on Wed Nov 4 06:52:44 2015

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL>

SQL> select name||' => '||compatibility from v$asm_diskgroup ;

NAME||'=>'||COMPATIBILITY
--------------------------------------------------------------------------------
myerpp_OCR_VOTE_VG => 11.2.0.0.0
INST2DB_VG => 11.2.0.0.0
INST1DB_VG => 11.2.0.0.0
RMP01DB_VG => 11.2.0.0.0



When upgrading Oracle Clusterware:

OLD_HOME: pre-upgrade Oracle clusterware home, i.e. the home the existing clusterware is running from. For Oracle Restart, OLD_HOME is the pre-upgrade ASM home.

OLD_HOME = /myerpp/oracle/grid/11.2.0.3

OLD_VERSION: pre-upgrade Oracle clusterware version.

OLD_VERSION =  11.2.0.3

NEW_HOME: new Oracle clusterware home.

NEW_HOME = /myerpp/oracle/grid/11.2.0.4

NEW_VERSION: new Oracle clusterware version.

NEW_VERSION = 11.2.0.4

grid@abc.bob.dba.com [+ASM1] /myerpp/oracle/grid > ls -ltr
total 8
drwxr-xr-x 72 root oinstall 4096 Apr 11  2015 11.2.0.3
drwxr-xr-x  2 root oinstall 4096 Oct 21 16:12 11.2.0.4


OCR Node: the node where rootupgrade.sh backs up the pre-upgrade OCR to $NEW_HOME/cdata/ocr$OLD_VERSION. In most cases it is the first node where rootupgrade.sh was executed.

    Example when upgrading from 11.2.0.1 to 11.2.0.2, after execution of rootupgrade.sh

    ls -l $NEW_HOME/cdata/ocr*
    -rw-r--r-- 1 root root 78220 Feb 16 10:21 /ocw/b202/cdata/ocr11.2.0.1.0

root@cmfbuhuqudb01c.bob.dba.com [+ASM1] /olmerpq/oracle/grid/11.2.0.4/cdata > ls -ltr $ORACLE_HOME/cdata/ocr*
-rw------- 1 root root 159038 Oct  6 22:13 /olmerpq/oracle/grid/11.2.0.4/cdata/ocr11.2.0.3.0