ERROR: No viable tables matched specification.

GGSCI (myrac.com) 2> add trandata VIKAS.T2;
ERROR: No viable tables matched specification.

 

If you are getting this error, it is caused by the semicolon at the end of the statement. GGSCI commands do not take a terminator, so the trailing ';' is treated as part of the table name. Just remove the semicolon and it should work fine.

 

GGSCI (myrac.com) 3> add trandata VIKAS.T2

2016-08-20 21:34:35  WARNING OGG-00869  No unique key is defined for table 'T2'. All viable columns will be used to represent the key, but may not guarantee uniqueness.  KEYCOLS may be used to define the key.
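The warning itself is expected when a table has no primary or unique key. If some column (or combination of columns) is logically unique, you can tell GoldenGate to use it via KEYCOLS in the Extract/Replicat parameter file. A minimal sketch, assuming a hypothetical column ID that is unique in VIKAS.T2:

TABLE VIKAS.T2, KEYCOLS (ID);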

ASM Diskgroup Re-Size in Exadata

Note: Some changes have been made to the calculation below in newer releases of Exadata machines.

In newer releases:

Req_mir_free_MB = Total_MB / (number of cells * number of disks per cell [usually 12])

In older Exadata machines:

Req_mir_free_MB = Total_MB / number of cells

The rest of the calculation below is the same.
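As a quick sanity check, the older formula reproduces the Req_mir_free_MB that lsdg reports for RECO further below. A minimal sketch in shell (the cell and disk counts are for the half rack used in this example):

TOTAL_MB=23455488                 # RECO Total_MB from lsdg below
echo $(( TOTAL_MB / 7 ))          # older releases: 3350784 MB, matches lsdg
echo $(( TOTAL_MB / (7 * 12) ))   # newer releases: 279232 MB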

=====================================================================

The calculation below is based on Exadata X2 and X3 machines.

Generally, in any Exadata environment we have three disk groups: DATA (primarily for datafiles etc.), RECO (primarily for backup-related files) and DBFS (for the voting disks and OCR).

The DBFS disk group is fixed in size; it does not grow when the databases grow.

The DATA and RECO disk groups are the ones whose size changes directly with database growth, so they directly impact the customer's capacity planning.

If the customer needs extra space in one of these disk groups to configure a new database, it is hard for the DBA to simply say that no space is available.

To tackle this situation, the DBA can suggest various options to the customer, such as:

  1. Reorganize objects and remove fragmentation in the current databases by various means (e.g. reorg, shrink, segment advisor etc.). This is not always a feasible option.
  2. Add more storage servers. The customer will rarely agree in the first go. :)
  3. Resize the disk groups; in simple terms, take some free space from the less utilized disk group and add it to the more utilized one.

Here we are discussing point 3.

Diskgroup Resizing:

Please note that this calculation is for a half rack (i.e. 4 DB nodes and 7 cell nodes).

Let's take an example: suppose we need extra space in DATA (its usable space has gone negative) and we have spare space in RECO.

Now we need to resize DATA and RECO. Before the resize we have roughly a 49:51 ratio between DATA and RECO.

ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED HIGH N 512 4096 4194304 22020096 6014096 6291456 -92453 0 Y DATA/
MOUNTED NORMAL N 512 4096 4194304 2087680 2085936 298240 893848 0 N DBFS_DG/
MOUNTED NORMAL N 512 4096 4194304 23455488 15460932 3350784 6055074 0 N RECO/
49:51 Ratio between DATA and RECO

For RECO

Current Storage:
Total = 23455488 MB == 22.37 TB
Free = 15460932 MB == 14.75 TB
Used = 7994556 MB == 7.6 TB
Required MB = 3350784 MB == 3.19 TB

Actual used == Total - Required MB == 20104704 MB == 19.17 TB
ACTUAL USABLE_MB = (FREE - REQUIRED_MIRROR)/2 == 11.55/2 TB == 5.77 TB == 6055074 MB

Note: ACTUAL USABLE_MB = (FREE - REQUIRED_MIRROR)/2 (2 here because RECO uses normal redundancy)

For DATA

Current Storage:
Total = 22020096 MB == 21 TB
Free = 6014096 MB == 5.73 TB
Used = 16006000 MB == 15.26 TB
Required MB = 6291456 MB == 6 TB

Actual used == Total - Required MB == 15728640 MB == 15 TB
ACTUAL USABLE_MB = (FREE - REQUIRED_MIRROR)/3 == -0.27/3 TB == -0.09 TB == -92453 MB

Note: ACTUAL USABLE_MB = (FREE - REQUIRED_MIRROR)/3 (3 here because DATA uses high redundancy)
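The same figures can also be read straight from the ASM instance rather than ASMCMD; for example:

select name, type, total_mb, free_mb, required_mirror_free_mb, usable_file_mb
from v$asm_diskgroup;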

Assumption for the resize (approx. 60:40):

RECO

Total = 18 TB (taking away 5.36 TB)
Free = 10.4 TB
Required Mirror = 18/7 TB = 2.57 TB == 2696338 MB
Actual usable_mb = (FREE - REQUIRED_MIRROR)/2 == 7.83/2 TB == 3.915 TB == 4105175 MB

DATA

Total = 26.36 TB (adding 5.36 TB from RECO)
Free = 11.10 TB
Required Mirror = (26.36/7)*2 == 7.53 TB
Actual usable_mb = (FREE - REQUIRED_MIRROR)/3 == 3.56/3 == 1.18 TB (approx.); this is the value we have to watch

Current disk size (GB per griddisk)
===================================
Reco == 272.6875
Data == 256
Total == 528.6875

After resize (approx.)
======================
Reco == 207.6875
Data == 321
Total == 528.6875

To check the size of the griddisks, use the CellCLI command below:

list griddisk attributes name,asmmodestatus,size

Note: The calculation above is not strictly required if you apply the math directly to the griddisk sizes.

Steps to perform the resize (rolling or non-rolling, depending on the downtime the customer can afford):

1> Drop the old disks, 2> resize them, and 3> add them back.

1> DROP OPERATION:

a) Connect to the ASM instance as sysasm on the first DB node
SQL> select * from gv$asm_operation; (should show no rows)
SQL> alter diskgroup DATA drop disks in failgroup cell1 rebalance power 11 nowait;
b) Connect to the ASM instance as sysasm on the second DB node
SQL> select * from v$asm_operation; (should not show RUN for the second instance)
SQL> alter diskgroup RECO drop disks in failgroup cell1 rebalance power 11 nowait;
c) Wait for the rebalance to complete (see the polling sketch after this list)
select * from gv$asm_operation; (make sure you run this query connected to the ASM instance as sysasm)
d) col path for a50
set lines 300
set pages 300
select path,name,mount_status,header_status,mode_status,state from v$asm_disk where failgroup='cell1' order by name; (crosscheck the dropped disks)
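A minimal sketch of the wait in step c), polling gv$asm_operation from a shell loop (assumes the ASM environment is already set for sqlplus):

while true; do
CNT=$(sqlplus -s / as sysasm <<EOF
set head off feedback off
select count(*) from gv\$asm_operation;
EOF
)
[ $(echo $CNT) -eq 0 ] && break
sleep 60
done
echo "Rebalance complete"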

2> RESIZE OPERATION

Example: for cell1

Connect to cell1

CELLCLI> list griddisk attributes name,asmmodestatus (should show UNUSED for the dropped disks)

CELLCLI> drop griddisk all prefix='RECO' force;
CELLCLI> drop griddisk all prefix='DATA' force;
CELLCLI> list griddisk

CELLCLI> create griddisk all prefix='DATA', size=<new size>
CELLCLI> create griddisk all prefix='RECO', size=<new size>

Crosscheck the newly resized disks:
CELLCLI> list griddisk attributes name,size
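With the target sizes worked out above (per griddisk, in GB), the create commands would look something like this (illustrative values for this half rack):

CELLCLI> create griddisk all prefix='DATA', size=321G
CELLCLI> create griddisk all prefix='RECO', size=207.6875G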

3> ADD OPERATION

Connect to the ASM instance as sysasm on the first DB node
SQL> select * from gv$asm_operation; (should show no rows)
SQL> alter diskgroup DATA add disk 'o/*/DATA*cell1*' rebalance power 11 nowait;

Connect to the ASM instance as sysasm on the second DB node
SQL> select * from v$asm_operation; (should not show RUN for the second instance)
SQL> alter diskgroup RECO add disk 'o/*/RECO*cell1*' rebalance power 11 nowait;

Wait for the rebalance to complete:
select * from gv$asm_operation; (make sure you run this query connected to the ASM instance as sysasm)

Monitor rebalance:

SQL> col ERROR_CODE for a20
SQL> set lines 300
SQL> set pages 300
SQL> select INST_ID,OPERATION,STATE,POWER,SOFAR,EST_WORK,EST_MINUTES,ERROR_CODE from gv$asm_operation;

Note: For a rolling resize, repeat the same steps on each cell node one by one. For non-rolling, we don't need to wait for rebalancing:
just shut down all cell nodes after making all griddisks inactive, resize the disks after the drop, and add them all back with the new size.
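A hedged sketch of the non-rolling preparation, run as root from a DB node (assumes the usual cell_group file listing all cell nodes):

dcli -l root -g cell_group "cellcli -e alter griddisk all inactive"
dcli -l root -g cell_group "cellcli -e list griddisk attributes name,asmmodestatus"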

Guarantee restore point and flashback in Primary/Standby Environment

When we have a Data Guard environment (primary and standby databases) and we want to make use of restore points, we need to follow the steps below.

1> Always create the restore point on the standby database first, then on the primary.
The main reason is that the SCN of the standby must be less than the SCN of the primary in case you need to flash back your database.
If step 1 is not followed, you may need to rebuild the standby.

2> Always flash back the primary database first. Then follow the steps below for the standby.
a) On the primary, connect using RMAN and list the incarnations.
b) On the standby, cancel recovery and flash back the standby to the restore point created earlier (see the sketch after this list).
c) On the standby, connect using RMAN and list the incarnations.
d) If there is a mismatch in incarnation between primary and standby, reset the incarnation of the standby and start recovery.
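A minimal sketch of step b) on the standby in SQL*Plus, assuming a restore point named GOLD_RESTORE_POINT as in the example below:

SQL> alter database recover managed standby database cancel;
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point GOLD_RESTORE_POINT;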

==============

Some commands to use:

To take a restore point

Connect to RMAN using the target

RMAN> create restore point GOLD_RESTORE_POINT guarantee flashback database;

Start the first instance of database ABC in mount state to flash back to the guaranteed restore point using the steps below.
srvctl status instance -d abc -i abc1

srvctl start instance -d abc -i abc1 -o mount

===============================================================================

Connect to RMAN using the target and perform the steps below to flash back the database.

If it's the primary, first stop recovery on the standby by setting apply to OFF.
RMAN> LIST RESTORE POINT ALL;
RMAN> FLASHBACK DATABASE TO RESTORE POINT 'GOLD_RESTORE_POINT';

==============================================================================================
Once the flashback is done, open the instance with resetlogs and shut down the database.
SQL> alter database open RESETLOGS;
SQL> shut immediate

Now start the whole database using srvctl as below.
srvctl status database -d ed0301ttce
srvctl start database -d ed0301ttce

Once done, please perform a health check of the database to confirm that the time has changed.
select name from v$database;
select to_char(sysdate, 'Dy DD-Mon-YYYY HH24:MI:SS') as "Current Time" from dual;
select log_mode,flashback_on from v$database;
select status from gv$instance;
Reference: How to perform Flashback in a Data Guard/RAC Environment (Doc ID 1342165.1)

Script to connect to All 12c PDB inside every CDB on a specific node and perform desired action

Note: You can use the same script in Exadata as well.

Changes required for Exadata:

DIR: replace /home/monitoring with a shared location across all DB nodes (e.g. a ZFS mount point).

Schedule the command below in a crontab script:

dcli -l oracle -g ~/dbs_group "$shared_location/ALL_PDB_CONNECT.sh"
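For example, an hourly crontab entry on the driving node might look like this (the paths are illustrative):

0 * * * * /usr/local/bin/dcli -l oracle -g /home/oracle/dbs_group "/zfs/monitoring/ALL_PDB_CONNECT.sh"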

 

== Script Start ==

## ALL_PDB_CONNECT.sh

# Script to connect to ALL PDBs and perform the same action in each PDB (e.g. user creation, parameter checks)
# set the Oracle environment to be able to run in crontab
DIR=/home/monitoring/ALL_PDB
PATH=$PATH:/usr/local/bin
date=`date`
PDB_LOG=$DIR/log/All_PDB_`hostname`.log
if [ -f $PDB_LOG ] ;
then
rm $PDB_LOG
fi
ps -eaf | grep ora_smon|grep -v grep |tr -s " " | cut -d " " -f8 | cut -d _ -f3|grep '[1]$' > $DIR/config/instance_name_`hostname`
cat $DIR/config/instance_name_`hostname` | while read INSTANCE
do
export ORACLE_SID=$INSTANCE
export ORAENV_ASK=NO
. /usr/local/bin/oraenv > /dev/null
(sqlplus -s -l / as sysdba << EOF
set head off
set echo off
set feedback off
select NAME from v\$pdbs where NAME!='PDB\$SEED';
exit
EOF
) > $DIR/config/pdb/PDB_$ORACLE_SID.log
done
ps -eaf | grep ora_smon|grep -v grep |tr -s " " | cut -d " " -f8 | cut -d _ -f3|grep '[1]$' > $DIR/config/instance_name_`hostname`
cat $DIR/config/instance_name_`hostname` | while read INSTANCE
do
export ORACLE_SID=$INSTANCE
export ORAENV_ASK=NO
. /usr/local/bin/oraenv > /dev/null
cat $DIR/config/pdb/PDB_$ORACLE_SID.log | grep -v ^$| while read PDB
do
$DIR/pdb_run.sh $PDB >> $DIR/log/All_PDB_`hostname`.log
done
done
if [ -f $PDB_LOG ] ;
then
cat $DIR/log/All_PDB_`hostname`.log
exit 1
fi

== Script End ==

 
====================
# Sub Script: pdb_run.sh
sqlplus -s -l / as sysdba <<EOF
set head off
set echo off
set feedback off
alter session set container=$1;
show con_name
--Any SQL command
select name from v\$database;
Create User test identified by Welcome1;
select name,open_mode from gv\$pdbs;
--Any SQL command
EXIT;
EOF
=======================
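The main script passes each PDB name to pdb_run.sh as its first argument ($1). For a quick manual test it can also be run directly, assuming the environment for a CDB instance is already set and PDB1 is a hypothetical PDB name:

chmod +x pdb_run.sh
./pdb_run.sh PDB1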

Script to check FRA usage on all running databases simultaneously

Note: You can use the same script in Exadata as well.

Changes required for Exadata:

FRA_LOG: replace /home/monitoring with a shared location across all DB nodes (e.g. a ZFS mount point).

Schedule the command below in a crontab script:

dcli -l oracle -g ~/dbs_group "$shared_location/FRA_CHECK.sh"

 

=====================================

## Script Start — FRA_CHECK.sh##

# Script to check FRA usage percentage
# set the Oracle environment to be able to run in crontab

PATH=$PATH:/usr/local/bin
date=`date`
FRA_LOG=/home/monitoring/FRA_CHECK/log/db_fra_check_`hostname`.log
if [ -f $FRA_LOG ] ;
then
rm $FRA_LOG
fi
ps -eaf | grep ora_smon|grep -v grep |tr -s " " | cut -d " " -f8 | cut -d _ -f3|grep '[1]$' > /home/monitoring/FRA_CHECK/config/instance_name_`hostname`
cat /home/monitoring/FRA_CHECK/config/instance_name_`hostname` | while read INSTANCE
do
export ORACLE_SID=$INSTANCE
export ORAENV_ASK=NO
. /usr/local/bin/oraenv > /dev/null
fra_usage=$(sqlplus -s -l / as sysdba << EOF
set head off
select round((SPACE_USED/SPACE_LIMIT)*100) from V\$RECOVERY_FILE_DEST;
exit
EOF
)
# Threshold as per environment; below, the warning threshold is 70%
if ((fra_usage > 70))
then
(
echo ""
echo "$date Warning: FRA usage is at $fra_usage%, above the warning threshold of 70%, for database $INSTANCE"
echo ""
) >> /home/monitoring/FRA_CHECK/log/db_fra_check_`hostname`.log
fi
done
if [ -f $FRA_LOG ] ;
then
cat /home/monitoring/FRA_CHECK/log/db_fra_check_`hostname`.log
exit 1
fi

## Script End ##
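When the alert fires, a breakdown of what is consuming the FRA helps decide what to purge; for example:

select file_type, percent_space_used, percent_space_reclaimable, number_of_files
from v$recovery_area_usage;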

Installing & Uninstalling XDB in Oracle 11g

Below Steps can be used for installing & uninstalling XDB in Oracle 11g.

Refer Master Doc – (Doc ID 1292089.1)

Pre checks

SYS> SELECT OWNER,COUNT(*) FROM DBA_OBJECTS WHERE STATUS='INVALID' GROUP BY OWNER;

SYS> select USERNAME,ACCOUNT_STATUS,CREATED from dba_users where USERNAME='XDB';

SYS> SELECT COMP_NAME,VERSION,STATUS,MODIFIED FROM DBA_REGISTRY where COMP_NAME like '%XML%';

Uninstalling XDB

srvctl stop database -d dbname
srvctl start database -d dbname
connect / as sysdba
@?/rdbms/admin/catnoqm.sql

Some XDB-related objects in the SYS schema are not dropped during the removal of XDB, and the SYS.KU$_% views will become invalid. See the following document for cleaning up these objects:

(Doc ID 1269470.1) XDB Deinstallation script catnoqm.sql leads to Invalid SYS Objects

Installing XDB

connect / as sysdba

CREATE TABLESPACE XDB_TBS DATAFILE '+DATA' size 500M;

create temporary tablespace TEMP_XDB TEMPFILE '+DATA' size 500M;

@?/rdbms/admin/catqm.sql Welcome1 XDB_TBS TEMP_XDB YES
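For reference, the four arguments to catqm.sql in 11g are, in order: the XDB user's password, XDB's default tablespace, its temporary tablespace, and whether to use SecureFile storage:

-- @?/rdbms/admin/catqm.sql <xdb_password> <xdb_tablespace> <temp_tablespace> <SecureFiles YES/NO>
-- In this example: Welcome1 XDB_TBS TEMP_XDB YES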

Post checks-

SYS> SELECT OWNER,COUNT(*) FROM DBA_OBJECTS WHERE STATUS='INVALID' GROUP BY OWNER;

SYS> select USERNAME,ACCOUNT_STATUS,CREATED from dba_users where USERNAME='XDB';

SYS> SELECT COMP_NAME,VERSION,STATUS,MODIFIED FROM DBA_REGISTRY where COMP_NAME like '%XML%';

Oracle Home Cloning

Note: Modify the home names and keywords as per your needs and environment. The steps below can be used to clone an Oracle Home in a DB environment (tested on 10g and 11g).

Source Home
============
On Node vipdbnode07
/u01/app/oracle/product/11.2.0.3/VIP1/dbhome_1

Target Home
===========
On Node vipdbnode05
/u01/app/oracle/product/11.2.0.3/VIP4/dbhome_1

1. Pack the Source Home

As root user,

cd /u01/app/oracle/product/11.2.0.3/VIP1

tar -cvpf dbhome_1.tar dbhome_1    (p preserves permissions)

2. Move the tar file outside of ORACLE_HOMES directory

As root user,

cd /u01/app/oracle/product/11.2.0.3/

mv dbhome_1.tar ../

3. Unpack the tar file

As root user, scp it to all the nodes

cd /u01/app/oracle/product/

scp dbhome_1.tar vipdbnode05:/u01/app/oracle/product/11.2.0.3/VIP4/

On vipdbnode05

cd /u01/app/oracle/product/11.2.0.3/VIP4/

tar -xvf dbhome_1.tar

(If you need a different home name, rename the directory here, e.g. mv dbhome_1 dbhome_2, and use that name in all the steps below; this example keeps dbhome_1.)

Note: Make sure the unpacked DB home has the correct ownership on the directory (oracle:oinstall). Permissions should not change, as the pack was done with the 'p' option.

4. On the new home, run the clone script

As Oracle user,

cd /u01/app/oracle/product/11.2.0.3/VIP4/dbhome_1/clone/bin

perl clone.pl ORACLE_HOME="/u01/app/oracle/product/11.2.0.3/VIP4/dbhome_1" ORACLE_HOME_NAME="OraDb11g_home8" ORACLE_BASE="/u01/app/oracle" OSDBA_GROUP=oinstall

5. After the clone script is done, check the inventory

As Oracle user,

export PATH=/u01/app/oracle/product/11.2.0.3/VIP4/dbhome_1/OPatch:$PATH

opatch lsinventory (should succeed and show the new home with the list of patches applied)

6. Run root.sh on the new home

As root user,

cd /u01/app/oracle/product/11.2.0.3/VIP4/dbhome_1/

./root.sh

7. Update the cluster nodes in the inventory as follows

Example:

As Oracle user,

cd /u01/app/oracle/product/11.2.0.3/VIP4/dbhome_1/oui/bin

./runInstaller -local -updateNodeList ORACLE_HOME="/u01/app/oracle/product/11.2.0.3/VIP4/dbhome_1" ORACLE_HOME_NAME="OraDb11g_home8" LOCAL_NODE='vipdbnode05' CLUSTER_NODES=vipdbnode01,vipdbnode02,vipdbnode03,vipdbnode04,vipdbnode05,vipdbnode06,vipdbnode07,vipdbnode08

Setting Up Huge Pages in Exadata

Below are the steps taken to configure huge pages on each Exadata node.

1) Ensure that all DB instances are up and running on the node.

2) Ensure that the databases are not using Oracle 11g AMM. See MOS note 749851.1 "HugePages and Oracle Database 11g Automatic Memory Management (AMM) on Linux".
The note states:
If you want to use HugePages, make sure that both the MEMORY_TARGET / MEMORY_MAX_TARGET initialization parameters are unset (i.e. using "ALTER SYSTEM RESET")
for the database instance.
This should just be a verification check, as we are not using AMM in our DB environments.

3) Create and execute the script provided in MOS note 401749.1 "Shell Script to Calculate Values Recommended Linux HugePages / HugeTLB Configuration".
This can be done on every node; I will be setting the same value across all nodes, as I have only 2 DB instances running with the same SGA & PGA.

[root@radbvip1 ~]# ./hugepages_settings.sh

This script is provided by Doc ID 401749.1 from My Oracle Support
(http://support.oracle.com) where it is intended to compute values for
the recommended HugePages/HugeTLB configuration for the current shared
memory segments. Before proceeding with the execution please make sure
that:
* Oracle Database instance(s) are up and running
* Oracle Database 11g Automatic Memory Management (AMM) is not setup
(See Doc ID 749851.1)
* The shared memory segments can be listed by command:
# ipcs -m

Press Enter to proceed…

Recommended setting: vm.nr_hugepages = 16900

4) Set the kernel parameter. This can be done on all nodes as root via the following dcli command:
[root@radbvip1 ~]# dcli -l root -g dbs_group "sysctl -w vm.nr_hugepages=16900"
radbvip1: vm.nr_hugepages = 16900
radbvip2: vm.nr_hugepages = 16900
radbvip3: vm.nr_hugepages = 16900
radbvip4: vm.nr_hugepages = 16900

5) Ensure that the kernel setting is persistent across reboots. Edit the /etc/sysctl.conf configuration file and add the following on all nodes.
##################################################################################################
# Added by Oracle <date> to reserve huge pages.
# If memory is adjusted, this parameter will also need to be
# adjusted based upon the output of /root/hugepages_settings.sh.
# See MOS note 401749.1 for precise computations and MOS 361323.1 for a description of HugePages;
# also MOS note 1274318.1 Oracle Sun Database Machine X2-2 Setup/Configuration Best Practices
vm.nr_hugepages = 16900
##################################################################################################

6) Verify huge page reservations:
# dcli -l root -g dbs_group "grep Huge /proc/meminfo"
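You should see something like the following on each node (values here are illustrative; HugePages_Free drops as instances start):

radbvip1: HugePages_Total:   16900
radbvip1: HugePages_Free:    16900
radbvip1: HugePages_Rsvd:        0
radbvip1: Hugepagesize:       2048 kB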

7) Set the database init.ora parameter for using huge pages in both databases:

SYS@PROD> alter system set use_large_pages=ONLY scope=spfile sid='*';

SYS@PRE_PROD> alter system set use_large_pages=ONLY scope=spfile sid='*';

8) Restart the databases

$ srvctl stop database -d prod
$ srvctl stop database -d pre_prod
$ srvctl start database -d prod
$ srvctl start database -d pre_prod

9) Verify HugePages usage by reviewing the alert log. You should see something like the output below.

Starting ORACLE instance (normal)
****************** Huge Pages Information *****************
Huge Pages memory pool detected (total: 197662 free: 197662)
DFLT Huge Pages allocation successful (allocated: 10240)

DFLT Huge Pages allocation successful (allocated: 23552)
DFLT Huge Pages allocation successful (allocated: 1)
***********************************************************

Done !!

Drop Restore Points in Standby Database

While dropping restore points on a standby database, I got the error below.

SYS@NFTBINT1>drop restore point GRPT_BF_UPGR1;
drop restore point GRPT_BF_UPGR1
*
ERROR at line 1:
ORA-16000: database open for read-only access

As my physical standby was in read-only mode, we need to stop the standby database -> start it in mount mode -> drop the restore points -> validate the restore points -> start the database using srvctl -> check for MRP and lag.

[oracle@racdb_vip01 ~]$ srvctl stop database -d PROD_STBY
[oracle@racdb_vip01 ~]$
[oracle@racdb_vip01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue Jul 28 16:41:05 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SYS@PROD_STBY1>startup mount;
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 6413680640 bytes
Fixed Size                  2265224 bytes
Variable Size            2550140792 bytes
Database Buffers         3707764736 bytes
Redo Buffers              153509888 bytes
Database mounted.
SYS@PROD_STBY1>set lines 234

SYS@PROD_STBY1>select TIME,name from v$restore_point;

TIME                                     NAME
---------------------------------------- -----------------------------------
30/JAN/15 11:23:03.000000000 AM          BEFORE_DR_30JAN2015
19/FEB/15 06:04:08.000000000 PM          INT_UPG_19FEB2015
12/MAR/15 03:53:44.000000000 PM          GRPT_BF_ACT
12/MAR/15 06:45:38.000000000 PM          GRPT_BF_ACT1

SYS@PROD_STBY1>drop restore point GRPT_BF_ACT1;

Restore point dropped.

SYS@PROD_STBY1>drop restore point GRPT_BF_ACT;

Restore point dropped.

SYS@PROD_STBY1>drop restore point INT_UPG_19FEB2015;

Restore point dropped.

SYS@PROD_STBY1>drop restore point BEFORE_DR_30JAN2015;

Restore point dropped.

SYS@PROD_STBY1>
SYS@PROD_STBY1>select TIME,name from v$restore_point;

no rows selected

SYS@PROD_STBY1>shut immediate

[oracle@racdb_vip01 ~]$ srvctl start database -d PROD_STBY

SYS@PROD_STBY1>alter database recover managed standby database using current logfile disconnect from session;
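A hedged sketch of the final MRP and lag check mentioned above:

select process, status, sequence# from v$managed_standby where process like 'MRP%';
select name, value from v$dataguard_stats where name in ('apply lag','transport lag');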

Done!!

Stats Jobs Failing in EBS Database

Stats Job failing in EBS Database.

SYS@EBS_DB>BEGIN dbms_stats.gather_fixed_objects_stats(); END;

*
ERROR at line 1:
ORA-20011: Approximate NDV failed: ORA-00600: internal error code, arguments:
[1350], [1], [23], [], [], [], [], [], [], [], [], []
ORA-06512: at "SYS.DBMS_STATS", line 24900
ORA-06512: at "SYS.DBMS_STATS", line 25441
ORA-06512: at line 1

This is an issue reported while fetching NLS information, applicable to Oracle Database EE version 11.2.0.3.

The cause of this error is the datafiles pointing to older NLS parameters, which are not supported by default in my database version (11.2.0.3).

Check for the environment variable ORA_NLS10.

In my case it was not set:

[root@dbnode_vip01 ~]# printenv | grep ORA_NLS10
[root@dbnode_vip01 ~]#

Now I modified my .bash_profile to add the ORA_NLS10 variable.

[oracle@dbnode_vip01 ~]$ cat .bash_profile | grep ORA_NLS10
export ORA_NLS10=/u01/app/oracle/product/11.2.0.3/VIP/VIPFRC/nls/data/9idata

You need to set ORA_NLS10 to $ORACLE_HOME/nls/data/9idata (make sure the directories are present).

[oracle@dbnode_vip01 ~]$ printenv | grep NLS
ORA_NLS10=/u01/app/oracle/product/11.2.0.3/VIP/VIPFRC/nls/data/9idata
[oracle@dbnode_vip01 ~]$
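In a RAC environment, instances started by the clusterware will not read .bash_profile, so the variable may also need to be registered with srvctl. A hedged sketch, reusing the database and path from this example:

srvctl setenv database -d EBS_DB -t "ORA_NLS10=/u01/app/oracle/product/11.2.0.3/VIP/VIPFRC/nls/data/9idata"
srvctl getenv database -d EBS_DB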

Now my stats job runs without any errors.

SYS@EBS_DB>BEGIN dbms_stats.gather_fixed_objects_stats(); END;

PL/SQL procedure successfully completed