Oracle RAC Assessment Report

System Health Score is 90 out of 100

Cluster Summary

Cluster Name: rac02
OS/Kernel Version: LINUX X86-64 OELRHEL 6 2.6.39-400.209.1.el6uek.x86_64
CRS Home - Version: /u01/app/11.2.0/grid - 11.2.0.4.0
DB Home - Version - Names: /u01/app/oracle/product/11.2.0/db_1 - 11.2.0.4.0 - RAC01
Number of nodes: 8
   Database Servers: 8
raccheck Version: 2.2.3(BETA)_20130918
Collection: raccheck_nerv01_RAC01_092513_055913.zip
Collection Date: 25-Sep-2013 06:00:56

Note! This version of raccheck is considered valid for a limited number of days from today, or until a new version is available.


WARNING! The data collection activity appears to be incomplete for this raccheck run. Please review the "Killed Processes" and/or "Skipped Checks" sections, and refer to "Appendix A - Troubleshooting Scenarios" of the "Raccheck User Guide" for corrective actions.


Findings Needing Attention

FAIL, WARNING, ERROR and INFO finding details should be reviewed in the context of your environment.

NOTE: Any recommended change should be applied to, and thoroughly tested (functionality and load) in, one or more non-production environments before being applied to production.

Database Server

Check Id | Status | Type | Message | Status On
DC4495442D7A0CEBE04313C0E50A76E8 | FAIL | OS Check | Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed | All Database Servers
C1D1B240993425B8E0431EC0E50AFEF5 | FAIL | OS Check | Package unixODBC-devel-2.2.14-11.el6-i686 is recommended but NOT installed | All Database Servers
C1D0BD14BF4A3BCEE0431EC0E50A9DB5 | FAIL | OS Check | Package unixODBC-2.2.14-11.el6-i686 is recommended but NOT installed | All Database Servers
CCF6F44765861F7AE0431EC0E50A72AD | FAIL | OS Check | Operating system hugepages count does not satisfy total SGA requirements | All Database Servers
834835A4EC032658E040E50A1EC056F6 | WARNING | OS Check | /tmp is NOT on a dedicated filesystem | nerv05, nerv06
8E1B5EE973BAA8C6E040E50A1EC0622E | WARNING | OS Check | ohasd Log Ownership is NOT Correct (should be root root) | nerv03, nerv05
8E1A46CB0BDA0608E040E50A1EC022CD | WARNING | OS Check | ohasd/orarootagent_root Log Ownership is NOT Correct (should be root root) | nerv03, nerv05
8E197A76D887BAC4E040E50A1EC07E0B | WARNING | OS Check | crsd/orarootagent_root Log Ownership is NOT Correct (should be root root) | nerv03
8E19457488167806E040E50A1EC00310 | WARNING | OS Check | crsd Log Ownership is NOT Correct (should be root root) | nerv03
7EDE9EBEC9429FBAE040E50A1EC03AED | WARNING | OS Check | $ORACLE_HOME/bin/oradism ownership is NOT root | nerv03
7EDDA570A1827FBAE040E50A1EC02EB1 | WARNING | OS Check | $ORACLE_HOME/bin/oradism setuid bit is NOT set | nerv03
9AA08EB2573A36C6E040E50A1EC02BD9 | WARNING | OS Check | Kernel parameter rp_filter is set to 1 | All Database Servers
E10E99868C34569BE04313C0E50A44C1 | WARNING | OS Check | vm.min_free_kbytes should be set as recommended | All Database Servers
D348A289DD032396E0431EC0E50A26D5 | WARNING | OS Check | OCR and Voting disks are not stored in ASM | All Database Servers
DC28F07D94FD1B10E04313C0E50A9FD8 | WARNING | OS Check | TFA Collector is either not installed or not running | nerv01, nerv08
D35CE19AE68165F3E0431EC0E50A4C09 | WARNING | OS Check | Redo log write time is more than 500 milliseconds | All Database Servers
951C025701C65CC5E040E50A1EC0371F | WARNING | OS Check | OSWatcher is not running as is recommended | All Database Servers
8C9D63D9441C1F52E040E50A1EC0211F | WARNING | OS Check | NIC bonding is NOT configured for public network (VIP) | All Database Servers
5EA8F4C6C6BDF8F0E0401490CACF067F | WARNING | OS Check | NIC bonding is not configured for interconnect | All Database Servers
CB94D8434AA02210E0431EC0E50A7C40 | WARNING | SQL Parameter Check | Database Parameter memory_target is not set to the recommended value | All Instances
DC3D819F5D2A50FEE04312C0E50AFF9F | INFO | OS Check | Parallel Execution Health-Checks and Diagnostics Reports | All Database Servers
BBB4357BF09B79D6E0431EC0E50AFB57 | INFO | OS Check | Information about hanganalyze and systemstate dump | All Database Servers
5E4956EE574FB034E0401490CACF2F84 | INFO | OS Check | Jumbo frames (MTU >= 8192) are not configured for interconnect | All Database Servers
85F282CFD5DADCB4E040E50A1EC01BC9 | INFO | SQL Check | Not all redo log files are the same size | All Databases
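About the hugepages FAIL above: the usual rule is that the hugepage pool must be big enough to hold every SGA on the node. A minimal sizing sketch in shell (assumptions: 2 MB hugepages, the x86_64 Linux default, and an illustrative 16 GB of combined SGA; MOS Note 401749.1 is commonly cited for Oracle's full calculation script):

    # Current hugepage pool and page size on this node
    grep -i -e hugepages_total -e hugepagesize /proc/meminfo

    # Pages needed = ceiling(total SGA bytes / hugepage bytes).
    # Illustration: 16 GB combined SGA with 2 MB pages => 8192 pages
    echo $(( (16 * 1024**3 + 2 * 1024**2 - 1) / (2 * 1024**2) ))

Set vm.nr_hugepages to at least the computed value (e.g. in /etc/sysctl.conf) and restart the instances so the SGA is allocated from the pool.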


MAA Scorecard

Outage Type | Status | Type | Message | Status On

DATABASE FAILURE PREVENTION BEST PRACTICES: PASS
Description
An Oracle database can be configured with best practices that are applicable to all Oracle databases, including single-instance databases, Oracle RAC databases, Oracle RAC One Node databases, and the primary and standby databases in Oracle Data Guard configurations. Key HA Benefits:
  • Improved recoverability
  • Improved stability

Best Practices
PASS | SQL Check | All tablespaces are locally managed tablespaces | All Databases
PASS | SQL Check | All tablespaces are using Automatic segment storage management | All Databases
PASS | SQL Check | Default temporary tablespace is set | All Databases
PASS | SQL Check | Database Archivelog Mode is set to ARCHIVELOG | All Databases
PASS | SQL Check | The SYS and SYSTEM userids have a default tablespace of SYSTEM | All Databases
COMPUTER FAILURE PREVENTION BEST PRACTICES: FAIL
Description
Oracle RAC and Oracle Clusterware allow Oracle Database to run any packaged or custom application across a set of clustered servers. This capability provides server-side high availability and scalability. If a clustered server fails, Oracle Database continues running on the surviving servers. When more processing power is needed, you can add another server without interrupting access to data. Key HA Benefits: zero database downtime for node and instance failures; application brownout can be zero or seconds, compared to minutes or hours with third-party cold cluster failover solutions; Oracle RAC and Oracle Clusterware support rolling upgrade for most hardware and software changes, excluding Oracle RDBMS patch sets and new database releases.
Best Practices
WARNING | SQL Parameter Check | fast_start_mttr_target should be greater than or equal to 300 | All Instances
DATA CORRUPTION PREVENTION BEST PRACTICES: FAIL
Description
The MAA-recommended way to achieve the most comprehensive data corruption prevention and detection is to use Oracle Active Data Guard and to configure the DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM, and DB_LOST_WRITE_PROTECT database initialization parameters on the primary database and any Data Guard standby databases (a quick way to review them follows the list below). Key HA Benefits:
  • Application downtime can be reduced from hours and days to seconds to no downtime.
  • Prevention, quick detection and fast repair of data block corruptions.
  • With Active Data Guard, data block corruptions can be repaired automatically.
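The three parameters can be reviewed quickly on the primary; a minimal sqlplus sketch (assumes OS authentication as SYSDBA on a database server):

    sqlplus -s '/ as sysdba' <<'EOF'
    show parameter db_block_checking
    show parameter db_block_checksum
    show parameter db_lost_write_protect
    EOF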

Best Practices
FAIL | SQL Check | The data files should be recoverable | All Databases
WARNING | OS Check | Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value | All Database Servers
PASS | SQL Check | No reported block corruptions in V$DATABASE_BLOCK_CORRUPTIONS | All Databases
LOGICAL CORRUPTION PREVENTION BEST PRACTICES: FAIL
Description
Oracle Flashback Technology enables fast logical failure repair. Oracle recommends that you use automatic undo management with sufficient space to attain your desired undo retention guarantee, enable Oracle Flashback Database, and allocate sufficient space and I/O bandwidth in the fast recovery area. Application monitoring is required for early detection. Effective and fast repair comes from leveraging and rehearsing the most common application-specific logical failures and using the different flashback features effectively (e.g. flashback query, flashback version query, flashback transaction query, flashback transaction, flashback drop, flashback table, and flashback database).

Key HA Benefits: With application monitoring and rehearsed repair actions using flashback technologies, application downtime can be reduced from hours or days to the time it takes to detect the logical inconsistency. Flashback provides fast repair for logical failures caused by malicious or accidental DML or DDL operations, effecting point-in-time repair at the appropriate level of granularity: transaction, table, or database.

Questions: Can your application or monitoring infrastructure detect logical inconsistencies? Is your operations team prepared to use the various flashback technologies to repair quickly and efficiently? Are security practices enforced to prevent unauthorized privileges that can result in logical inconsistencies?
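Whether Flashback Database is enabled (the subject of the FAIL below) can be confirmed directly; a minimal sketch, again assuming OS authentication as SYSDBA:

    sqlplus -s '/ as sysdba' <<'EOF'
    -- YES means Flashback Database is enabled on this database
    select flashback_on from v$database;
    show parameter db_recovery_file_dest
    EOF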
Best Practices
FAIL | SQL Check | Flashback on PRIMARY is not configured | All Databases
PASS | SQL Parameter Check | RECYCLEBIN on PRIMARY is set to the recommended value | All Instances
PASS | SQL Parameter Check | Database parameter UNDO_RETENTION on PRIMARY is not null | All Instances
DATABASE/CLUSTER/SITE FAILURE PREVENTION BEST PRACTICES: FAIL
Description
Oracle 11g and higher Active Data Guard is the real-time data protection and availability solution that eliminates single points of failure by maintaining one or more synchronized physical replicas of the production database. If an unplanned outage of any kind impacts the production database, applications and users can quickly fail over to a synchronized standby, minimizing downtime and preventing data loss. An Active Data Guard standby can be used to offload read-only applications, ad-hoc queries, and backups from the primary database, or be dual-purposed as a test system at the same time it provides disaster protection. An Active Data Guard standby can also be used to minimize downtime for planned maintenance when upgrading to new Oracle Database patch sets and releases, and for select migrations.

For zero data loss protection and the fastest recovery time, deploy a local Data Guard standby database with Data Guard Fast-Start Failover and integrated client failover. For protection against outages impacting both the primary and the local standby, the entire data center, or a broad geography, deploy a second Data Guard standby database at a remote location.

Key HA Benefits: With Oracle 11g Release 2 and higher Active Data Guard and real-time apply, data block corruptions can be repaired automatically, and downtime can be reduced from hours or days of application impact to zero downtime with zero data loss. With MAA best practices, Data Guard Fast-Start Failover (typically to a local standby), and integrated client failover, downtime from database, cluster, and site failures can be reduced from hours or days to seconds or minutes. With a remote standby database (disaster recovery site), you have protection from complete site failures. In all cases, the Active Data Guard instances can be active and used for other activities. Data Guard can also reduce risk and downtime for planned maintenance activities by using database rolling upgrade with transient logical standby, standby-first patch apply, and database migrations.

Active Data Guard provides optimal data protection by using physical replication and comprehensive Oracle validation to maintain an exact byte-for-byte copy of the primary database that can be open read-only to offload reporting, ad-hoc queries, and backups. For other advanced replication requirements, where read-write access to a replica database is required while it is being synchronized with the primary database, see Oracle GoldenGate logical replication. Oracle GoldenGate can be used to support heterogeneous database platforms and database releases, to provide an effective read-write full or subset logical replica, and to reduce or eliminate downtime for application, database, or system changes. The main trade-off of Oracle GoldenGate's flexible logical replication solution is the additional administration required of application developers and database administrators.
Best Practices
FAIL | SQL Check | Primary database is NOT protected with Data Guard (standby database) for real-time data protection and availability | All Databases
CLIENT FAILOVER OPERATIONAL BEST PRACTICES: PASS
Description
A highly available architecture requires the ability of the application tier to transparently fail over to a surviving instance or database advertising the required service. This ensures that applications are generally available or minimally impacted in the event of node failure, instance failure, or database failures. Oracle listeners can be configured to throttle incoming connections to avoid logon storms after a database node or instance failure. The connection rate limiter feature in the Oracle Net Listener enables a database administrator (DBA) to limit the number of new connections handled by the listener.
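A sketch of the listener connection rate limiter mentioned above, as it would appear in listener.ora (parameter names per the Oracle Net Services Reference; the listener name, host, and the limit of 10 connections per second are illustrative):

    # Global per-second cap for the listener named LISTENER
    CONNECTION_RATE_LISTENER=10
    LISTENER=
      (ADDRESS_LIST=
        (ADDRESS=(PROTOCOL=tcp)(HOST=nerv01)(PORT=1521)(RATE_LIMIT=yes)))

Endpoints with RATE_LIMIT=yes share the listener-wide rate; a numeric RATE_LIMIT on an endpoint sets a per-endpoint cap instead.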
Best Practices
PASS | OS Check | Clusterware is running | All Database Servers
ORACLE RECOVERY MANAGER (RMAN) BEST PRACTICES: FAIL
Description
Oracle Recovery Manager (RMAN) is an Oracle Database utility to manage database backup and, more importantly, the recovery of the database. RMAN eliminates operational complexity while providing superior performance and availability of the database. RMAN determines the most efficient method of executing the requested backup, restoration, or recovery operation and then submits these operations to the Oracle Database server for processing. RMAN and the server automatically identify modifications to the structure of the database and dynamically adjust the required operation to adapt to the changes. RMAN has many unique HA capabilities that can be challenging or impossible for third-party backup and restore utilities to deliver, such as:
  • In-depth Oracle data block checks during every backup or restore operation
  • Efficient block media recovery
  • Automatic recovery through complex database state changes such as resetlogs or past Data Guard role transitions
  • Fast incremental backup and restore operations
  • Integrated retention policies and backup file management with Oracle’s fast recovery area
  • Online backups without the need to put the database or data file in hot backup mode.
RMAN backups are strategic to MAA so that a damaged database (the complete database or a subset such as a data file, tablespace, log file, or controlfile) can be recovered; for the fastest recovery, use Data Guard or GoldenGate. RMAN operations are also important for detecting corrupted blocks in data files that are not frequently accessed.
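The controlfile autobackup WARNING below can be confirmed and corrected from an RMAN session; a minimal sketch:

    rman target / <<'EOF'
    # Show the current setting; MAA expects CONTROLFILE AUTOBACKUP ON
    SHOW CONTROLFILE AUTOBACKUP;
    # Enable it if it is currently OFF
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    EOF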
Best Practices
WARNING | SQL Check | RMAN controlfile autobackup should be set to ON | All Databases
WARNING | SQL Check | Fast Recovery Area (FRA) should have sufficient reclaimable space | All Databases
PASS | OS Check | control_file_record_keep_time is within recommended range [1-9] for RAC01 | All Database Servers
OPERATIONAL BEST PRACTICES: INFO
Description
Operational best practices are an essential prerequisite to high availability. The integration of Oracle Maximum Availability Architecture (MAA) operational and configuration best practices with Oracle Exadata Database Machine (Exadata MAA) provides the most comprehensive high availability solution available for the Oracle Database.
Best Practices
DATABASE CONSOLIDATION BEST PRACTICES: INFO
Description
Database consolidation requires additional planning and management to ensure HA requirements are met.
Best Practices


GRID and RDBMS patch recommendation Summary report

Summary Report for "nerv01"

Clusterware patches
Total patches | Applied on CRS | Applied on RDBMS | Applied on ASM
0 | 0 | 0 | 0

RDBMS homes patches
Total patches | Applied on RDBMS | Applied on ASM | ORACLE_HOME
0 | 0 | 0 | /u01/app/oracle/product/11.2.0/db_1

Summary Report for "nerv03"

Clusterware patches
Total patches | Applied on CRS | Applied on RDBMS | Applied on ASM
0 | 0 | 0 | 0

RDBMS homes patches
Total patches | Applied on RDBMS | Applied on ASM | ORACLE_HOME
0 | 0 | 0 | /u01/app/oracle/product/11.2.0/db_1

Summary Report for "nerv04"

Clusterware patches
Total patches | Applied on CRS | Applied on RDBMS | Applied on ASM
0 | 0 | 0 | 0

RDBMS homes patches
Total patches | Applied on RDBMS | Applied on ASM | ORACLE_HOME
0 | 0 | 0 | /u01/app/oracle/product/11.2.0/db_1

Summary Report for "nerv05"

Clusterware patches
Total patches | Applied on CRS | Applied on RDBMS | Applied on ASM
0 | 0 | 0 | 0

RDBMS homes patches
Total patches | Applied on RDBMS | Applied on ASM | ORACLE_HOME
0 | 0 | 0 | /u01/app/oracle/product/11.2.0/db_1

Summary Report for "nerv02"

Clusterware patches
Total patches | Applied on CRS | Applied on RDBMS | Applied on ASM
0 | 0 | 0 | 0

RDBMS homes patches
Total patches | Applied on RDBMS | Applied on ASM | ORACLE_HOME
0 | 0 | 0 | /u01/app/oracle/product/11.2.0/db_1

Summary Report for "nerv08"

Clusterware patches
Total patches | Applied on CRS | Applied on RDBMS | Applied on ASM
0 | 0 | 0 | 0

RDBMS homes patches
Total patches | Applied on RDBMS | Applied on ASM | ORACLE_HOME
0 | 0 | 0 | /u01/app/oracle/product/11.2.0/db_1

Summary Report for "nerv07"

Clusterware patches
Total patches | Applied on CRS | Applied on RDBMS | Applied on ASM
0 | 0 | 0 | 0

RDBMS homes patches
Total patches | Applied on RDBMS | Applied on ASM | ORACLE_HOME
0 | 0 | 0 | /u01/app/oracle/product/11.2.0/db_1

Summary Report for "nerv06"

Clusterware patches
Total patches | Applied on CRS | Applied on RDBMS | Applied on ASM
0 | 0 | 0 | 0

RDBMS homes patches
Total patches | Applied on RDBMS | Applied on ASM | ORACLE_HOME
0 | 0 | 0 | /u01/app/oracle/product/11.2.0/db_1


GRID and RDBMS patch recommendation Detailed report

Detailed report for "nerv01"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Detailed report for "nerv03"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Detailed report for "nerv04"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Detailed report for "nerv05"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Detailed report for "nerv02"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Detailed report for "nerv08"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Detailed report for "nerv07"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Detailed report for "nerv06"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Findings Passed

Database Server

Check Id | Status | Type | Message | Status On
DC28F07D94FD1B10E04313C0E50A9FD8 | PASS | OS Check | TFA Collector is installed and running | nerv03, nerv04, nerv05, nerv02, nerv07, ...
E47ECDCFE09A122CE04313C0E50A35EC | PASS | OS Check | There are no duplicate parameter entries in the database init.ora(spfile) file | All Database Servers
E47EBE3023936D3CE04313C0E50A7A0E | PASS | ASM Check | There are no duplicate parameter entries in the ASM init.ora(spfile) file | All ASM Instances
E1DF2A6140395D42E04312C0E50A0A6C | PASS | ASM Check | All diskgroups from v$asm_diskgroups are registered in clusterware registry | All ASM Instances
E18D7F9837B7754EE04313C0E50AD4AA | PASS | OS Check | Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation | All Database Servers
E1500ADF060A3EA2E04313C0E50A3676 | PASS | OS Check | OLR Integrity check Succeeded | All Database Servers
E12A91DC10F31AD7E04312C0E50A6361 | PASS | OS Check | pam_limits configured properly for shell limits | All Database Servers
D0C2640EBA071F73E0431EC0E50AA159 | PASS | OS Check | System clock is synchronized to hardware clock at system shutdown | All Database Servers
DBC2C9218542349FE04312C0E50AC1E9 | PASS | OS Check | No clusterware resources are in unknown state | All Database Servers
D9A5C0E2DE430A85E04312C0E50AC8B0 | PASS | ASM Check | No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors) | All ASM Instances
D957C871B811597AE04312C0E50A91BF | PASS | ASM Check | No disks found which are not part of any disk group | All ASM Instances
D112D25A574F13DCE0431EC0E50A55CD | PASS | OS Check | Grid infrastructure network broadcast requirements are met | All Database Servers
CB5BD768E88F7F71E0431EC0E50A346F | PASS | OS Check | Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers
6B515A724AB85906E040E50A1EC039F6 | PASS | SQL Check | No read/write errors found for ASM disks | All Databases
C1D39B834AA46E44E0431EC0E50A5366 | PASS | OS Check | Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D39B834AA36E44E0431EC0E50A5366 | PASS | OS Check | Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation | All Database Servers
C1D34D17A4F45402E0431EC0E50A5DD9 | PASS | OS Check | Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D348AB978E3873E0431EC0E50A19F0 | PASS | OS Check | Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D30E313A4C0B0BE0431EC0E50A1931 | PASS | OS Check | Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D2A95C2BF31FE4E0431EC0E50AB101 | PASS | OS Check | Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation | All Database Servers
C1D29B4860DA19C2E0431EC0E50AFB36 | PASS | OS Check | Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation | All Database Servers
C1D29B4860D919C2E0431EC0E50AFB36 | PASS | OS Check | Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D1CC4D830F3B90E0431EC0E50A559F | PASS | OS Check | Package make-3.81-19.el6 meets or exceeds recommendation | All Database Servers
C1D1CC4D830E3B90E0431EC0E50A559F | PASS | OS Check | Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation | All Database Servers
C1D1BA6C1CD213F9E0431EC0E50A8B9C | PASS | OS Check | Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D1BA6C1CD013F9E0431EC0E50A8B9C | PASS | OS Check | Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D1B240991A25B8E0431EC0E50AFEF5 | PASS | OS Check | Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation | All Database Servers
C1D1973D1B4C0EA1E0431EC0E50A9108 | PASS | OS Check | Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D15659D96376CBE0431EC0E50A74F5 | PASS | OS Check | Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation | All Database Servers
C1D15659D96276CBE0431EC0E50A74F5 | PASS | OS Check | Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation | All Database Servers
C1D0EE98B4BC4083E0431EC0E50ADCB2 | PASS | OS Check | Package ksh-20100621-12.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1D0BD14BF493BCEE0431EC0E50A9DB5 | PASS | OS Check | Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation | All Database Servers
C1D0BD14BF483BCEE0431EC0E50A9DB5 | PASS | OS Check | Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1CF431B59054969E0431EC0E50A9B88 | PASS | OS Check | Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1CF431B59034969E0431EC0E50A9B88 | PASS | OS Check | Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation | All Database Servers
C1CEC9D9E9432BDFE0431EC0E50AF329 | PASS | OS Check | Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation | All Database Servers
89130F49748E6CC7E040E50A1EC07A44 | PASS | OS Check | Remote listener is set to SCAN name | All Database Servers
65F8FA5F9B838079E040E50A1EC059DC | PASS | OS Check | Value of remote_listener parameter is able to tnsping | All Database Servers
D6972E101386682AE0431EC0E50A9FD9 | PASS | OS Check | No tnsname alias is defined as scanname:port | All Database Servers
D6972E101384682AE0431EC0E50A9FD9 | PASS | OS Check | ezconnect is configured in sqlnet.ora | All Database Servers
BEAE25E17C4130E4E0431EC0E50A8C3F | PASS | SQL Parameter Check | Database Parameter parallel_execution_message_size is set to the recommended value | All Instances
B6457DE59F9D457EE0431EC0E50A1DD2 | PASS | SQL Parameter Check | Database parameter CURSOR_SHARING is set to recommended value | All Instances
B167E5248D476B74E0431EC0E50A3E27 | PASS | SQL Check | All bigfile tablespaces have non-default maxbytes values set | All Databases
AD6481CF9BDD6058E040E50A1EC021EC | PASS | OS Check | umask for RDBMS owner is set to 0022 | All Database Servers
9DEBED7B8DAB583DE040E50A1EC01BA0 | PASS | ASM Check | ASM Audit file destination file count <= 100,000 | All ASM Instances
9DAFD1040CA9389FE040E50A1EC0307C | PASS | OS Check | Shell limit hard stack for GI is configured according to recommendation | All Database Servers
64DC3E59CB88B984E0401490CACF1104 | PASS | SQL Parameter Check | asm_power_limit is set to recommended value of 1 | All Instances
90DCECE833790E9DE040E50A1EC0750A | PASS | OS Check | CSS reboottime is set to the default value of 3 | All Database Servers
90DCB860F9380638E040E50A1EC07248 | PASS | OS Check | CSS disktimeout is set to the default value of 200 | All Database Servers
8E1B5EE973BAA8C6E040E50A1EC0622E | PASS | OS Check | ohasd Log Ownership is Correct (root root) | nerv01, nerv04, nerv02, nerv08, nerv07, ...
8E1A46CB0BDA0608E040E50A1EC022CD | PASS | OS Check | ohasd/orarootagent_root Log Ownership is Correct (root root) | nerv01, nerv04, nerv02, nerv08, nerv07, ...
8E197A76D887BAC4E040E50A1EC07E0B | PASS | OS Check | crsd/orarootagent_root Log Ownership is Correct (root root) | nerv01, nerv04, nerv05, nerv02, nerv08, ...
8E19457488167806E040E50A1EC00310 | PASS | OS Check | crsd Log Ownership is Correct (root root) | nerv01, nerv04, nerv05, nerv02, nerv08, ...
898E1DF96754C57FE040E50A1EC03224 | PASS | ASM Check | CRS version is higher or equal to ASM version | All ASM Instances
8915B823FCEBC259E040E50A1EC04AD6 | PASS | OS Check | Local listener init parameter is set to local node VIP | All Database Servers
8914F5D0A9AB85BAE040E50A1EC04A31 | PASS | OS Check | Number of SCAN listeners is equal to the recommended number of 3 | All Database Servers
87604C73D768DF7AE040E50A1EC0566B | PASS | OS Check | All voting disks are online | All Database Servers
90E150135F6859C4E040E50A1EC01FF5 | PASS | OS Check | CSS misscount is set to the default value of 30 | All Database Servers
856A9B77AF14DD9FE040E50A1EC00285 | PASS | OS Check | SELinux is not being Enforced | All Database Servers
8529D3798EA039F3E040E50A1EC07218 | PASS | OS Check | Public interface is configured and exists in OCR | All Database Servers
84C193C69EE36512E040E50A1EC06466 | PASS | OS Check | ip_local_port_range is configured according to recommendation | All Database Servers
84BE8B9C4817090DE040E50A1EC07DB8 | PASS | OS Check | kernel.shmmax parameter is configured according to recommendation | All Database Servers
84BE4DE1F00AD833E040E50A1EC07771 | PASS | OS Check | Kernel Parameter fs.file-max configuration meets or exceeds recommendation | All Database Servers
8449C298FC0EF19CE040E50A1EC00965 | PASS | OS Check | Shell limit hard stack for DB is configured according to recommendation | All Database Servers
841FD604C3C8F2B1E040E50A1EC0122F | PASS | OS Check | Free space in /tmp directory meets or exceeds recommendation of minimum 1GB | All Database Servers
841F8C3E78906005E040E50A1EC00357 | PASS | OS Check | Shell limit hard nproc for GI is configured according to recommendation | All Database Servers
841F0977B92F0185E040E50A1EC070BB | PASS | OS Check | Shell limit soft nofile for DB is configured according to recommendation | All Database Servers
841E706550995C68E040E50A1EC05EFB | PASS | OS Check | Shell limit hard nofile for GI is configured according to recommendation | All Database Servers
841E706550975C68E040E50A1EC05EFB | PASS | OS Check | Shell limit hard nproc for DB is configured according to recommendation | All Database Servers
841D87785594F263E040E50A1EC020D6 | PASS | OS Check | Shell limit soft nofile for GI is configured according to recommendation | All Database Servers
841C7DEB776DB4BBE040E50A1EC0782E | PASS | OS Check | Shell limit soft nproc for GI is configured according to recommendation | All Database Servers
841A3A9F4A74AC6AE040E50A1EC03FC0 | PASS | OS Check | Shell limit hard nofile for DB is configured according to recommendation | All Database Servers
841A3A9F4A73AC6AE040E50A1EC03FC0 | PASS | OS Check | Shell limit soft nproc for DB is configured according to recommendation | All Database Servers
83C301ACFF203C9BE040E50A1EC067EB | PASS | OS Check | Linux Swap Configuration meets or exceeds Recommendation | All Database Servers
834835A4EC032658E040E50A1EC056F6 | PASS | OS Check | /tmp is on a dedicated filesystem | nerv01, nerv03, nerv04, nerv02, nerv08, ...
8343C0D6A9D8702BE040E50A1EC045C8 | PASS | SQL Check | All data and temporary files are autoextensible | All Databases
833F68D88AE57B7CE040E50A1EC02BE7 | PASS | SQL Check | Redo logs are multiplexed | All Databases
833F12C25516ACAFE040E50A1EC020F7 | PASS | SQL Check | Controlfile is multiplexed | All Databases
833D92F95B0A5CB6E040E50A1EC06498 | PASS | SQL Parameter Check | remote_login_passwordfile is configured according to recommendation | All Instances
831B9FABDB6CFCB4E040E50A1EC034C0 | PASS | OS Check | audit_file_dest does not have any audit files older than 30 days | All Database Servers
7EDE9EBEC9429FBAE040E50A1EC03AED | PASS | OS Check | $ORACLE_HOME/bin/oradism ownership is root | nerv01, nerv04, nerv05, nerv02, nerv08, ...
7EDDA570A1827FBAE040E50A1EC02EB1 | PASS | OS Check | $ORACLE_HOME/bin/oradism setuid bit is set | nerv01, nerv04, nerv05, nerv02, nerv08, ...
77029A014E159389E040E50A1EC02060 | PASS | SQL Check | Avg message sent queue time on ksxp is <= recommended | All Databases
770244572FC70393E040E50A1EC01299 | PASS | SQL Check | Avg message sent queue time is <= recommended | All Databases
7701CFDB2F6EF98EE040E50A1EC00573 | PASS | SQL Check | Avg message received queue time is <= recommended | All Databases
7674FEDB08C2FDA2E040E50A1EC0156F | PASS | SQL Check | No Global Cache lost blocks detected | All Databases
7674C09669C5BCE6E040E50A1EC011E5 | PASS | SQL Check | Failover method (SELECT) and failover mode (BASIC) are configured properly | All Databases
70CFB24C11B52EF5E040E50A1EC03ED0 | PASS | OS Check | Open files limit (ulimit -n) for current user is set to recommended value >= 65536 or unlimited | All Database Servers
6890329C1FFFCEDDE040E50A1EC02FED | PASS | OS Check | No indication of checkpoints not being completed | All Database Servers
670FE09A93E12317E040E50A1EC018E9 | PASS | SQL Check | Avg GC CURRENT Block Receive Time Within Acceptable Range | All Databases
670FE09A93E02317E040E50A1EC018E9 | PASS | SQL Check | Avg GC CR Block Receive Time Within Acceptable Range | All Databases
66FEB2848B21DB24E040E50A1EC00A0C | PASS | SQL Check | Tablespace allocation type is SYSTEM for all appropriate tablespaces for RAC01 | All Databases
66EBC49E368387CAE040E50A1EC03B98 | PASS | OS Check | background_dump_dest does not have any files older than 30 days | All Database Servers
66EABE4A113A3B1EE040E50A1EC006B2 | PASS | OS Check | Alert log is not too big | All Database Servers
66EAB3BB6CF79C54E040E50A1EC06084 | PASS | OS Check | No ORA-07445 errors found in alert log | All Database Servers
66E70B43167837ABE040E50A1EC02FEA | PASS | OS Check | No ORA-00600 errors found in alert log | All Database Servers
66E6B013BAE3EFBEE040E50A1EC01F87 | PASS | OS Check | user_dump_dest does not have trace files older than 30 days | All Database Servers
66E59E657BFC85F4E040E50A1EC0501D | PASS | OS Check | core_dump_dest does not have too many older core dump files | All Database Servers
669862F59599CA2AE040E50A1EC018FD | PASS | OS Check | Kernel Parameter SEMMNS OK | All Database Servers
66985D930D2DF070E040E50A1EC019EB | PASS | OS Check | Kernel Parameter kernel.shmmni OK | All Database Servers
6697946779AC8AD3E040E50A1EC03C0E | PASS | OS Check | Kernel Parameter SEMMSL OK | All Database Servers
6696C7B368784A66E040E50A1EC01B92 | PASS | OS Check | Kernel Parameter SEMMNI OK | All Database Servers
66959FC16B423896E040E50A1EC07CDC | PASS | OS Check | Kernel Parameter SEMOPM OK | All Database Servers
6694F204EE47A92DE040E50A1EC07145 | PASS | OS Check | Kernel Parameter kernel.shmall OK | All Database Servers
65E6F4BD15BB92EBE040E50A1EC04384 | PASS | SQL Parameter Check | Remote listener parameter is set to achieve load balancing and failover | All Instances
6580DCAAE8A28F5BE0401490CACF6186 | PASS | OS Check | The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr) | All Database Servers
6556EAA74E28214FE0401490CACF6C89 | PASS | OS Check | $CRS_HOME/log/hostname/client directory does not have too many older log files | All Database Servers
65414495B2047F26E0401490CACF0FED | PASS | OS Check | OCR is being backed up daily | All Database Servers
6050196F644254BDE0401490CACF203D | PASS | OS Check | net.core.rmem_max is Configured Properly | All Database Servers
60500BAFB377E3ADE0401490CACF2245 | PASS | SQL Parameter Check | Instance is using spfile | All Instances
5E5B7EEA0010DC6BE0401490CACF3B82 | PASS | OS Check | Interconnect is configured on non-routable network addresses | All Database Servers
5DC7EBCB6B72E046E0401490CACF321A | PASS | OS Check | None of the hostnames contains an underscore character | All Database Servers
5ADE14B5205111D1E0401490CACF673B | PASS | OS Check | net.core.rmem_default Is Configured Properly | All Database Servers
5ADD88EC8E0AFF2EE0401490CACF0C10 | PASS | OS Check | net.core.wmem_max Is Configured Properly | All Database Servers
5ADCECF64757E914E0401490CACF4BBD | PASS | OS Check | net.core.wmem_default Is Configured Properly | All Database Servers
595A436B3A7172FDE0401490CACF5BA5 | PASS | OS Check | ORA_CRS_HOME environment variable is not set | All Database Servers
4B8B98A9C9644FADE0401490CACF6528 | PASS | SQL Check | SYS.AUDSES$ sequence cache size >= 10,000 | All Databases
4B881724781BB7BEE0401490CACF59FD | PASS | SQL Check | SYS.IDGEN1$ sequence cache size >= 1,000 | All Databases

Cluster Wide

Check Id | Status | Type | Message | Status On
8FC4FA469BAA945EE040E50A1EC06AC6 | PASS | Cluster Wide Check | Time zone matches for root user across cluster | Cluster Wide
8FC307D9A9CEF95FE040E50A1EC01580 | PASS | Cluster Wide Check | Time zone matches for GI/CRS software owner across cluster | Cluster Wide
8BEFCB0B4C9DBF5CE040E50A1EC03B14 | PASS | Cluster Wide Check | Operating system version matches across cluster | Cluster Wide
8BEFA88017530395E040E50A1EC05E99 | PASS | Cluster Wide Check | OS Kernel version (uname -r) matches across cluster | Cluster Wide
8955120D63FCAC2DE040E50A1EC006CA | PASS | Cluster Wide Check | Clusterware active version matches across cluster | Cluster Wide
895255E0D2A63C8CE040E50A1EC00A43 | PASS | Cluster Wide Check | RDBMS software version matches across cluster | Cluster Wide
88704DB19306DC92E040E50A1EC02C92 | PASS | Cluster Wide Check | Timezone matches for current user across cluster | Cluster Wide
7E8D719B61F43773E040E50A1EC029C0 | PASS | Cluster Wide Check | Public network interface names are the same across cluster | Cluster Wide
7E40D02BD3C22C5AE040E50A1EC033F5 | PASS | Cluster Wide Check | GI/CRS software owner UID matches across cluster | Cluster Wide
7E3FAC1843F137ABE040E50A1EC0139B | PASS | Cluster Wide Check | RDBMS software owner UID matches across cluster | Cluster Wide
7E2DCCF1429A6A8FE040E50A1EC05FE6 | PASS | Cluster Wide Check | Private interconnect interface names are the same across cluster | Cluster Wide


Best Practices and Other Recommendations

Best Practices and Other Recommendations are generally items documented in various sources that could easily be overlooked. raccheck assesses them and calls attention to any findings.



Root time zone

Success Factor: MAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Make sure machine clocks are synchronized on all nodes to the same NTP source, and implement NTP (Network Time Protocol) on all nodes. This prevents evictions and helps to facilitate problem diagnosis.

Also use the -x option (i.e. ntpd -x, xntp -x) if available, to prevent time from moving backwards in large amounts. This slewing breaks a large correction into many small changes, so that it does not impact the CRS. Enterprise Linux: see /etc/sysconfig/ntpd; Solaris: set "slewalways yes" and "disable pll" in /etc/inet/ntp.conf.

For example:
       # Drop root to id 'ntp:ntp' by default.
       OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
       # Set to 'yes' to sync hw clock after successful ntpdate
       SYNC_HWCLOCK=no
       # Additional options for ntpdate
       NTPDATE_OPTIONS=""

Time servers operate in a pyramid structure, where the top of the NTP stack is usually an external time source (such as a GPS clock). This then trickles down through the network switch stack to the connected servers. With this NTP stack acting as the NTP server, ensuring that all the RAC nodes act as clients to it in slewing mode keeps time changes to a minute amount.

Changes in global time that reconcile atomic accuracy with the Earth's rotational wobble are likewise accounted for with minimal effect. This is sometimes referred to as a "leap second" event (for example, one second was inserted between UTC 12/31/2008 23:59:59 and 01/01/2009 00:00:00).

More information can be found in Note 759143.1, "NTP leap second event causing Oracle Clusterware node reboot", which is linked to this Success Factor.
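To reproduce this cross-node comparison by hand, a minimal sketch (assumes SSH equivalence as root to the node names used in this report):

    # Print the root user's time zone abbreviation on every node
    for n in nerv01 nerv02 nerv03 nerv04 nerv05 nerv06 nerv07 nerv08; do
      printf '%s = %s\n' "$n" "$(ssh root@"$n" 'date +%Z')"
    done

Any node printing a different value (or nothing) deserves a closer look, as with the empty entries below.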
 
Needs attention on: (none)
Passed on: Cluster Wide

Status on Cluster Wide:
PASS => Time zone matches for root user across cluster


nerv01 = BRT
nerv03 = BRT
nerv04 =
nerv05 =
nerv02 =
nerv08 = BRT
nerv07 = BRT
nerv06 = BRT

GI/CRS software owner time zone

Success Factor: MAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Benefit / Impact:

Clusterware deployment requirement

Risk:

Potential cluster instability

Action / Repair:

Oracle Clusterware requires the same time zone setting on all cluster nodes. During installation, the installation process picks up the time zone setting of the Grid installation owner on the node where OUI runs, and uses that on all nodes as the default TZ setting for all processes managed by Oracle Clusterware. This default is used for databases, Oracle ASM, and any other managed processes.

If for whatever reason the time zones have gotten out of sync, the configuration should be corrected. Consult with Oracle Support about the proper method for correcting the time zones.
 
Needs attention on: (none)
Passed on: Cluster Wide

Status on Cluster Wide:
PASS => Time zone matches for GI/CRS software owner across cluster


nerv01 = BRT
nerv03 = BRT
nerv04 =
nerv05 =
nerv02 =
nerv08 = BRT
nerv07 = BRT
nerv06 = BRT

Operating System Version comparison

Recommendation
 Operating system versions should match on each node of the cluster.
 
Needs attention on: (none)
Passed on: Cluster Wide

Status on Cluster Wide:
PASS => Operating system version matches across cluster.


nerv01 = 64
nerv03 = 64
nerv04 = 64
nerv05 = 64
nerv02 = 64
nerv08 = 64
nerv07 = 64
nerv06 = 64

Kernel version comparison across cluster

Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential cluster instability due to a kernel version mismatch on cluster nodes. If the kernel versions do not match, some incompatibility could exist which would make diagnosing problems difficult, or bugs fixed in the later kernel could still be present on the nodes running the older one.

Action / Repair:

Unless a rolling upgrade of cluster node kernels is in process, it is assumed that the kernel versions will match across the cluster. If they do not, it is assumed that some mistake has been made and overlooked. The purpose of this check is to bring this situation to the attention of the customer for action and remedy.
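A quick way to perform this comparison manually (a sketch; assumes SSH equivalence for the current user to the node names in this report):

    # Print the running kernel on every node; all lines should match
    for n in nerv01 nerv02 nerv03 nerv04 nerv05 nerv06 nerv07 nerv08; do
      printf '%s = %s\n' "$n" "$(ssh "$n" 'uname -r')"
    done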
 
Needs attention on: (none)
Passed on: Cluster Wide

Status on Cluster Wide:
PASS => OS Kernel version(uname -r) matches across cluster.


nerv01 = 2639-4002091el6uekx86_64
nerv03 = 2639-4002091el6uekx86_64
nerv04 = 2639-4002091el6uekx86_64
nerv05 = 2639-4002091el6uekx86_64
nerv02 = 2639-4002091el6uekx86_64
nerv08 = 2639-4002091el6uekx86_64
nerv07 = 2639-4002091el6uekx86_64
nerv06 = 2639-4002091el6uekx86_64

Clusterware version comparison

Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential cluster instability due to a clusterware version mismatch on cluster nodes. If the clusterware versions do not match, some incompatibility could exist which would make diagnosing problems difficult, or bugs fixed in the later clusterware version could still be present on the nodes running the older one.

Action / Repair:

Unless a rolling upgrade of the clusterware is in process, it is assumed that the clusterware versions will match across the cluster. If they do not, it is assumed that some mistake has been made and overlooked. The purpose of this check is to bring this situation to the attention of the customer for action and remedy.
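A manual spot check is possible with crsctl (a sketch; uses the Grid home path from this report, and "softwareversion" reports the version installed on a given node):

    # Per-node installed clusterware version, queried from any one node
    for n in nerv01 nerv02 nerv03 nerv04 nerv05 nerv06 nerv07 nerv08; do
      /u01/app/11.2.0/grid/bin/crsctl query crs softwareversion "$n"
    done
    # Cluster-wide active version, for comparison
    /u01/app/11.2.0/grid/bin/crsctl query crs activeversion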
 
Needs attention on: (none)
Passed on: Cluster Wide

Status on Cluster Wide:
PASS => Clusterware active version matches across cluster.


nerv01 = 112040
nerv03 = 112040
nerv04 = 112040
nerv05 = 112040
nerv02 = 112040
nerv08 = 112040
nerv07 = 112040
nerv06 = 112040

RDBMS software version comparison

Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential database or application instability due to a version mismatch for database homes. If the versions of related RDBMS homes on the cluster nodes do not match, some incompatibility could exist which would make diagnosing problems difficult, or bugs fixed in the later RDBMS version could still be present on the nodes running the older one.

Action / Repair:

It is assumed that the RDBMS versions of related database homes will match across the cluster. If they do not, it is assumed that some mistake has been made and overlooked. The purpose of this check is to bring this situation to the attention of the customer for action and remedy.
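One way to compare the RDBMS home version on every node (a sketch; uses the home path from this report and assumes SSH equivalence):

    ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
    for n in nerv01 nerv02 nerv03 nerv04 nerv05 nerv06 nerv07 nerv08; do
      echo "== $n =="
      ssh "$n" "$ORACLE_HOME/bin/sqlplus -V"   # prints e.g. "SQL*Plus: Release 11.2.0.4.0 Production"
    done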
 
Needs attention on: (none)
Passed on: Cluster Wide

Status on Cluster Wide:
PASS => RDBMS software version matches across cluster.


nerv01 = 112040
nerv03 = 112040
nerv04 = 112040
nerv05 = 112040
nerv02 = 112040
nerv08 = 112040
nerv07 = 112040
nerv06 = 112040

Timezone for current user

Success Factor: MAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Benefit / Impact:

Clusterware deployment requirement

Risk:

Potential cluster instability

Action / Repair:

Oracle Clusterware requires the same time zone setting on all cluster nodes. During installation, the installation process picks up the time zone setting of the Grid installation owner on the node where OUI runs, and uses that on all nodes as the default TZ setting for all processes managed by Oracle Clusterware. This default is used for databases, Oracle ASM, and any other managed processes.

If for whatever reason the time zones have gotten out of sync, the configuration should be corrected. Consult with Oracle Support about the proper method for correcting the time zones.
 
Needs attention on: (none)
Passed on: Cluster Wide

Status on Cluster Wide:
PASS => Timezone matches for current user across cluster.


nerv01 = BRT
nerv03 = BRT
nerv04 = BRT
nerv05 = BRT
nerv02 = BRT
nerv08 = BRT
nerv07 = BRT
nerv06 = BRT

GI/CRS - Public interface name check (VIP)

Success Factor: MAKE SURE NETWORK INTERFACES HAVE THE SAME NAME ON ALL NODES
Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential application instability due to incorrectly named network interfaces used for node VIP.

Action / Repair:

Oracle Clusterware requires that the network interfaces used for the public network (the node VIP) be named the same on all nodes of the cluster.
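The assignments the clusterware actually recorded can be listed with oifcfg (a sketch; run from the Grid home on each node, and the interface names and roles should be identical cluster-wide):

    # Each line shows: interface, subnet, scope, role (public / cluster_interconnect)
    /u01/app/11.2.0/grid/bin/oifcfg getif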
 
Needs attention on: (none)
Passed on: Cluster Wide

Status on Cluster Wide:
PASS => Public network interface names are the same across cluster


nerv01 = eth0
nerv03 = eth0
nerv04 = eth0
nerv05 = eth0
nerv02 = eth0
nerv08 = eth0
nerv07 = eth0
nerv06 = eth0

GI/CRS software owner across cluster

Success Factor: ENSURE EACH ORACLE/ASM USER HAS A UNIQUE UID ACROSS THE CLUSTER
Recommendation
 Benefit / Impact:

Availability, stability

Risk:

Potential OCR logical corruptions and permission problems accessing OCR keys when multiple O/S users share the same UID which are difficult to diagnose.

Action / Repair:

For the GI/CRS, ASM, and RDBMS software owners, ensure one unique user ID with a single name is in use across the cluster.
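A sketch for verifying this by hand (assumes SSH equivalence, and that the software owner is the "oracle" user seen in the file listings elsewhere in this report; 54321 is the UID recorded below):

    # The UID printed must be identical on every node
    for n in nerv01 nerv02 nerv03 nerv04 nerv05 nerv06 nerv07 nerv08; do
      printf '%s = %s\n' "$n" "$(ssh "$n" 'id -u oracle')"
    done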
 
Needs attention on: (none)
Passed on: Cluster Wide

Status on Cluster Wide:
PASS => GI/CRS software owner UID matches across cluster


nerv01 = 54321
nerv03 = 54321
nerv04 = 54321
nerv05 = 54321
nerv02 = 54321
nerv08 = 54321
nerv07 = 54321
nerv06 = 54321

RDBMS software owner UID across cluster

Success Factor: ENSURE EACH ORACLE/ASM USER HAS A UNIQUE UID ACROSS THE CLUSTER
Recommendation
 Benefit / Impact:

Availability, stability

Risk:

Potential OCR logical corruptions and permission problems accessing OCR keys when multiple O/S users share the same UID which are difficult to diagnose.

Action / Repair:

For the GI/CRS, ASM, and RDBMS software owners, ensure one unique user ID with a single name is in use across the cluster.
 
Needs attention on: (none)
Passed on: Cluster Wide

Status on Cluster Wide:
PASS => RDBMS software owner UID matches across cluster


nerv01 = 54321
nerv03 = 54321
nerv04 = 54321
nerv05 = 54321
nerv02 = 54321
nerv08 = 54321
nerv07 = 54321
nerv06 = 54321

GI/CRS - Private interconnect interface name check

Success Factor: MAKE SURE NETWORK INTERFACES HAVE THE SAME NAME ON ALL NODES
Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential cluster or application instability due to incorrectly named network interfaces.

Action / Repair:

Oracle Clusterware requires that the network interfaces used for the cluster interconnect be named the same on all nodes of the cluster.
 
Needs attention on: (none)
Passed on: Cluster Wide

Status on Cluster Wide:
PASS => Private interconnect interface names are the same across cluster


nerv01 =
nerv03 = eth1
nerv04 = eth1
nerv05 = eth1
nerv02 = eth1
nerv08 =
nerv07 = eth1
nerv06 = eth1

/tmp on dedicated filesystem

Recommendation
 It is a best practice to locate the /tmp directory on a dedicated filesystem. Otherwise, accidentally filling /tmp could fill up the root (/) filesystem as a result of other file management (logs, traces, etc.) and lead to availability problems. For example, Oracle creates socket files in /tmp. Make sure 1GB of free space is maintained in /tmp.
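The check reduces to whether /tmp is its own mount point; a minimal sketch:

    # If the filesystem holding /tmp is not mounted at /tmp itself,
    # /tmp shares a filesystem (typically /) and the warning applies.
    mp=$(df -P /tmp | awk 'NR==2 {print $6}')
    [ "$mp" = "/tmp" ] && echo "/tmp is dedicated" || echo "WARNING: /tmp is on $mp"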
 
Needs attention on: nerv05, nerv06
Passed on: nerv01, nerv03, nerv04, nerv02, nerv08, nerv07

Status on nerv01:
PASS => /tmp is on a dedicated filesystem


DATA FROM NERV01 - /TMP ON DEDICATED FILESYSTEM 



/dev/sda7              5039616    145016   4638600   4% /tmp

Status on nerv03:
PASS => /tmp is on a dedicated filesystem


DATA FROM NERV03 - /TMP ON DEDICATED FILESYSTEM 



/dev/sda7              5039616    407388   4376228   9% /tmp

Status on nerv04:
PASS => /tmp is on a dedicated filesystem


DATA FROM NERV04 - /TMP ON DEDICATED FILESYSTEM 



/dev/sda7              5039616    633472   4150144  14% /tmp

Status on nerv05:
WARNING => /tmp is NOT on a dedicated filesystem


DATA FROM NERV05 - /TMP ON DEDICATED FILESYSTEM 




Status on nerv02:
PASS => /tmp is on a dedicated filesystem


DATA FROM NERV02 - /TMP ON DEDICATED FILESYSTEM 



/dev/sda7              5039616    145156   4638460   4% /tmp

Status on nerv08:
PASS => /tmp is on a dedicated filesystem


DATA FROM NERV08 - /TMP ON DEDICATED FILESYSTEM 



/dev/sda7              5039616    147516   4636100   4% /tmp

Status on nerv07:
PASS => /tmp is on a dedicated filesystem


DATA FROM NERV07 - /TMP ON DEDICATED FILESYSTEM 



/dev/sda7              5039616    145088   4638528   4% /tmp

Status on nerv06:
WARNING => /tmp is NOT on a dedicated filesystem


DATA FROM NERV06 - /TMP ON DEDICATED FILESYSTEM 




TFA Collector status

Recommendation
 TFA Collector (aka TFA) is a diagnostic collection utility that simplifies diagnostic data collection on Oracle Clusterware/Grid Infrastructure and RAC systems. TFA is similar to the diagcollection utility packaged with Oracle Clusterware in that it collects and packages diagnostic data; however, TFA is MUCH more powerful than diagcollection, with its ability to centralize and automate the collection of diagnostic information. This helps speed up the data collection and upload process with Oracle Support, minimizing delays in data requests and analysis. (A status-check sketch follows the list below.)
TFA provides the following key benefits:
  - Encapsulates diagnostic data collection for all CRS/GI and RAC components on all cluster nodes into a single command executed from a single node
  - Ability to "trim" diagnostic files during data collection to reduce data upload size
  - Ability to isolate diagnostic data collection to a given time period
  - Ability to centralize collected diagnostic output to a single server in the cluster
  - Ability to isolate diagnostic collection to a particular product component, e.g. ASM, RDBMS, Clusterware
  - Optional real-time scan of alert logs for conditions indicating a problem (DB alert logs, ASM alert logs, Clusterware alert logs, etc.)
  - Optional automatic data collection based off of real-time scan findings
  - Optional on-demand scan (user initiated) of all log and trace files for conditions indicating a problem
  - Optional automatic data collection based off of on-demand scan findings
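The status check itself mirrors the data collected below; a minimal sketch:

    # Is the TFA init script in place?
    ls -l /etc/init.d/init.tfa
    # Is the TFAMain process running?
    ps -ef | grep -w TFAMain | grep -v grep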
 
Needs attention on: nerv01, nerv08
Passed on: nerv03, nerv04, nerv05, nerv02, nerv07, nerv06

Status on nerv01:
WARNING => TFA Collector is either not installed or not running


DATA FROM NERV01 - TFA COLLECTOR STATUS 




ls: cannot access /etc/init.d/init.tfa: No such file or directory 

ps -ef |grep -v grep|grep -w TFAMain returned no rows

Status on nerv03:
PASS => TFA Collector is installed and running


DATA FROM NERV03 - TFA COLLECTOR STATUS 




-rwxr-xr-x 1 root root 10436 Sep 20 16:48 /etc/init.d/init.tfa 

root      1449     1  0 Sep24 ?        00:02:17 /u01/app/11.2.0/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid/tfa/nerv03/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid/tfa/nerv03/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid/tfa/nerv03/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid/tfa/nerv03/tfa_home

Status on nerv04:
PASS => TFA Collector is installed and running


DATA FROM NERV04 - TFA COLLECTOR STATUS 




-rwxr-xr-x. 1 root root 10436 Sep 20 17:00 /etc/init.d/init.tfa 

root      1371     1  0 Sep24 ?        00:02:13 /u01/app/11.2.0/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid/tfa/nerv04/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid/tfa/nerv04/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid/tfa/nerv04/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid/tfa/nerv04/tfa_home

Status on nerv05:
PASS => TFA Collector is installed and running


DATA FROM NERV05 - TFA COLLECTOR STATUS 




-rwxr-xr-x 1 root root 10436 Sep 23 15:40 /etc/init.d/init.tfa 

root      1405     1  0 Sep24 ?        00:02:03 /u01/app/11.2.0/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid/tfa/nerv05/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid/tfa/nerv05/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid/tfa/nerv05/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid/tfa/nerv05/tfa_home

Status on nerv02:
PASS => TFA Collector is installed and running


DATA FROM NERV02 - TFA COLLECTOR STATUS 




-rwxr-xr-x 1 root root 10436 Sep 23 16:33 /etc/init.d/init.tfa 

root      1432     1  0 Sep24 ?        00:02:29 /u01/app/11.2.0/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid/tfa/nerv02/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid/tfa/nerv02/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid/tfa/nerv02/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid/tfa/nerv02/tfa_home

Status on nerv08:
WARNING => TFA Collector is either not installed or not running


DATA FROM NERV08 - TFA COLLECTOR STATUS 




ls: cannot access /etc/init.d/init.tfa: No such file or directory 

ps -ef |grep -v grep|grep -w TFAMain returned no rows

Status on nerv07:
PASS => TFA Collector is installed and running


DATA FROM NERV07 - TFA COLLECTOR STATUS 




-rwxr-xr-x 1 root root 10436 Sep 23 17:49 /etc/init.d/init.tfa 

root      1587     1  0 Sep24 ?        00:05:11 /u01/app/11.2.0/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid/tfa/nerv07/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid/tfa/nerv07/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid/tfa/nerv07/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid/tfa/nerv07/tfa_home

Status on nerv06:
PASS => TFA Collector is installed and running


DATA FROM NERV06 - TFA COLLECTOR STATUS 




-rwxr-xr-x 1 root root 10436 Sep 23 19:43 /etc/init.d/init.tfa 

root      1177     1  0 Sep24 ?        00:01:55 /u01/app/11.2.0/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid/tfa/nerv06/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid/tfa/nerv06/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid/tfa/nerv06/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid/tfa/nerv06/tfa_home

ohasd Log File Ownership

Success Factor: VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics could be unavailable when needed by Support. These logs are rotated periodically to keep them from growing unmanageably large; if the ownership of the files is incorrect when it is time to rotate them, that operation can fail. While that does not affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics. So it would be wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
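To find offending files in one pass instead of eyeballing the four listings, something like the following works (a sketch; assumes GRID_HOME is set as in the commands above):

    # Print any clusterware log in the four directories not owned root:root
    for d in crsd ohasd agent/crsd/orarootagent_root agent/ohasd/orarootagent_root; do
      find "$GRID_HOME/log/$(hostname)/$d" \( ! -user root -o ! -group root \) -ls
    done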
 
Needs attention on: nerv03, nerv05
Passed on: nerv01, nerv04, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM NERV01 - OHASD LOG FILE OWNERSHIP 



total 2680
-rw-r--r-- 1 root root 2739523 Sep 25 06:02 ohasd.log
-rw-r--r-- 1 root root     546 Sep 24 17:37 ohasdOUT.log

Status on nerv03:
WARNING => ohasd Log Ownership is NOT Correct (should be root root)


DATA FROM NERV03 - OHASD LOG FILE OWNERSHIP 



total 10260
-rw-r--r-- 1 oracle oinstall 10490551 Sep 25 06:05 ohasd.l01
-rw-r--r-- 1 root   root         3457 Sep 25 06:11 ohasd.log
-rw-r--r-- 1 oracle oinstall     2366 Sep 24 17:39 ohasdOUT.log

Status on nerv04:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM NERV04 - OHASD LOG FILE OWNERSHIP 



total 9488
-rw-r--r--. 1 root root 9706728 Sep 25 06:24 ohasd.log
-rw-r--r--. 1 root root    1820 Sep 24 17:39 ohasdOUT.log

Status on nerv05:
WARNING => ohasd Log Ownership is NOT Correct (should be root root)


DATA FROM NERV05 - OHASD LOG FILE OWNERSHIP 



total 7388
-rw-r--r-- 1 oracle oinstall 7554696 Sep 25 06:36 ohasd.log
-rw-r--r-- 1 oracle oinstall     910 Sep 24 17:38 ohasdOUT.log

Status on nerv02:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM NERV02 - OHASD LOG FILE OWNERSHIP 



total 2672
-rw-r--r-- 1 root root 2728721 Sep 25 06:51 ohasd.log
-rw-r--r-- 1 root root     546 Sep 24 17:35 ohasdOUT.log

Status on nerv08:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM NERV08 - OHASD LOG FILE OWNERSHIP 



total 8492
-rw-r--r-- 1 root root 8685653 Sep 25 07:03 ohasd.log
-rw-r--r-- 1 root root     182 Sep 23 17:01 ohasdOUT.log

Status on nerv07:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM NERV07 - OHASD LOG FILE OWNERSHIP 



total 6084
-rw-r--r-- 1 root root 6221637 Sep 25 07:16 ohasd.log
-rw-r--r-- 1 root root     364 Sep 24 17:29 ohasdOUT.log

Status on nerv06:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM NERV06 - OHASD LOG FILE OWNERSHIP 



total 5516
-rw-r--r-- 1 root root 5639729 Sep 25 07:26 ohasd.log
-rw-r--r-- 1 root root     364 Sep 24 17:38 ohasdOUT.log

ohasd/orarootagent_root Log File Ownership

Success Factor VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics may not be available when needed by Support.  These logs are rotated periodically to keep them from growing unmanageably large, and if the ownership of the files is incorrect when it is time to rotate the logs, that operation can fail.  While a failed rotation does not affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics, so it is wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root, then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
 
Links
  • Oracle Bug # 9837321 - OWNERSHIP OF CRSD TRACES GOT CHANGE FROM ROOT TO ORACLE BY PATCHING SCRIPT
Needs attention on nerv03, nerv05
Passed on nerv01, nerv04, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV01 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 6620
-rw-r--r-- 1 root root 6770257 Sep 25 06:03 orarootagent_root.log
-rw-r--r-- 1 root root       5 Sep 24 17:37 orarootagent_root.pid
-rw-r--r-- 1 root root       0 Sep 23 15:00 orarootagent_rootOUT.log

Status on nerv03:
WARNING => ohasd/orarootagent_root Log Ownership is NOT Correct (should be root root)


DATA FROM NERV03 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 18732
-rw-r--r-- 1 oracle oinstall 10492515 Sep 23 01:19 orarootagent_root.l01
-rw-r--r-- 1 oracle oinstall  8672555 Sep 25 06:11 orarootagent_root.log
-rw-r--r-- 1 oracle oinstall        5 Sep 24 17:39 orarootagent_root.pid
-rw-r--r-- 1 oracle oinstall        0 Sep 20 16:53 orarootagent_rootOUT.log

Status on nerv04:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV04 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 18168
-rw-r--r--. 1 root root 10496588 Sep 23 07:45 orarootagent_root.l01
-rw-r--r--  1 root root  8090921 Sep 25 06:24 orarootagent_root.log
-rw-r--r--. 1 root root        5 Sep 24 17:39 orarootagent_root.pid
-rw-r--r--. 1 root root        0 Sep 20 17:04 orarootagent_rootOUT.log

Status on nerv05:
WARNING => ohasd/orarootagent_root Log Ownership is NOT Correct (should be root root)


DATA FROM NERV05 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 8452
-rw-r--r-- 1 oracle oinstall 8643480 Sep 25 06:37 orarootagent_root.log
-rw-r--r-- 1 oracle oinstall       5 Sep 24 17:38 orarootagent_root.pid
-rw-r--r-- 1 oracle oinstall       0 Sep 22 17:15 orarootagent_rootOUT.log

Status on nerv02:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV02 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 6532
-rw-r--r-- 1 root root 6679598 Sep 25 06:51 orarootagent_root.log
-rw-r--r-- 1 root root       5 Sep 24 17:35 orarootagent_root.pid
-rw-r--r-- 1 root root       0 Sep 23 16:38 orarootagent_rootOUT.log

Status on nerv08:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV08 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 6348
-rw-r--r-- 1 root root 6488563 Sep 25 07:03 orarootagent_root.log
-rw-r--r-- 1 root root       6 Sep 23 17:07 orarootagent_root.pid
-rw-r--r-- 1 root root       0 Sep 23 17:06 orarootagent_rootOUT.log

Status on nerv07:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV07 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 6148
-rw-r--r-- 1 root root 6284189 Sep 25 07:16 orarootagent_root.log
-rw-r--r-- 1 root root       5 Sep 24 17:29 orarootagent_root.pid
-rw-r--r-- 1 root root       0 Sep 23 17:59 orarootagent_rootOUT.log

Status on nerv06:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV06 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 6060
-rw-r--r-- 1 root root 6196342 Sep 25 07:27 orarootagent_root.log
-rw-r--r-- 1 root root       5 Sep 24 17:38 orarootagent_root.pid
-rw-r--r-- 1 root root       0 Sep 23 19:48 orarootagent_rootOUT.log
Top

crsd/orarootagent_root Log File Ownership

Success Factor VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics may not be available when needed by Support.  These logs are rotated periodically to keep them from growing unmanageably large, and if the ownership of the files is incorrect when it is time to rotate the logs, that operation can fail.  While a failed rotation does not affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics, so it is wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root, then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
 
Links
Needs attention on nerv03
Passed on nerv01, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV01 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 88616
-rw-r--r-- 1 root root 10508336 Sep 25 03:12 orarootagent_root.l01
-rw-r--r-- 1 root root 10507104 Sep 24 22:41 orarootagent_root.l02
-rw-r--r-- 1 root root 10488279 Sep 24 18:09 orarootagent_root.l03
-rw-r--r-- 1 root root 10507323 Sep 24 13:35 orarootagent_root.l04
-rw-r--r-- 1 root root 10506229 Sep 24 09:05 orarootagent_root.l05
-rw-r--r-- 1 root root 10506866 Sep 24 04:36 orarootagent_root.l06
-rw-r--r-- 1 root root 10507352 Sep 24 00:07 orarootagent_root.l07
-rw-r--r-- 1 root root 10503371 Sep 23 19:38 orarootagent_root.l08
-rw-r--r-- 1 root root  6650938 Sep 25 06:03 orarootagent_root.log
-rw-r--r-- 1 root root        5 Sep 24 17:39 orarootagent_root.pid
-rw-r--r-- 1 root root        0 Sep 23 15:30 orarootagent_rootOUT.log

Status on nerv03:
WARNING => crsd/orarootagent_root Log Ownership is NOT Correct (should be root root)


DATA FROM NERV03 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 80312
-rw-r--r-- 1 oracle oinstall 10518383 Sep 24 03:32 orarootagent_root.l01
-rw-r--r-- 1 oracle oinstall 10513697 Sep 23 12:15 orarootagent_root.l02
-rw-r--r-- 1 oracle oinstall 10485988 Sep 22 15:38 orarootagent_root.l03
-rw-r--r-- 1 oracle oinstall 10523211 Sep 22 09:35 orarootagent_root.l04
-rw-r--r-- 1 oracle oinstall 10525157 Sep 22 05:05 orarootagent_root.l05
-rw-r--r-- 1 oracle oinstall 10525394 Sep 22 00:34 orarootagent_root.l06
-rw-r--r-- 1 oracle oinstall 10505168 Sep 21 20:04 orarootagent_root.l07
-rw-r--r-- 1 root   root      8597279 Sep 25 06:11 orarootagent_root.log
-rw-r--r-- 1 oracle oinstall        5 Sep 24 17:40 orarootagent_root.pid
-rw-r--r-- 1 oracle oinstall        0 Sep 21 09:37 orarootagent_rootOUT.log

Status on nerv04:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV04 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 33116
-rw-r--r--  1 root root 10489924 Sep 24 23:14 orarootagent_root.l01
-rw-r--r--  1 root root 10502799 Sep 23 14:42 orarootagent_root.l02
-rw-r--r--. 1 root root 10488311 Sep 22 14:06 orarootagent_root.l03
-rw-r--r--  1 root root  2400018 Sep 25 06:24 orarootagent_root.log
-rw-r--r--. 1 root root        5 Sep 24 17:40 orarootagent_root.pid
-rw-r--r--  1 root root        0 Sep 21 09:38 orarootagent_rootOUT.log

Status on nerv05:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV05 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 48992
-rw-r--r-- 1 root root 10514496 Sep 24 05:18 orarootagent_root.l01
-rw-r--r-- 1 root root 10558537 Sep 23 08:55 orarootagent_root.l02
-rw-r--r-- 1 root root 10558709 Sep 23 03:46 orarootagent_root.l03
-rw-r--r-- 1 root root 10557962 Sep 22 22:36 orarootagent_root.l04
-rw-r--r-- 1 root root  7945617 Sep 25 06:36 orarootagent_root.log
-rw-r--r-- 1 root root        5 Sep 24 17:40 orarootagent_root.pid
-rw-r--r-- 1 root root        0 Sep 22 17:28 orarootagent_rootOUT.log

Status on nerv02:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV02 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 12296
-rw-r--r-- 1 root root 10492134 Sep 25 00:42 orarootagent_root.l01
-rw-r--r-- 1 root root  2088222 Sep 25 06:51 orarootagent_root.log
-rw-r--r-- 1 root root        5 Sep 24 17:40 orarootagent_root.pid
-rw-r--r-- 1 root root        0 Sep 23 19:22 orarootagent_rootOUT.log

Status on nerv08:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV08 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 11912
-rw-r--r-- 1 root root 10546665 Sep 25 01:52 orarootagent_root.l01
-rw-r--r-- 1 root root  1635337 Sep 25 07:03 orarootagent_root.log
-rw-r--r-- 1 root root        6 Sep 23 17:06 orarootagent_root.pid

Status on nerv07:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV07 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 11276
-rw-r--r-- 1 root root 10503476 Sep 25 03:56 orarootagent_root.l01
-rw-r--r-- 1 root root  1025821 Sep 25 07:16 orarootagent_root.log
-rw-r--r-- 1 root root        5 Sep 24 17:40 orarootagent_root.pid
-rw-r--r-- 1 root root        0 Sep 24 17:40 orarootagent_rootOUT.log

Status on nerv06:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV06 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 10728
-rw-r--r-- 1 root root 10512292 Sep 25 05:58 orarootagent_root.l01
-rw-r--r-- 1 root root   458343 Sep 25 07:26 orarootagent_root.log
-rw-r--r-- 1 root root        5 Sep 24 17:40 orarootagent_root.pid
-rw-r--r-- 1 root root        0 Sep 24 17:40 orarootagent_rootOUT.log
Top

crsd Log File Ownership

Success Factor VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 The CRSD trace files should be owned by "root:root", but due to Bug 9837321 the application of a patch may have changed the trace file ownership for patching without changing it back.
 
Links
  • Oracle Bug # 9837321 - Ownership of crsd traces gets changed from root to oracle by patching script
Needs attention on nerv03
Passed on nerv01, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM NERV01 - CRSD LOG FILE OWNERSHIP 



total 1904
-rw-r--r-- 1 root root 1939004 Sep 25 06:03 crsd.log
-rw-r--r-- 1 root root     345 Sep 24 17:38 crsdOUT.log

Status on nerv03:
WARNING => crsd Log Ownership is NOT Correct (should be root root)


DATA FROM NERV03 - CRSD LOG FILE OWNERSHIP 



total 10132
-rw-r--r-- 1 oracle oinstall 10364419 Sep 25 06:11 crsd.log
-rw-r--r-- 1 oracle oinstall     1840 Sep 24 17:39 crsdOUT.log

Status on nerv04:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM NERV04 - CRSD LOG FILE OWNERSHIP 



total 14424
-rw-r--r--. 1 root root 10511055 Sep 23 18:34 crsd.l01
-rw-r--r--  1 root root  4240421 Sep 25 06:24 crsd.log
-rw-r--r--. 1 root root     2180 Sep 24 17:39 crsdOUT.log

Status on nerv05:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM NERV05 - CRSD LOG FILE OWNERSHIP 



total 1684
-rw-r--r-- 1 root root 1716283 Sep 25 06:36 crsd.log
-rw-r--r-- 1 root root     575 Sep 24 17:38 crsdOUT.log

Status on nerv02:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM NERV02 - CRSD LOG FILE OWNERSHIP 



total 1000
-rw-r--r-- 1 root root 1018670 Sep 25 06:51 crsd.log
-rw-r--r-- 1 root root     826 Sep 24 17:39 crsdOUT.log

Status on nerv08:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM NERV08 - CRSD LOG FILE OWNERSHIP 



total 2804
-rw-r--r-- 1 root root 2862387 Sep 25 07:03 crsd.log
-rw-r--r-- 1 root root     115 Sep 23 17:06 crsdOUT.log

Status on nerv07:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM NERV07 - CRSD LOG FILE OWNERSHIP 



total 996
-rw-r--r-- 1 root root 1012712 Sep 25 07:16 crsd.log
-rw-r--r-- 1 root root     230 Sep 24 17:39 crsdOUT.log

Status on nerv06:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM NERV06 - CRSD LOG FILE OWNERSHIP 



total 988
-rw-r--r-- 1 root root 1005680 Sep 25 07:26 crsd.log
-rw-r--r-- 1 root root     230 Sep 24 17:39 crsdOUT.log
Top

oradism executable ownership

Success Factor VERIFY OWNERSHIP OF ORADISM EXECUTABLE IF LMS PROCESS NOT RUNNING IN REAL TIME
Recommendation
 Benefit / Impact:

The oradism executable is invoked after database startup to change the scheduling priority of LMS and other database background processes to the realtime scheduling class in order to maximize the ability of these key processes to be scheduled on the CPU in a timely way at times of high CPU utilization.

Risk:

The oradism executable should be owned by root and the owner's setuid bit should be set, e.g. -rwsr-x---, where the s is the setuid bit (s-bit) for root in this case.  If the LMS process is not running at the proper scheduling priority, it can lead to instance evictions due to IPC send timeouts or ORA-29740 errors.  oradism must be owned by root with its s-bit set in order to be able to change the scheduling priority.  If oradism is not owned by root and the owner s-bit is not set, then something went wrong in the installation process, or the ownership or permission was otherwise changed.

Action / Repair:

Please check with Oracle Support to determine the best course to take for your platform to correct the problem.
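
A quick way to confirm whether a given database home is affected is to test both conditions directly; the following is a minimal sketch, assuming ORACLE_HOME points at the database home being checked:

# Report oradism status; root ownership and the setuid bit are both required.
f=$ORACLE_HOME/bin/oradism
if [ "$(stat -c '%U' "$f")" = "root" ] && [ -u "$f" ]
then echo "OK:      $(ls -l "$f")"
else echo "PROBLEM: $(ls -l "$f")"
fi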
 
Needs attention on nerv03
Passed on nerv01, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM NERV01 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x--- 1 root oinstall 71790 Sep 23 15:06 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv03:
WARNING => $ORACLE_HOME/bin/oradism ownership is NOT root


DATA FROM NERV03 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwxr-x--- 1 oracle oinstall 71790 Aug 24 10:51 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv04:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM NERV04 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x---. 1 root oinstall 71790 Sep 20 17:37 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv05:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM NERV05 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x--- 1 root oinstall 71790 Sep 23 16:03 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv02:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM NERV02 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x--- 1 root oinstall 71790 Sep 23 17:29 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv08:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM NERV08 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x--- 1 root oinstall 71790 Sep 23 17:12 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv07:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM NERV07 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x--- 1 root oinstall 71790 Sep 23 18:29 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv06:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM NERV06 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x--- 1 root oinstall 71790 Sep 23 19:56 /u01/app/oracle/product/11.2.0/db_1/bin/oradism
Top

oradism executable permission

Success Factor VERIFY OWNERSHIP OF ORADISM EXECUTABLE IF LMS PROCESS NOT RUNNING IN REAL TIME
Recommendation
 Benefit / Impact:

The oradism executable is invoked after database startup to change the scheduling priority of LMS and other database background processes to the realtime scheduling class in order to maximize the ability of these key processes to be scheduled on the CPU in a timely way at times of high CPU utilization.

Risk:

The oradism executable should be owned by root and the owner's setuid bit should be set, e.g. -rwsr-x---, where the s is the setuid bit (s-bit) for root in this case.  If the LMS process is not running at the proper scheduling priority, it can lead to instance evictions due to IPC send timeouts or ORA-29740 errors.  oradism must be owned by root with its s-bit set in order to be able to change the scheduling priority.  If oradism is not owned by root and the owner s-bit is not set, then something went wrong in the installation process, or the ownership or permission was otherwise changed.

Action / Repair:

Please check with Oracle Support to determine the best course to take for your platform to correct the problem.
 
Needs attention on nerv03
Passed on nerv01, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM NERV01 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x--- 1 root oinstall 71790 Sep 23 15:06 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv03:
WARNING => $ORACLE_HOME/bin/oradism setuid bit is NOT set


DATA FROM NERV03 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwxr-x--- 1 oracle oinstall 71790 Aug 24 10:51 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv04:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM NERV04 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x---. 1 root oinstall 71790 Sep 20 17:37 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv05:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM NERV05 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x--- 1 root oinstall 71790 Sep 23 16:03 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv02:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM NERV02 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x--- 1 root oinstall 71790 Sep 23 17:29 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv08:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM NERV08 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x--- 1 root oinstall 71790 Sep 23 17:12 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv07:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM NERV07 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x--- 1 root oinstall 71790 Sep 23 18:29 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv06:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM NERV06 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x--- 1 root oinstall 71790 Sep 23 19:56 /u01/app/oracle/product/11.2.0/db_1/bin/oradism
Top

Verify no multiple parameter entries in database init.ora(spfile)

Recommendation
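 The report supplies no recommendation text for this check. As a rough way to reproduce it by hand, the spfile can be dumped to a text pfile and scanned for repeated parameter names; the following is a sketch, assuming a writable /tmp, SYSDBA access, and that any continuation lines in the pfile are reviewed manually:

# Dump the spfile, then list any parameter name that appears more than once.
echo -e "create pfile='/tmp/dup_check.ora' from spfile;" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba"
awk -F= '{gsub(/ /,"",$1); print $1}' /tmp/dup_check.ora | sort | uniq -d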
 
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV01 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC012.__db_cache_size=109051904
rac014.__db_cache_size=104857600
RAC016.__db_cache_size=88080384
RAC015.__db_cache_size=100663296
RAC013.__db_cache_size=71303168
RAC017.__db_cache_size=104857600
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data

Status on nerv03:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV03 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC013.__db_cache_size=71303168
RAC017.__db_cache_size=104857600
RAC012.__db_cache_size=104857600
RAC016.__db_cache_size=83886080
rac014.__db_cache_size=100663296
RAC015.__db_cache_size=92274688
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data

Status on nerv04:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV04 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC013.__db_cache_size=71303168
RAC017.__db_cache_size=104857600
RAC016.__db_cache_size=83886080
rac014.__db_cache_size=100663296
RAC015.__db_cache_size=88080384
RAC012.__db_cache_size=96468992
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data

Status on nerv05:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV05 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC013.__db_cache_size=71303168
RAC017.__db_cache_size=104857600
RAC016.__db_cache_size=83886080
rac014.__db_cache_size=100663296
RAC012.__db_cache_size=96468992
RAC015.__db_cache_size=96468992
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data

Status on nerv02:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV02 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC013.__db_cache_size=71303168
RAC017.__db_cache_size=104857600
rac014.__db_cache_size=100663296
RAC012.__db_cache_size=96468992
RAC016.__db_cache_size=79691776
RAC015.__db_cache_size=88080384
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data

Status on nerv08:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV08 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC016.__db_cache_size=79691776
rac014.__db_cache_size=113246208
RAC012.__db_cache_size=109051904
RAC015.__db_cache_size=100663296
RAC013.__db_cache_size=62914560
RAC017.__db_cache_size=96468992
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data

Status on nerv07:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV07 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC016.__db_cache_size=79691776
RAC012.__db_cache_size=109051904
RAC017.__db_cache_size=92274688
RAC013.__db_cache_size=58720256
RAC015.__db_cache_size=92274688
rac014.__db_cache_size=104857600
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data

Status on nerv06:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV06 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC012.__db_cache_size=109051904
RAC017.__db_cache_size=92274688
RAC013.__db_cache_size=58720256
rac014.__db_cache_size=100663296
RAC016.__db_cache_size=75497472
RAC015.__db_cache_size=88080384
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data
Top

Verify no multiple parameter entries in ASM init.ora(spfile)

Recommendation
 
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV01 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on nerv03:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV03 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on nerv04:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV04 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on nerv05:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV05 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on nerv02:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV02 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on nerv08:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV08 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on nerv07:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV07 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on nerv06:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV06 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
Top

Verify control_file_record_keep_time value is in recommended range

Success Factor ORACLE RECOVERY MANAGER (RMAN) BEST PRACTICES
Recommendation
 Benefit / Impact:

 When a Recovery Manager catalog is not used, the initialization parameter "control_file_record_keep_time" controls the period of time for which circular reuse records are maintained within the database control file. RMAN repository records are kept in circular reuse records.  The optimal setting is the maximum number of days in the past that is required to restore and recover a specific database without the use of an RMAN recovery catalog.  Setting this parameter within the recommended range (1 to 9 days) has been shown to address most recovery scenarios by ensuring that archive log and backup records are not prematurely aged out, which would make database recovery much more challenging.

The impact of verifying that the initialization parameter control_file_record_keep_time value is in the recommended range is minimal. Increasing this value will increase the size of the controlfile and possibly the query time for backup metadata and archive data.

Risk:

If the control_file_record_keep_time is set to 0, no RMAN repository records are retained in the controlfile, which makes database recovery much more challenging if an RMAN recovery catalog is not available.

If the control_file_record_keep_time is set too high, problems can arise with space management within the control file, expansion of the control file, and control file contention issues.


Action / Repair:

To verify that control_file_record_keep_time is within the recommended range, as the owner userid of the oracle home with the environment properly set for the target database, execute the following command set:

CF_RECORD_KEEP_TIME="";
CF_RECORD_KEEP_TIME=$(echo -e "set heading off feedback off\n select value from V\$PARAMETER where name = 'control_file_record_keep_time';" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba");
if [[ $CF_RECORD_KEEP_TIME -ge "1" && $CF_RECORD_KEEP_TIME -le "9" ]]
then echo -e "\nPASS:  control_file_record_keep_time is within recommended range [1-9]:" $CF_RECORD_KEEP_TIME;
elif [ $CF_RECORD_KEEP_TIME -eq "0" ]
then echo -e "\nFAIL:  control_file_record_keep_time is set to zero:" $CF_RECORD_KEEP_TIME;
else echo -e "\nWARNING:  control_file_record_keep_time is not within recommended range [1-9]:" $CF_RECORD_KEEP_TIME;
fi;

The expected output should be:

PASS:  control_file_record_keep_time is within recommended range [1-9]: 7

If the output is not as expected, investigate and correct the condition(s).

NOTE: The use of an RMAN recovery catalog is recommended as the best way to avoid the loss of RMAN metadata because of overwritten control file records.
 
Links
Needs attention on -
Passed on nerv01

Status on nerv01:
PASS => control_file_record_keep_time is within recommended range [1-9] for RAC01


DATA FROM NERV01 - RAC01 DATABASE - VERIFY CONTROL_FILE_RECORD_KEEP_TIME VALUE IS IN RECOMMENDED RANGE 



control_file_record_keep_time = 7
Top

Verify rman controlfile autobackup is set to ON

Success Factor ORACLE RECOVERY MANAGER (RMAN) BEST PRACTICES
Recommendation
 Benefit / Impact:

The control file is a binary file that records the physical structure of the database and contains important metadata required to recover the database. The database cannot start up or stay up unless all control files are valid. When a Recovery Manager catalog is not used, the control file is needed for database recovery because it contains all backup and recovery metadata.

The impact of verifying and setting "CONTROLFILE AUTOBACKUP" to "ON" is minimal. 

Risk:

When a Recovery Manager catalog is not used, loss of the controlfile results in loss of all backup and recovery metadata, which makes for a much more challenging database recovery operation.

Action / Repair:

To verify that RMAN "CONTROLFILE AUTOBACKUP" is set to "ON", as the owner userid of the oracle home with the environment properly set for the target database, execute the following command set:

RMAN_AUTOBACKUP_STATUS="";
RMAN_AUTOBACKUP_STATUS=$(echo -e "set heading off feedback off\n select value from V\$RMAN_CONFIGURATION where name = 'CONTROLFILE AUTOBACKUP';" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba");
if [ -n "$RMAN_AUTOBACKUP_STATUS" ] && [ "$RMAN_AUTOBACKUP_STATUS" = "ON" ]
then echo -e "\nPASS:  RMAN "CONTROLFILE AUTOBACKUP" is set to \"ON\":" $RMAN_AUTOBACKUP_STATUS;
else
echo -e "\nFAIL:  RMAN "CONTROLFILE AUTOBACKUP" should be set to \"ON\":" $RMAN_AUTOBACKUP_STATUS;
fi;

The expected output should be:

PASS:  RMAN CONTROLFILE AUTOBACKUP is set to "ON": ON

If the output is not as expected, investigate and correct the condition(s).

For additional information, review information on CONFIGURE syntax in Oracle® Database Backup and Recovery Reference 11g Release 2 (11.2).

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

NOTE: Oracle MAA also recommends periodically backing up the controlfile to trace as additional backup.

SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
 
Needs attention on RAC01
Passed on -

Status on RAC01:
WARNING => RMAN controlfile autobackup should be set to ON


DATA FOR RAC01 FOR VERIFY RMAN CONTROLFILE AUTOBACKUP IS SET TO ON 



Top

Verify the Fast Recovery Area (FRA) has reclaimable space

Success Factor ORACLE RECOVERY MANAGER (RMAN) BEST PRACTICES
Recommendation
 Benefit / Impact:

 Oracle's Fast Recovery Area (FRA) manages archivelog files, flashback logs, and RMAN backups. Before RMAN's space management can clean up files according to your configured retention and deletion policies, the database needs to be backed up periodically. Without these backups, the FRA can run out of available space, resulting in a database hang because it cannot archive locally.

The impact of verifying that the Fast Recovery Area (FRA) has reclaimable space is minimal.

Risk:

If the Fast Recovery Area (FRA) space management function has no space available to reclaim, the database may hang because it cannot archive a log to the FRA.

Action / Repair:

To verify that the FRA space management function is not blocked, as the owner userid of the oracle home with the environment properly set for the target database, execute the following command set:

PROBLEM_FILE_TYPES_PRESENT=$(echo -e "set heading off feedback off\n select count(*) from V\$FLASH_RECOVERY_AREA_USAGE where file_type in ('ARCHIVED LOG', 'BACKUP PIECE', 'IMAGE COPY') and number_of_files > 0 ;" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba");
RMAN_BACKUP_WITHIN_30_DAYS=$(echo -e "set heading off feedback off\n select count(*) from V\$BACKUP_SET where completion_time > sysdate-30;" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba");
if [ $PROBLEM_FILE_TYPES_PRESENT -eq "0" ]
then echo -e "\nThis check is not applicable because file types 'ARCHIVED LOG', 'BACKUP PIECE', or 'IMAGE COPY' are not present in V\$FLASH_RECOVERY_AREA_USAGE";
else if [[ $PROBLEM_FILE_TYPES_PRESENT -ge "1" && $RMAN_BACKUP_WITHIN_30_DAYS -ge "1" ]]
then echo -e "\nPASS:  FRA space management problem file types are present with an RMAN backup completion within the last 30 days."
else echo -e "\nFAIL:  FRA space management problem file types are present without an RMAN backup completion within the last 7 days."
fi;
fi;

The expected output should be:

PASS:  FRA space management problem file types are present with an RMAN backup completion within the last 30 days.

If the output is not as expected, investigate and correct the condition(s).
 
Links
Needs attention on RAC01
Passed on -

Status on RAC01:
WARNING => Fast Recovery Area (FRA) should have sufficient reclaimable space


DATA FOR RAC01 FOR VERIFY THE FAST RECOVERY AREA (FRA) HAS RECLAIMABLE SPACE 




rman_backup_within_30_days = 0                                                  
Top

Registered diskgroups in clusterware registry

Recommendation
 Benefit / Impact:

Risk:

Action / Repair:
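
A rough manual equivalent of this check (a sketch, assuming GRID_HOME is set and the environment points at the local +ASM instance) is to compare the two lists directly:

# Diskgroups known to the local ASM instance.
echo -e "set heading off feedback off\n select name from v\$asm_diskgroup;" | $GRID_HOME/bin/sqlplus -s "/ as sysasm"

# Diskgroup resources registered with the clusterware (ora.<DG>.dg).
$GRID_HOME/bin/crsctl stat res | awk -F= '/^NAME=ora\..*\.dg$/ {print $2}'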
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV01 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA

Status on nerv03:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV03 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA

Status on nerv04:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV04 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA

Status on nerv05:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV05 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA

Status on nerv02:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV02 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA

Status on nerv08:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV08 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA

Status on nerv07:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV07 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA

Status on nerv06:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV06 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA
Top

rp_filter for bonded private interconnects

Recommendation
 As a consequence of having rp_filter set to 1, interconnect packets may be blocked or discarded.

To fix this problem, follow the MOS note referenced under Links.
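
For reference, the commonly documented remedy is to relax reverse-path filtering on the private interconnect interfaces only; a sketch, assuming eth1 is the private NIC (adjust to your environment):

# /etc/sysctl.conf: loose reverse-path filtering (2), or 0 to disable,
# for the private interconnect NICs only.
net.ipv4.conf.eth1.rp_filter = 2

# Apply without a reboot, then verify.
/sbin/sysctl -p
/sbin/sysctl -n net.ipv4.conf.eth1.rp_filter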
 
Links
Needs attention on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on -

Status on nerv01:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV01 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1

Status on nerv03:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV03 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1

Status on nerv04:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV04 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1

Status on nerv05:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV05 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1

Status on nerv02:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV02 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1

Status on nerv08:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV08 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1

Status on nerv07:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV07 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1

Status on nerv06:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV06 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1
Top

Check for parameter cvuqdisk|1.0.9|1|x86_64

Recommendation
 Install the operating system package cvuqdisk. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility. Use the cvuqdisk rpm for your hardware (for example, x86_64 or i386).
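
For reference, a sketch of the usual install procedure, assuming the rpm is taken from the Grid Infrastructure installation media and oinstall is the owning group (run as root):

# Set the group that will own cvuqdisk, then install the package.
CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
rpm -iv cvuqdisk-1.0.9-1.rpm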
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on nerv03:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on nerv04:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on nerv05:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on nerv02:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on nerv08:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on nerv07:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on nerv06:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64
Top

OLR Integrity

Recommendation
 Any kind of OLR corruption should be remedied before attempting an upgrade; otherwise the 11.2 GI rootupgrade.sh fails with "Invalid OLR during upgrade".
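
The per-node data below comes from the local registry check; to run it by hand (as root, assuming GRID_HOME is the Grid Infrastructure home):

# Verify the integrity of the Oracle Local Registry on this node.
$GRID_HOME/bin/ocrcheck -local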
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv08, nerv07, nerv06

Status on nerv01:
PASS => OLR Integrity check Succeeded


DATA FROM NERV01 FOR OLR INTEGRITY 



Status of Oracle Local Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2760
	 Available space (kbytes) :     259360
	 ID                       : 1652380693
	 Device/File Name         : /u01/app/11.2.0/grid/cdata/nerv01.olr
                                    Device/File integrity check succeeded

	 Local registry integrity check succeeded

	 Logical corruption check succeeded


Status on nerv03:
PASS => OLR Integrity check Succeeded


DATA FROM NERV03 FOR OLR INTEGRITY 



Status of Oracle Local Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2760
	 Available space (kbytes) :     259360
	 ID                       : 2099673501
	 Device/File Name         : /u01/app/11.2.0/grid/cdata/nerv03.olr
                                    Device/File integrity check succeeded

	 Local registry integrity check succeeded

	 Logical corruption check succeeded


Status on nerv08:
PASS => OLR Integrity check Succeeded


DATA FROM NERV08 FOR OLR INTEGRITY 



Status of Oracle Local Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2760
	 Available space (kbytes) :     259360
	 ID                       :  494538828
	 Device/File Name         : /u01/app/11.2.0/grid/cdata/nerv08.olr
                                    Device/File integrity check succeeded

	 Local registry integrity check succeeded

	 Logical corruption check succeeded


Status on nerv07:
PASS => OLR Integrity check Succeeded


DATA FROM NERV07 FOR OLR INTEGRITY 



Status of Oracle Local Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2760
	 Available space (kbytes) :     259360
	 ID                       : 1736539511
	 Device/File Name         : /u01/app/11.2.0/grid/cdata/nerv07.olr
                                    Device/File integrity check succeeded

	 Local registry integrity check succeeded

	 Logical corruption check succeeded


Status on nerv06:
PASS => OLR Integrity check Succeeded


DATA FROM NERV06 FOR OLR INTEGRITY 



Status of Oracle Local Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2760
	 Available space (kbytes) :     259360
	 ID                       :   47187088
	 Device/File Name         : /u01/app/11.2.0/grid/cdata/nerv06.olr
                                    Device/File integrity check succeeded

	 Local registry integrity check succeeded

	 Logical corruption check succeeded

Top

pam_limits check

Recommendation
 This is required to make the shell limits work properly and applies to 10g, 11g and 12c.

Add the following line to the /etc/pam.d/login file, if it does not already exist:

session    required     pam_limits.so
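
A quick check that the entry is in place (no output means it is missing):

grep pam_limits /etc/pam.d/login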

 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV01 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on nerv03:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV03 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on nerv04:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV04 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on nerv05:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV05 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on nerv02:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV02 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on nerv08:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV08 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on nerv07:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV07 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on nerv06:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV06 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so
Top

Verify vm.min_free_kbytes

Recommendation
  Benefit / Impact:

Maintaining vm.min_free_kbytes=524288 (512MB) helps a Linux system reclaim memory faster and avoid LowMem pressure issues, which can lead to node eviction or other outage or performance issues.

The impact of verifying vm.min_free_kbytes=524288 is minimal. Adjusting the parameter requires editing the /etc/sysctl.conf file and rebooting the system. It is possible, but not recommended, especially for a system already under LowMem pressure, to modify the setting interactively; even then, a reboot should still be performed to make sure the setting is retained.

Risk:

Exposure to unexpected node eviction and reboot.

Action / Repair:

To verify that vm.min_free_kbytes is properly set to 524288, execute the following commands:

/sbin/sysctl -n vm.min_free_kbytes

cat /proc/sys/vm/min_free_kbytes

If the output is not as expected, investigate and correct the condition. For example, if the value is incorrect in /etc/sysctl.conf and the running kernel value matches that incorrect setting, simply edit the /etc/sysctl.conf file to include the line "vm.min_free_kbytes = 524288" and reboot the node.
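
As a minimal sketch of that repair (run as root; it assumes no vm.min_free_kbytes line already exists in /etc/sysctl.conf, otherwise edit the existing line instead of appending):

# Persist the recommended value and load it into the running kernel;
# a reboot is still recommended afterwards.
echo "vm.min_free_kbytes = 524288" >> /etc/sysctl.conf
/sbin/sysctl -p
/sbin/sysctl -n vm.min_free_kbytes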
 
Links
Needs attention on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on -

Status on nerv01:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV01 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 5380

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 5380

Status on nerv03:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV03 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 5380

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 5380

Status on nerv04:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV04 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 5380

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 5380

Status on nerv05:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV05 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 8115

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 8115

Status on nerv02:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV02 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 5380

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 5380

Status on nerv08:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV08 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 5681

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 5681

Status on nerv07:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV07 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 5681

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 5681

Status on nerv06:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV06 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 8115

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 8115
Top

Verify data files are recoverable

Success Factor: DATA CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 Benefit / Impact:

When you perform a DML or DDL operation using the NOLOGGING or UNRECOVERABLE clause, database backups made prior to the unrecoverable operation are invalidated and new backups are required. You can specify the SQL ALTER DATABASE or SQL ALTER TABLESPACE statement with the FORCE LOGGING clause to override the NOLOGGING setting; however, this statement will not repair a database that is already invalid.

Risk:

Changes under NOLOGGING will not be available after executing database recovery from a backup made prior to the unrecoverable change.

Action / Repair:

To verify that the data files are recoverable, execute the following SQL*Plus command as the userid that owns the Oracle home for the database:
select file#, unrecoverable_time, unrecoverable_change# from v$datafile where unrecoverable_time is not null;
If there are any unrecoverable actions, the output will be similar to:
     FILE# UNRECOVER UNRECOVERABLE_CHANGE#
---------- --------- ---------------------
        11 14-JAN-13               8530544
If nologging changes have occurred and the data must be recoverable, a backup of the datafiles containing those nologging operations should be taken immediately. Please consult the Backup and Recovery User's Guide for the specific steps to resolve files that have unrecoverable changes.

The standard best practice is to enable FORCE LOGGING at the database level (ALTER DATABASE FORCE LOGGING;) to ensure that all transactions are recoverable. However, placing a database in force logging mode for ETL operations can lead to unnecessary database overhead. MAA best practices call for isolating data that does not need to be recoverable. Such data would include:

Data resulting from temporary loads
Data resulting from transient transformations
Any non-critical data

To reduce unnecessary redo generation, do the following:

Specify FORCE LOGGING for all tablespaces that you explicitly wish to protect (ALTER TABLESPACE <tablespace_name> FORCE LOGGING;).
Specify NO FORCE LOGGING for those tablespaces that do not need protection (ALTER TABLESPACE <tablespace_name> NO FORCE LOGGING;).
Disable force logging at the database level (ALTER DATABASE NO FORCE LOGGING;); otherwise, the database-level setting will override the tablespace settings.

Once the above is complete, redo logging will function as follows:

Explicit no logging operations on objects in the no logging tablespace will not generate the normal redo (a small amount of redo is always generated for no logging operations to signal that a no logging operation was performed).
All other operations on objects in the no logging tablespace will generate the normal redo.
Operations performed on objects in the force logging tablespaces always generate normal redo.
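
As a hedged illustration of the steps above (the tablespace names users_protected and etl_stage are hypothetical placeholders, not objects from this report):

# Sketch only: apply force logging selectively, then disable it at the database level
sqlplus -s "/ as sysdba" <<'EOF'
ALTER TABLESPACE users_protected FORCE LOGGING;
ALTER TABLESPACE etl_stage NO FORCE LOGGING;
ALTER DATABASE NO FORCE LOGGING;
EOF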

Note: Please seek Oracle Support assistance to mitigate this problem. Under their guidance, the following commands can help validate and identify corrupted blocks.

              oracle> dbv file=<data_file_returned_by_above_command> userid=sys/******
              RMAN> validate check logical database;
              SQL> select COUNT(*) from v$database_block_corruption;
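
If unrecoverable changes are found and the data must be protected, back up the affected datafiles immediately. A minimal RMAN sketch using the file numbers reported for RAC01 below (5 and 8); adjust the list to your own query results:

# Back up the datafiles flagged with unrecoverable changes
rman target / <<'EOF'
BACKUP DATAFILE 5, 8;
EOF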

 
Links
Needs attention on RAC01
Passed on -

Status on RAC01:
FAIL => The data files should be recoverable


DATA FOR RAC01 FOR VERIFY DATA FILES ARE RECOVERABLE 




         5 22-SEP-13               1386702                                      
         8 24-SEP-13               3542463                                      
Top

Check for parameter unixODBC-devel|2.2.14|11.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX, HP-UX, Linux, Mac OS X, Solaris, Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
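
A minimal sketch of the corrective action, assuming the nodes can reach a yum repository that carries these package versions:

# Install the missing unixODBC packages on each node (run as root)
yum install unixODBC-devel-2.2.14-11.el6.x86_64 \
            unixODBC-devel-2.2.14-11.el6.i686 \
            unixODBC-2.2.14-11.el6.i686

# Verify the packages afterwards
rpm -q unixODBC-devel unixODBC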
 
Needs attention on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on -

Status on nerv01:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on nerv03:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on nerv04:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on nerv05:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on nerv02:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on nerv08:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on nerv07:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on nerv06:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed

Top

OCR and Voting file location

Recommendation
 Starting with Oracle 11gR2, the recommendation is to use Oracle ASM to store the OCR and voting disks. With an appropriate redundancy level (HIGH or NORMAL) for the ASM disk group being used, Oracle creates the required number of voting disks as part of the installation.
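
A hedged sketch of the migration, assuming an ASM disk group named +DATA with NORMAL or HIGH redundancy already exists and is mounted on all nodes (run as root from the Grid Infrastructure home):

# Add an OCR location inside ASM, then retire the old file-based location
ocrconfig -add +DATA
ocrconfig -delete /u01/shared_config/rac02/ocr

# Move the voting files into the same disk group
crsctl replace votedisk +DATA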
 
Links
Needs attention on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on -

Status on nerv01:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV01 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data

Status on nerv03:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV03 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data

Status on nerv04:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV04 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data

Status on nerv05:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV05 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data

Status on nerv02:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV02 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data

Status on nerv08:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV08 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data

Status on nerv07:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV07 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data

Status on nerv06:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV06 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data
Top

Parallel Execution Health-Checks and Diagnostics Reports

Recommendation
 This audit check captures information related to Oracle Parallel Query (PQ), DOP, PQ/PX statistics, Database Resource Plans, Consumer Groups, etc. It is primarily for Oracle Support Team consumption; however, customers may also review it to identify and troubleshoot related problems.
For every database, there will be a zip file of the format <pxhcdr_DBNAME_HOSTNAME_DBVERSION_DATE_TIMESTAMP.zip> in the raccheck output directory.
 
Needs attention on nerv01
Passed on -
Top

Hardware clock synchronization

Recommendation
 The /etc/init.d/halt file is called when the system is rebooted or halted. This file must contain instructions to synchronize the system time to the hardware clock.

It should contain a command like:

[ -x /sbin/hwclock ] && action $"Syncing hardware clock to system time" /sbin/hwclock $CLOCKFLAGS
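
To confirm a node passes this check, the halt script can be inspected directly (a trivial sketch):

# Show the hwclock synchronization line in the shutdown script
grep -n 'hwclock' /etc/init.d/halt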
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV01 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on nerv03:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV03 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on nerv04:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV04 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on nerv05:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV05 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on nerv02:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV02 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on nerv08:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV08 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on nerv07:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV07 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on nerv06:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV06 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc
Top

Clusterware resource status

Recommendation
 Resources in an UNKNOWN state often cause issues when upgrading. It is recommended to correct any resources in an UNKNOWN state prior to upgrading.
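
A quick sketch for spotting such resources from any node (assuming the Grid Infrastructure bin directory is in the PATH):

# List clusterware resources and flag any reported in an UNKNOWN state
crsctl status resource -t | grep -B2 'UNKNOWN' || echo 'No resources in UNKNOWN state'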

 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => No clusterware resource are in unknown state


DATA FROM NERV01 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv01] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data

Status on nerv03:
PASS => No clusterware resource are in unknown state


DATA FROM NERV03 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv03] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data

Status on nerv04:
PASS => No clusterware resource are in unknown state


DATA FROM NERV04 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv04] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data

Status on nerv05:
PASS => No clusterware resource are in unknown state


DATA FROM NERV05 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv05] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data

Status on nerv02:
PASS => No clusterware resource are in unknown state


DATA FROM NERV02 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv02] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data

Status on nerv08:
PASS => No clusterware resource are in unknown state


DATA FROM NERV08 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv08] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data

Status on nerv07:
PASS => No clusterware resource are in unknown state


DATA FROM NERV07 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv07] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data

Status on nerv06:
PASS => No clusterware resource are in unknown state


DATA FROM NERV06 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv06] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data
Top

ORA-15196 errors in ASM alert log

Recommendation
 An ORA-15196 error means ASM encountered an invalid metadata block. For more information, see the trace file referenced next to the ORA-15196 error in the ASM alert log. If this is an old error, you can ignore this finding; otherwise, open a service request with Oracle Support to find and fix the cause.
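
A minimal sketch for checking a node by hand, assuming the standard 11.2 ADR layout under the Grid user's ORACLE_BASE and an ASM instance named +ASM1 (adjust per node):

# Search the ASM alert log for metadata corruption errors
grep -n 'ORA-15196' "$ORACLE_BASE"/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log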


 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV01 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on nerv03:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV03 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on nerv04:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV04 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on nerv05:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV05 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on nerv02:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV02 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on nerv08:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV08 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on nerv07:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV07 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on nerv06:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV06 - ORA-15196 ERRORS IN ASM ALERT LOG 



Top

Disks without Disk Group

Recommendation
 The GROUP_NUMBER and DISK_NUMBER columns in GV$ASM_DISK are only valid if the disk is part of a disk group currently mounted by the instance. Otherwise, GROUP_NUMBER will be 0 and DISK_NUMBER will be a unique value with respect to the other disks that also have a group number of 0. Run the following query to find the disks that are not part of any disk group.

select name,path,HEADER_STATUS,GROUP_NUMBER  from gv$asm_disk where group_number=0;
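
A sketch of running the check from one node, assuming the environment is set for the local ASM instance (e.g. ORACLE_SID=+ASM1):

# Query ASM for disks that are not part of any mounted disk group
sqlplus -s "/ as sysasm" <<'EOF'
select name, path, header_status, group_number
from gv$asm_disk
where group_number = 0;
EOF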
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => No disks found which are not part of any disk group


DATA FROM NERV01 - DISKS WITHOUT DISK GROUP 




no rows selected


Status on nerv03:
PASS => No disks found which are not part of any disk group


DATA FROM NERV03 - DISKS WITHOUT DISK GROUP 




no rows selected


Status on nerv04:
PASS => No disks found which are not part of any disk group


DATA FROM NERV04 - DISKS WITHOUT DISK GROUP 




no rows selected


Status on nerv05:
PASS => No disks found which are not part of any disk group


DATA FROM NERV05 - DISKS WITHOUT DISK GROUP 




no rows selected


Status on nerv02:
PASS => No disks found which are not part of any disk group


DATA FROM NERV02 - DISKS WITHOUT DISK GROUP 




no rows selected


Status on nerv08:
PASS => No disks found which are not part of any disk group


DATA FROM NERV08 - DISKS WITHOUT DISK GROUP 




no rows selected


Status on nerv07:
PASS => No disks found which are not part of any disk group


DATA FROM NERV07 - DISKS WITHOUT DISK GROUP 




no rows selected


Status on nerv06:
PASS => No disks found which are not part of any disk group


DATA FROM NERV06 - DISKS WITHOUT DISK GROUP 




no rows selected

Top

Redo log file write time latency

Recommendation
 When the latency hits 500ms, a warning message is written to the LGWR trace file(s). For example:

Warning: log write elapsed time 564ms, size 2KB

Even though this threshold is very high, latencies below it can still impact application performance, so it is worth capturing and reporting them to customers for necessary action. The performance impact of LGWR latencies includes commit delays, Broadcast-on-Commit delays, etc.
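
A sketch for locating these warnings on a node, assuming the standard ADR layout and a hypothetical instance name RAC011 (adjust the SID and path per node):

# Scan the LGWR trace files for slow log write warnings
grep -h 'Warning: log write elapsed time' \
  "$ORACLE_BASE"/diag/rdbms/rac01/RAC011/trace/RAC011_lgwr_*.trc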
 
Links
Needs attention on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on -

Status on nerv01:
WARNING => Redo log write time is more than 500 milliseconds


DATA FROM NERV01 - RAC01 DATABASE - REDO LOG FILE WRITE TIME LATENCY 



Warning: log write elapsed time 516ms, size 2KB
Warning: log write elapsed time 688ms, size 10KB
Warning: log write elapsed time 815ms, size 1KB
Warning: log write elapsed time 899ms, size 0KB
Warning: log write elapsed time 517ms, size 0KB
Warning: log write elapsed time 1267ms, size 0KB
Warning: log write elapsed time 507ms, size 10KB
Warning: log write elapsed time 661ms, size 1KB
Warning: log write elapsed time 682ms, size 0KB
Warning: log write elapsed time 527ms, size 0KB
Warning: log write elapsed time 1388ms, size 4KB
Warning: log write elapsed time 682ms, size 0KB
Warning: log write elapsed time 535ms, size 18KB
Warning: log write elapsed time 643ms, size 104KB
Warning: log write elapsed time 706ms, size 0KB
Warning: log write elapsed time 677ms, size 0KB
Click for more data

Status on nerv03:
WARNING => Redo log write time is more than 500 milliseconds


DATA FROM NERV03 - RAC01 DATABASE - REDO LOG FILE WRITE TIME LATENCY 



Warning: log write elapsed time 686ms, size 1KB
Warning: log write elapsed time 676ms, size 2KB
Warning: log write elapsed time 520ms, size 1KB
Warning: log write elapsed time 583ms, size 7KB
Warning: log write elapsed time 689ms, size 5KB
Warning: log write elapsed time 1354ms, size 1KB
Warning: log write elapsed time 554ms, size 13KB
Warning: log write elapsed time 1086ms, size 2KB
Warning: log write elapsed time 1009ms, size 1KB
Warning: log write elapsed time 1254ms, size 1KB
Warning: log write elapsed time 1154ms, size 1KB
Warning: log write elapsed time 856ms, size 1KB
Warning: log write elapsed time 2516ms, size 1KB
Warning: log write elapsed time 1328ms, size 1KB
Warning: log write elapsed time 590ms, size 0KB
Warning: log write elapsed time 747ms, size 0KB