Oracle RAC Assessment Report

System Health Score is 90 out of 100

Cluster Summary

Cluster Name: rac02
OS/Kernel Version: LINUX X86-64 OELRHEL 6 2.6.39-400.209.1.el6uek.x86_64
CRS Home - Version: /u01/app/11.2.0/grid - 11.2.0.4.0
DB Home - Version - Names: /u01/app/oracle/product/11.2.0/db_1 - 11.2.0.4.0 - RAC01
Number of nodes: 8
Database Servers: 8
raccheck Version: 2.2.3(BETA)_20130918
Collection: raccheck_nerv01_RAC01_092513_055913.zip
Collection Date: 25-Sep-2013 06:00:56

Note! This version of raccheck is considered valid for days from today or until a new version is available


WARNING! The data collection activity appears to be incomplete for this raccheck run. Please review the "Killed Processes" and/or "Skipped Checks" sections and refer to "Appendix A - Troubleshooting Scenarios" of the "Raccheck User Guide" for corrective actions.


Findings Needing Attention

FAIL, WARNING, ERROR and INFO finding details should be reviewed in the context of your environment.

NOTE: Any recommended change should be applied to and thoroughly tested (functionality and load) in one or more non-production environments before applying the change to a production environment.

Database Server

Check Id | Status | Type | Message | Status On | Details
DC4495442D7A0CEBE04313C0E50A76E8 | FAIL | OS Check | Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed | All Database Servers | View
C1D1B240993425B8E0431EC0E50AFEF5 | FAIL | OS Check | Package unixODBC-devel-2.2.14-11.el6-i686 is recommended but NOT installed | All Database Servers | View
C1D0BD14BF4A3BCEE0431EC0E50A9DB5 | FAIL | OS Check | Package unixODBC-2.2.14-11.el6-i686 is recommended but NOT installed | All Database Servers | View
CCF6F44765861F7AE0431EC0E50A72AD | FAIL | OS Check | Operating system hugepages count does not satisfy total SGA requirements | All Database Servers | View
834835A4EC032658E040E50A1EC056F6 | WARNING | OS Check | /tmp is NOT on a dedicated filesystem | nerv05, nerv06 | View
8E1B5EE973BAA8C6E040E50A1EC0622E | WARNING | OS Check | ohasd Log Ownership is NOT Correct (should be root root) | nerv03, nerv05 | View
8E1A46CB0BDA0608E040E50A1EC022CD | WARNING | OS Check | ohasd/orarootagent_root Log Ownership is NOT Correct (should be root root) | nerv03, nerv05 | View
8E197A76D887BAC4E040E50A1EC07E0B | WARNING | OS Check | crsd/orarootagent_root Log Ownership is NOT Correct (should be root root) | nerv03 | View
8E19457488167806E040E50A1EC00310 | WARNING | OS Check | crsd Log Ownership is NOT Correct (should be root root) | nerv03 | View
7EDE9EBEC9429FBAE040E50A1EC03AED | WARNING | OS Check | $ORACLE_HOME/bin/oradism ownership is NOT root | nerv03 | View
7EDDA570A1827FBAE040E50A1EC02EB1 | WARNING | OS Check | $ORACLE_HOME/bin/oradism setuid bit is NOT set | nerv03 | View
9AA08EB2573A36C6E040E50A1EC02BD9 | WARNING | OS Check | kernel parameter rp_filter is set to 1 | All Database Servers | View
E10E99868C34569BE04313C0E50A44C1 | WARNING | OS Check | vm.min_free_kbytes should be set as recommended | All Database Servers | View
D348A289DD032396E0431EC0E50A26D5 | WARNING | OS Check | OCR and Voting disks are not stored in ASM | All Database Servers | View
DC28F07D94FD1B10E04313C0E50A9FD8 | WARNING | OS Check | TFA Collector is either not installed or not running | nerv01, nerv08 | View
D35CE19AE68165F3E0431EC0E50A4C09 | WARNING | OS Check | Redo log write time is more than 500 milliseconds | All Database Servers | View
951C025701C65CC5E040E50A1EC0371F | WARNING | OS Check | OSWatcher is not running as is recommended | All Database Servers | View
8C9D63D9441C1F52E040E50A1EC0211F | WARNING | OS Check | NIC bonding is NOT configured for public network (VIP) | All Database Servers | View
5EA8F4C6C6BDF8F0E0401490CACF067F | WARNING | OS Check | NIC bonding is not configured for interconnect | All Database Servers | View
CB94D8434AA02210E0431EC0E50A7C40 | WARNING | SQL Parameter Check | Database Parameter memory_target is not set to the recommended value | All Instances | View
DC3D819F5D2A50FEE04312C0E50AFF9F | INFO | OS Check | Parallel Execution Health-Checks and Diagnostics Reports | All Database Servers | View
BBB4357BF09B79D6E0431EC0E50AFB57 | INFO | OS Check | Information about hanganalyze and systemstate dump | All Database Servers | View
5E4956EE574FB034E0401490CACF2F84 | INFO | OS Check | Jumbo frames (MTU >= 8192) are not configured for interconnect | All Database Servers | View
85F282CFD5DADCB4E040E50A1EC01BC9 | INFO | SQL Check | All redo log files are not same size | All Databases | View
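The hugepages FAIL above is usually resolved by sizing vm.nr_hugepages to cover the total SGA of every instance on the node. A minimal sketch of the arithmetic, using hypothetical example values (a 4 GB combined SGA and the 2 MB hugepage size typical on x86-64; substitute the real totals from your own nodes):

```shell
# Hypothetical inputs -- replace with your node's real values:
#   sga_total_mb:     sum of the SGAs of every instance on the node, in MB
#   hugepage_size_kb: the Hugepagesize value reported by /proc/meminfo
sga_total_mb=4096
hugepage_size_kb=2048

sga_total_kb=$((sga_total_mb * 1024))
# Round up so the hugepage pool covers the entire SGA.
nr_hugepages=$(( (sga_total_kb + hugepage_size_kb - 1) / hugepage_size_kb ))

echo "suggested vm.nr_hugepages = ${nr_hugepages}"
```

The resulting value would go into /etc/sysctl.conf as vm.nr_hugepages; My Oracle Support note 401749.1 provides a hugepages_settings.sh script that performs this calculation against the live system.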


MAA Scorecard

Outage Type | Status | Type | Message | Status On | Details

DATABASE FAILURE PREVENTION BEST PRACTICES: PASS
Description
Oracle database can be configured with best practices that are applicable to all Oracle databases, including single-instance, Oracle RAC databases, Oracle RAC One Node databases, and the primary and standby databases in Oracle Data Guard configurations. Key HA Benefits:
  • Improved recoverability
  • Improved stability

Best Practices
PASS | SQL Check | All tablespaces are locally managed tablespaces | All Databases | View
PASS | SQL Check | All tablespaces are using Automatic segment storage management | All Databases | View
PASS | SQL Check | Default temporary tablespace is set | All Databases | View
PASS | SQL Check | Database Archivelog Mode is set to ARCHIVELOG | All Databases | View
PASS | SQL Check | The SYS and SYSTEM userids have a default tablespace of SYSTEM | All Databases | View
COMPUTER FAILURE PREVENTION BEST PRACTICES: FAIL
Description
Oracle RAC and Oracle Clusterware allow Oracle Database to run any packaged or custom application across a set of clustered servers. This capability provides server-side high availability and scalability. If a clustered server fails, Oracle Database continues running on the surviving servers. When more processing power is needed, you can add another server without interrupting access to data. Key HA Benefits:
  • Zero database downtime for node and instance failures.
  • Application brownout can be zero or a few seconds, compared to the minutes or hours required by third-party cold cluster failover solutions.
  • Oracle RAC and Oracle Clusterware support rolling upgrade for most hardware and software changes, excluding Oracle RDBMS patch sets and new database releases.
Best Practices
WARNING | SQL Parameter Check | fast_start_mttr_target should be greater than or equal to 300 | All Instances | View
DATA CORRUPTION PREVENTION BEST PRACTICES: FAIL
Description
The MAA-recommended way to achieve the most comprehensive data corruption prevention and detection is to use Oracle Active Data Guard and to configure the DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM, and DB_LOST_WRITE_PROTECT database initialization parameters on the primary database and any Data Guard standby databases. Key HA Benefits:
  • Application downtime can be reduced from hours and days to seconds to no downtime.
  • Prevention, quick detection and fast repair of data block corruptions.
  • With Active Data Guard, data block corruptions can be repaired automatically.

Best Practices
FAIL | SQL Check | The data files should be recoverable | All Databases | View
WARNING | OS Check | Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value | All Database Servers | View
PASS | SQL Check | No reported block corruptions in V$DATABASE_BLOCK_CORRUPTIONS | All Databases | View
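The three parameters named in the description above are set with ALTER SYSTEM. The sketch below, a config fragment only, shows commonly recommended values; FULL block checking can add measurable CPU overhead, so, per the note at the top of this report, validate in a non-production environment first:

```shell
# Sketch only: MAA-style corruption-detection parameters, applied on
# the primary (repeat on any standby). Values shown are commonly
# recommended examples, not a one-size-fits-all prescription.
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET db_block_checksum = FULL SCOPE = BOTH SID = '*';
ALTER SYSTEM SET db_block_checking = FULL SCOPE = BOTH SID = '*';
ALTER SYSTEM SET db_lost_write_protect = TYPICAL SCOPE = BOTH SID = '*';
EOF
```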
LOGICAL CORRUPTION PREVENTION BEST PRACTICES: FAIL
Description
Oracle Flashback Technology enables fast logical failure repair. Oracle recommends that you use automatic undo management with sufficient space to attain your desired undo retention guarantee, enable Oracle Flashback Database, and allocate sufficient space and I/O bandwidth in the fast recovery area. Application monitoring is required for early detection. Effective and fast repair comes from rehearsing the most common application-specific logical failures and using the different flashback features effectively (e.g., flashback query, flashback version query, flashback transaction query, flashback transaction, flashback drop, flashback table, and flashback database). Key HA Benefits:
  • With application monitoring and rehearsed repair actions using flashback technologies, application downtime can be reduced from hours or days to the time it takes to detect the logical inconsistency.
  • Fast repair of logical failures caused by malicious or accidental DML or DDL operations.
  • Point-in-time repair can be effected at the appropriate level of granularity: transaction, table, or database.
Questions to consider: Can your application or monitoring infrastructure detect logical inconsistencies? Is your operations team prepared to use the various flashback technologies to repair quickly and efficiently? Are security practices enforced to prevent unauthorized privileges that can result in logical inconsistencies?
Best Practices
FAIL | SQL Check | Flashback on PRIMARY is not configured | All Databases | View
PASS | SQL Parameter Check | RECYCLEBIN on PRIMARY is set to the recommended value | All Instances | View
PASS | SQL Parameter Check | Database parameter UNDO_RETENTION on PRIMARY is not null | All Instances | View
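Addressing the Flashback FAIL above typically looks like the following sketch. It assumes a fast recovery area is already configured (db_recovery_file_dest and db_recovery_file_dest_size), and the 1440-minute (24-hour) retention target is an illustrative value only:

```shell
# Sketch only: enable Flashback Database on the primary.
# Assumes the fast recovery area is already configured; the retention
# target (minutes) is an example value, not a recommendation.
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET db_flashback_retention_target = 1440 SCOPE = BOTH SID = '*';
ALTER DATABASE FLASHBACK ON;
EOF
```

Before 11.2.0.2 the database had to be mounted (not open) to turn flashback on; at 11.2.0.4, as in this cluster, it can be enabled while the database is open.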
DATABASE/CLUSTER/SITE FAILURE PREVENTION BEST PRACTICES: FAIL
Description
Oracle 11g and higher Active Data Guard is the real-time data protection and availability solution that eliminates single points of failure by maintaining one or more synchronized physical replicas of the production database. If an unplanned outage of any kind impacts the production database, applications and users can quickly fail over to a synchronized standby, minimizing downtime and preventing data loss. An Active Data Guard standby can be used to offload read-only applications, ad-hoc queries, and backups from the primary database, or be dual-purposed as a test system at the same time it provides disaster protection. An Active Data Guard standby can also be used to minimize downtime for planned maintenance when upgrading to new Oracle Database patch sets and releases, and for select migrations. For zero data loss protection and the fastest recovery time, deploy a local Data Guard standby database with Data Guard Fast-Start Failover and integrated client failover. For protection against outages impacting both the primary and the local standby, the entire data center, or a broad geography, deploy a second Data Guard standby database at a remote location. Key HA Benefits:
  • With Oracle 11g Release 2 and higher Active Data Guard and real-time apply, data block corruptions can be repaired automatically, and downtime can be reduced from hours or days of application impact to zero downtime with zero data loss.
  • With MAA best practices, Data Guard Fast-Start Failover (typically to a local standby), and integrated client failover, downtime from database, cluster, and site failures can be reduced from hours or days to seconds or minutes.
  • With a remote standby database (disaster recovery site), you have protection from complete site failures.
In all cases, the Active Data Guard instances can be active and used for other activities.
Data Guard can reduce risks and downtime for planned maintenance activities by using database rolling upgrade with transient logical standby, standby-first patch apply, and database migrations. Active Data Guard provides optimal data protection by using physical replication and comprehensive Oracle validation to maintain an exact byte-for-byte copy of the primary database that can be open read-only to offload reporting, ad-hoc queries, and backups. For other advanced replication requirements, where read-write access to a replica database is required while it is being synchronized with the primary database, see Oracle GoldenGate logical replication. Oracle GoldenGate can be used to support heterogeneous database platforms and database releases, to provide an effective read-write full or subset logical replica, and to reduce or eliminate downtime for application, database, or system changes. The main trade-off of Oracle GoldenGate's flexible logical replication solution is the additional administration required of application developers and database administrators.
Best Practices
FAIL | SQL Check | Primary database is NOT protected with Data Guard (standby database) for real-time data protection and availability | All Databases | View
CLIENT FAILOVER OPERATIONAL BEST PRACTICES: PASS
Description
A highly available architecture requires the ability of the application tier to transparently fail over to a surviving instance or database advertising the required service. This ensures that applications are generally available or minimally impacted in the event of node failure, instance failure, or database failures. Oracle listeners can be configured to throttle incoming connections to avoid logon storms after a database node or instance failure. The connection rate limiter feature in the Oracle Net Listener enables a database administrator (DBA) to limit the number of new connections handled by the listener.
Best Practices
PASS | OS Check | Clusterware is running | All Database Servers | View
ORACLE RECOVERY MANAGER (RMAN) BEST PRACTICES: FAIL
Description
Oracle Recovery Manager (RMAN) is an Oracle Database utility that manages database backup and, more importantly, recovery of the database. RMAN eliminates operational complexity while providing superior performance and availability for the database. RMAN determines the most efficient method of executing the requested backup, restore, or recovery operation and then submits these operations to the Oracle Database server for processing. RMAN and the server automatically identify modifications to the structure of the database and dynamically adjust the required operation to adapt to the changes. RMAN has many unique HA capabilities that can be challenging or impossible for third-party backup and restore utilities to deliver, such as:
  • In-depth Oracle data block checks during every backup or restore operation
  • Efficient block media recovery
  • Automatic recovery through complex database state changes such as resetlogs or past Data Guard role transitions
  • Fast incremental backup and restore operations
  • Integrated retention policies and backup file management with Oracle’s fast recovery area
  • Online backups without the need to put the database or data file in hot backup mode.
RMAN backups are strategic to MAA so that a damaged database (the complete database or a subset such as a data file, tablespace, log file, or controlfile) can be recovered; for the fastest recovery, however, use Data Guard or GoldenGate. RMAN operations are also important for detecting corrupted blocks in data files that are not frequently accessed.
Best Practices
WARNING | SQL Check | RMAN controlfile autobackup should be set to ON | All Databases | View
WARNING | SQL Check | Fast Recovery Area (FRA) should have sufficient reclaimable space | All Databases | View
PASS | OS Check | control_file_record_keep_time is within recommended range [1-9] for RAC01 | All Database Servers | View
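The controlfile autobackup warning above is a one-line, persistent RMAN configuration change; a minimal sketch (the SHOW command simply echoes the resulting setting):

```shell
# Sketch only: turn on RMAN controlfile autobackup, a persistent
# setting recorded in the target database's controlfile.
rman target / <<'EOF'
CONFIGURE CONTROLFILE AUTOBACKUP ON;
SHOW CONTROLFILE AUTOBACKUP;
EOF
```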
OPERATIONAL BEST PRACTICES: INFO
Description
Operational best practices are an essential prerequisite to high availability. The integration of Oracle Maximum Availability Architecture (MAA) operational and configuration best practices with Oracle Exadata Database Machine (Exadata MAA) provides the most comprehensive high availability solution available for the Oracle Database.
Best Practices
DATABASE CONSOLIDATION BEST PRACTICES: INFO
Description
Database consolidation requires additional planning and management to ensure HA requirements are met.
Best Practices


GRID and RDBMS patch recommendation Summary report

Summary Report for "nerv01"

Clusterware patches
Total patches Applied on CRS Applied on RDBMS Applied on ASM Details
0 0 0 0 View

RDBMS homes patches
Total patches Applied on RDBMS Applied on ASM ORACLE_HOME Details
0 0 0 /u01/app/oracle/product/11.2.0/db_1 View

Summary Report for "nerv03"

Clusterware patches
Total patches Applied on CRS Applied on RDBMS Applied on ASM Details
0 0 0 0 View

RDBMS homes patches
Total patches Applied on RDBMS Applied on ASM ORACLE_HOME Details
0 0 0 /u01/app/oracle/product/11.2.0/db_1 View

Summary Report for "nerv04"

Clusterware patches
Total patches Applied on CRS Applied on RDBMS Applied on ASM Details
0 0 0 0 View

RDBMS homes patches
Total patches Applied on RDBMS Applied on ASM ORACLE_HOME Details
0 0 0 /u01/app/oracle/product/11.2.0/db_1 View

Summary Report for "nerv05"

Clusterware patches
Total patches Applied on CRS Applied on RDBMS Applied on ASM Details
0 0 0 0 View

RDBMS homes patches
Total patches Applied on RDBMS Applied on ASM ORACLE_HOME Details
0 0 0 /u01/app/oracle/product/11.2.0/db_1 View

Summary Report for "nerv02"

Clusterware patches
Total patches Applied on CRS Applied on RDBMS Applied on ASM Details
0 0 0 0 View

RDBMS homes patches
Total patches Applied on RDBMS Applied on ASM ORACLE_HOME Details
0 0 0 /u01/app/oracle/product/11.2.0/db_1 View

Summary Report for "nerv08"

Clusterware patches
Total patches Applied on CRS Applied on RDBMS Applied on ASM Details
0 0 0 0 View

RDBMS homes patches
Total patches Applied on RDBMS Applied on ASM ORACLE_HOME Details
0 0 0 /u01/app/oracle/product/11.2.0/db_1 View

Summary Report for "nerv07"

Clusterware patches
Total patches Applied on CRS Applied on RDBMS Applied on ASM Details
0 0 0 0 View

RDBMS homes patches
Total patches Applied on RDBMS Applied on ASM ORACLE_HOME Details
0 0 0 /u01/app/oracle/product/11.2.0/db_1 View

Summary Report for "nerv06"

Clusterware patches
Total patches Applied on CRS Applied on RDBMS Applied on ASM Details
0 0 0 0 View

RDBMS homes patches
Total patches Applied on RDBMS Applied on ASM ORACLE_HOME Details
0 0 0 /u01/app/oracle/product/11.2.0/db_1 View


GRID and RDBMS patch recommendation Detailed report

Detailed report for "nerv01"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Detailed report for "nerv03"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Detailed report for "nerv04"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Detailed report for "nerv05"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Detailed report for "nerv02"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Detailed report for "nerv08"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Detailed report for "nerv07"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Detailed report for "nerv06"

0 Recommended CRS patches for 112040 from /u01/app/11.2.0/grid
0 Recommended RDBMS patches for 112040 from /u01/app/oracle/product/11.2.0/db_1

Findings Passed

Database Server

Check Id | Status | Type | Message | Status On | Details
DC28F07D94FD1B10E04313C0E50A9FD8 | PASS | OS Check | TFA Collector is installed and running | nerv03, nerv04, nerv05, nerv02, nerv07 (more) | View
E47ECDCFE09A122CE04313C0E50A35EC | PASS | OS Check | There are no duplicate parameter entries in the database init.ora(spfile) file | All Database Servers | View
E47EBE3023936D3CE04313C0E50A7A0E | PASS | ASM Check | There are no duplicate parameter entries in the ASM init.ora(spfile) file | All ASM Instances | View
E1DF2A6140395D42E04312C0E50A0A6C | PASS | ASM Check | All diskgroups from v$asm_diskgroups are registered in clusterware registry | All ASM Instances | View
E18D7F9837B7754EE04313C0E50AD4AA | PASS | OS Check | Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation | All Database Servers | View
E1500ADF060A3EA2E04313C0E50A3676 | PASS | OS Check | OLR Integrity check Succeeded | All Database Servers | View
E12A91DC10F31AD7E04312C0E50A6361 | PASS | OS Check | pam_limits configured properly for shell limits | All Database Servers | View
D0C2640EBA071F73E0431EC0E50AA159 | PASS | OS Check | System clock is synchronized to hardware clock at system shutdown | All Database Servers | View
DBC2C9218542349FE04312C0E50AC1E9 | PASS | OS Check | No clusterware resources are in unknown state | All Database Servers | View
D9A5C0E2DE430A85E04312C0E50AC8B0 | PASS | ASM Check | No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors) | All ASM Instances | View
D957C871B811597AE04312C0E50A91BF | PASS | ASM Check | No disks found which are not part of any disk group | All ASM Instances | View
D112D25A574F13DCE0431EC0E50A55CD | PASS | OS Check | Grid infrastructure network broadcast requirements are met | All Database Servers | View
CB5BD768E88F7F71E0431EC0E50A346F | PASS | OS Check | Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers | View
6B515A724AB85906E040E50A1EC039F6 | PASS | SQL Check | No read/write errors found for ASM disks | All Databases | View
C1D39B834AA46E44E0431EC0E50A5366 | PASS | OS Check | Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation | All Database Servers | View
C1D39B834AA36E44E0431EC0E50A5366 | PASS | OS Check | Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation | All Database Servers | View
C1D34D17A4F45402E0431EC0E50A5DD9 | PASS | OS Check | Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation | All Database Servers | View
C1D348AB978E3873E0431EC0E50A19F0 | PASS | OS Check | Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation | All Database Servers | View
C1D30E313A4C0B0BE0431EC0E50A1931 | PASS | OS Check | Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers | View
C1D2A95C2BF31FE4E0431EC0E50AB101 | PASS | OS Check | Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation | All Database Servers | View
C1D29B4860DA19C2E0431EC0E50AFB36 | PASS | OS Check | Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation | All Database Servers | View
C1D29B4860D919C2E0431EC0E50AFB36 | PASS | OS Check | Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers | View
C1D1CC4D830F3B90E0431EC0E50A559F | PASS | OS Check | Package make-3.81-19.el6 meets or exceeds recommendation | All Database Servers | View
C1D1CC4D830E3B90E0431EC0E50A559F | PASS | OS Check | Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation | All Database Servers | View
C1D1BA6C1CD213F9E0431EC0E50A8B9C | PASS | OS Check | Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation | All Database Servers | View
C1D1BA6C1CD013F9E0431EC0E50A8B9C | PASS | OS Check | Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation | All Database Servers | View
C1D1B240991A25B8E0431EC0E50AFEF5 | PASS | OS Check | Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation | All Database Servers | View
C1D1973D1B4C0EA1E0431EC0E50A9108 | PASS | OS Check | Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation | All Database Servers | View
C1D15659D96376CBE0431EC0E50A74F5 | PASS | OS Check | Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation | All Database Servers | View
C1D15659D96276CBE0431EC0E50A74F5 | PASS | OS Check | Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation | All Database Servers | View
C1D0EE98B4BC4083E0431EC0E50ADCB2 | PASS | OS Check | Package ksh-20100621-12.el6-x86_64 meets or exceeds recommendation | All Database Servers | View
C1D0BD14BF493BCEE0431EC0E50A9DB5 | PASS | OS Check | Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation | All Database Servers | View
C1D0BD14BF483BCEE0431EC0E50A9DB5 | PASS | OS Check | Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers | View
C1CF431B59054969E0431EC0E50A9B88 | PASS | OS Check | Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation | All Database Servers | View
C1CF431B59034969E0431EC0E50A9B88 | PASS | OS Check | Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation | All Database Servers | View
C1CEC9D9E9432BDFE0431EC0E50AF329 | PASS | OS Check | Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation | All Database Servers | View
89130F49748E6CC7E040E50A1EC07A44 | PASS | OS Check | Remote listener is set to SCAN name | All Database Servers | View
65F8FA5F9B838079E040E50A1EC059DC | PASS | OS Check | Value of remote_listener parameter is able to tnsping | All Database Servers | View
D6972E101386682AE0431EC0E50A9FD9 | PASS | OS Check | No tnsname alias is defined as scanname:port | All Database Servers | View
D6972E101384682AE0431EC0E50A9FD9 | PASS | OS Check | ezconnect is configured in sqlnet.ora | All Database Servers | View
BEAE25E17C4130E4E0431EC0E50A8C3F | PASS | SQL Parameter Check | Database Parameter parallel_execution_message_size is set to the recommended value | All Instances | View
B6457DE59F9D457EE0431EC0E50A1DD2 | PASS | SQL Parameter Check | Database parameter CURSOR_SHARING is set to recommended value | All Instances | View
B167E5248D476B74E0431EC0E50A3E27 | PASS | SQL Check | All bigfile tablespaces have non-default maxbytes values set | All Databases | View
AD6481CF9BDD6058E040E50A1EC021EC | PASS | OS Check | umask for RDBMS owner is set to 0022 | All Database Servers | View
9DEBED7B8DAB583DE040E50A1EC01BA0 | PASS | ASM Check | ASM Audit file destination file count <= 100,000 | All ASM Instances | View
9DAFD1040CA9389FE040E50A1EC0307C | PASS | OS Check | Shell limit hard stack for GI is configured according to recommendation | All Database Servers | View
64DC3E59CB88B984E0401490CACF1104 | PASS | SQL Parameter Check | asm_power_limit is set to recommended value of 1 | All Instances | View
90DCECE833790E9DE040E50A1EC0750A | PASS | OS Check | CSS reboottime is set to the default value of 3 | All Database Servers | View
90DCB860F9380638E040E50A1EC07248 | PASS | OS Check | CSS disktimeout is set to the default value of 200 | All Database Servers | View
8E1B5EE973BAA8C6E040E50A1EC0622E | PASS | OS Check | ohasd Log Ownership is Correct (root root) | nerv01, nerv04, nerv02, nerv08, nerv07 (more) | View
8E1A46CB0BDA0608E040E50A1EC022CD | PASS | OS Check | ohasd/orarootagent_root Log Ownership is Correct (root root) | nerv01, nerv04, nerv02, nerv08, nerv07 (more) | View
8E197A76D887BAC4E040E50A1EC07E0B | PASS | OS Check | crsd/orarootagent_root Log Ownership is Correct (root root) | nerv01, nerv04, nerv05, nerv02, nerv08 (more) | View
8E19457488167806E040E50A1EC00310 | PASS | OS Check | crsd Log Ownership is Correct (root root) | nerv01, nerv04, nerv05, nerv02, nerv08 (more) | View
898E1DF96754C57FE040E50A1EC03224 | PASS | ASM Check | CRS version is higher or equal to ASM version | All ASM Instances | View
8915B823FCEBC259E040E50A1EC04AD6 | PASS | OS Check | Local listener init parameter is set to local node VIP | All Database Servers | View
8914F5D0A9AB85BAE040E50A1EC04A31 | PASS | OS Check | Number of SCAN listeners is equal to the recommended number of 3 | All Database Servers | View
87604C73D768DF7AE040E50A1EC0566B | PASS | OS Check | All voting disks are online | All Database Servers | View
90E150135F6859C4E040E50A1EC01FF5 | PASS | OS Check | CSS misscount is set to the default value of 30 | All Database Servers | View
856A9B77AF14DD9FE040E50A1EC00285 | PASS | OS Check | SELinux is not being Enforced | All Database Servers | View
8529D3798EA039F3E040E50A1EC07218 | PASS | OS Check | Public interface is configured and exists in OCR | All Database Servers | View
84C193C69EE36512E040E50A1EC06466 | PASS | OS Check | ip_local_port_range is configured according to recommendation | All Database Servers | View
84BE8B9C4817090DE040E50A1EC07DB8 | PASS | OS Check | kernel.shmmax parameter is configured according to recommendation | All Database Servers | View
84BE4DE1F00AD833E040E50A1EC07771 | PASS | OS Check | Kernel Parameter fs.file-max configuration meets or exceeds recommendation | All Database Servers | View
8449C298FC0EF19CE040E50A1EC00965 | PASS | OS Check | Shell limit hard stack for DB is configured according to recommendation | All Database Servers | View
841FD604C3C8F2B1E040E50A1EC0122F | PASS | OS Check | Free space in /tmp directory meets or exceeds recommendation of minimum 1GB | All Database Servers | View
841F8C3E78906005E040E50A1EC00357 | PASS | OS Check | Shell limit hard nproc for GI is configured according to recommendation | All Database Servers | View
841F0977B92F0185E040E50A1EC070BB | PASS | OS Check | Shell limit soft nofile for DB is configured according to recommendation | All Database Servers | View
841E706550995C68E040E50A1EC05EFB | PASS | OS Check | Shell limit hard nofile for GI is configured according to recommendation | All Database Servers | View
841E706550975C68E040E50A1EC05EFB | PASS | OS Check | Shell limit hard nproc for DB is configured according to recommendation | All Database Servers | View
841D87785594F263E040E50A1EC020D6 | PASS | OS Check | Shell limit soft nofile for GI is configured according to recommendation | All Database Servers | View
841C7DEB776DB4BBE040E50A1EC0782E | PASS | OS Check | Shell limit soft nproc for GI is configured according to recommendation | All Database Servers | View
841A3A9F4A74AC6AE040E50A1EC03FC0 | PASS | OS Check | Shell limit hard nofile for DB is configured according to recommendation | All Database Servers | View
841A3A9F4A73AC6AE040E50A1EC03FC0 | PASS | OS Check | Shell limit soft nproc for DB is configured according to recommendation | All Database Servers | View
83C301ACFF203C9BE040E50A1EC067EB | PASS | OS Check | Linux Swap Configuration meets or exceeds Recommendation | All Database Servers | View
834835A4EC032658E040E50A1EC056F6 | PASS | OS Check | /tmp is on a dedicated filesystem | nerv01, nerv03, nerv04, nerv02, nerv08 (more) | View
8343C0D6A9D8702BE040E50A1EC045C8 | PASS | SQL Check | All data and temporary are autoextensible | All Databases | View
833F68D88AE57B7CE040E50A1EC02BE7 | PASS | SQL Check | Redo logs are multiplexed | All Databases | View
833F12C25516ACAFE040E50A1EC020F7 | PASS | SQL Check | Controlfile is multiplexed | All Databases | View
833D92F95B0A5CB6E040E50A1EC06498 | PASS | SQL Parameter Check | remote_login_passwordfile is configured according to recommendation | All Instances | View
831B9FABDB6CFCB4E040E50A1EC034C0 | PASS | OS Check | audit_file_dest does not have any audit files older than 30 days | All Database Servers | View
7EDE9EBEC9429FBAE040E50A1EC03AED | PASS | OS Check | $ORACLE_HOME/bin/oradism ownership is root | nerv01, nerv04, nerv05, nerv02, nerv08 (more) | View
7EDDA570A1827FBAE040E50A1EC02EB1 | PASS | OS Check | $ORACLE_HOME/bin/oradism setuid bit is set | nerv01, nerv04, nerv05, nerv02, nerv08 (more) | View
77029A014E159389E040E50A1EC02060 | PASS | SQL Check | Avg message sent queue time on ksxp is <= recommended | All Databases | View
770244572FC70393E040E50A1EC01299 | PASS | SQL Check | Avg message sent queue time is <= recommended | All Databases | View
7701CFDB2F6EF98EE040E50A1EC00573 | PASS | SQL Check | Avg message received queue time is <= recommended | All Databases | View
7674FEDB08C2FDA2E040E50A1EC0156F | PASS | SQL Check | No Global Cache lost blocks detected | All Databases | View
7674C09669C5BCE6E040E50A1EC011E5 | PASS | SQL Check | Failover method (SELECT) and failover mode (BASIC) are configured properly | All Databases | View
70CFB24C11B52EF5E040E50A1EC03ED0 | PASS | OS Check | Open files limit (ulimit -n) for current user is set to recommended value >= 65536 or unlimited | All Database Servers | View
6890329C1FFFCEDDE040E50A1EC02FED | PASS | OS Check | No indication of checkpoints not being completed | All Database Servers | View
670FE09A93E12317E040E50A1EC018E9 | PASS | SQL Check | Avg GC CURRENT Block Receive Time Within Acceptable Range | All Databases | View
670FE09A93E02317E040E50A1EC018E9 | PASS | SQL Check | Avg GC CR Block Receive Time Within Acceptable Range | All Databases | View
66FEB2848B21DB24E040E50A1EC00A0C | PASS | SQL Check | Tablespace allocation type is SYSTEM for all appropriate tablespaces for RAC01 | All Databases | View
66EBC49E368387CAE040E50A1EC03B98 | PASS | OS Check | background_dump_dest does not have any files older than 30 days | All Database Servers | View
66EABE4A113A3B1EE040E50A1EC006B2 | PASS | OS Check | Alert log is not too big | All Database Servers | View
66EAB3BB6CF79C54E040E50A1EC06084 | PASS | OS Check | No ORA-07445 errors found in alert log | All Database Servers | View
66E70B43167837ABE040E50A1EC02FEA | PASS | OS Check | No ORA-00600 errors found in alert log | All Database Servers | View
66E6B013BAE3EFBEE040E50A1EC01F87 | PASS | OS Check | user_dump_dest does not have trace files older than 30 days | All Database Servers | View
66E59E657BFC85F4E040E50A1EC0501D | PASS | OS Check | core_dump_dest does not have too many older core dump files | All Database Servers | View
669862F59599CA2AE040E50A1EC018FD | PASS | OS Check | Kernel Parameter SEMMNS OK | All Database Servers | View
66985D930D2DF070E040E50A1EC019EB | PASS | OS Check | Kernel Parameter kernel.shmmni OK | All Database Servers | View
6697946779AC8AD3E040E50A1EC03C0E | PASS | OS Check | Kernel Parameter SEMMSL OK | All Database Servers | View
6696C7B368784A66E040E50A1EC01B92 | PASS | OS Check | Kernel Parameter SEMMNI OK | All Database Servers | View
66959FC16B423896E040E50A1EC07CDC | PASS | OS Check | Kernel Parameter SEMOPM OK | All Database Servers | View
6694F204EE47A92DE040E50A1EC07145 | PASS | OS Check | Kernel Parameter kernel.shmall OK | All Database Servers | View
65E6F4BD15BB92EBE040E50A1EC04384 | PASS | SQL Parameter Check | Remote listener parameter is set to achieve load balancing and failover | All Instances | View
6580DCAAE8A28F5BE0401490CACF6186 | PASS | OS Check | The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr) | All Database Servers | View
6556EAA74E28214FE0401490CACF6C89 | PASS | OS Check | $CRS_HOME/log/hostname/client directory does not have too many older log files | All Database Servers | View
65414495B2047F26E0401490CACF0FED | PASS | OS Check | OCR is being backed up daily | All Database Servers | View
6050196F644254BDE0401490CACF203D | PASS | OS Check | net.core.rmem_max is Configured Properly | All Database Servers | View
60500BAFB377E3ADE0401490CACF2245 | PASS | SQL Parameter Check | Instance is using spfile | All Instances | View
5E5B7EEA0010DC6BE0401490CACF3B82 | PASS | OS Check | Interconnect is configured on non-routable network addresses | All Database Servers | View
5DC7EBCB6B72E046E0401490CACF321A | PASS | OS Check | None of the hostnames contains an underscore character | All Database Servers | View
5ADE14B5205111D1E0401490CACF673B | PASS | OS Check | net.core.rmem_default Is Configured Properly | All Database Servers | View
5ADD88EC8E0AFF2EE0401490CACF0C10 | PASS | OS Check | net.core.wmem_max Is Configured Properly | All Database Servers | View
5ADCECF64757E914E0401490CACF4BBD | PASS | OS Check | net.core.wmem_default Is Configured Properly | All Database Servers | View
595A436B3A7172FDE0401490CACF5BA5 | PASS | OS Check | ORA_CRS_HOME environment variable is not set | All Database Servers | View
4B8B98A9C9644FADE0401490CACF6528 | PASS | SQL Check | SYS.AUDSES$ sequence cache size >= 10,000 | All Databases | View
4B881724781BB7BEE0401490CACF59FD | PASS | SQL Check | SYS.IDGEN1$ sequence cache size >= 1,000 | All Databases | View

Cluster Wide

Check Id Status Type Message Status On Details
8FC4FA469BAA945EE040E50A1EC06AC6PASSCluster Wide CheckTime zone matches for root user across clusterCluster WideView
8FC307D9A9CEF95FE040E50A1EC01580PASSCluster Wide CheckTime zone matches for GI/CRS software owner across clusterCluster WideView
8BEFCB0B4C9DBF5CE040E50A1EC03B14PASSCluster Wide CheckOperating system version matches across cluster.Cluster WideView
8BEFA88017530395E040E50A1EC05E99PASSCluster Wide CheckOS Kernel version(uname -r) matches across cluster.Cluster WideView
8955120D63FCAC2DE040E50A1EC006CAPASSCluster Wide CheckClusterware active version matches across cluster.Cluster WideView
895255E0D2A63C8CE040E50A1EC00A43PASSCluster Wide CheckRDBMS software version matches across cluster.Cluster WideView
88704DB19306DC92E040E50A1EC02C92PASSCluster Wide CheckTimezone matches for current user across cluster.Cluster WideView
7E8D719B61F43773E040E50A1EC029C0PASSCluster Wide CheckPublic network interface names are the same across clusterCluster WideView
7E40D02BD3C22C5AE040E50A1EC033F5PASSCluster Wide CheckGI/CRS software owner UID matches across clusterCluster WideView
7E3FAC1843F137ABE040E50A1EC0139BPASSCluster Wide CheckRDBMS software owner UID matches across clusterCluster WideView
7E2DCCF1429A6A8FE040E50A1EC05FE6PASSCluster Wide CheckPrivate interconnect interface names are the same across clusterCluster WideView

Top

Best Practices and Other Recommendations

Best Practices and Other Recommendations are generally items documented in various sources which could be overlooked. raccheck assesses them and calls attention to any findings.


Top

Root time zone

Success FactorMAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Make sure machine clocks are synchronized on all nodes to the same NTP source.
Implement NTP (Network Time Protocol) on all nodes.
Prevents evictions and helps to facilitate problem diagnosis.

Also use the -x option (i.e. ntpd -x, xntpd -x) if available to prevent time from moving backwards in large steps. Slewing spreads a correction across many small adjustments so that it does not impact CRS. On Enterprise Linux see /etc/sysconfig/ntpd; on Solaris set "slewalways yes" and "disable pll" in /etc/inet/ntp.conf. 
For example: 
       # Drop root to id 'ntp:ntp' by default.
       OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
       # Set to 'yes' to sync hw clock after successful ntpdate
       SYNC_HWCLOCK=no
       # Additional options for ntpdate
       NTPDATE_OPTIONS=""
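As a quick sanity check, a small shell helper (a sketch; the function name is illustrative, and a plain substring match is a rough test only) can confirm that an OPTIONS line like the one above carries the -x slew flag:

```shell
# Hypothetical helper (illustrative name): succeeds when an ntpd OPTIONS
# line contains the -x slew flag. A substring match is a rough check only.
ntpd_has_slew() {
  case "$1" in
    *-x*) return 0 ;;   # slew option present
    *)    return 1 ;;   # ntpd would step the clock instead of slewing
  esac
}

# Check the sample line shown above:
ntpd_has_slew 'OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"' && echo "slewing enabled"
```

On a live node the argument would come from `grep '^OPTIONS=' /etc/sysconfig/ntpd`.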

Time servers operate in a hierarchy in which the top of the NTP stack is usually an external time source (such as a GPS clock). Time then trickles down through the network switch stack to the connected servers.
This NTP stack acts as the NTP server; ensuring that all RAC nodes act as clients of this server in slewing mode keeps time changes to a minute amount.

Adjustments to global time that reconcile atomic-clock accuracy with the Earth's rotational wobble are thereby absorbed with minimal effect. This is sometimes referred to as the "leap second" event; for example, one second was inserted between UTC 12/31/2008 23:59:59 and 01/01/2009 00:00:00.

More information can be found in Note 759143.1, "NTP leap second event causing Oracle Clusterware node reboot", which is linked to this Success Factor.

 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => Time zone matches for root user across cluster


nerv01 = BRT
nerv03 = BRT
nerv04 =
nerv05 =
nerv02 =
nerv08 = BRT
nerv07 = BRT
nerv06 = BRT
Top

GI/CRS software owner time zone

Success FactorMAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Benefit / Impact:

Clusterware deployment requirement

Risk:

Potential cluster instability

Action / Repair:

Oracle Clusterware requires the same time zone setting on all cluster nodes. During installation, the installation process picks up the time zone setting of the Grid installation owner on the node where OUI runs, and uses that on all nodes as the default TZ setting for all processes managed by Oracle Clusterware. This default is used for databases, Oracle ASM, and any other managed processes.

If for whatever reason the time zones have gotten out of sync, the configuration should be corrected.  Consult Oracle Support about the proper method for correcting the time zones.
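The cross-node comparison itself is easy to script. A minimal sketch (function name and input format are illustrative): feed it node=timezone lines, as gathered from each node with something like `ssh $node date +%Z`, and more than one distinct value means a mismatch.

```shell
# Sketch: read "node=TZ" lines on stdin and print the distinct time zone
# values; a healthy cluster yields exactly one.
tz_distinct() {
  awk -F= '$2 != "" { gsub(/[[:space:]]/, "", $2); print $2 }' | sort -u
}

# Example with values like those reported below:
printf 'nerv01=BRT\nnerv03=BRT\nnerv08=BRT\n' | tz_distinct
```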
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => Time zone matches for GI/CRS software owner across cluster


nerv01 = BRT
nerv03 = BRT
nerv04 =
nerv05 =
nerv02 =
nerv08 = BRT
nerv07 = BRT
nerv06 = BRT
Top

Operating System Version comparison

Recommendation
 Operating system versions should match on each node of the cluster
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => Operating system version matches across cluster.


nerv01 = 64
nerv03 = 64
nerv04 = 64
nerv05 = 64
nerv02 = 64
nerv08 = 64
nerv07 = 64
nerv06 = 64
Top

Kernel version comparison across cluster

Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential cluster instability due to a kernel version mismatch on cluster nodes.
If the kernel versions do not match, some incompatibility could exist that makes
diagnosing problems difficult, or bugs fixed in the later kernel may still be
present on some nodes but not on others.

Action / Repair:

Unless a rolling upgrade of cluster node kernels is in progress, the kernel
versions are expected to match across the cluster.  If they do not, it is
assumed that a mistake has been made and overlooked.  The purpose of
this check is to bring the situation to the customer's attention for action and remedy.
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => OS Kernel version(uname -r) matches across cluster.


nerv01 = 2639-4002091el6uekx86_64
nerv03 = 2639-4002091el6uekx86_64
nerv04 = 2639-4002091el6uekx86_64
nerv05 = 2639-4002091el6uekx86_64
nerv02 = 2639-4002091el6uekx86_64
nerv08 = 2639-4002091el6uekx86_64
nerv07 = 2639-4002091el6uekx86_64
nerv06 = 2639-4002091el6uekx86_64
Top

Clusterware version comparison

Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential cluster instability due to a clusterware version mismatch on cluster nodes.
If the clusterware versions do not match, some incompatibility could exist that makes
diagnosing problems difficult, or bugs fixed in the later clusterware version may
still be present on some nodes but not on others.

Action / Repair:

Unless a rolling upgrade of the clusterware is in progress, the clusterware
versions are expected to match across the cluster.  If they do not, it is
assumed that a mistake has been made and overlooked.  The purpose of
this check is to bring the situation to the customer's attention for action and remedy.
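On a live node the value compared here comes from `crsctl query crs activeversion`. A small parsing sketch (the sample line mimics 11.2 output; exact wording may vary by release) extracts the bracketed version so the per-node results can be diffed:

```shell
# Sketch: pull the bracketed version number out of crsctl output.
active_version() {
  sed -n 's/.*\[\([0-9.]*\)\].*/\1/p'
}

# Sample line as printed by `crsctl query crs activeversion` on 11.2:
echo 'Oracle Clusterware active version on the cluster is [11.2.0.4.0]' | active_version
```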
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => Clusterware active version matches across cluster.


nerv01 = 112040
nerv03 = 112040
nerv04 = 112040
nerv05 = 112040
nerv02 = 112040
nerv08 = 112040
nerv07 = 112040
nerv06 = 112040
Top

RDBMS software version comparison

Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential database or application instability due to a version mismatch between related database homes.
If the versions of related RDBMS homes on the cluster nodes do not match, some
incompatibility could exist that makes diagnosing problems difficult, or bugs fixed
in the later RDBMS version may still be present on some nodes but not on others.

Action / Repair:

The RDBMS versions of related database homes are expected to match across the cluster. 
If the versions of related RDBMS homes do not match, it is assumed that a mistake has
been made and overlooked.  The purpose of this check is to bring the situation to the
customer's attention for action and remedy.
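One way to script this comparison (a sketch; the sample line mimics `opatch lsinventory` output, whose wording varies by release) is to extract the installed RDBMS version on each node and diff the results:

```shell
# Sketch: extract the version field from an inventory line such as the
# "Oracle Database 11g    11.2.0.4.0" line printed by opatch lsinventory.
rdbms_version() {
  sed -n 's/^Oracle Database [0-9]*g[[:space:]]*\([0-9.]*\).*/\1/p'
}

echo 'Oracle Database 11g                 11.2.0.4.0' | rdbms_version
```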
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => RDBMS software version matches across cluster.


nerv01 = 112040
nerv03 = 112040
nerv04 = 112040
nerv05 = 112040
nerv02 = 112040
nerv08 = 112040
nerv07 = 112040
nerv06 = 112040
Top

Timezone for current user

Success FactorMAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Benefit / Impact:

Clusterware deployment requirement

Risk:

Potential cluster instability

Action / Repair:

Oracle Clusterware requires the same time zone setting on all cluster nodes. During installation, the installation process picks up the time zone setting of the Grid installation owner on the node where OUI runs, and uses that on all nodes as the default TZ setting for all processes managed by Oracle Clusterware. This default is used for databases, Oracle ASM, and any other managed processes.

If for whatever reason the time zones have gotten out of sync, the configuration should be corrected.  Consult Oracle Support about the proper method for correcting the time zones.
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => Timezone matches for current user across cluster.


nerv01 = BRT
nerv03 = BRT
nerv04 = BRT
nerv05 = BRT
nerv02 = BRT
nerv08 = BRT
nerv07 = BRT
nerv06 = BRT
Top

GI/CRS - Public interface name check (VIP)

Success FactorMAKE SURE NETWORK INTERFACES HAVE THE SAME NAME ON ALL NODES
Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential application instability due to incorrectly named network interfaces used for node VIP.

Action / Repair:

Oracle Clusterware requires that the network interfaces used for the public
network (node VIP) be named the same on all nodes of the cluster.
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => Public network interface names are the same across cluster


nerv01 = eth0
nerv03 = eth0
nerv04 = eth0
nerv05 = eth0
nerv02 = eth0
nerv08 = eth0
nerv07 = eth0
nerv06 = eth0
Top

GI/CRS software owner across cluster

Success FactorENSURE EACH ORACLE/ASM USER HAS A UNIQUE UID ACROSS THE CLUSTER
Recommendation
 Benefit / Impact:

Availability, stability

Risk:

Potential OCR logical corruptions and permission problems accessing OCR keys when multiple O/S users share the same UID which are difficult to diagnose.

Action / Repair:

For GI/CRS, ASM and RDBMS software owners ensure one unique user ID with a single name is in use across the cluster.
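A sketch of how the per-node value can be gathered for comparison (function name is illustrative; the input is passwd-format output such as that of `getent passwd`):

```shell
# Sketch: print a user's UID from passwd-format input on stdin.
uid_of() {
  awk -F: -v u="$1" '$1 == u { print $3 }'
}

# Example with a passwd entry matching the UIDs reported below:
printf 'oracle:x:54321:54321::/home/oracle:/bin/bash\n' | uid_of oracle
```

Running this on every node (e.g. `getent passwd | uid_of oracle` over ssh) and diffing the output exposes any UID mismatch.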
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => GI/CRS software owner UID matches across cluster


nerv01 = 54321
nerv03 = 54321
nerv04 = 54321
nerv05 = 54321
nerv02 = 54321
nerv08 = 54321
nerv07 = 54321
nerv06 = 54321
Top

RDBMS software owner UID across cluster

Success FactorENSURE EACH ORACLE/ASM USER HAS A UNIQUE UID ACROSS THE CLUSTER
Recommendation
 Benefit / Impact:

Availability, stability

Risk:

Potential OCR logical corruptions and permission problems accessing OCR keys when multiple O/S users share the same UID which are difficult to diagnose.

Action / Repair:

For GI/CRS, ASM and RDBMS software owners ensure one unique user ID with a single name is in use across the cluster.
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => RDBMS software owner UID matches across cluster


nerv01 = 54321
nerv03 = 54321
nerv04 = 54321
nerv05 = 54321
nerv02 = 54321
nerv08 = 54321
nerv07 = 54321
nerv06 = 54321
Top

GI/CRS - Private interconnect interface name check

Success FactorMAKE SURE NETWORK INTERFACES HAVE THE SAME NAME ON ALL NODES
Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential cluster or application instability due to incorrectly named network interfaces.

Action / Repair:

Oracle Clusterware requires that the network interfaces used for the cluster
interconnect be named the same on all nodes of the cluster.
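The interface-to-network assignments Clusterware actually uses come from `oifcfg getif`. A parsing sketch (sample output line shown; exact spacing may vary) isolates the interconnect interface name so per-node values can be compared:

```shell
# Sketch: print the interface classified as cluster_interconnect
# from `oifcfg getif` output on stdin.
interconnect_if() {
  awk '$NF == "cluster_interconnect" { print $1 }'
}

# Sample `oifcfg getif` line:
echo 'eth1  192.168.56.0  global  cluster_interconnect' | interconnect_if
```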
 
Needs attention on-
Passed onCluster Wide

Status on Cluster Wide:
PASS => Private interconnect interface names are the same across cluster


nerv01 =
nerv03 = eth1
nerv04 = eth1
nerv05 = eth1
nerv02 = eth1
nerv08 =
nerv07 = eth1
nerv06 = eth1
Top

/tmp on dedicated filesystem

Recommendation
 It is a best practice to locate the /tmp directory on a dedicated filesystem; otherwise, accidentally filling up /tmp (through logs, traces, and other file management) could also fill up the root (/) filesystem and lead to availability problems.  For example, Oracle creates socket files in /tmp.  Make sure 1GB of free space is maintained in /tmp.
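The condition this check tests can be sketched with `df` output: /tmp sits on a dedicated filesystem exactly when the mount point of the filesystem holding it is /tmp itself rather than / (function name is illustrative):

```shell
# Sketch: given `df -P /tmp` output on stdin, succeed when the mount
# point (field 6 of the data line) is /tmp itself.
tmp_is_dedicated() {
  awk 'NR == 2 { exit ($6 == "/tmp") ? 0 : 1 }'
}

df -P /tmp | tmp_is_dedicated && echo "/tmp is on a dedicated filesystem"
```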
 
Needs attention onnerv05, nerv06
Passed onnerv01, nerv03, nerv04, nerv02, nerv08, nerv07

Status on nerv01:
PASS => /tmp is on a dedicated filesystem


DATA FROM NERV01 - /TMP ON DEDICATED FILESYSTEM 



/dev/sda7              5039616    145016   4638600   4% /tmp

Status on nerv03:
PASS => /tmp is on a dedicated filesystem


DATA FROM NERV03 - /TMP ON DEDICATED FILESYSTEM 



/dev/sda7              5039616    407388   4376228   9% /tmp

Status on nerv04:
PASS => /tmp is on a dedicated filesystem


DATA FROM NERV04 - /TMP ON DEDICATED FILESYSTEM 



/dev/sda7              5039616    633472   4150144  14% /tmp

Status on nerv05:
WARNING => /tmp is NOT on a dedicated filesystem


DATA FROM NERV05 - /TMP ON DEDICATED FILESYSTEM 




Status on nerv02:
PASS => /tmp is on a dedicated filesystem


DATA FROM NERV02 - /TMP ON DEDICATED FILESYSTEM 



/dev/sda7              5039616    145156   4638460   4% /tmp

Status on nerv08:
PASS => /tmp is on a dedicated filesystem


DATA FROM NERV08 - /TMP ON DEDICATED FILESYSTEM 



/dev/sda7              5039616    147516   4636100   4% /tmp

Status on nerv07:
PASS => /tmp is on a dedicated filesystem


DATA FROM NERV07 - /TMP ON DEDICATED FILESYSTEM 



/dev/sda7              5039616    145088   4638528   4% /tmp

Status on nerv06:
WARNING => /tmp is NOT on a dedicated filesystem


DATA FROM NERV06 - /TMP ON DEDICATED FILESYSTEM 



Top

TFA Collector status

Recommendation
 TFA Collector (aka TFA) is a diagnostic collection utility that simplifies diagnostic data collection on Oracle Clusterware/Grid Infrastructure and RAC systems.  TFA is similar to the diagcollection utility packaged with Oracle Clusterware in that it collects and packages diagnostic data; however, TFA is much more powerful than diagcollection, with its ability to centralize and automate the collection of diagnostic information.  This helps speed up the data collection and upload process with Oracle Support, minimizing delays in data requests and analysis.
TFA provides the following key benefits: 
  - Encapsulates diagnostic data collection for all CRS/GI and RAC components on all cluster nodes into a single command executed from a single node 
  - Ability to "trim" diagnostic files during data collection to reduce data upload size 
  - Ability to isolate diagnostic data collection to a given time period 
  - Ability to centralize collected diagnostic output to a single server in the cluster 
  - Ability to isolate diagnostic collection to a particular product component, e.g. ASM, RDBMS, Clusterware 
  - Optional real-time scan of alert logs (DB alert logs, ASM alert logs, Clusterware alert logs, etc.) for conditions indicating a problem 
  - Optional automatic data collection based on real-time scan findings 
  - Optional on-demand scan (user initiated) of all log and trace files for conditions indicating a problem 
  - Optional automatic data collection based on on-demand scan findings 
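The per-node data below boils down to two conditions. A sketch (assuming the standard init-script location shown in the listings) combines them:

```shell
# Sketch: TFA counts as installed and running when its init script exists
# and a TFAMain java process is present (mirrors the ls/ps checks below).
tfa_running() {
  [ -x /etc/init.d/init.tfa ] && ps -ef | grep -w '[T]FAMain' > /dev/null
}

tfa_running && echo "TFA Collector is installed and running"
```

The `[T]FAMain` bracket trick keeps the grep command itself out of its own match, so no `grep -v grep` filter is needed.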
 
Links
Needs attention onnerv01, nerv08
Passed onnerv03, nerv04, nerv05, nerv02, nerv07, nerv06

Status on nerv01:
WARNING => TFA Collector is either not installed or not running


DATA FROM NERV01 - TFA COLLECTOR STATUS 




ls: cannot access /etc/init.d/init.tfa: No such file or directory 

ps -ef |grep -v grep|grep -w TFAMain returned no rows

Status on nerv03:
PASS => TFA Collector is installed and running


DATA FROM NERV03 - TFA COLLECTOR STATUS 




-rwxr-xr-x 1 root root 10436 Sep 20 16:48 /etc/init.d/init.tfa 

root      1449     1  0 Sep24 ?        00:02:17 /u01/app/11.2.0/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid/tfa/nerv03/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid/tfa/nerv03/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid/tfa/nerv03/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid/tfa/nerv03/tfa_home

Status on nerv04:
PASS => TFA Collector is installed and running


DATA FROM NERV04 - TFA COLLECTOR STATUS 




-rwxr-xr-x. 1 root root 10436 Sep 20 17:00 /etc/init.d/init.tfa 

root      1371     1  0 Sep24 ?        00:02:13 /u01/app/11.2.0/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid/tfa/nerv04/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid/tfa/nerv04/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid/tfa/nerv04/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid/tfa/nerv04/tfa_home

Status on nerv05:
PASS => TFA Collector is installed and running


DATA FROM NERV05 - TFA COLLECTOR STATUS 




-rwxr-xr-x 1 root root 10436 Sep 23 15:40 /etc/init.d/init.tfa 

root      1405     1  0 Sep24 ?        00:02:03 /u01/app/11.2.0/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid/tfa/nerv05/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid/tfa/nerv05/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid/tfa/nerv05/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid/tfa/nerv05/tfa_home

Status on nerv02:
PASS => TFA Collector is installed and running


DATA FROM NERV02 - TFA COLLECTOR STATUS 




-rwxr-xr-x 1 root root 10436 Sep 23 16:33 /etc/init.d/init.tfa 

root      1432     1  0 Sep24 ?        00:02:29 /u01/app/11.2.0/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid/tfa/nerv02/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid/tfa/nerv02/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid/tfa/nerv02/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid/tfa/nerv02/tfa_home

Status on nerv08:
WARNING => TFA Collector is either not installed or not running


DATA FROM NERV08 - TFA COLLECTOR STATUS 




ls: cannot access /etc/init.d/init.tfa: No such file or directory 

ps -ef |grep -v grep|grep -w TFAMain returned no rows

Status on nerv07:
PASS => TFA Collector is installed and running


DATA FROM NERV07 - TFA COLLECTOR STATUS 




-rwxr-xr-x 1 root root 10436 Sep 23 17:49 /etc/init.d/init.tfa 

root      1587     1  0 Sep24 ?        00:05:11 /u01/app/11.2.0/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid/tfa/nerv07/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid/tfa/nerv07/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid/tfa/nerv07/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid/tfa/nerv07/tfa_home

Status on nerv06:
PASS => TFA Collector is installed and running


DATA FROM NERV06 - TFA COLLECTOR STATUS 




-rwxr-xr-x 1 root root 10436 Sep 23 19:43 /etc/init.d/init.tfa 

root      1177     1  0 Sep24 ?        00:01:55 /u01/app/11.2.0/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid/tfa/nerv06/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid/tfa/nerv06/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid/tfa/nerv06/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid/tfa/nerv06/tfa_home
Top

ohasd Log File Ownership

Success FactorVERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics may not be available when needed by Support.  These logs are rotated periodically to keep them from growing unmanageably large, and if the ownership of the files is incorrect when it is time to rotate them, that operation could fail.  While that does not affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics.  So it would be wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
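To see only the offending files before running the chown commands above, a find-based sketch (the caller supplies the directory) can help:

```shell
# Sketch: list regular files under the given directory that are NOT owned
# by root:root, e.g. run against $GRID_HOME/log/$(hostname)/ohasd.
non_root_owned() {
  find "$1" -type f \( ! -user root -o ! -group root \) 2>/dev/null
}
```

Any path it prints is a candidate for `chown root:root`.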
 
Links
Needs attention onnerv03, nerv05
Passed onnerv01, nerv04, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM NERV01 - OHASD LOG FILE OWNERSHIP 



total 2680
-rw-r--r-- 1 root root 2739523 Sep 25 06:02 ohasd.log
-rw-r--r-- 1 root root     546 Sep 24 17:37 ohasdOUT.log

Status on nerv03:
WARNING => ohasd Log Ownership is NOT Correct (should be root root)


DATA FROM NERV03 - OHASD LOG FILE OWNERSHIP 



total 10260
-rw-r--r-- 1 oracle oinstall 10490551 Sep 25 06:05 ohasd.l01
-rw-r--r-- 1 root   root         3457 Sep 25 06:11 ohasd.log
-rw-r--r-- 1 oracle oinstall     2366 Sep 24 17:39 ohasdOUT.log

Status on nerv04:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM NERV04 - OHASD LOG FILE OWNERSHIP 



total 9488
-rw-r--r--. 1 root root 9706728 Sep 25 06:24 ohasd.log
-rw-r--r--. 1 root root    1820 Sep 24 17:39 ohasdOUT.log

Status on nerv05:
WARNING => ohasd Log Ownership is NOT Correct (should be root root)


DATA FROM NERV05 - OHASD LOG FILE OWNERSHIP 



total 7388
-rw-r--r-- 1 oracle oinstall 7554696 Sep 25 06:36 ohasd.log
-rw-r--r-- 1 oracle oinstall     910 Sep 24 17:38 ohasdOUT.log

Status on nerv02:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM NERV02 - OHASD LOG FILE OWNERSHIP 



total 2672
-rw-r--r-- 1 root root 2728721 Sep 25 06:51 ohasd.log
-rw-r--r-- 1 root root     546 Sep 24 17:35 ohasdOUT.log

Status on nerv08:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM NERV08 - OHASD LOG FILE OWNERSHIP 



total 8492
-rw-r--r-- 1 root root 8685653 Sep 25 07:03 ohasd.log
-rw-r--r-- 1 root root     182 Sep 23 17:01 ohasdOUT.log

Status on nerv07:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM NERV07 - OHASD LOG FILE OWNERSHIP 



total 6084
-rw-r--r-- 1 root root 6221637 Sep 25 07:16 ohasd.log
-rw-r--r-- 1 root root     364 Sep 24 17:29 ohasdOUT.log

Status on nerv06:
PASS => ohasd Log Ownership is Correct (root root)


DATA FROM NERV06 - OHASD LOG FILE OWNERSHIP 



total 5516
-rw-r--r-- 1 root root 5639729 Sep 25 07:26 ohasd.log
-rw-r--r-- 1 root root     364 Sep 24 17:38 ohasdOUT.log
Top

ohasd/orarootagent_root Log File Ownership

Success FactorVERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics may not be available when needed by Support.  These logs are rotated periodically to keep them from growing unmanageably large, and if the ownership of the files is incorrect when it is time to rotate them, that operation could fail.  While that does not affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics.  So it would be wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
 
Links
  • Oracle Bug # 9837321 - OWNERSHIP OF CRSD TRACES GOT CHANGE FROM ROOT TO ORACLE BY PATCHING SCRIPT
Needs attention onnerv03, nerv05
Passed onnerv01, nerv04, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV01 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 6620
-rw-r--r-- 1 root root 6770257 Sep 25 06:03 orarootagent_root.log
-rw-r--r-- 1 root root       5 Sep 24 17:37 orarootagent_root.pid
-rw-r--r-- 1 root root       0 Sep 23 15:00 orarootagent_rootOUT.log

Status on nerv03:
WARNING => ohasd/orarootagent_root Log Ownership is NOT Correct (should be root root)


DATA FROM NERV03 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 18732
-rw-r--r-- 1 oracle oinstall 10492515 Sep 23 01:19 orarootagent_root.l01
-rw-r--r-- 1 oracle oinstall  8672555 Sep 25 06:11 orarootagent_root.log
-rw-r--r-- 1 oracle oinstall        5 Sep 24 17:39 orarootagent_root.pid
-rw-r--r-- 1 oracle oinstall        0 Sep 20 16:53 orarootagent_rootOUT.log

Status on nerv04:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV04 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 18168
-rw-r--r--. 1 root root 10496588 Sep 23 07:45 orarootagent_root.l01
-rw-r--r--  1 root root  8090921 Sep 25 06:24 orarootagent_root.log
-rw-r--r--. 1 root root        5 Sep 24 17:39 orarootagent_root.pid
-rw-r--r--. 1 root root        0 Sep 20 17:04 orarootagent_rootOUT.log

Status on nerv05:
WARNING => ohasd/orarootagent_root Log Ownership is NOT Correct (should be root root)


DATA FROM NERV05 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 8452
-rw-r--r-- 1 oracle oinstall 8643480 Sep 25 06:37 orarootagent_root.log
-rw-r--r-- 1 oracle oinstall       5 Sep 24 17:38 orarootagent_root.pid
-rw-r--r-- 1 oracle oinstall       0 Sep 22 17:15 orarootagent_rootOUT.log

Status on nerv02:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV02 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 6532
-rw-r--r-- 1 root root 6679598 Sep 25 06:51 orarootagent_root.log
-rw-r--r-- 1 root root       5 Sep 24 17:35 orarootagent_root.pid
-rw-r--r-- 1 root root       0 Sep 23 16:38 orarootagent_rootOUT.log

Status on nerv08:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV08 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 6348
-rw-r--r-- 1 root root 6488563 Sep 25 07:03 orarootagent_root.log
-rw-r--r-- 1 root root       6 Sep 23 17:07 orarootagent_root.pid
-rw-r--r-- 1 root root       0 Sep 23 17:06 orarootagent_rootOUT.log

Status on nerv07:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV07 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 6148
-rw-r--r-- 1 root root 6284189 Sep 25 07:16 orarootagent_root.log
-rw-r--r-- 1 root root       5 Sep 24 17:29 orarootagent_root.pid
-rw-r--r-- 1 root root       0 Sep 23 17:59 orarootagent_rootOUT.log

Status on nerv06:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV06 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 6060
-rw-r--r-- 1 root root 6196342 Sep 25 07:27 orarootagent_root.log
-rw-r--r-- 1 root root       5 Sep 24 17:38 orarootagent_root.pid
-rw-r--r-- 1 root root       0 Sep 23 19:48 orarootagent_rootOUT.log
Top

crsd/orarootagent_root Log File Ownership

Success FactorVERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics may not be available when needed by Support.  These logs are rotated periodically to keep them from growing unmanageably large, and if the ownership of the files is incorrect when it is time to rotate them, that operation could fail.  While that does not affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics.  So it would be wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
 
Links
Needs attention onnerv03
Passed onnerv01, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV01 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 88616
-rw-r--r-- 1 root root 10508336 Sep 25 03:12 orarootagent_root.l01
-rw-r--r-- 1 root root 10507104 Sep 24 22:41 orarootagent_root.l02
-rw-r--r-- 1 root root 10488279 Sep 24 18:09 orarootagent_root.l03
-rw-r--r-- 1 root root 10507323 Sep 24 13:35 orarootagent_root.l04
-rw-r--r-- 1 root root 10506229 Sep 24 09:05 orarootagent_root.l05
-rw-r--r-- 1 root root 10506866 Sep 24 04:36 orarootagent_root.l06
-rw-r--r-- 1 root root 10507352 Sep 24 00:07 orarootagent_root.l07
-rw-r--r-- 1 root root 10503371 Sep 23 19:38 orarootagent_root.l08
-rw-r--r-- 1 root root  6650938 Sep 25 06:03 orarootagent_root.log
-rw-r--r-- 1 root root        5 Sep 24 17:39 orarootagent_root.pid
-rw-r--r-- 1 root root        0 Sep 23 15:30 orarootagent_rootOUT.log

Status on nerv03:
WARNING => crsd/orarootagent_root Log Ownership is NOT Correct (should be root root)


DATA FROM NERV03 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 80312
-rw-r--r-- 1 oracle oinstall 10518383 Sep 24 03:32 orarootagent_root.l01
-rw-r--r-- 1 oracle oinstall 10513697 Sep 23 12:15 orarootagent_root.l02
-rw-r--r-- 1 oracle oinstall 10485988 Sep 22 15:38 orarootagent_root.l03
-rw-r--r-- 1 oracle oinstall 10523211 Sep 22 09:35 orarootagent_root.l04
-rw-r--r-- 1 oracle oinstall 10525157 Sep 22 05:05 orarootagent_root.l05
-rw-r--r-- 1 oracle oinstall 10525394 Sep 22 00:34 orarootagent_root.l06
-rw-r--r-- 1 oracle oinstall 10505168 Sep 21 20:04 orarootagent_root.l07
-rw-r--r-- 1 root   root      8597279 Sep 25 06:11 orarootagent_root.log
-rw-r--r-- 1 oracle oinstall        5 Sep 24 17:40 orarootagent_root.pid
-rw-r--r-- 1 oracle oinstall        0 Sep 21 09:37 orarootagent_rootOUT.log

Status on nerv04:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV04 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 33116
-rw-r--r--  1 root root 10489924 Sep 24 23:14 orarootagent_root.l01
-rw-r--r--  1 root root 10502799 Sep 23 14:42 orarootagent_root.l02
-rw-r--r--. 1 root root 10488311 Sep 22 14:06 orarootagent_root.l03
-rw-r--r--  1 root root  2400018 Sep 25 06:24 orarootagent_root.log
-rw-r--r--. 1 root root        5 Sep 24 17:40 orarootagent_root.pid
-rw-r--r--  1 root root        0 Sep 21 09:38 orarootagent_rootOUT.log

Status on nerv05:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV05 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 48992
-rw-r--r-- 1 root root 10514496 Sep 24 05:18 orarootagent_root.l01
-rw-r--r-- 1 root root 10558537 Sep 23 08:55 orarootagent_root.l02
-rw-r--r-- 1 root root 10558709 Sep 23 03:46 orarootagent_root.l03
-rw-r--r-- 1 root root 10557962 Sep 22 22:36 orarootagent_root.l04
-rw-r--r-- 1 root root  7945617 Sep 25 06:36 orarootagent_root.log
-rw-r--r-- 1 root root        5 Sep 24 17:40 orarootagent_root.pid
-rw-r--r-- 1 root root        0 Sep 22 17:28 orarootagent_rootOUT.log

Status on nerv02:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV02 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 12296
-rw-r--r-- 1 root root 10492134 Sep 25 00:42 orarootagent_root.l01
-rw-r--r-- 1 root root  2088222 Sep 25 06:51 orarootagent_root.log
-rw-r--r-- 1 root root        5 Sep 24 17:40 orarootagent_root.pid
-rw-r--r-- 1 root root        0 Sep 23 19:22 orarootagent_rootOUT.log

Status on nerv08:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV08 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 11912
-rw-r--r-- 1 root root 10546665 Sep 25 01:52 orarootagent_root.l01
-rw-r--r-- 1 root root  1635337 Sep 25 07:03 orarootagent_root.log
-rw-r--r-- 1 root root        6 Sep 23 17:06 orarootagent_root.pid

Status on nerv07:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV07 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 11276
-rw-r--r-- 1 root root 10503476 Sep 25 03:56 orarootagent_root.l01
-rw-r--r-- 1 root root  1025821 Sep 25 07:16 orarootagent_root.log
-rw-r--r-- 1 root root        5 Sep 24 17:40 orarootagent_root.pid
-rw-r--r-- 1 root root        0 Sep 24 17:40 orarootagent_rootOUT.log

Status on nerv06:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM NERV06 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 10728
-rw-r--r-- 1 root root 10512292 Sep 25 05:58 orarootagent_root.l01
-rw-r--r-- 1 root root   458343 Sep 25 07:26 orarootagent_root.log
-rw-r--r-- 1 root root        5 Sep 24 17:40 orarootagent_root.pid
-rw-r--r-- 1 root root        0 Sep 24 17:40 orarootagent_rootOUT.log
Top


crsd Log File Ownership

Success Factor: VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 The CRSD trace file should be owned by "root:root", but due to Bug 9837321 the application of a patch may have changed the trace file ownership for patching without changing it back.
 
Links
  • Oracle Bug # 9837321 - Ownership of crsd traces gets changed from root by patching script
Needs attention on: nerv03
Passed on: nerv01, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM NERV01 - CRSD LOG FILE OWNERSHIP 



total 1904
-rw-r--r-- 1 root root 1939004 Sep 25 06:03 crsd.log
-rw-r--r-- 1 root root     345 Sep 24 17:38 crsdOUT.log

Status on nerv03:
WARNING => crsd Log Ownership is NOT Correct (should be root root)


DATA FROM NERV03 - CRSD LOG FILE OWNERSHIP 



total 10132
-rw-r--r-- 1 oracle oinstall 10364419 Sep 25 06:11 crsd.log
-rw-r--r-- 1 oracle oinstall     1840 Sep 24 17:39 crsdOUT.log

Status on nerv04:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM NERV04 - CRSD LOG FILE OWNERSHIP 



total 14424
-rw-r--r--. 1 root root 10511055 Sep 23 18:34 crsd.l01
-rw-r--r--  1 root root  4240421 Sep 25 06:24 crsd.log
-rw-r--r--. 1 root root     2180 Sep 24 17:39 crsdOUT.log

Status on nerv05:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM NERV05 - CRSD LOG FILE OWNERSHIP 



total 1684
-rw-r--r-- 1 root root 1716283 Sep 25 06:36 crsd.log
-rw-r--r-- 1 root root     575 Sep 24 17:38 crsdOUT.log

Status on nerv02:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM NERV02 - CRSD LOG FILE OWNERSHIP 



total 1000
-rw-r--r-- 1 root root 1018670 Sep 25 06:51 crsd.log
-rw-r--r-- 1 root root     826 Sep 24 17:39 crsdOUT.log

Status on nerv08:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM NERV08 - CRSD LOG FILE OWNERSHIP 



total 2804
-rw-r--r-- 1 root root 2862387 Sep 25 07:03 crsd.log
-rw-r--r-- 1 root root     115 Sep 23 17:06 crsdOUT.log

Status on nerv07:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM NERV07 - CRSD LOG FILE OWNERSHIP 



total 996
-rw-r--r-- 1 root root 1012712 Sep 25 07:16 crsd.log
-rw-r--r-- 1 root root     230 Sep 24 17:39 crsdOUT.log

Status on nerv06:
PASS => crsd Log Ownership is Correct (root root)


DATA FROM NERV06 - CRSD LOG FILE OWNERSHIP 



total 988
-rw-r--r-- 1 root root 1005680 Sep 25 07:26 crsd.log
-rw-r--r-- 1 root root     230 Sep 24 17:39 crsdOUT.log
Top


oradism executable ownership

Success Factor: VERIFY OWNERSHIP OF ORADISM EXECUTABLE IF LMS PROCESS NOT RUNNING IN REAL TIME
Recommendation
 Benefit / Impact:

The oradism executable is invoked after database startup to change the scheduling priority of LMS and other database background processes to the realtime scheduling class in order to maximize the ability of these key processes to be scheduled on the CPU in a timely way at times of high CPU utilization.

Risk:

The oradism executable should be owned by root with the owner's setuid bit (s-bit) set, e.g. -rwsr-x---, where the "s" is the setuid bit for root.  oradism must be owned by root with its s-bit set in order to change the scheduling priority.  If the LMS process is not running at the proper scheduling priority, it can lead to instance evictions due to IPC send timeouts or ORA-29740 errors.  If oradism is not owned by root or the owner's s-bit is not set, something went wrong in the installation process, or the ownership or permissions were changed afterward.

Action / Repair:

Please check with Oracle Support to determine the best course to take for your platform to correct the problem.
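As a supplement to the listings below, the ownership and setuid bit can be checked together with `stat`. The `oradism_ok` helper is an illustrative sketch, not an Oracle-provided check; it encodes the expected state shown on the passing nodes (`-rwsr-x--- root oinstall`) and assumes GNU `stat` as found on OEL/RHEL.

```shell
#!/bin/sh
# Sketch: succeed only if the file is owned by root AND has the owner setuid
# bit set (e.g. -rwsr-x---). oradism_ok is a hypothetical helper.
oradism_ok() {
    owner=$(stat -c '%U' "$1") || return 2
    perms=$(stat -c '%A' "$1")
    # character 4 of the mode string is the owner-execute slot: 's' when setuid is set
    [ "$owner" = "root" ] && [ "$(printf '%s' "$perms" | cut -c4)" = "s" ]
}

# Example (assumes ORACLE_HOME is set):
# oradism_ok "$ORACLE_HOME/bin/oradism" ||
#     echo "WARNING: oradism ownership/permission needs attention"
```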
 
Needs attention on: nerv03
Passed on: nerv01, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM NERV01 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x--- 1 root oinstall 71790 Sep 23 15:06 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv03:
WARNING => $ORACLE_HOME/bin/oradism ownership is NOT root


DATA FROM NERV03 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwxr-x--- 1 oracle oinstall 71790 Aug 24 10:51 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv04:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM NERV04 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x---. 1 root oinstall 71790 Sep 20 17:37 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv05:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM NERV05 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x--- 1 root oinstall 71790 Sep 23 16:03 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv02:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM NERV02 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x--- 1 root oinstall 71790 Sep 23 17:29 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv08:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM NERV08 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x--- 1 root oinstall 71790 Sep 23 17:12 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv07:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM NERV07 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x--- 1 root oinstall 71790 Sep 23 18:29 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv06:
PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM NERV06 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x--- 1 root oinstall 71790 Sep 23 19:56 /u01/app/oracle/product/11.2.0/db_1/bin/oradism
Top


oradism executable permission

Success Factor: VERIFY OWNERSHIP OF ORADISM EXECUTABLE IF LMS PROCESS NOT RUNNING IN REAL TIME
Recommendation
 Benefit / Impact:

The oradism executable is invoked after database startup to change the scheduling priority of LMS and other database background processes to the realtime scheduling class in order to maximize the ability of these key processes to be scheduled on the CPU in a timely way at times of high CPU utilization.

Risk:

The oradism executable should be owned by root with the owner's setuid bit (s-bit) set, e.g. -rwsr-x---, where the "s" is the setuid bit for root.  oradism must be owned by root with its s-bit set in order to change the scheduling priority.  If the LMS process is not running at the proper scheduling priority, it can lead to instance evictions due to IPC send timeouts or ORA-29740 errors.  If oradism is not owned by root or the owner's s-bit is not set, something went wrong in the installation process, or the ownership or permissions were changed afterward.

Action / Repair:

Please check with Oracle Support to determine the best course to take for your platform to correct the problem.
 
Needs attention on: nerv03
Passed on: nerv01, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM NERV01 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x--- 1 root oinstall 71790 Sep 23 15:06 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv03:
WARNING => $ORACLE_HOME/bin/oradism setuid bit is NOT set


DATA FROM NERV03 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwxr-x--- 1 oracle oinstall 71790 Aug 24 10:51 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv04:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM NERV04 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x---. 1 root oinstall 71790 Sep 20 17:37 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv05:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM NERV05 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x--- 1 root oinstall 71790 Sep 23 16:03 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv02:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM NERV02 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x--- 1 root oinstall 71790 Sep 23 17:29 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv08:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM NERV08 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x--- 1 root oinstall 71790 Sep 23 17:12 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv07:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM NERV07 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x--- 1 root oinstall 71790 Sep 23 18:29 /u01/app/oracle/product/11.2.0/db_1/bin/oradism

Status on nerv06:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM NERV06 - /U01/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x--- 1 root oinstall 71790 Sep 23 19:56 /u01/app/oracle/product/11.2.0/db_1/bin/oradism
Top


Verify no multiple parameter entries in database init.ora(spfile)

Recommendation
 
 
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV01 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC012.__db_cache_size=109051904
rac014.__db_cache_size=104857600
RAC016.__db_cache_size=88080384
RAC015.__db_cache_size=100663296
RAC013.__db_cache_size=71303168
RAC017.__db_cache_size=104857600
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data

Status on nerv03:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV03 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC013.__db_cache_size=71303168
RAC017.__db_cache_size=104857600
RAC012.__db_cache_size=104857600
RAC016.__db_cache_size=83886080
rac014.__db_cache_size=100663296
RAC015.__db_cache_size=92274688
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data

Status on nerv04:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV04 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC013.__db_cache_size=71303168
RAC017.__db_cache_size=104857600
RAC016.__db_cache_size=83886080
rac014.__db_cache_size=100663296
RAC015.__db_cache_size=88080384
RAC012.__db_cache_size=96468992
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data

Status on nerv05:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV05 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC013.__db_cache_size=71303168
RAC017.__db_cache_size=104857600
RAC016.__db_cache_size=83886080
rac014.__db_cache_size=100663296
RAC012.__db_cache_size=96468992
RAC015.__db_cache_size=96468992
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data

Status on nerv02:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV02 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC013.__db_cache_size=71303168
RAC017.__db_cache_size=104857600
rac014.__db_cache_size=100663296
RAC012.__db_cache_size=96468992
RAC016.__db_cache_size=79691776
RAC015.__db_cache_size=88080384
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data

Status on nerv08:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV08 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC016.__db_cache_size=79691776
rac014.__db_cache_size=113246208
RAC012.__db_cache_size=109051904
RAC015.__db_cache_size=100663296
RAC013.__db_cache_size=62914560
RAC017.__db_cache_size=96468992
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data

Status on nerv07:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV07 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC016.__db_cache_size=79691776
RAC012.__db_cache_size=109051904
RAC017.__db_cache_size=92274688
RAC013.__db_cache_size=58720256
RAC015.__db_cache_size=92274688
rac014.__db_cache_size=104857600
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data

Status on nerv06:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file


DATA FROM NERV06 - RAC01 DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



RAC018.__db_cache_size=100663296
RAC011.__db_cache_size=130023424
RAC012.__db_cache_size=109051904
RAC017.__db_cache_size=92274688
RAC013.__db_cache_size=58720256
rac014.__db_cache_size=100663296
RAC016.__db_cache_size=75497472
RAC015.__db_cache_size=88080384
RAC018.__java_pool_size=4194304
RAC013.__java_pool_size=4194304
RAC016.__java_pool_size=4194304
rac014.__java_pool_size=4194304
RAC015.__java_pool_size=4194304
RAC017.__java_pool_size=4194304
RAC011.__java_pool_size=4194304
RAC012.__java_pool_size=4194304
Click for more data
Top


Verify no multiple parameter entries in ASM init.ora(spfile)

Recommendation
 
 
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV01 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on nerv03:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV03 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on nerv04:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV04 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on nerv05:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV05 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on nerv02:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV02 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on nerv08:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV08 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on nerv07:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV07 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Status on nerv06:
PASS => There are no duplicate parameter entries in the ASM init.ora(spfile) file


DATA FROM NERV06 - VERIFY NO MULTIPLE PARAMETER ENTRIES IN ASM INIT.ORA(SPFILE) 



*.asm_diskstring='/dev/asm*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
Top


Verify control_file_record_keep_time value is in recommended range

Success Factor: ORACLE RECOVERY MANAGER(RMAN) BEST PRACTICES
Recommendation
 Benefit / Impact:

 When a Recovery Manager catalog is not used, the initialization parameter "control_file_record_keep_time" controls the period of time for which circular reuse records are maintained within the database control file. RMAN repository records are kept in circular reuse records.  The optimal setting is the maximum number of days in the past that is required to restore and recover a specific database without the use of an RMAN recovery catalog.  Setting this parameter within the recommended range (1 to 9 days) has been shown to address most recovery scenarios by ensuring archive log and backup records are not prematurely aged out, which would make database recovery much more challenging.

The impact of verifying that the initialization parameter control_file_record_keep_time is in the recommended range is minimal. Increasing this value will increase the size of the controlfile and possibly the query time for backup metadata and archive data.

Risk:

If control_file_record_keep_time is set to 0, no RMAN repository records are retained in the controlfile, which makes database recovery much more challenging if an RMAN recovery catalog is not available.

If the control_file_record_keep_time is set too high, problems can arise with space management within the control file, expansion of the control file, and control file contention issues.


Action / Repair:

To verify that control_file_record_keep_time is within the recommended range, as the owner userid of the Oracle home with the environment properly set for the target database, execute the following command set:

CF_RECORD_KEEP_TIME="";
CF_RECORD_KEEP_TIME=$(echo -e "set heading off feedback off\n select value from V\$PARAMETER where name = 'control_file_record_keep_time';" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba");
if [[ $CF_RECORD_KEEP_TIME -ge "1" && $CF_RECORD_KEEP_TIME -le "9" ]]
then echo -e "\nPASS:  control_file_record_keep_time is within recommended range [1-9]:" $CF_RECORD_KEEP_TIME;
elif [ $CF_RECORD_KEEP_TIME -eq "0" ]
then echo -e "\nFAIL:  control_file_record_keep_time is set to zero:" $CF_RECORD_KEEP_TIME;
else echo -e "\nWARNING:  control_file_record_keep_time is not within recommended range [1-9]:" $CF_RECORD_KEEP_TIME;
fi;

The expected output should be:

PASS:  control_file_record_keep_time is within recommended range [1-9]: 7

If the output is not as expected, investigate and correct the condition(s).

NOTE: The use of an RMAN recovery catalog is recommended as the best way to avoid the loss of RMAN metadata because of overwritten control file records.
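If the value needs correcting, it can be adjusted online. The sketch below is illustrative only — the value 7 simply matches the recommended range; choose a retention that matches your own recovery requirements:

```sql
-- Illustrative: set a 7-day retention in the spfile and all running RAC instances
ALTER SYSTEM SET control_file_record_keep_time = 7 SCOPE=BOTH SID='*';
```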
 
Links
Needs attention on: -
Passed on: nerv01

Status on nerv01:
PASS => control_file_record_keep_time is within recommended range [1-9] for RAC01


DATA FROM NERV01 - RAC01 DATABASE - VERIFY CONTROL_FILE_RECORD_KEEP_TIME VALUE IS IN RECOMMENDED RANGE 



control_file_record_keep_time = 7
Top


Verify rman controlfile autobackup is set to ON

Success Factor: ORACLE RECOVERY MANAGER(RMAN) BEST PRACTICES
Recommendation
 Benefit / Impact:

The control file is a binary file that records the physical structure of the database and contains important metadata required to recover the database. The database cannot start up or stay up unless all control files are valid. When a Recovery Manager catalog is not used, the control file is needed for database recovery because it contains all backup and recovery metadata.

The impact of verifying and setting "CONTROLFILE AUTOBACKUP" to "ON" is minimal. 

Risk:

When a Recovery Manager catalog is not used, loss of the controlfile results in loss of all backup and recovery metadata, which makes a database recovery operation much more challenging.

Action / Repair:

To verify that RMAN "CONTROLFILE AUTOBACKUP" is set to "ON", as the owner userid of the oracle home with the environment properly set for the target database, execute the following command set:

RMAN_AUTOBACKUP_STATUS="";
RMAN_AUTOBACKUP_STATUS=$(echo -e "set heading off feedback off\n select value from V\$RMAN_CONFIGURATION where name = 'CONTROLFILE AUTOBACKUP';" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba");
if [ -n "$RMAN_AUTOBACKUP_STATUS" ] && [ "$RMAN_AUTOBACKUP_STATUS" = "ON" ]
then echo -e "\nPASS:  RMAN \"CONTROLFILE AUTOBACKUP\" is set to \"ON\":" $RMAN_AUTOBACKUP_STATUS;
else
echo -e "\nFAIL:  RMAN \"CONTROLFILE AUTOBACKUP\" should be set to \"ON\":" $RMAN_AUTOBACKUP_STATUS;
fi;

The expected output should be:

PASS:  RMAN CONTROLFILE AUTOBACKUP is set to "ON": ON

If the output is not as expected, investigate and correct the condition(s).

For additional information, review information on CONFIGURE syntax in Oracle® Database Backup and Recovery Reference 11g Release 2 (11.2).

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

NOTE: Oracle MAA also recommends periodically backing up the controlfile to trace as an additional backup.

SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
 
Needs attention on: RAC01
Passed on: -

Status on RAC01:
WARNING => RMAN controlfile autobackup should be set to ON


DATA FOR RAC01 FOR VERIFY RMAN CONTROLFILE AUTOBACKUP IS SET TO ON 



Top


Verify the Fast Recovery Area (FRA) has reclaimable space

Success Factor: ORACLE RECOVERY MANAGER(RMAN) BEST PRACTICES
Recommendation
 Benefit / Impact:

Oracle's Fast Recovery Area (FRA) manages archivelog files, flashback logs, and RMAN backups. Before RMAN's space management can clean up files according to your configured retention and deletion policies, the database needs to be backed up periodically. Without these backups, the FRA can run out of available space, resulting in a database hang because the database cannot archive locally.

The impact of verifying that the Fast Recovery Area (FRA) has reclaimable space is minimal.

Risk:

If the Fast Recovery Area (FRA) space management function has no space available to reclaim, the database may hang because it cannot archive a log to the FRA.

Action / Repair:

To verify that the FRA space management function is not blocked, as the owner userid of the Oracle home with the environment properly set for the target database, execute the following command set:

PROBLEM_FILE_TYPES_PRESENT=$(echo -e "set heading off feedback off\n select count(*) from V\$FLASH_RECOVERY_AREA_USAGE where file_type in ('ARCHIVED LOG', 'BACKUP PIECE', 'IMAGE COPY') and number_of_files > 0 ;" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba");
RMAN_BACKUP_WITHIN_30_DAYS=$(echo -e "set heading off feedback off\n select count(*) from V\$BACKUP_SET where completion_time > sysdate-30;" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba");
if [ $PROBLEM_FILE_TYPES_PRESENT -eq "0" ]
then echo -e "\nThis check is not applicable because file types 'ARCHIVED LOG', 'BACKUP PIECE', or 'IMAGE COPY' are not present in V\$FLASH_RECOVERY_AREA_USAGE";
else if [[ $PROBLEM_FILE_TYPES_PRESENT -ge "1" && $RMAN_BACKUP_WITHIN_30_DAYS -ge "1" ]]
then echo -e "\nPASS:  FRA space management problem file types are present with an RMAN backup completion within the last 30 days."
else echo -e "\nFAIL:  FRA space management problem file types are present without an RMAN backup completion within the last 30 days."
fi;
fi;

The expected output should be:

PASS:  FRA space management problem file types are present with an RMAN backup completion within the last 30 days.

If the output is not as expected, investigate and correct the condition(s).
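To investigate the condition, the FRA can also be queried directly. This is a generic sketch against the standard 11.2 dynamic views, not raccheck output:

```sql
-- Overall FRA quota, usage, and reclaimable space
SELECT name, space_limit, space_used, space_reclaimable, number_of_files
  FROM v$recovery_file_dest;

-- Per-file-type breakdown, including percent reclaimable
SELECT file_type, percent_space_used, percent_space_reclaimable, number_of_files
  FROM v$flash_recovery_area_usage;
```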
 
Links
Needs attention on: RAC01
Passed on: -

Status on RAC01:
WARNING => Fast Recovery Area (FRA) should have sufficient reclaimable space


DATA FOR RAC01 FOR VERIFY THE FAST RECOVERY AREA (FRA) HAS RECLAIMABLE SPACE 




rman_backup_within_30_days = 0                                                  
Top


Registered diskgroups in clusterware registry

Recommendation
 Benefit / Impact:

Risk:

Action / Repair:
 
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV01 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA

Status on nerv03:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV03 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA

Status on nerv04:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV04 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA

Status on nerv05:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV05 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA

Status on nerv02:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV02 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA

Status on nerv08:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV08 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA

Status on nerv07:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV07 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA

Status on nerv06:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry


DATA FROM NERV06 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

DATA

Diskgroups from Clusterware resources:-

DATA
Top


rp_filter for bonded private interconnects

Recommendation
 With rp_filter set to 1 (strict reverse-path filtering), interconnect packets may be blocked or discarded.

To fix this problem, follow the MOS note referenced in the Links section below.
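As an illustrative sketch (not taken from the MOS note itself), the usual fix is to switch the private interconnect NICs to loose reverse-path filtering. The interface name eth1 below is an assumption based on the sysctl output in this section; confirm the actual private interface first (for example with `oifcfg getif`):

```shell
# Sketch: generate the sysctl fragment for the private interconnect NIC.
# Assumption: eth1 carries the private interconnect. rp_filter=2 (loose mode)
# is the usual choice for interconnect NICs; 0 disables filtering entirely.
nic=eth1
fragment="net.ipv4.conf.${nic}.rp_filter = 2"
echo "$fragment"
# As root, append this fragment to /etc/sysctl.conf and run `sysctl -p` to apply.
```

Public interfaces can keep strict filtering; only the interconnect NICs need the change.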
 
Links
Needs attention on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on -

Status on nerv01:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV01 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1

Status on nerv03:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV03 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1

Status on nerv04:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV04 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1

Status on nerv05:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV05 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1

Status on nerv02:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV02 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1

Status on nerv08:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV08 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1

Status on nerv07:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV07 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1

Status on nerv06:
WARNING => kernel parameter rp_filter is set to 1.


DATA FROM NERV06 - RP_FILTER FOR BONDED PRIVATE INTERCONNECTS 



net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1
Top


Check for parameter cvuqdisk|1.0.9|1|x86_64

Recommendation
 Install the operating system package cvuqdisk. Without cvuqdisk, the Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run the Cluster Verification Utility. Use the cvuqdisk rpm for your hardware (for example, x86_64 or i386).
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on nerv03:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on nerv04:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on nerv05:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on nerv02:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on nerv08:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on nerv07:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on nerv06:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64
Top


OLR Integrity

Recommendation
 Any kind of OLR corruption should be remedied before attempting an upgrade; otherwise the 11.2 GI rootupgrade.sh script fails with "Invalid OLR" during the upgrade.
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv08, nerv07, nerv06

Status on nerv01:
PASS => OLR Integrity check Succeeded


DATA FROM NERV01 FOR OLR INTEGRITY 



Status of Oracle Local Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2760
	 Available space (kbytes) :     259360
	 ID                       : 1652380693
	 Device/File Name         : /u01/app/11.2.0/grid/cdata/nerv01.olr
                                    Device/File integrity check succeeded

	 Local registry integrity check succeeded

	 Logical corruption check succeeded


Status on nerv03:
PASS => OLR Integrity check Succeeded


DATA FROM NERV03 FOR OLR INTEGRITY 



Status of Oracle Local Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2760
	 Available space (kbytes) :     259360
	 ID                       : 2099673501
	 Device/File Name         : /u01/app/11.2.0/grid/cdata/nerv03.olr
                                    Device/File integrity check succeeded

	 Local registry integrity check succeeded

	 Logical corruption check succeeded


Status on nerv08:
PASS => OLR Integrity check Succeeded


DATA FROM NERV08 FOR OLR INTEGRITY 



Status of Oracle Local Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2760
	 Available space (kbytes) :     259360
	 ID                       :  494538828
	 Device/File Name         : /u01/app/11.2.0/grid/cdata/nerv08.olr
                                    Device/File integrity check succeeded

	 Local registry integrity check succeeded

	 Logical corruption check succeeded


Status on nerv07:
PASS => OLR Integrity check Succeeded


DATA FROM NERV07 FOR OLR INTEGRITY 



Status of Oracle Local Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2760
	 Available space (kbytes) :     259360
	 ID                       : 1736539511
	 Device/File Name         : /u01/app/11.2.0/grid/cdata/nerv07.olr
                                    Device/File integrity check succeeded

	 Local registry integrity check succeeded

	 Logical corruption check succeeded


Status on nerv06:
PASS => OLR Integrity check Succeeded


DATA FROM NERV06 FOR OLR INTEGRITY 



Status of Oracle Local Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2760
	 Available space (kbytes) :     259360
	 ID                       :   47187088
	 Device/File Name         : /u01/app/11.2.0/grid/cdata/nerv06.olr
                                    Device/File integrity check succeeded

	 Local registry integrity check succeeded

	 Logical corruption check succeeded

Top


pam_limits check

Recommendation
 This is required for the shell limits to work properly and applies to 10g, 11g, and 12c.

Add the following line to the /etc/pam.d/login file, if it does not already exist:

session    required     pam_limits.so
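A quick way to confirm the configuration on each node is a grep over the PAM login file. The helper below is a sketch; note that the PASS results in this section actually satisfy the requirement via the `session include system-auth` chain, which pulls pam_limits in from system-auth rather than listing it directly.

```shell
# Sketch: report whether a PAM service file enables pam_limits, either
# directly or via the system-auth include (which normally contains it).
check_pam_limits() {
  grep -qE '^session[[:space:]]+(required|include)[[:space:]]+(pam_limits\.so|system-auth)' "$1" \
    && echo configured || echo missing
}
check_pam_limits /etc/pam.d/login
```

If the result is "missing", add the `session required pam_limits.so` line shown above.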

 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV01 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on nerv03:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV03 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on nerv04:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV04 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on nerv05:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV05 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on nerv02:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV02 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on nerv08:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV08 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on nerv07:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV07 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

Status on nerv06:
PASS => pam_limits configured properly for shell limits


DATA FROM NERV06 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so
Top


Verify vm.min_free_kbytes

Recommendation
  Benefit / Impact:

Maintaining vm.min_free_kbytes=524288 (512MB) helps a Linux system to reclaim memory faster and avoid LowMem pressure issues which can lead to node eviction or other outage or performance issues.

The impact of verifying vm.min_free_kbytes=524288 is minimal. Adjusting the parameter requires editing the /etc/sysctl.conf file and rebooting the system. It is possible, though not recommended (especially for a system already under LowMem pressure), to modify the setting interactively; even then, a reboot should be performed to confirm that the setting persists across reboots.

Risk:

Exposure to unexpected node eviction and reboot.

Action / Repair:

To verify that vm.min_free_kbytes is properly set to 524288, execute the following commands:

/sbin/sysctl -n vm.min_free_kbytes

cat /proc/sys/vm/min_free_kbytes

If the output is not as expected, investigate and correct the condition. For example, if the value in /etc/sysctl.conf is incorrect and the active value matches it, edit the /etc/sysctl.conf file to include the line "vm.min_free_kbytes = 524288" and reboot the node.
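The two checks above can be combined into a small comparison; this is a sketch against the 524288 KB recommendation, not part of raccheck itself:

```shell
# Sketch: compare the active vm.min_free_kbytes value with the recommendation.
check_min_free_kbytes() {   # arg: current value in KB
  if [ "${1:-0}" -eq 524288 ]; then
    echo "OK"
  else
    echo "MISMATCH (${1:-unknown}, expected 524288)"
  fi
}
check_min_free_kbytes "$(cat /proc/sys/vm/min_free_kbytes 2>/dev/null)"
```

On the nodes above this would report a mismatch (active values of 5380 to 8115 KB against the 524288 KB recommendation).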
 
Links
Needs attention on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on -

Status on nerv01:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV01 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 5380

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 5380

Status on nerv03:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV03 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 5380

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 5380

Status on nerv04:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV04 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 5380

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 5380

Status on nerv05:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV05 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 8115

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 8115

Status on nerv02:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV02 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 5380

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 5380

Status on nerv08:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV08 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 5681

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 5681

Status on nerv07:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV07 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 5681

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 5681

Status on nerv06:
WARNING => vm.min_free_kbytes should be set as recommended.


DATA FROM NERV06 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 8115

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 8115
Top


Verify data files are recoverable

Success FactorDATA CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 Benefit / Impact:

When you perform a DML or DDL operation using the NOLOGGING or UNRECOVERABLE clause, database backups made prior to the unrecoverable operation are invalidated and new backups are required. You can specify the SQL ALTER DATABASE or SQL ALTER TABLESPACE statement with the FORCE LOGGING clause to override the NOLOGGING setting; however, this statement will not repair a database that is already invalid.

Risk:

Changes under NOLOGGING will not be available after executing database recovery from a backup made prior to the unrecoverable change.

Action / Repair:

To verify that the data files are recoverable, execute the following SQL*Plus command as the user that owns the Oracle home for the database:
select file#, unrecoverable_time, unrecoverable_change# from v$datafile where unrecoverable_time is not null;
If there are any unrecoverable actions, the output will be similar to:
     FILE# UNRECOVER UNRECOVERABLE_CHANGE#
---------- --------- ---------------------
        11 14-JAN-13               8530544
If NOLOGGING changes have occurred and the data must be recoverable, back up the affected datafiles immediately. Consult the Backup and Recovery User's Guide for specific steps to resolve files that have unrecoverable changes.

The standard best practice is to enable FORCE LOGGING at the database level (ALTER DATABASE FORCE LOGGING;) to ensure that all transactions are recoverable. However, placing a database in force logging mode for ETL operations can lead to unnecessary database overhead. MAA best practices call for isolating data that does not need to be recoverable. Such data includes:

Data resulting from temporary loads
Data resulting from transient transformations
Any non-critical data

To reduce unnecessary redo generation, do the following:

Specify FORCE LOGGING for all tablespaces that you explicitly wish to protect (ALTER TABLESPACE <tablespace_name> FORCE LOGGING;).
Specify NO FORCE LOGGING for those tablespaces that do not need protection (ALTER TABLESPACE <tablespace_name> NO FORCE LOGGING;).
Disable force logging at the database level (ALTER DATABASE NO FORCE LOGGING;); otherwise the database-level setting overrides the tablespace settings.

Once the above is complete, redo logging will function as follows:

Explicit no logging operations on objects in the no logging tablespace will not generate the normal redo (a small amount of redo is always generated for no logging operations to signal that a no logging operation was performed).

All other operations on objects in the no logging tablespace will generate the normal redo.
Operations performed on objects in the force logging tablespaces always generate normal redo.
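The three steps can be collected into a reviewable script before running it in SQL*Plus. This is a sketch, with hypothetical tablespace names (users, etl_stage) standing in for your own:

```shell
# Sketch: emit the force-logging statements for review before running them
# in SQL*Plus. Tablespace names users/etl_stage are hypothetical placeholders.
gen_force_logging_sql() {
cat <<'SQL'
ALTER TABLESPACE users FORCE LOGGING;
ALTER TABLESPACE etl_stage NO FORCE LOGGING;
ALTER DATABASE NO FORCE LOGGING;
SQL
}
gen_force_logging_sql
```

Generating the script first makes it easy to review which tablespaces are protected before changing logging behavior on a production database.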

Note: Please seek Oracle Support assistance to mitigate this problem. Under their guidance, the following commands can help validate the database and identify corrupted blocks.

              oracle> dbv file=<data_file_returned_by_above_command> userid=sys/******
              RMAN> validate check logical database;
              SQL> select COUNT(*) from v$database_block_corruption;

 
Links
Needs attention on RAC01
Passed on -

Status on RAC01:
FAIL => The data files should be recoverable


DATA FOR RAC01 FOR VERIFY DATA FILES ARE RECOVERABLE 




         5 22-SEP-13               1386702                                      
         8 24-SEP-13               3542463                                      
Top


Check for parameter unixODBC-devel|2.2.14|11.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on -

Status on nerv01:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on nerv03:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on nerv04:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on nerv05:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on nerv02:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on nerv08:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on nerv07:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed


Status on nerv06:
FAIL => Package unixODBC-devel-2.2.14-11.el6-x86_64 is recommended but NOT installed

Top


OCR and Voting file location

Recommendation
 Starting with Oracle 11gR2, the recommendation is to store the OCR and voting disks in Oracle ASM. With an appropriate redundancy level (HIGH or NORMAL) for the ASM disk group being used, Oracle creates the required number of voting disks as part of the installation.
 
Links
Needs attention on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on -

Status on nerv01:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV01 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data

Status on nerv03:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV03 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data

Status on nerv04:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV04 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data

Status on nerv05:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV05 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data

Status on nerv02:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV02 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data

Status on nerv08:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV08 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data

Status on nerv07:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV07 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data

Status on nerv06:
WARNING => OCR and Voting disks are not stored in ASM


DATA FROM NERV06 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       4344
	 Available space (kbytes) :     257776
	 ID                       : 1708227132
	 Device/File Name         : /u01/shared_config/rac02/ocr
                                    Device/File integrity check succeeded
	 Device/File Name         : /u01/shared_config/rac02/backup_ocr/ocr_bkp
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Click for more data
Top


Parallel Execution Health-Checks and Diagnostics Reports

Recommendation
 This audit check captures information related to Oracle Parallel Query (PQ), DOP, PQ/PX statistics, database resource plans, consumer groups, etc. It is primarily for Oracle Support consumption; however, customers may also review it to identify and troubleshoot related problems.
For every database, there will be a zip file of the format <pxhcdr_DBNAME_HOSTNAME_DBVERSION_DATE_TIMESTAMP.zip> in the raccheck output directory.
 
Needs attention on nerv01
Passed on -
Top


Hardware clock synchronization

Recommendation
 The /etc/init.d/halt file is called when the system is rebooted or halted. This file must contain instructions to synchronize the system time to the hardware clock.

It should contain a command like:

[ -x /sbin/hwclock ] && action $"Syncing hardware clock to system time" /sbin/hwclock $CLOCKFLAGS
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV01 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on nerv03:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV03 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on nerv04:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV04 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on nerv05:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV05 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on nerv02:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV02 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on nerv08:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV08 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on nerv07:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV07 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on nerv06:
PASS => System clock is synchronized to hardware clock at system shutdown


DATA FROM NERV06 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc
Top

Clusterware resource status

Recommendation
 Resources in an UNKNOWN state often result in issues when upgrading. It is recommended to correct any resources in an UNKNOWN state prior to upgrading.
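One way to spot such resources, sketched here under the assumption of an 11.2 Grid Infrastructure home at the path shown (adjust GRID_HOME for your environment):

```shell
# Sketch, not a definitive procedure: report clusterware resources whose
# STATE is UNKNOWN. The default GRID_HOME path is an assumption.
"${GRID_HOME:-/u01/app/11.2.0/grid}/bin/crsctl" status resource -t 2>/dev/null |
  grep -B2 'UNKNOWN' || echo "no resources in UNKNOWN state reported"
```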

 
Links
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => No clusterware resources are in an unknown state


DATA FROM NERV01 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv01] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data

Status on nerv03:
PASS => No clusterware resources are in an unknown state


DATA FROM NERV03 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv03] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data

Status on nerv04:
PASS => No clusterware resources are in an unknown state


DATA FROM NERV04 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv04] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data

Status on nerv05:
PASS => No clusterware resources are in an unknown state


DATA FROM NERV05 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv05] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data

Status on nerv02:
PASS => No clusterware resources are in an unknown state


DATA FROM NERV02 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv02] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data

Status on nerv08:
PASS => No clusterware resources are in an unknown state


DATA FROM NERV08 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv08] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data

Status on nerv07:
PASS => No clusterware resources are in an unknown state


DATA FROM NERV07 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv07] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data

Status on nerv06:
PASS => No clusterware resources are in an unknown state


DATA FROM NERV06 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [nerv06] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
Click for more data
Top

ORA-15196 errors in ASM alert log

Recommendation
 An ORA-15196 error means that ASM encountered an invalid metadata block. See the trace file referenced next to the ORA-15196 error in the ASM alert log for more information. If this is an old error it can be ignored; otherwise, open a service request with Oracle Support to find the cause and fix it.
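The alert log can be scanned directly; the path and instance name below are hedged examples assuming a default 11.2 diagnostic_dest and an ASM instance named +ASM1:

```shell
# Hypothetical scan (path and instance name are assumptions): count
# ORA-15196 occurrences in the ASM alert log.
ALERT_LOG=/u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
if [ -r "$ALERT_LOG" ]; then
  grep -c 'ORA-15196' "$ALERT_LOG"   # prints the number of matching lines
else
  echo "alert log not found: $ALERT_LOG"
fi
```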


 
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV01 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on nerv03:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV03 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on nerv04:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV04 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on nerv05:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV05 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on nerv02:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV02 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on nerv08:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV08 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on nerv07:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV07 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on nerv06:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)


DATA FROM NERV06 - ORA-15196 ERRORS IN ASM ALERT LOG 



Top

Disks without Disk Group

Recommendation
 The GROUP_NUMBER and DISK_NUMBER columns in GV$ASM_DISK are only valid if the disk is part of a disk group currently mounted by the instance. Otherwise, GROUP_NUMBER will be 0, and DISK_NUMBER will be a unique value with respect to the other disks that also have a group number of 0. Run the following query to find the disks that are not part of any disk group:

select name, path, header_status, group_number from gv$asm_disk where group_number = 0;
 
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => No disks found which are not part of any disk group


DATA FROM NERV01 - DISKS WITHOUT DISK GROUP 




no rows selected


Status on nerv03:
PASS => No disks found which are not part of any disk group


DATA FROM NERV03 - DISKS WITHOUT DISK GROUP 




no rows selected


Status on nerv04:
PASS => No disks found which are not part of any disk group


DATA FROM NERV04 - DISKS WITHOUT DISK GROUP 




no rows selected


Status on nerv05:
PASS => No disks found which are not part of any disk group


DATA FROM NERV05 - DISKS WITHOUT DISK GROUP 




no rows selected


Status on nerv02:
PASS => No disks found which are not part of any disk group


DATA FROM NERV02 - DISKS WITHOUT DISK GROUP 




no rows selected


Status on nerv08:
PASS => No disks found which are not part of any disk group


DATA FROM NERV08 - DISKS WITHOUT DISK GROUP 




no rows selected


Status on nerv07:
PASS => No disks found which are not part of any disk group


DATA FROM NERV07 - DISKS WITHOUT DISK GROUP 




no rows selected


Status on nerv06:
PASS => No disks found which are not part of any disk group


DATA FROM NERV06 - DISKS WITHOUT DISK GROUP 




no rows selected

Top

Redo log file write time latency

Recommendation
 When the latency hits 500ms, a warning message is written to the LGWR trace file(s). For example:

Warning: log write elapsed time 564ms, size 2KB

Even though this threshold is very high and latencies below this range can also impact application performance, it is still worth capturing and reporting these warnings for necessary action. The performance impact of LGWR latencies includes commit delays, Broadcast-on-Commit delays, etc.
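These warnings can be summarized across the LGWR trace files. In this sketch the trace-file glob is an assumption for a default 11.2 diagnostic_dest layout and should be adjusted for the environment:

```shell
# Sketch: count LGWR latency warnings and report the worst elapsed time.
# The trace file glob is an assumption for a default diagnostic_dest.
grep -h 'log write elapsed time' \
  /u01/app/oracle/diag/rdbms/*/*/trace/*lgwr*.trc 2>/dev/null |
awk '{ ms = $6; sub(/ms,/, "", ms); if (ms+0 > max) max = ms+0; n++ }
     END { printf "warnings=%d max_elapsed_ms=%d\n", n+0, max+0 }'
```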
 
Links
Needs attention on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on: -

Status on nerv01:
WARNING => Redo log write time is more than 500 milliseconds


DATA FROM NERV01 - RAC01 DATABASE - REDO LOG FILE WRITE TIME LATENCY 



Warning: log write elapsed time 516ms, size 2KB
Warning: log write elapsed time 688ms, size 10KB
Warning: log write elapsed time 815ms, size 1KB
Warning: log write elapsed time 899ms, size 0KB
Warning: log write elapsed time 517ms, size 0KB
Warning: log write elapsed time 1267ms, size 0KB
Warning: log write elapsed time 507ms, size 10KB
Warning: log write elapsed time 661ms, size 1KB
Warning: log write elapsed time 682ms, size 0KB
Warning: log write elapsed time 527ms, size 0KB
Warning: log write elapsed time 1388ms, size 4KB
Warning: log write elapsed time 682ms, size 0KB
Warning: log write elapsed time 535ms, size 18KB
Warning: log write elapsed time 643ms, size 104KB
Warning: log write elapsed time 706ms, size 0KB
Warning: log write elapsed time 677ms, size 0KB
Click for more data

Status on nerv03:
WARNING => Redo log write time is more than 500 milliseconds


DATA FROM NERV03 - RAC01 DATABASE - REDO LOG FILE WRITE TIME LATENCY 



Warning: log write elapsed time 686ms, size 1KB
Warning: log write elapsed time 676ms, size 2KB
Warning: log write elapsed time 520ms, size 1KB
Warning: log write elapsed time 583ms, size 7KB
Warning: log write elapsed time 689ms, size 5KB
Warning: log write elapsed time 1354ms, size 1KB
Warning: log write elapsed time 554ms, size 13KB
Warning: log write elapsed time 1086ms, size 2KB
Warning: log write elapsed time 1009ms, size 1KB
Warning: log write elapsed time 1254ms, size 1KB
Warning: log write elapsed time 1154ms, size 1KB
Warning: log write elapsed time 856ms, size 1KB
Warning: log write elapsed time 2516ms, size 1KB
Warning: log write elapsed time 1328ms, size 1KB
Warning: log write elapsed time 590ms, size 0KB
Warning: log write elapsed time 747ms, size 0KB
Click for more data

Status on nerv04:
WARNING => Redo log write time is more than 500 milliseconds


DATA FROM NERV04 - RAC01 DATABASE - REDO LOG FILE WRITE TIME LATENCY 



Warning: log write elapsed time 600ms, size 0KB
Warning: log write elapsed time 899ms, size 0KB
Warning: log write elapsed time 531ms, size 3KB
Warning: log write elapsed time 547ms, size 97KB
Warning: log write elapsed time 546ms, size 1KB
Warning: log write elapsed time 836ms, size 145KB
Warning: log write elapsed time 588ms, size 0KB
Warning: log write elapsed time 852ms, size 0KB
Warning: log write elapsed time 709ms, size 0KB
Warning: log write elapsed time 545ms, size 0KB
Warning: log write elapsed time 534ms, size 0KB
Warning: log write elapsed time 536ms, size 1KB
Warning: log write elapsed time 510ms, size 0KB
Warning: log write elapsed time 1014ms, size 0KB
Warning: log write elapsed time 591ms, size 0KB
Warning: log write elapsed time 612ms, size 0KB
Click for more data

Status on nerv05:
WARNING => Redo log write time is more than 500 milliseconds


DATA FROM NERV05 - RAC01 DATABASE - REDO LOG FILE WRITE TIME LATENCY 



Warning: log write elapsed time 571ms, size 10KB
Warning: log write elapsed time 633ms, size 1KB
Warning: log write elapsed time 591ms, size 1KB
Warning: log write elapsed time 647ms, size 1KB
Warning: log write elapsed time 529ms, size 40KB
Warning: log write elapsed time 672ms, size 54KB
Warning: log write elapsed time 962ms, size 28KB
Warning: log write elapsed time 767ms, size 2KB
Warning: log write elapsed time 967ms, size 13KB
Warning: log write elapsed time 712ms, size 85KB
Warning: log write elapsed time 593ms, size 10KB
Warning: log write elapsed time 785ms, size 2KB
Warning: log write elapsed time 1427ms, size 92KB
Warning: log write elapsed time 1890ms, size 2KB
Warning: log write elapsed time 728ms, size 2KB
Warning: log write elapsed time 1044ms, size 2KB
Click for more data

Status on nerv02:
WARNING => Redo log write time is more than 500 milliseconds


DATA FROM NERV02 - RAC01 DATABASE - REDO LOG FILE WRITE TIME LATENCY 



Warning: log write elapsed time 1111ms, size 0KB
Warning: log write elapsed time 652ms, size 1KB
Warning: log write elapsed time 631ms, size 1KB
Warning: log write elapsed time 660ms, size 1KB
Warning: log write elapsed time 539ms, size 0KB
Warning: log write elapsed time 509ms, size 0KB
Warning: log write elapsed time 721ms, size 3KB
Warning: log write elapsed time 729ms, size 1KB
Warning: log write elapsed time 3377ms, size 0KB
Warning: log write elapsed time 1955ms, size 0KB
Warning: log write elapsed time 3681ms, size 0KB
Warning: log write elapsed time 703ms, size 0KB
Warning: log write elapsed time 2478ms, size 0KB
Warning: log write elapsed time 2033ms, size 0KB
Warning: log write elapsed time 2003ms, size 0KB
Warning: log write elapsed time 2779ms, size 0KB
Click for more data

Status on nerv08:
WARNING => Redo log write time is more than 500 milliseconds


DATA FROM NERV08 - RAC01 DATABASE - REDO LOG FILE WRITE TIME LATENCY 



Warning: log write elapsed time 535ms, size 0KB
Warning: log write elapsed time 1493ms, size 2KB
Warning: log write elapsed time 1008ms, size 10KB
Warning: log write elapsed time 530ms, size 0KB
Warning: log write elapsed time 737ms, size 0KB
Warning: log write elapsed time 852ms, size 1KB
Warning: log write elapsed time 1060ms, size 0KB
Warning: log write elapsed time 1229ms, size 110KB
Warning: log write elapsed time 708ms, size 2KB
Warning: log write elapsed time 524ms, size 4KB
Warning: log write elapsed time 869ms, size 4KB
Warning: log write elapsed time 566ms, size 3KB
Warning: log write elapsed time 685ms, size 10KB
Warning: log write elapsed time 544ms, size 0KB
Warning: log write elapsed time 825ms, size 0KB
Warning: log write elapsed time 900ms, size 1KB
Click for more data

Status on nerv07:
WARNING => Redo log write time is more than 500 milliseconds


DATA FROM NERV07 - RAC01 DATABASE - REDO LOG FILE WRITE TIME LATENCY 



Warning: log write elapsed time 547ms, size 0KB
Warning: log write elapsed time 3720ms, size 0KB
Warning: log write elapsed time 502ms, size 10KB
Warning: log write elapsed time 531ms, size 0KB
Warning: log write elapsed time 1043ms, size 1KB
Warning: log write elapsed time 908ms, size 4KB
Warning: log write elapsed time 1679ms, size 1KB
Warning: log write elapsed time 982ms, size 2KB
Warning: log write elapsed time 1560ms, size 1KB
Warning: log write elapsed time 1022ms, size 20KB
Warning: log write elapsed time 1054ms, size 0KB
Warning: log write elapsed time 628ms, size 1KB
Warning: log write elapsed time 575ms, size 0KB
Warning: log write elapsed time 1552ms, size 26KB
Warning: log write elapsed time 526ms, size 1KB
Warning: log write elapsed time 775ms, size 1KB
Click for more data

Status on nerv06:
WARNING => Redo log write time is more than 500 milliseconds


DATA FROM NERV06 - RAC01 DATABASE - REDO LOG FILE WRITE TIME LATENCY 



Warning: log write elapsed time 1196ms, size 0KB
Warning: log write elapsed time 514ms, size 0KB
Warning: log write elapsed time 617ms, size 123KB
Warning: log write elapsed time 733ms, size 1KB
Warning: log write elapsed time 676ms, size 132KB
Warning: log write elapsed time 594ms, size 1KB
Warning: log write elapsed time 573ms, size 0KB
Warning: log write elapsed time 839ms, size 0KB
Warning: log write elapsed time 513ms, size 2KB
Warning: log write elapsed time 523ms, size 1KB
Warning: log write elapsed time 994ms, size 2KB
Warning: log write elapsed time 815ms, size 2KB
Warning: log write elapsed time 661ms, size 28KB
Warning: log write elapsed time 519ms, size 3KB
Warning: log write elapsed time 594ms, size 0KB
Warning: log write elapsed time 680ms, size 31KB
Click for more data
Top

Broadcast Requirements for Networks

Success Factor: USE SEPARATE SUBNETS FOR INTERFACES CONFIGURED FOR REDUNDANT INTERCONNECT (HAIP)
Recommendation
 All public and private interconnect network cards should be able to arping all remote nodes in the cluster.

For example, using the public network card, arping a remote node with the following command; the output should include "Received 1 response(s)":

/sbin/arping -b -f -c 1 -w 1 -I eth1 nodename2

Here eth1 is the public network interface and nodename2 is the second node in the cluster.
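The single command can be extended into a loop over all remote nodes and interfaces. In this sketch the node and interface names are illustrative assumptions; run it as root on each node, since arping requires raw sockets:

```shell
# Sketch with assumed names: check arping reachability from this node to
# each remote node over every cluster interface. Run as root.
for node in nerv02 nerv03 nerv04; do   # remote nodes (assumption)
  for ifc in eth0 eth1; do             # public/private NICs (assumption)
    if /sbin/arping -b -f -c 1 -w 1 -I "$ifc" "$node" >/dev/null 2>&1; then
      echo "OK:   $ifc -> $node"
    else
      echo "FAIL: $ifc -> $node"
    fi
  done
done
```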

 
Links
Needs attention on: -
Passed on: nerv01, nerv03, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Grid infrastructure network broadcast requirements are met


DATA FROM NERV01 FOR BROADCAST REQUIREMENTS FOR NETWORKS 



ARPING 192.168.0.103 from 192.168.0.101 eth0
Unicast reply from 192.168.0.103 [10:78:D2:B9:27:E0]  0.637ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.103 from 192.168.3.101 eth1
Unicast reply from 192.168.0.103 [00:26:5A:70:F3:FD]  0.751ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.104 from 192.168.0.101 eth0
Unicast reply from 192.168.0.104 [10:78:D2:B9:29:54]  0.618ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.104 from 192.168.3.101 eth1
Unicast reply from 192.168.0.104 [1C:AF:F7:0D:73:B5]  0.717ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
Click for more data

Status on nerv03:
PASS => Grid infrastructure network broadcast requirements are met


DATA FROM NERV03 FOR BROADCAST REQUIREMENTS FOR NETWORKS 



ARPING 192.168.0.104 from 192.168.0.103 eth0
Unicast reply from 192.168.0.104 [10:78:D2:B9:29:54]  0.689ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.104 from 192.168.3.103 eth1
Unicast reply from 192.168.0.104 [1C:AF:F7:0D:73:B5]  0.701ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.105 from 192.168.0.103 eth0
Unicast reply from 192.168.0.105 [00:25:11:DC:9F:62]  0.607ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.105 from 192.168.3.103 eth1
Unicast reply from 192.168.0.105 [D8:5D:4C:80:25:E7]  0.659ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
Click for more data

Status on nerv08:
PASS => Grid infrastructure network broadcast requirements are met


DATA FROM NERV08 FOR BROADCAST REQUIREMENTS FOR NETWORKS 



ARPING 192.168.0.103 from 192.168.0.108 eth0
Unicast reply from 192.168.0.103 [10:78:D2:B9:27:E0]  0.794ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.103 from 192.168.3.108 eth1
Unicast reply from 192.168.0.103 [00:26:5A:70:F3:FD]  1.132ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.104 from 192.168.0.108 eth0
Unicast reply from 192.168.0.104 [10:78:D2:B9:29:54]  0.844ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.104 from 192.168.3.108 eth1
Unicast reply from 192.168.0.104 [1C:AF:F7:0D:73:B5]  1.841ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
Click for more data

Status on nerv07:
PASS => Grid infrastructure network broadcast requirements are met


DATA FROM NERV07 FOR BROADCAST REQUIREMENTS FOR NETWORKS 



ARPING 192.168.0.103 from 192.168.0.107 eth0
Unicast reply from 192.168.0.103 [10:78:D2:B9:27:E0]  5.884ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.103 from 192.168.3.107 eth1
Unicast reply from 192.168.0.103 [00:26:5A:70:F3:FD]  1.004ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.104 from 192.168.0.107 eth0
Unicast reply from 192.168.0.104 [10:78:D2:B9:29:54]  0.901ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.104 from 192.168.3.107 eth1
Unicast reply from 192.168.0.104 [1C:AF:F7:0D:73:B5]  1.237ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
Click for more data

Status on nerv06:
PASS => Grid infrastructure network broadcast requirements are met


DATA FROM NERV06 FOR BROADCAST REQUIREMENTS FOR NETWORKS 



ARPING 192.168.0.103 from 192.168.0.106 eth0
Unicast reply from 192.168.0.103 [10:78:D2:B9:27:E0]  0.964ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.103 from 192.168.3.106 eth1
Unicast reply from 192.168.0.103 [00:26:5A:70:F3:FD]  0.837ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.104 from 192.168.0.106 eth0
Unicast reply from 192.168.0.104 [10:78:D2:B9:29:54]  0.964ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.104 from 192.168.3.106 eth1
Unicast reply from 192.168.0.104 [1C:AF:F7:0D:73:B5]  0.764ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
Click for more data
Top

Primary database protection with Data Guard

Success Factor: DATABASE/CLUSTER/SITE FAILURE PREVENTION BEST PRACTICES
Recommendation
 Oracle 11g and higher Active Data Guard is the real-time data protection and availability solution that eliminates single point of failure by maintaining one or more synchronized physical replicas of the production database. If an unplanned outage of any kind impacts the production database, applications and users can quickly failover to a synchronized standby, minimizing downtime and preventing data loss. An Active Data Guard standby can be used to offload read-only applications, ad-hoc queries, and backups from the primary database or be dual-purposed as a test system at the same time it provides disaster protection. An Active Data Guard standby can also be used to minimize downtime for planned maintenance when upgrading to new Oracle Database patch sets and releases and for select migrations.  
 
For zero data loss protection and fastest recovery time, deploy a local Data Guard standby database with Data Guard Fast-Start Failover and integrated client failover. For protection against outages impacting both the primary and the local standby or the entire data center, or a broad geography, deploy a second Data Guard standby database at a remote location.

Key HA Benefits:

With Oracle 11g release 2 and higher Active Data Guard and real time apply, data block corruptions can be repaired automatically and downtime can be reduced from hours and days of application impact to zero downtime with zero data loss.

With MAA best practices, Data Guard Fast-Start Failover (typically to a local standby) and integrated client failover, downtime from database, cluster and site failures can be reduced from hours or days to seconds or minutes.

With remote standby database (Disaster Recovery Site), you have protection from complete site failures.

In all cases, the Active Data Guard instances can be active and used for other activities.

Data Guard can reduce risks and downtime for planned maintenance activities by using Database rolling upgrade with transient logical standby, standby-first patch apply and database migrations.

Active Data Guard provides optimal data protection by using physical replication and comprehensive Oracle validation to maintain an exact byte-for-byte copy of the primary database that can be open read-only to offload reporting, ad-hoc queries and backups. For other advanced replication requirements, where read-write access to a replica database is required while it is being synchronized with the primary database, see Oracle GoldenGate logical replication. Oracle GoldenGate can be used to support heterogeneous database platforms and database releases, to provide an effective read-write full or subset logical replica, and to reduce or eliminate downtime for application, database or system changes. The main trade-off of Oracle GoldenGate's flexible logical replication solution is the additional administration required of application developers and database administrators.
 
Links
Needs attention on: RAC01
Passed on: -

Status on RAC01:
FAIL => Primary database is NOT protected with Data Guard (standby database) for real-time data protection and availability


DATA FOR RAC01 FOR PRIMARY DATABASE PROTECTION WITH DATA GUARD 



Top

Locally managed tablespaces

Success Factor: DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 In order to reduce contention on the data dictionary and rollback data, and to reduce the amount of generated redo, locally managed tablespaces should be used rather than dictionary managed tablespaces. Please refer to the notes referenced below for more information about the benefits of locally managed tablespaces and how to migrate a tablespace from dictionary managed to locally managed.
 
Links
Needs attention on: -
Passed on: RAC01

Status on RAC01:
PASS => All tablespaces are locally managed tablespace


DATA FOR RAC01 FOR LOCALLY MANAGED TABLESPACES 




dictionary_managed_tablespace_count = 0                                         
Top

Automatic segment storage management

Success Factor: DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 Starting with Oracle 9i, Automatic Segment Space Management (ASSM) can be used by specifying the SEGMENT SPACE MANAGEMENT clause, set to AUTO, in the CREATE TABLESPACE statement. Implementing the ASSM feature allows Oracle to use bitmaps to manage the free space within segments. The bitmap describes the status of each data block within a segment with respect to the amount of space available in the block for inserting rows. Because the current status of the space available in a data block is reflected in the bitmap, Oracle can manage free space automatically. ASSM tablespaces automate freelist management and remove the requirement/ability to specify PCTUSED, FREELISTS, and FREELIST GROUPS storage parameters for individual tables and indexes created in these tablespaces.
 
Links
Needs attention on -
Passed on RAC01

Status on RAC01:
PASS => All tablespaces are using Automatic segment storage management


DATA FOR RAC01 FOR AUTOMATIC SEGMENT STORAGE MANAGEMENT 




Query returned no rows which is expected when the SQL check passes.

Top

Default Temporary Tablespace

Success Factor: DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 It is recommended to set a default temporary tablespace at the database level to achieve optimal performance for queries that require sorting data.
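A sketch of how to verify and set the database-level default (TEMP is the tablespace name reported for this database below):

```sql
-- Show the current database-level default temporary tablespace.
SELECT property_value
  FROM database_properties
 WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';

-- Set it explicitly.
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
```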
 
Links
Needs attention on -
Passed on RAC01

Status on RAC01:
PASS => Default temporary tablespace is set


DATA FOR RAC01 FOR DEFAULT TEMPORARY TABLESPACE 




DEFAULT_TEMP_TABLESPACE                                                         
TEMP                                                                            
                                                                                
Top

Archivelog Mode

Success Factor: DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 Running the database in ARCHIVELOG mode and using database FORCE LOGGING mode are prerequisites for database recovery operations. The ARCHIVELOG mode enables online database backup and is necessary to recover the database to a point in time later than what has been restored. Features such as Oracle Data Guard and Flashback Database require that the production database run in ARCHIVELOG mode.
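The mode can be checked and enabled as follows; a minimal sketch (enabling requires a restart to the MOUNT state, and on RAC all instances must be down first):

```sql
-- Check the current mode.
SELECT log_mode FROM v$database;

-- Enable ARCHIVELOG mode (SQL*Plus, connected AS SYSDBA):
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```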
 
Links
Needs attention on -
Passed on RAC01

Status on RAC01:
PASS => Database Archivelog Mode is set to ARCHIVELOG


DATA FOR RAC01 FOR ARCHIVELOG MODE 




Archivelog Mode = ARCHIVELOG                                                    
Top

Check for parameter libgcc|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
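A package check of this kind can be reproduced by hand with rpm, which also explains the name|version|release|arch strings shown below; a sketch (output depends on the host):

```shell
# Print the installed libgcc package in the same pipe-delimited
# form used by this report.
rpm -q --qf '%{NAME}|%{VERSION}|%{RELEASE}|%{ARCH}\n' libgcc
```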
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on nerv03:
PASS => Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on nerv04:
PASS => Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on nerv05:
PASS => Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on nerv02:
PASS => Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on nerv08:
PASS => Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on nerv07:
PASS => Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on nerv06:
PASS => Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64
Top

ASM disk read write error

Recommendation
 Read errors can be the result of a loss of access to the entire disk or of media corruption on an otherwise healthy disk. ASM tries to recover from read errors caused by corrupted sectors on a disk. When a read error by the database or ASM triggers the ASM instance to attempt bad block remapping, ASM reads a good copy of the extent and copies it to the disk that had the read error.

If the write to the same location succeeds, then the underlying allocation unit (sector) is deemed healthy. This might be because the underlying disk did its own bad block reallocation.

If the write fails, ASM attempts to write the extent to a new allocation unit on the same disk. If this write succeeds, the original allocation unit is marked as unusable. If the write fails, the disk is taken offline.

One unique benefit of ASM-based mirroring is that the database instance is aware of the mirroring. For many types of logical corruption, such as a bad checksum or an incorrect System Change Number (SCN), the database instance proceeds through the mirror sides looking for valid content and continues without errors. If the process in the database that encountered the read error is in a position to obtain the appropriate locks to ensure data consistency, it writes the correct data to all mirror sides.

When encountering a write error, a database instance sends the ASM instance a disk offline message.

If the database can successfully complete a write to at least one extent copy and receive acknowledgment of the offline disk from ASM, the write is considered successful.

If the write to all mirror sides fails, the database takes the appropriate actions in response to a write error, such as taking the tablespace offline.

When the ASM instance receives a write error message from a database instance, or when an ASM instance encounters a write error itself, the ASM instance attempts to take the disk offline. ASM consults the Partner Status Table (PST) to see whether any of the disk's partners are offline. If too many partners are already offline, ASM forces the dismounting of the disk group; otherwise, ASM takes the disk offline.

The ASMCMD remap command was introduced to address situations where a range of bad sectors exists on a disk and must be corrected before ASM or database I/O can proceed.
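The error counters consulted by this check, and the remap command mentioned above, can be sketched as follows (disk group, disk name, and block range are placeholders):

```sql
-- Cumulative read/write error counters per disk, from the ASM
-- instance; both columns are 0 when no errors have been seen.
SELECT name, read_errs, write_errs
  FROM v$asm_disk;
```

At the operating system level, `asmcmd remap DATA DATA_0001 '5000-5999'` would then remap the given block range on the named disk.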
 
Needs attention on -
Passed on RAC01

Status on RAC01:
PASS => No read/write errors found for ASM disks


DATA FOR RAC01 FOR ASM DISK READ WRITE ERROR 




                0                  0                                            
Top

Block Corruptions

Success Factor: DATA CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 The V$DATABASE_BLOCK_CORRUPTION view displays blocks marked corrupt by Oracle Database components such as RMAN commands, ANALYZE, dbv, SQL queries, and so on. Any process that encounters a corrupt block records the block corruption in this view. Repair techniques include block media recovery, restoring data files, recovering with incremental backups, and block newing. Block media recovery can repair physical corruptions, but not logical corruptions. It is also recommended to use the RMAN CHECK LOGICAL option to check for data block corruptions periodically. Please consult the Oracle Database Backup and Recovery User's Guide for repair instructions.
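A sketch of the periodic check described above (BACKUP VALIDATE is an RMAN command; the query runs in SQL*Plus):

```sql
-- RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;
-- After validation, inspect any corruption recorded:
SELECT file#, block#, blocks, corruption_type
  FROM v$database_block_corruption;
```

RMAN's RECOVER CORRUPTION LIST command can then repair the blocks recorded in this view, where a suitable backup exists.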
 
Needs attention on -
Passed on RAC01

Status on RAC01:
PASS => No reported block corruptions in V$DATABASE_BLOCK_CORRUPTIONS


DATA FOR RAC01 FOR BLOCK CORRUPTIONS 




0 block_corruptions found in v$database_block_corruptions                       
Top

Check for parameter sysstat|9.0.4|11.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation

sysstat|9.0.4|20.el6|x86_64

Status on nerv03:
PASS => Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation

sysstat|9.0.4|20.el6|x86_64

Status on nerv04:
PASS => Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation

sysstat|9.0.4|20.el6|x86_64

Status on nerv05:
PASS => Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation

sysstat|9.0.4|20.el6|x86_64

Status on nerv02:
PASS => Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation

sysstat|9.0.4|20.el6|x86_64

Status on nerv08:
PASS => Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation

sysstat|9.0.4|20.el6|x86_64

Status on nerv07:
PASS => Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation

sysstat|9.0.4|20.el6|x86_64

Status on nerv06:
PASS => Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation

sysstat|9.0.4|20.el6|x86_64
Top

Check for parameter libgcc|4.4.4|13.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on nerv03:
PASS => Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on nerv04:
PASS => Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on nerv05:
PASS => Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on nerv02:
PASS => Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on nerv08:
PASS => Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on nerv07:
PASS => Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64

Status on nerv06:
PASS => Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64
Top

Check for parameter binutils|2.20.51.0.2|5.11.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation

binutils|2.20.51.0.2|5.36.el6|x86_64

Status on nerv03:
PASS => Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation

binutils|2.20.51.0.2|5.36.el6|x86_64

Status on nerv04:
PASS => Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation

binutils|2.20.51.0.2|5.36.el6|x86_64

Status on nerv05:
PASS => Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation

binutils|2.20.51.0.2|5.36.el6|x86_64

Status on nerv02:
PASS => Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation

binutils|2.20.51.0.2|5.36.el6|x86_64

Status on nerv08:
PASS => Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation

binutils|2.20.51.0.2|5.36.el6|x86_64

Status on nerv07:
PASS => Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation

binutils|2.20.51.0.2|5.36.el6|x86_64

Status on nerv06:
PASS => Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation

binutils|2.20.51.0.2|5.36.el6|x86_64
Top

Check for parameter glibc|2.12|1.7.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-headers|2.12|1.107.el6_4.4|x86_64

Status on nerv03:
PASS => Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc-headers|2.12|1.107.el6_4.4|x86_64

Status on nerv04:
PASS => Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc-headers|2.12|1.107.el6_4.4|x86_64

Status on nerv05:
PASS => Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-headers|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64

Status on nerv02:
PASS => Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc-headers|2.12|1.107.el6_4.4|x86_64

Status on nerv08:
PASS => Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-headers|2.12|1.107.el6_4.4|x86_64

Status on nerv07:
PASS => Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc-headers|2.12|1.107.el6_4.4|x86_64

Status on nerv06:
PASS => Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-headers|2.12|1.107.el6_4.4|x86_64
glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
Top

Check for parameter libstdc++|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++|4.4.7|3.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv03:
PASS => Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++|4.4.7|3.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv04:
PASS => Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++|4.4.7|3.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv05:
PASS => Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv02:
PASS => Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64
libstdc++|4.4.7|3.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv08:
PASS => Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++|4.4.7|3.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv07:
PASS => Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64
libstdc++|4.4.7|3.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv06:
PASS => Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64
libstdc++|4.4.7|3.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64
Top

Check for parameter libstdc++|4.4.4|13.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++|4.4.7|3.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv03:
PASS => Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++|4.4.7|3.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv04:
PASS => Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++|4.4.7|3.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv05:
PASS => Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv02:
PASS => Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64
libstdc++|4.4.7|3.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv08:
PASS => Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++|4.4.7|3.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv07:
PASS => Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64
libstdc++|4.4.7|3.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv06:
PASS => Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64
libstdc++|4.4.7|3.el6|x86_64
libstdc++-devel|4.4.7|3.el6|x86_64
Top

Check for parameter glibc|2.12|1.7.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-headers|2.12|1.107.el6_4.4|x86_64

Status on nerv03:
PASS => Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc-headers|2.12|1.107.el6_4.4|x86_64

Status on nerv04:
PASS => Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc-headers|2.12|1.107.el6_4.4|x86_64

Status on nerv05:
PASS => Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-headers|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64

Status on nerv02:
PASS => Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc-headers|2.12|1.107.el6_4.4|x86_64

Status on nerv08:
PASS => Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-headers|2.12|1.107.el6_4.4|x86_64

Status on nerv07:
PASS => Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc-headers|2.12|1.107.el6_4.4|x86_64

Status on nerv06:
PASS => Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-headers|2.12|1.107.el6_4.4|x86_64
glibc-devel|2.12|1.107.el6_4.4|x86_64
glibc-common|2.12|1.107.el6_4.4|x86_64
glibc|2.12|1.107.el6_4.4|x86_64
Top

Check for parameter gcc|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64
gcc|4.4.7|3.el6|x86_64

Status on nerv03:
PASS => Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc|4.4.7|3.el6|x86_64
gcc-c++|4.4.7|3.el6|x86_64

Status on nerv04:
PASS => Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc|4.4.7|3.el6|x86_64
gcc-c++|4.4.7|3.el6|x86_64

Status on nerv05:
PASS => Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64
gcc|4.4.7|3.el6|x86_64

Status on nerv02:
PASS => Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc|4.4.7|3.el6|x86_64
gcc-c++|4.4.7|3.el6|x86_64

Status on nerv08:
PASS => Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64
gcc|4.4.7|3.el6|x86_64

Status on nerv07:
PASS => Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc|4.4.7|3.el6|x86_64
gcc-c++|4.4.7|3.el6|x86_64

Status on nerv06:
PASS => Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64
gcc|4.4.7|3.el6|x86_64
Top

Check for parameter make|3.81|19.el6|

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package make-3.81-19.el6 meets or exceeds recommendation

make|3.81|20.el6|x86_64

Status on nerv03:
PASS => Package make-3.81-19.el6 meets or exceeds recommendation

make|3.81|20.el6|x86_64

Status on nerv04:
PASS => Package make-3.81-19.el6 meets or exceeds recommendation

make|3.81|20.el6|x86_64

Status on nerv05:
PASS => Package make-3.81-19.el6 meets or exceeds recommendation

make|3.81|20.el6|x86_64

Status on nerv02:
PASS => Package make-3.81-19.el6 meets or exceeds recommendation

make|3.81|20.el6|x86_64

Status on nerv08:
PASS => Package make-3.81-19.el6 meets or exceeds recommendation

make|3.81|20.el6|x86_64

Status on nerv07:
PASS => Package make-3.81-19.el6 meets or exceeds recommendation

make|3.81|20.el6|x86_64

Status on nerv06:
PASS => Package make-3.81-19.el6 meets or exceeds recommendation

make|3.81|20.el6|x86_64
Top

Check for parameter libstdc++-devel|4.4.4|13.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv03:
PASS => Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv04:
PASS => Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv05:
PASS => Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv02:
PASS => Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv08:
PASS => Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv07:
PASS => Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv06:
PASS => Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64
Top

Check for parameter libaio-devel|0.3.107|10.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on nerv03:
PASS => Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on nerv04:
PASS => Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on nerv05:
PASS => Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on nerv02:
PASS => Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on nerv08:
PASS => Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on nerv07:
PASS => Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on nerv06:
PASS => Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
Top

Check for parameter libaio|0.3.107|10.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|x86_64

Status on nerv03:
PASS => Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio|0.3.107|10.el6|x86_64

Status on nerv04:
PASS => Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio|0.3.107|10.el6|x86_64

Status on nerv05:
PASS => Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio|0.3.107|10.el6|x86_64

Status on nerv02:
PASS => Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio|0.3.107|10.el6|x86_64

Status on nerv08:
PASS => Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|x86_64

Status on nerv07:
PASS => Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio|0.3.107|10.el6|x86_64

Status on nerv06:
PASS => Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|x86_64
Top

Check for parameter unixODBC-devel|2.2.14|11.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on -

Status on nerv01:
FAIL => Package unixODBC-devel-2.2.14-11.el6-i686 is recommended but NOT installed


Status on nerv03:
FAIL => Package unixODBC-devel-2.2.14-11.el6-i686 is recommended but NOT installed


Status on nerv04:
FAIL => Package unixODBC-devel-2.2.14-11.el6-i686 is recommended but NOT installed


Status on nerv05:
FAIL => Package unixODBC-devel-2.2.14-11.el6-i686 is recommended but NOT installed


Status on nerv02:
FAIL => Package unixODBC-devel-2.2.14-11.el6-i686 is recommended but NOT installed


Status on nerv08:
FAIL => Package unixODBC-devel-2.2.14-11.el6-i686 is recommended but NOT installed


Status on nerv07:
FAIL => Package unixODBC-devel-2.2.14-11.el6-i686 is recommended but NOT installed


Status on nerv06:
FAIL => Package unixODBC-devel-2.2.14-11.el6-i686 is recommended but NOT installed

Top

Check for parameter compat-libstdc++-33|3.2.3|69.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv03:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv04:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv05:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv02:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv08:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv07:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv06:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64
Top

Check for parameter glibc-devel|2.12|1.7.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on nerv03:
PASS => Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on nerv04:
PASS => Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on nerv05:
PASS => Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on nerv02:
PASS => Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on nerv08:
PASS => Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on nerv07:
PASS => Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on nerv06:
PASS => Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64
Top

Check for parameter glibc-devel|2.12|1.7.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on nerv03:
PASS => Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on nerv04:
PASS => Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on nerv05:
PASS => Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on nerv02:
PASS => Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on nerv08:
PASS => Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on nerv07:
PASS => Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64

Status on nerv06:
PASS => Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6_4.4|x86_64
Top

Check for parameter compat-libcap1|1.10|1|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation

compat-libcap1|1.10|1|x86_64

Status on nerv03:
PASS => Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation

compat-libcap1|1.10|1|x86_64

Status on nerv04:
PASS => Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation

compat-libcap1|1.10|1|x86_64

Status on nerv05:
PASS => Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation

compat-libcap1|1.10|1|x86_64

Status on nerv02:
PASS => Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation

compat-libcap1|1.10|1|x86_64

Status on nerv08:
PASS => Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation

compat-libcap1|1.10|1|x86_64

Status on nerv07:
PASS => Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation

compat-libcap1|1.10|1|x86_64

Status on nerv06:
PASS => Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation

compat-libcap1|1.10|1|x86_64
Top

Check for parameter ksh|20100621|12.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package ksh-20100621-12.el6-x86_64 meets or exceeds recommendation

ksh|20100621|19.el6_4.4|x86_64

Status on nerv03:
PASS => Package ksh-20100621-12.el6-x86_64 meets or exceeds recommendation

ksh|20100621|19.el6_4.4|x86_64

Status on nerv04:
PASS => Package ksh-20100621-12.el6-x86_64 meets or exceeds recommendation

ksh|20100621|19.el6_4.4|x86_64

Status on nerv05:
PASS => Package ksh-20100621-12.el6-x86_64 meets or exceeds recommendation

ksh|20100621|19.el6_4.4|x86_64

Status on nerv02:
PASS => Package ksh-20100621-12.el6-x86_64 meets or exceeds recommendation

ksh|20100621|19.el6_4.4|x86_64

Status on nerv08:
PASS => Package ksh-20100621-12.el6-x86_64 meets or exceeds recommendation

ksh|20100621|19.el6_4.4|x86_64

Status on nerv07:
PASS => Package ksh-20100621-12.el6-x86_64 meets or exceeds recommendation

ksh|20100621|19.el6_4.4|x86_64

Status on nerv06:
PASS => Package ksh-20100621-12.el6-x86_64 meets or exceeds recommendation

ksh|20100621|19.el6_4.4|x86_64
Top

Check for parameter unixODBC|2.2.14|11.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on -

Status on nerv01:
FAIL => Package unixODBC-2.2.14-11.el6-i686 is recommended but NOT installed


Status on nerv03:
FAIL => Package unixODBC-2.2.14-11.el6-i686 is recommended but NOT installed


Status on nerv04:
FAIL => Package unixODBC-2.2.14-11.el6-i686 is recommended but NOT installed


Status on nerv05:
FAIL => Package unixODBC-2.2.14-11.el6-i686 is recommended but NOT installed


Status on nerv02:
FAIL => Package unixODBC-2.2.14-11.el6-i686 is recommended but NOT installed


Status on nerv08:
FAIL => Package unixODBC-2.2.14-11.el6-i686 is recommended but NOT installed


Status on nerv07:
FAIL => Package unixODBC-2.2.14-11.el6-i686 is recommended but NOT installed


Status on nerv06:
FAIL => Package unixODBC-2.2.14-11.el6-i686 is recommended but NOT installed

Top

Check for parameter libaio|0.3.107|10.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|x86_64

Status on nerv03:
PASS => Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio|0.3.107|10.el6|x86_64

Status on nerv04:
PASS => Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio|0.3.107|10.el6|x86_64

Status on nerv05:
PASS => Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio|0.3.107|10.el6|x86_64

Status on nerv02:
PASS => Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio|0.3.107|10.el6|x86_64

Status on nerv08:
PASS => Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|x86_64

Status on nerv07:
PASS => Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio|0.3.107|10.el6|x86_64

Status on nerv06:
PASS => Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|x86_64
Top

Check for parameter libstdc++-devel|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv03:
PASS => Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv04:
PASS => Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv05:
PASS => Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv02:
PASS => Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv08:
PASS => Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv07:
PASS => Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on nerv06:
PASS => Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64
Top

Check for parameter gcc-c++|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64

Status on nerv03:
PASS => Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64

Status on nerv04:
PASS => Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64

Status on nerv05:
PASS => Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64

Status on nerv02:
PASS => Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64

Status on nerv08:
PASS => Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64

Status on nerv07:
PASS => Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64

Status on nerv06:
PASS => Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64
Top

Check for parameter compat-libstdc++-33|3.2.3|69.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv03:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv04:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv05:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv02:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv08:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv07:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on nerv06:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64
Top

Check for parameter libaio-devel|0.3.107|10.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on nerv03:
PASS => Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on nerv04:
PASS => Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on nerv05:
PASS => Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on nerv02:
PASS => Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on nerv08:
PASS => Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on nerv07:
PASS => Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64

Status on nerv06:
PASS => Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
Top

Remote listener set to scan name

Recommendation
 For Oracle Database 11g Release 2, the REMOTE_LISTENER parameter should be set to the SCAN. This allows the instances to register with the SCAN listeners, providing information on which services the instance offers, its current load, and a recommendation on how many incoming connections should be directed to the instance.
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Remote listener is set to SCAN name


DATA FROM NERV01 - RAC01 DATABASE - REMOTE LISTENER SET TO SCAN NAME 



remote listener name = rac02-scan.localdomain 

scan name =  rac02-scan.localdomain

Status on nerv03:
PASS => Remote listener is set to SCAN name


DATA FROM NERV03 - RAC01 DATABASE - REMOTE LISTENER SET TO SCAN NAME 



remote listener name = rac02-scan.localdomain 

scan name =  rac02-scan.localdomain

Status on nerv04:
PASS => Remote listener is set to SCAN name


DATA FROM NERV04 - RAC01 DATABASE - REMOTE LISTENER SET TO SCAN NAME 



remote listener name = rac02-scan.localdomain 

scan name =  rac02-scan.localdomain

Status on nerv05:
PASS => Remote listener is set to SCAN name


DATA FROM NERV05 - RAC01 DATABASE - REMOTE LISTENER SET TO SCAN NAME 



remote listener name = rac02-scan.localdomain 

scan name =  rac02-scan.localdomain

Status on nerv02:
PASS => Remote listener is set to SCAN name


DATA FROM NERV02 - RAC01 DATABASE - REMOTE LISTENER SET TO SCAN NAME 



remote listener name = rac02-scan.localdomain 

scan name =  rac02-scan.localdomain

Status on nerv08:
PASS => Remote listener is set to SCAN name


DATA FROM NERV08 - RAC01 DATABASE - REMOTE LISTENER SET TO SCAN NAME 



remote listener name = rac02-scan.localdomain 

scan name =  rac02-scan.localdomain

Status on nerv07:
PASS => Remote listener is set to SCAN name


DATA FROM NERV07 - RAC01 DATABASE - REMOTE LISTENER SET TO SCAN NAME 



remote listener name = rac02-scan.localdomain 

scan name =  rac02-scan.localdomain

Status on nerv06:
PASS => Remote listener is set to SCAN name


DATA FROM NERV06 - RAC01 DATABASE - REMOTE LISTENER SET TO SCAN NAME 



remote listener name = rac02-scan.localdomain 

scan name =  rac02-scan.localdomain
Top

tnsping to remote listener parameter

Recommendation
 If the remote_listener parameter is set to a TNS alias that cannot be pinged, instances will not cross-register and the load will not be balanced across the cluster. In the event of a node or instance failure, connections may not fail over to a surviving node. See the links for more information about remote_listener, load balancing, and failover.

 
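The tnsping outputs below list every SCAN listener endpoint in the ADDRESS list. As a sketch of how those endpoints could be extracted programmatically, for example to confirm that all three SCAN IPs are being tried (the regex assumes the `(HOST=...)(PORT=...)` layout printed by 11.2 tnsping; the function name is illustrative):

```python
import re

def scan_endpoints(tnsping_line):
    """Return (host, port) tuples from a tnsping 'Attempting to contact' line."""
    return [(h, int(p)) for h, p in
            re.findall(r"\(HOST=([^)]+)\)\(PORT=(\d+)\)", tnsping_line)]

# Sample line taken from the nerv01 output in this report
line = ("Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))"
        "(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.163)(PORT=1521))"
        "(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.161)(PORT=1521))"
        "(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.162)(PORT=1521)))")
print(scan_endpoints(line))
# [('192.168.0.163', 1521), ('192.168.0.161', 1521), ('192.168.0.162', 1521)]
```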
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Value of remote_listener parameter is able to tnsping


DATA FROM NERV01 - RAC01 DATABASE - TNSPING TO REMOTE LISTENER PARAMETER 




TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 25-SEP-2013 06:04:25

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.163)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.161)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.162)(PORT=1521)))
OK (0 msec)

Status on nerv03:
PASS => Value of remote_listener parameter is able to tnsping


DATA FROM NERV03 - RAC01 DATABASE - TNSPING TO REMOTE LISTENER PARAMETER 




TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 25-SEP-2013 06:13:04

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.162)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.163)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.161)(PORT=1521)))
OK (0 msec)

Status on nerv04:
PASS => Value of remote_listener parameter is able to tnsping


DATA FROM NERV04 - RAC01 DATABASE - TNSPING TO REMOTE LISTENER PARAMETER 




TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 25-SEP-2013 06:25:57

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.161)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.162)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.163)(PORT=1521)))
OK (0 msec)

Status on nerv05:
PASS => Value of remote_listener parameter is able to tnsping


DATA FROM NERV05 - RAC01 DATABASE - TNSPING TO REMOTE LISTENER PARAMETER 




TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 25-SEP-2013 06:38:24

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.163)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.161)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.162)(PORT=1521)))
OK (0 msec)

Status on nerv02:
PASS => Value of remote_listener parameter is able to tnsping


DATA FROM NERV02 - RAC01 DATABASE - TNSPING TO REMOTE LISTENER PARAMETER 




TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 25-SEP-2013 06:52:59

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.162)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.163)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.161)(PORT=1521)))
OK (0 msec)

Status on nerv08:
PASS => Value of remote_listener parameter is able to tnsping


DATA FROM NERV08 - RAC01 DATABASE - TNSPING TO REMOTE LISTENER PARAMETER 




TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 25-SEP-2013 07:04:55

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.161)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.162)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.163)(PORT=1521)))
OK (0 msec)

Status on nerv07:
PASS => Value of remote_listener parameter is able to tnsping


DATA FROM NERV07 - RAC01 DATABASE - TNSPING TO REMOTE LISTENER PARAMETER 




TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 25-SEP-2013 07:18:32

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.163)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.161)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.162)(PORT=1521)))
OK (0 msec)

Status on nerv06:
PASS => Value of remote_listener parameter is able to tnsping


DATA FROM NERV06 - RAC01 DATABASE - TNSPING TO REMOTE LISTENER PARAMETER 




TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 25-SEP-2013 07:28:16

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.162)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.163)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.161)(PORT=1521)))
OK (10 msec)
Top

tnsname alias defined as scanname:port

Recommendation
 Benefit / Impact:

No TNS alias in $ORACLE_HOME/network/admin/tnsnames.ora should have the same name as scan_name:port.

Risk:

Such an alias may disturb instance registration with the listener services, and failover and load balancing may not be achieved.

Action / Repair:

Rename any scan_name:port TNS alias to a different name.
 
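A sketch of the check itself: scan tnsnames.ora for any alias whose name matches the SCAN name, with or without the domain part. The alias parsing here is deliberately naive (alias = identifier at column 0 followed by `=`); the real tnsnames.ora grammar is richer, and the helper names are illustrative:

```python
import re

def aliases(tnsnames_text):
    """Very naive alias extraction: identifier at column 0 followed by '='."""
    return re.findall(r"^([A-Za-z][\w.-]*)\s*=", tnsnames_text, re.MULTILINE)

def aliases_matching_scan(tnsnames_text, scan_name):
    """Return aliases whose name equals the SCAN name (with or without domain)."""
    short = scan_name.split(".")[0].upper()
    return [a for a in aliases(tnsnames_text)
            if a.upper() in (scan_name.upper(), short)]

# Fragment modeled on the tnsnames.ora entries shown in this report
sample = """\
RAC01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac02-scan.localdomain)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = RAC01))
  )
"""
print(aliases_matching_scan(sample, "rac02-scan.localdomain"))  # [] => PASS
```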
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => No tnsname alias is defined as scanname:port


DATA FROM NERV01 - RAC01 DATABASE - TNSNAME ALIAS DEFINED AS SCANNAME:PORT 



scan name = rac02-scan.localdomain

 /u01/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora file is 


RAC01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac02-scan.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC01)
    )
  )

SERVICO =
  (DESCRIPTION =
Click for more data

Status on nerv03:
PASS => No tnsname alias is defined as scanname:port


DATA FROM NERV03 - RAC01 DATABASE - TNSNAME ALIAS DEFINED AS SCANNAME:PORT 



scan name = rac02-scan.localdomain

 /u01/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora file is 


RAC01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac02-scan.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC01)
    )
  )

DANIEL =
  (DESCRIPTION =
Click for more data

Status on nerv04:
PASS => No tnsname alias is defined as scanname:port


DATA FROM NERV04 - RAC01 DATABASE - TNSNAME ALIAS DEFINED AS SCANNAME:PORT 



scan name = rac02-scan.localdomain

 /u01/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora file is 


RAC01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac02-scan.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC01)
    )
  )

DANIEL =
  (DESCRIPTION =
Click for more data

Status on nerv05:
PASS => No tnsname alias is defined as scanname:port


DATA FROM NERV05 - RAC01 DATABASE - TNSNAME ALIAS DEFINED AS SCANNAME:PORT 



scan name = rac02-scan.localdomain

 /u01/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora file is 


RAC01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac02-scan.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC01)
    )
  )

DANIEL =
  (DESCRIPTION =
Click for more data

Status on nerv02:
PASS => No tnsname alias is defined as scanname:port


DATA FROM NERV02 - RAC01 DATABASE - TNSNAME ALIAS DEFINED AS SCANNAME:PORT 



scan name = rac02-scan.localdomain

 /u01/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora file is 


RAC01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac02-scan.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC01)
    )
  )

DANIEL =
  (DESCRIPTION =
Click for more data

Status on nerv08:
PASS => No tnsname alias is defined as scanname:port


DATA FROM NERV08 - RAC01 DATABASE - TNSNAME ALIAS DEFINED AS SCANNAME:PORT 



scan name = rac02-scan.localdomain

 /u01/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora file is 


RAC01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac02-scan.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC01)
    )
  )

DANIEL =
  (DESCRIPTION =
Click for more data

Status on nerv07:
PASS => No tnsname alias is defined as scanname:port


DATA FROM NERV07 - RAC01 DATABASE - TNSNAME ALIAS DEFINED AS SCANNAME:PORT 



scan name = rac02-scan.localdomain

 /u01/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora file is 


RAC01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac02-scan.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC01)
    )
  )

DANIEL =
  (DESCRIPTION =
Click for more data

Status on nerv06:
PASS => No tnsname alias is defined as scanname:port


DATA FROM NERV06 - RAC01 DATABASE - TNSNAME ALIAS DEFINED AS SCANNAME:PORT 



scan name = rac02-scan.localdomain

 /u01/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora file is 


RAC01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac02-scan.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC01)
    )
  )

DANIEL =
  (DESCRIPTION =
Click for more data
Top

ezconnect configuration in sqlnet.ora

Recommendation
 EZCONNECT eliminates the need for service name lookups in tnsnames.ora files when connecting to an Oracle database across a TCP/IP network; in fact, no naming or directory system is required with this method. It extends the host naming method by enabling clients to connect to a database with an optional port and service name in addition to the host name of the database.
 
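A sketch of how this check can be reproduced: confirm that EZCONNECT appears in the NAMES.DIRECTORY_PATH list in sqlnet.ora. Parsing is simplified to the single-line form shown in this report; the function name is illustrative:

```python
import re

def ezconnect_enabled(sqlnet_text):
    """True if EZCONNECT is listed in NAMES.DIRECTORY_PATH of sqlnet.ora text."""
    m = re.search(r"NAMES\.DIRECTORY_PATH\s*=\s*\(([^)]*)\)",
                  sqlnet_text, re.IGNORECASE)
    if not m:
        return False
    methods = [x.strip().upper() for x in m.group(1).split(",")]
    return "EZCONNECT" in methods

# Content as reported from each node's sqlnet.ora
sqlnet = "NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)\n\nADR_BASE = /u01/app/oracle\n"
print(ezconnect_enabled(sqlnet))  # True
```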
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => ezconnect is configured in sqlnet.ora


DATA FROM NERV01 - EZCONNECT CONFIGURATION IN SQLNET.ORA 




NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ADR_BASE = /u01/app/oracle


Status on nerv03:
PASS => ezconnect is configured in sqlnet.ora


DATA FROM NERV03 - EZCONNECT CONFIGURATION IN SQLNET.ORA 




NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ADR_BASE = /u01/app/oracle


Status on nerv04:
PASS => ezconnect is configured in sqlnet.ora


DATA FROM NERV04 - EZCONNECT CONFIGURATION IN SQLNET.ORA 




NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ADR_BASE = /u01/app/oracle


Status on nerv05:
PASS => ezconnect is configured in sqlnet.ora


DATA FROM NERV05 - EZCONNECT CONFIGURATION IN SQLNET.ORA 




NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ADR_BASE = /u01/app/oracle


Status on nerv02:
PASS => ezconnect is configured in sqlnet.ora


DATA FROM NERV02 - EZCONNECT CONFIGURATION IN SQLNET.ORA 




NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ADR_BASE = /u01/app/oracle


Status on nerv08:
PASS => ezconnect is configured in sqlnet.ora


DATA FROM NERV08 - EZCONNECT CONFIGURATION IN SQLNET.ORA 




NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ADR_BASE = /u01/app/oracle


Status on nerv07:
PASS => ezconnect is configured in sqlnet.ora


DATA FROM NERV07 - EZCONNECT CONFIGURATION IN SQLNET.ORA 




NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ADR_BASE = /u01/app/oracle


Status on nerv06:
PASS => ezconnect is configured in sqlnet.ora


DATA FROM NERV06 - EZCONNECT CONFIGURATION IN SQLNET.ORA 




NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ADR_BASE = /u01/app/oracle

Top

Check for parameter parallel_execution_message_size

Success Factor: CONFIGURE PARALLEL_EXECUTION_MESSAGE_SIZE FOR BETTER PARALLELISM PERFORMANCE
Recommendation
 Critical

Benefit / Impact: 

Experience and testing have shown that certain database initialization parameters should be set to specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized.
The parameters are common to all database instances. The impact of setting these parameters is minimal.
The performance related settings provide guidance to maintain the highest stability without sacrificing performance. Changing the default performance settings can be done after careful performance evaluation and a clear understanding of the performance impact.

Risk: 

If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.

Action / Repair: 

Setting PARALLEL_EXECUTION_MESSAGE_SIZE = 16384 improves Parallel Query performance.
 
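Because this check reports the parameter per instance, a quick consistency pass over lines of the form `<sid>.parallel_execution_message_size = <value>` can confirm that every instance carries the recommended value. A hedged sketch (helper name illustrative):

```python
import re

def parameter_values(report_lines, parameter):
    """Map instance name -> value for '<sid>.<parameter> = <value>' lines."""
    pat = re.compile(r"^(\w+)\.%s\s*=\s*(\S+)" % re.escape(parameter))
    values = {}
    for line in report_lines:
        m = pat.match(line.strip())
        if m:
            values[m.group(1)] = m.group(2)
    return values

# Per-instance lines as they appear below in this report
lines = [
    "RAC013.parallel_execution_message_size = 16384",
    "RAC011.parallel_execution_message_size = 16384",
    "rac014.parallel_execution_message_size = 16384",
]
vals = parameter_values(lines, "parallel_execution_message_size")
print(all(v == "16384" for v in vals.values()))  # True
```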
Links
Needs attention on -
Passed on RAC013, RAC011, RAC012, RAC015, rac014, RAC018, RAC017, RAC016

Status on RAC013:
PASS => Database Parameter parallel_execution_message_size is set to the recommended value

RAC013.parallel_execution_message_size = 16384                                  

Status on RAC011:
PASS => Database Parameter parallel_execution_message_size is set to the recommended value

RAC011.parallel_execution_message_size = 16384                                  

Status on RAC012:
PASS => Database Parameter parallel_execution_message_size is set to the recommended value

RAC012.parallel_execution_message_size = 16384                                  

Status on RAC015:
PASS => Database Parameter parallel_execution_message_size is set to the recommended value

RAC015.parallel_execution_message_size = 16384                                  

Status on rac014:
PASS => Database Parameter parallel_execution_message_size is set to the recommended value

rac014.parallel_execution_message_size = 16384                                  

Status on RAC018:
PASS => Database Parameter parallel_execution_message_size is set to the recommended value

RAC018.parallel_execution_message_size = 16384                                  

Status on RAC017:
PASS => Database Parameter parallel_execution_message_size is set to the recommended value

RAC017.parallel_execution_message_size = 16384                                  

Status on RAC016:
PASS => Database Parameter parallel_execution_message_size is set to the recommended value

RAC016.parallel_execution_message_size = 16384                                  
Top

Hang and Deadlock material

Recommendation
 Ways to troubleshoot database hangs and deadlocks:

1. V$WAIT_CHAINS - The database (the dia0 background process) samples local hanganalyze every 3 seconds and global hanganalyze every 10 seconds, and stores the results in memory. V$WAIT_CHAINS is an interface to this "hanganalyze cache": at any moment you can query v$wait_chains and see what hanganalyze currently knows about the wait chains. In 11.2, during a live hang, this is the first place to look to identify the blockers and final blockers. For more information see NOTE:1428210.1 - Troubleshooting Database Contention With V$Wait_Chains.

2. Procwatcher - In version 11, this script samples v$wait_chains every 90 seconds and collects useful information about the processes involved in wait chains (short stacks, current wait, current SQL, recent ASH data, locks held, locks waited for, latches held, etc.). It works in both RAC and non-RAC environments and is a proactive way to capture hang data even when you cannot predict when the problem will occur. Some very large customers use, or plan to use, this script proactively on hundreds of systems to catch session contention. For more information see NOTE:459694.1 - Procwatcher: Script to Monitor and Examine Oracle DB and Clusterware Processes and NOTE:1352623.1 - How To Troubleshoot Database Contention With Procwatcher.

3. Hanganalyze levels - Hanganalyze format and output changed completely starting in version 11. In general, take hanganalyze dumps at level 3, and in RAC always take a global hanganalyze.

4. Systemstate levels - With a large SGA and a large number of processes, systemstate dumps at level 266 or 267 can produce a huge amount of data and take hours to complete on large systems; that situation should be avoided. A lightweight alternative is a systemstate dump at level 258: essentially a level 2 systemstate plus short stacks. It is much cheaper than level 266 or 267, yet still contains the most important information support engineers typically examine (process info, latch info, wait events, short stacks, and more) at a fraction of the cost.

Note that bugs 11800959 and 11827088 have a significant impact on systemstate dumps. If you are not on 11.2.0.3+ or a version with both fixes applied, systemstate dumps at levels 10, 11, 266, and 267 can be very expensive in RAC; in versions below 11.2.0.3 without these fixes, systemstate dumps at level 258 are typically advised.

See also NOTE:1353073.1 - Exadata Diagnostic Collection Guide; although written for Exadata, many of its hang detection and analysis concepts apply equally to regular RAC systems.

5. Hang Management and LMHB provide good proactive hang-related data. For Hang Management, see NOTE:1270563.1 - Hang Manager 11.2.0.2.
 
Links
Needs attention on nerv01
Passed on -
Top

Check for parameter recyclebin

Success Factor: LOGICAL CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 Benefit / Impact: 
  
Experience and testing have shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized.
The parameters are common to all database instances. The impact of setting these parameters is minimal. The performance-related settings provide guidance to maintain the highest stability without sacrificing performance. Changing the default performance settings should be done only after careful performance evaluation and with a clear understanding of the performance impact.
  
Risk: 
  
If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization  parameter is not set as recommended, and the actual set value. 
  
Action / Repair: 
  
"RECYCLEBIN = ON" provides higher availability by enabling the Flashback Drop  feature. "ON" is the default value and should not be changed. 
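A quick way to verify and, if needed, restore the default is sketched below (SCOPE=SPFILE is used because RECYCLEBIN cannot be changed for the running instance without DEFERRED; a restart applies it):

```sql
-- Verify the current setting (expected: on, the default):
SHOW PARAMETER recyclebin

-- If it has been turned off, restore the default for new sessions now,
-- or persist it cluster-wide and restart:
ALTER SESSION SET recyclebin = ON;
ALTER SYSTEM  SET recyclebin = ON SCOPE=SPFILE SID='*';
```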

 
Needs attention on -
Passed on RAC013, RAC011, RAC012, RAC015, rac014, RAC018, RAC017, RAC016

Status on RAC013:
PASS => RECYCLEBIN on PRIMARY is set to the recommended value

RAC013.recyclebin = on                                                          

Status on RAC011:
PASS => RECYCLEBIN on PRIMARY is set to the recommended value

RAC011.recyclebin = on                                                          

Status on RAC012:
PASS => RECYCLEBIN on PRIMARY is set to the recommended value

RAC012.recyclebin = on                                                          

Status on RAC015:
PASS => RECYCLEBIN on PRIMARY is set to the recommended value

RAC015.recyclebin = on                                                          

Status on rac014:
PASS => RECYCLEBIN on PRIMARY is set to the recommended value

rac014.recyclebin = on                                                          

Status on RAC018:
PASS => RECYCLEBIN on PRIMARY is set to the recommended value

RAC018.recyclebin = on                                                          

Status on RAC017:
PASS => RECYCLEBIN on PRIMARY is set to the recommended value

RAC017.recyclebin = on                                                          

Status on RAC016:
PASS => RECYCLEBIN on PRIMARY is set to the recommended value

RAC016.recyclebin = on                                                          
Top

Top

Check for parameter cursor_sharing

Recommendation
 We recommend that customers discontinue setting cursor_sharing = SIMILAR due to the many problematic situations customers have experienced using it. The ability to set this will be removed in version 12 of the Oracle Database (the settings of EXACT and FORCE will remain available). Instead, we recommend the use of Adaptive Cursor Sharing in 11g.
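To confirm the setting across all instances, and to move off SIMILAR where it is still in use, a sketch:

```sql
-- Check the value on every instance of the cluster:
SELECT inst_id, value FROM gv$parameter WHERE name = 'cursor_sharing';

-- If any instance reports SIMILAR, return to the default:
ALTER SYSTEM SET cursor_sharing = 'EXACT' SCOPE=BOTH SID='*';
```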
 
Links
Needs attention on -
Passed on RAC013, RAC011, RAC012, RAC015, rac014, RAC018, RAC017, RAC016

Status on RAC013:
PASS => Database parameter CURSOR_SHARING is set to recommended value

RAC013.cursor_sharing = EXACT                                                   

Status on RAC011:
PASS => Database parameter CURSOR_SHARING is set to recommended value

RAC011.cursor_sharing = EXACT                                                   

Status on RAC012:
PASS => Database parameter CURSOR_SHARING is set to recommended value

RAC012.cursor_sharing = EXACT                                                   

Status on RAC015:
PASS => Database parameter CURSOR_SHARING is set to recommended value

RAC015.cursor_sharing = EXACT                                                   

Status on rac014:
PASS => Database parameter CURSOR_SHARING is set to recommended value

rac014.cursor_sharing = EXACT                                                   

Status on RAC018:
PASS => Database parameter CURSOR_SHARING is set to recommended value

RAC018.cursor_sharing = EXACT                                                   

Status on RAC017:
PASS => Database parameter CURSOR_SHARING is set to recommended value

RAC017.cursor_sharing = EXACT                                                   

Status on RAC016:
PASS => Database parameter CURSOR_SHARING is set to recommended value

RAC016.cursor_sharing = EXACT                                                   
Top

Top

Check for parameter fast_start_mttr_target

Success Factor COMPUTER FAILURE PREVENTION BEST PRACTICES
Recommendation
 Benefit / Impact:

Optimizes run-time performance for write/redo-generation-intensive workloads.  Increasing fast_start_mttr_target from the default will reduce checkpoint writes from DBWR processes, making more room for LGWR I/O.

Risk:

Performance implications if set too aggressively (a lower setting is more aggressive); this is a trade-off between performance and availability.  The trade-off and the type of workload need to be evaluated to decide whether the default is required to meet RTO objectives.  fast_start_mttr_target should be set to the desired RTO (Recovery Time Objective) while still maintaining performance SLAs, so it needs to be evaluated on a case-by-case basis.

Action / Repair:

Consider increasing fast_start_mttr_target to 300 (five minutes) from the default. The trade-off is that instance recovery will run longer, so if instance recovery is more important than performance, then keep fast_start_mttr_target at the default.

Keep in mind that an application with inadequately sized redo logs will likely not see an effect from this change due to frequent log switches, so follow best practices for sizing redo logs.

Considerations for direct writes in a data warehouse type of application: even though direct operations do not use the buffer cache, fast_start_mttr_target is very effective at controlling crash recovery time because it ensures adequate checkpointing for the few buffers that are resident (for example, undo segment headers).
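Since fast_start_mttr_target is dynamic, the recommendation can be applied and then checked against the estimated recovery time, along these lines:

```sql
-- Set a 300-second RTO target cluster-wide:
ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE=BOTH SID='*';

-- Compare the effective target with the current estimate (seconds):
SELECT target_mttr, estimated_mttr FROM v$instance_recovery;
```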
 
Needs attention on RAC013, RAC011, RAC012, RAC015, rac014, RAC018, RAC017, RAC016
Passed on -

Status on RAC013:
WARNING => fast_start_mttr_target should be greater than or equal to 300.

RAC013.fast_start_mttr_target = 0                                               

Status on RAC011:
WARNING => fast_start_mttr_target should be greater than or equal to 300.

RAC011.fast_start_mttr_target = 0                                               

Status on RAC012:
WARNING => fast_start_mttr_target should be greater than or equal to 300.

RAC012.fast_start_mttr_target = 0                                               

Status on RAC015:
WARNING => fast_start_mttr_target should be greater than or equal to 300.

RAC015.fast_start_mttr_target = 0                                               

Status on rac014:
WARNING => fast_start_mttr_target should be greater than or equal to 300.

rac014.fast_start_mttr_target = 0                                               

Status on RAC018:
WARNING => fast_start_mttr_target should be greater than or equal to 300.

RAC018.fast_start_mttr_target = 0                                               

Status on RAC017:
WARNING => fast_start_mttr_target should be greater than or equal to 300.

RAC017.fast_start_mttr_target = 0                                               

Status on RAC016:
WARNING => fast_start_mttr_target should be greater than or equal to 300.

RAC016.fast_start_mttr_target = 0                                               
Top

Top

Check for parameter undo_retention

Success Factor LOGICAL CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 Oracle Flashback Technology enables fast logical failure repair. Oracle recommends that you use automatic undo management with sufficient space to attain your desired undo retention guarantee, enable Oracle Flashback Database, and allocate sufficient space and I/O bandwidth in the fast recovery area.  Application monitoring is required for early detection.  Effective and fast repair comes from leveraging and rehearsing the most common application-specific logical failures and using the different flashback features effectively (e.g., flashback query, flashback version query, flashback transaction query, flashback transaction, flashback drop, flashback table, and flashback database).

Key HA Benefits:

With application monitoring and rehearsed repair actions using flashback technologies, application downtime can be reduced from hours or days to the time it takes to detect the logical inconsistency.

Fast repair for logical failures caused by malicious or accidental DML or DDL operations.

Effect fast point-in-time repair at the appropriate level of granularity: transaction, table, or database.
 
Questions:

Can your application or monitoring infrastructure detect logical inconsistencies?

Is your operations team prepared to use various flashback technologies to repair quickly and efficiently?

Are security practices enforced to prevent unauthorized privileges that can result in logical inconsistencies?
 
Needs attention on -
Passed on RAC013, RAC011, RAC012, RAC015, rac014, RAC018, RAC017, RAC016

Status on RAC013:
PASS => Database parameter UNDO_RETENTION on PRIMARY is not null

RAC013.undo_retention = 900                                                     

Status on RAC011:
PASS => Database parameter UNDO_RETENTION on PRIMARY is not null

RAC011.undo_retention = 900                                                     

Status on RAC012:
PASS => Database parameter UNDO_RETENTION on PRIMARY is not null

RAC012.undo_retention = 900                                                     

Status on RAC015:
PASS => Database parameter UNDO_RETENTION on PRIMARY is not null

RAC015.undo_retention = 900                                                     

Status on rac014:
PASS => Database parameter UNDO_RETENTION on PRIMARY is not null

rac014.undo_retention = 900                                                     

Status on RAC018:
PASS => Database parameter UNDO_RETENTION on PRIMARY is not null

RAC018.undo_retention = 900                                                     

Status on RAC017:
PASS => Database parameter UNDO_RETENTION on PRIMARY is not null

RAC017.undo_retention = 900                                                     

Status on RAC016:
PASS => Database parameter UNDO_RETENTION on PRIMARY is not null

RAC016.undo_retention = 900                                                     
Top

Top

Verify all "BIGFILE" tablespaces have non-default "MAXBYTES" values set

Recommendation
 Benefit / Impact:

"MAXBYTES" is the SQL attribute that expresses the "MAXSIZE" value that is used in the DDL command to set "AUTOEXTEND" to "ON". By default, for a bigfile tablespace, the value is "3.5184E+13", or "35184372064256". The benefit of having "MAXBYTES" set at a non-default value for "BIGFILE" tablespaces is that a runaway operation or heavy simultaneous use (e.g., temp tablespace) cannot take up all the space in a diskgroup.

The impact of verifying that "MAXBYTES" is set to a non-default value is minimal. The impact of setting the "MAXSIZE" attribute to a non-default value varies depending upon whether it is done during database creation, during file addition to a tablespace, or on an existing file.

Risk:

The risk of running out of space in a diskgroup varies by application and cannot be quantified here. A diskgroup running out of space may impact the entire database as well as ASM operations (e.g., rebalance operations).

Action / Repair:

To obtain a list of file numbers and bigfile tablespaces that have the "MAXBYTES" attribute at the default value, run the following query in sqlplus while logged into the database as sysdba:

select file_id, a.tablespace_name, autoextensible, maxbytes
  from (select file_id, tablespace_name, autoextensible, maxbytes
          from dba_data_files
         where autoextensible = 'YES'
           and maxbytes = 35184372064256) a,
       (select tablespace_name from dba_tablespaces where bigfile = 'YES') b
 where a.tablespace_name = b.tablespace_name
union
select file_id, a.tablespace_name, autoextensible, maxbytes
  from (select file_id, tablespace_name, autoextensible, maxbytes
          from dba_temp_files
         where autoextensible = 'YES'
           and maxbytes = 35184372064256) a,
       (select tablespace_name from dba_tablespaces where bigfile = 'YES') b
 where a.tablespace_name = b.tablespace_name;

The output should be: no rows returned

If you see output similar to:

   FILE_ID TABLESPACE_NAME                AUT   MAXBYTES
---------- ------------------------------ --- ----------
         1 TEMP                           YES 3.5184E+13
         3 UNDOTBS1                       YES 3.5184E+13
         4 UNDOTBS2                       YES 3.5184E+13

Investigate and correct the condition.
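Because a bigfile tablespace has a single datafile, the correction can be made at the tablespace level. A sketch for the sample output above (the MAXSIZE values are placeholders; choose them against your diskgroup capacity):

```sql
-- Cap autoextension for the bigfile tablespaces flagged above:
ALTER TABLESPACE temp     AUTOEXTEND ON MAXSIZE 100G;
ALTER TABLESPACE undotbs1 AUTOEXTEND ON MAXSIZE 50G;
ALTER TABLESPACE undotbs2 AUTOEXTEND ON MAXSIZE 50G;
```

Re-run the verification query afterwards; it should then return no rows.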
 
Needs attention on -
Passed on RAC01

Status on RAC01:
PASS => All bigfile tablespaces have non-default maxbytes values set


DATA FOR RAC01 FOR VERIFY ALL "BIGFILE" TABLESPACES HAVE NON-DEFAULT "MAXBYTES" VALUES SET 




Query returned no rows which is expected when the SQL check passes.

Top

Top

Clusterware status

Success Factor CLIENT FAILOVER OPERATIONAL BEST PRACTICES
Recommendation
 Oracle Clusterware is required for complete client failover integration.  Please consult the following whitepaper for further information.
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Clusterware is running


DATA FROM NERV01 - CLUSTERWARE STATUS 



--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
               ONLINE  ONLINE       nerv05                                       
               ONLINE  ONLINE       nerv06                                       
               ONLINE  ONLINE       nerv07                                       
               ONLINE  ONLINE       nerv08                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       nerv01                                       
Click for more data

Status on nerv03:
PASS => Clusterware is running


DATA FROM NERV03 - CLUSTERWARE STATUS 



--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
               ONLINE  ONLINE       nerv05                                       
               ONLINE  ONLINE       nerv06                                       
               ONLINE  ONLINE       nerv07                                       
               ONLINE  ONLINE       nerv08                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       nerv01                                       
Click for more data

Status on nerv04:
PASS => Clusterware is running


DATA FROM NERV04 - CLUSTERWARE STATUS 



--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
               ONLINE  ONLINE       nerv05                                       
               ONLINE  ONLINE       nerv06                                       
               ONLINE  ONLINE       nerv07                                       
               ONLINE  ONLINE       nerv08                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       nerv01                                       
Click for more data

Status on nerv05:
PASS => Clusterware is running


DATA FROM NERV05 - CLUSTERWARE STATUS 



--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
               ONLINE  ONLINE       nerv05                                       
               ONLINE  ONLINE       nerv06                                       
               ONLINE  ONLINE       nerv07                                       
               ONLINE  ONLINE       nerv08                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       nerv01                                       
Click for more data

Status on nerv02:
PASS => Clusterware is running


DATA FROM NERV02 - CLUSTERWARE STATUS 



--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
               ONLINE  ONLINE       nerv05                                       
               ONLINE  ONLINE       nerv06                                       
               ONLINE  ONLINE       nerv07                                       
               ONLINE  ONLINE       nerv08                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       nerv01                                       
Click for more data

Status on nerv08:
PASS => Clusterware is running


DATA FROM NERV08 - CLUSTERWARE STATUS 



--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
               ONLINE  ONLINE       nerv05                                       
               ONLINE  ONLINE       nerv06                                       
               ONLINE  ONLINE       nerv07                                       
               ONLINE  ONLINE       nerv08                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       nerv01                                       
Click for more data

Status on nerv07:
PASS => Clusterware is running


DATA FROM NERV07 - CLUSTERWARE STATUS 



--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
               ONLINE  ONLINE       nerv05                                       
               ONLINE  ONLINE       nerv06                                       
               ONLINE  ONLINE       nerv07                                       
               ONLINE  ONLINE       nerv08                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       nerv01                                       
Click for more data

Status on nerv06:
PASS => Clusterware is running


DATA FROM NERV06 - CLUSTERWARE STATUS 



--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       nerv01                                       
               ONLINE  ONLINE       nerv02                                       
               ONLINE  ONLINE       nerv03                                       
               ONLINE  ONLINE       nerv04                                       
               ONLINE  ONLINE       nerv05                                       
               ONLINE  ONLINE       nerv06                                       
               ONLINE  ONLINE       nerv07                                       
               ONLINE  ONLINE       nerv08                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       nerv01                                       
Click for more data
Top

Top

Flashback database on primary

Success Factor LOGICAL CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 Oracle Flashback Technology enables fast logical failure repair. Oracle recommends that you use automatic undo management with sufficient space to attain your desired undo retention guarantee, enable Oracle Flashback Database, and allocate sufficient space and I/O bandwidth in the fast recovery area.  Application monitoring is required for early detection.  Effective and fast repair comes from leveraging and rehearsing the most common application-specific logical failures and using the different flashback features effectively (e.g., flashback query, flashback version query, flashback transaction query, flashback transaction, flashback drop, flashback table, and flashback database).

Key HA Benefits:

With application monitoring and rehearsed repair actions using flashback technologies, application downtime can be reduced from hours or days to the time it takes to detect the logical inconsistency.

Fast repair for logical failures caused by malicious or accidental DML or DDL operations.

Effect fast point-in-time repair at the appropriate level of granularity: transaction, table, or database.
 
Questions:

Can your application or monitoring infrastructure detect logical inconsistencies?

Is your operations team prepared to use various flashback technologies to repair quickly and efficiently?

Are security practices enforced to prevent unauthorized privileges that can result in logical inconsistencies?
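Since this check failed for RAC01, a sketch of enabling Flashback Database follows. The fast recovery area size and diskgroup name are placeholders to adapt; from 11.2.0.4 onward flashback can be enabled while the database is open (earlier releases require MOUNT):

```sql
-- Prerequisites: ARCHIVELOG mode and a configured fast recovery area.
ALTER SYSTEM SET db_recovery_file_dest_size = 500G SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_flashback_retention_target = 1440 SCOPE=BOTH SID='*';  -- minutes

ALTER DATABASE FLASHBACK ON;

-- Should now report YES:
SELECT flashback_on FROM v$database;
```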
 
Links
Needs attention on RAC01
Passed on -

Status on RAC01:
FAIL => Flashback on PRIMARY is not configured


DATA FOR RAC01 FOR FLASHBACK DATABASE ON PRIMARY 




Flashback status = NO                                                           
Top

Top

Database init parameter DB_BLOCK_CHECKING

Recommendation
 Critical

Benefit / Impact:

Initially db_block_checking is set to OFF due to the potential performance impact. Performance testing is particularly important because overhead is incurred on every block change. Block checking typically causes 1% to 10% overhead, but for update- and insert-intensive applications (such as Redo Apply at a standby database) the overhead can be much higher. OLTP compressed tables also require additional checks that can result in higher overhead depending on the frequency of updates to those tables. Workload-specific testing is required to assess whether the performance overhead is acceptable.


Risk:

If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.

Action / Repair:

Based on performance testing results, set DB_BLOCK_CHECKING to either MEDIUM or FULL on the primary or standby database, depending on the measured impact. If performance concerns prevent setting DB_BLOCK_CHECKING to FULL or MEDIUM at the primary database, then it becomes even more important to enable it at the standby database; this protects the standby from logical corruption that would go undetected at the primary.
For higher data corruption detection and prevention, enable this setting, but note that the performance impact varies by workload and must be evaluated.
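The parameter is dynamic, so once testing supports it the change can be applied cluster-wide along these lines (MEDIUM checks all blocks except index blocks; FULL checks those too):

```sql
ALTER SYSTEM SET db_block_checking = 'MEDIUM' SCOPE=BOTH SID='*';

-- Confirm the value on every instance:
SELECT inst_id, value FROM gv$parameter WHERE name = 'db_block_checking';
```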

 
Links
Needs attention on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on -

Status on nerv01:
WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value.


DATA FROM NERV01 - RAC01 DATABASE - DATABASE INIT PARAMETER DB_BLOCK_CHECKING 



DB_BLOCK_CHECKING = FALSE

Status on nerv03:
WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value.


DATA FROM NERV03 - RAC01 DATABASE - DATABASE INIT PARAMETER DB_BLOCK_CHECKING 



DB_BLOCK_CHECKING = FALSE

Status on nerv04:
WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value.


DATA FROM NERV04 - RAC01 DATABASE - DATABASE INIT PARAMETER DB_BLOCK_CHECKING 



DB_BLOCK_CHECKING = FALSE

Status on nerv05:
WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value.


DATA FROM NERV05 - RAC01 DATABASE - DATABASE INIT PARAMETER DB_BLOCK_CHECKING 



DB_BLOCK_CHECKING = FALSE

Status on nerv02:
WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value.


DATA FROM NERV02 - RAC01 DATABASE - DATABASE INIT PARAMETER DB_BLOCK_CHECKING 



DB_BLOCK_CHECKING = FALSE

Status on nerv08:
WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value.


DATA FROM NERV08 - RAC01 DATABASE - DATABASE INIT PARAMETER DB_BLOCK_CHECKING 



DB_BLOCK_CHECKING = FALSE

Status on nerv07:
WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value.


DATA FROM NERV07 - RAC01 DATABASE - DATABASE INIT PARAMETER DB_BLOCK_CHECKING 



DB_BLOCK_CHECKING = FALSE

Status on nerv06:
WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value.


DATA FROM NERV06 - RAC01 DATABASE - DATABASE INIT PARAMETER DB_BLOCK_CHECKING 



DB_BLOCK_CHECKING = FALSE
Top

Top

umask setting for RDBMS owner

Recommendation
 
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => umask for RDBMS owner is set to 0022


DATA FROM NERV01 - UMASK SETTING FOR RDBMS OWNER 



0022

Status on nerv03:
PASS => umask for RDBMS owner is set to 0022


DATA FROM NERV03 - UMASK SETTING FOR RDBMS OWNER 



0022

Status on nerv04:
PASS => umask for RDBMS owner is set to 0022


DATA FROM NERV04 - UMASK SETTING FOR RDBMS OWNER 



0022

Status on nerv05:
PASS => umask for RDBMS owner is set to 0022


DATA FROM NERV05 - UMASK SETTING FOR RDBMS OWNER 



0022

Status on nerv02:
PASS => umask for RDBMS owner is set to 0022


DATA FROM NERV02 - UMASK SETTING FOR RDBMS OWNER 



0022

Status on nerv08:
PASS => umask for RDBMS owner is set to 0022


DATA FROM NERV08 - UMASK SETTING FOR RDBMS OWNER 



0022

Status on nerv07:
PASS => umask for RDBMS owner is set to 0022


DATA FROM NERV07 - UMASK SETTING FOR RDBMS OWNER 



0022

Status on nerv06:
PASS => umask for RDBMS owner is set to 0022


DATA FROM NERV06 - UMASK SETTING FOR RDBMS OWNER 



0022
Top

Top

Manage ASM Audit File Directory Growth with cron

Recommendation
 Benefit / Impact:

The audit file destination directories for an ASM instance can grow to contain a very large number of files if they are not regularly maintained. Use the Linux cron(8) utility and the find(1) command to manage the number of files in the audit file destination directories.

The impact of using cron(8) and find(1) to manage the number of files in the audit file destination directories is minimal.

Risk:

Having a very large number of files can cause the file system to run out of free disk space or inodes, or can cause Oracle to run very slowly due to file system directory scaling limits, which can have the appearance that the ASM instance is hanging on startup.

Action / Repair:

Refer to MOS Note 1298957.1. 
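A minimal sketch of the cron(8)/find(1) approach described above. The audit path matches this cluster's Grid home; the path, the 30-day retention, and the script name below are assumptions to adapt per site (MOS Note 1298957.1 has the authoritative procedure):

```shell
#!/bin/sh
# Purge ASM audit files older than RETENTION_DAYS (assumed values; adjust per site).
AUDIT_DIR=${AUDIT_DIR:-/u01/app/11.2.0/grid/rdbms/audit}
RETENTION_DAYS=${RETENTION_DAYS:-30}

# Delete only regular *.aud files in the audit directory itself.
if [ -d "$AUDIT_DIR" ]; then
  find "$AUDIT_DIR" -maxdepth 1 -name '*.aud' -type f -mtime +"$RETENTION_DAYS" -delete
fi
```

Schedule it on each node, e.g. a crontab entry such as `0 2 * * * /usr/local/bin/purge_asm_audit.sh` (hypothetical path), running as the Grid Infrastructure owner.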
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => ASM Audit file destination file count <= 100,000


DATA FROM NERV01 - MANAGE ASM AUDIT FILE DIRECTORY GROWTH WITH CRON 



Number of audit files at /u01/app/11.2.0/grid/rdbms/audit = 641

Status on nerv03:
PASS => ASM Audit file destination file count <= 100,000


DATA FROM NERV03 - MANAGE ASM AUDIT FILE DIRECTORY GROWTH WITH CRON 



Number of audit files at /u01/app/11.2.0/grid/rdbms/audit = 608

Status on nerv04:
PASS => ASM Audit file destination file count <= 100,000


DATA FROM NERV04 - MANAGE ASM AUDIT FILE DIRECTORY GROWTH WITH CRON 



Number of audit files at /u01/app/11.2.0/grid/rdbms/audit = 702

Status on nerv05:
PASS => ASM Audit file destination file count <= 100,000


DATA FROM NERV05 - MANAGE ASM AUDIT FILE DIRECTORY GROWTH WITH CRON 



Number of audit files at /u01/app/11.2.0/grid/rdbms/audit = 589

Status on nerv02:
PASS => ASM Audit file destination file count <= 100,000


DATA FROM NERV02 - MANAGE ASM AUDIT FILE DIRECTORY GROWTH WITH CRON 



Number of audit files at /u01/app/11.2.0/grid/rdbms/audit = 559

Status on nerv08:
PASS => ASM Audit file destination file count <= 100,000


DATA FROM NERV08 - MANAGE ASM AUDIT FILE DIRECTORY GROWTH WITH CRON 



Number of audit files at /u01/app/11.2.0/grid/rdbms/audit = 589

Status on nerv07:
PASS => ASM Audit file destination file count <= 100,000


DATA FROM NERV07 - MANAGE ASM AUDIT FILE DIRECTORY GROWTH WITH CRON 



Number of audit files at /u01/app/11.2.0/grid/rdbms/audit = 564

Status on nerv06:
PASS => ASM Audit file destination file count <= 100,000


DATA FROM NERV06 - MANAGE ASM AUDIT FILE DIRECTORY GROWTH WITH CRON 



Number of audit files at /u01/app/11.2.0/grid/rdbms/audit = 587
Top

Top

GI shell limits hard stack

Recommendation
 The hard stack shell limit for the Oracle Grid Infrastructure software install owner should be >= 10240.

What's being checked here is the /etc/security/limits.conf file as documented in 11gR2 Grid Infrastructure Installation Guide, section 2.15.3 Setting Resource Limits for the Oracle Software Installation Users.  

If the /etc/security/limits.conf file is not configured as described in the documentation, check the hard stack configuration while logged into the software owner account (e.g., grid):

$ ulimit -Hs
10240

As long as the hard stack limit is 10240 or above, the configuration should be correct.
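If the limits are not yet configured, the relevant /etc/security/limits.conf entries look like the sketch below. The user name "grid" and the hard value are illustrative assumptions; use the values given in the installation guide for your release.

```
# /etc/security/limits.conf -- illustrative entries for the GI owner
# ('grid' is an assumed user name; adjust to your install owner)
grid  soft  stack  10240
grid  hard  stack  32768
```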

 
Links
Needs attention on: -
Passed on: nerv01, nerv03, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Shell limit hard stack for GI is configured according to recommendation


DATA FROM NERV01 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13901
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384

Status on nerv03:
PASS => Shell limit hard stack for GI is configured according to recommendation


DATA FROM NERV03 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13878
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384

Status on nerv08:
PASS => Shell limit hard stack for GI is configured according to recommendation


DATA FROM NERV08 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15539
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384

Status on nerv07:
PASS => Shell limit hard stack for GI is configured according to recommendation


DATA FROM NERV07 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15516
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384

Status on nerv06:
PASS => Shell limit hard stack for GI is configured according to recommendation


DATA FROM NERV06 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31469
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384

Check for parameter asm_power_limit

Recommendation
 ASM_POWER_LIMIT specifies the maximum power on an Automatic Storage Management instance for disk rebalancing. The higher the limit, the faster rebalancing will complete. Lower values take longer but consume fewer processing and I/O resources.

Syntax to specify the power limit while adding or dropping a disk:

alter diskgroup <diskgroup_name> add disk '/dev/raw/raw37' rebalance power 10;
 
Needs attention on: -
Passed on: +ASM4, +ASM1, +ASM2, +ASM3, +ASM5, +ASM6, +ASM7, +ASM8

Status on +ASM4:
PASS => asm_power_limit is set to recommended value of 1

+ASM4.asm_power_limit = 1                                                       

Status on +ASM1:
PASS => asm_power_limit is set to recommended value of 1

+ASM1.asm_power_limit = 1                                                       

Status on +ASM2:
PASS => asm_power_limit is set to recommended value of 1

+ASM2.asm_power_limit = 1                                                       

Status on +ASM3:
PASS => asm_power_limit is set to recommended value of 1

+ASM3.asm_power_limit = 1                                                       

Status on +ASM5:
PASS => asm_power_limit is set to recommended value of 1

+ASM5.asm_power_limit = 1                                                       

Status on +ASM6:
PASS => asm_power_limit is set to recommended value of 1

+ASM6.asm_power_limit = 1                                                       

Status on +ASM7:
PASS => asm_power_limit is set to recommended value of 1

+ASM7.asm_power_limit = 1                                                       

Status on +ASM8:
PASS => asm_power_limit is set to recommended value of 1

+ASM8.asm_power_limit = 1                                                       

Jumbo frames configuration for interconnect

Success Factor: USE JUMBO FRAMES IF SUPPORTED AND POSSIBLE IN THE SYSTEM
Recommendation
 A performance improvement can be seen with an MTU frame size of approximately 9000. Check with your system and network administrators first and, if possible, configure jumbo frames for the interconnect. Depending upon your network gear, the supported frame sizes may vary between NICs and switches; the highest setting supported by BOTH devices should be used. Please see the referenced notes below for platform-specific detail.

To validate whether jumbo frames are configured correctly end to end (i.e., across NICs and switches), run the following commands as root. Invoking ping using a specific interface requires root.

First set CRS_HOME to your GI or clusterware home, for example:

export CRS_HOME=/u01/app/12.1.0/grid

/bin/ping -s 8192 -c 2 -M do -I `$CRS_HOME/bin/oifcfg getif -type cluster_interconnect|tail -1|awk '{print $1}'` hostname 

Substitute your frame size as required for 8192 in the above command. The actual frame size varies from one networking vendor to another.

If you get errors similar to the following, jumbo frames are not configured properly for your frame size.

From 192.168.122.186 icmp_seq=1 Frag needed and DF set (mtu = 1500)
From 192.168.122.186 icmp_seq=1 Frag needed and DF set (mtu = 1500)

--- rws3060018.us.oracle.com ping statistics ---
0 packets transmitted, 0 received, +2 errors


If jumbo frames are configured properly for your frame size, you should see output similar to the following:

8192 bytes from hostname (10.208.111.43): icmp_seq=1 ttl=64 time=0.683 ms
8192 bytes from hostname(10.208.111.43): icmp_seq=2 ttl=64 time=0.243 ms

--- hostname ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.243/0.463/0.683/0.220 ms
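The payload size passed to ping relates to the MTU by simple arithmetic: the ICMP payload must fit inside the MTU after the 20-byte IPv4 header and 8-byte ICMP header are accounted for. A minimal sketch:

```shell
# Largest unfragmented ICMP payload for a given MTU:
#   payload = MTU - 20 (IPv4 header) - 8 (ICMP header)
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"   # 8972 for a 9000-byte MTU
```

So with a standard 1500-byte MTU, an unfragmented ping payload cannot exceed 1472 bytes, which is why the 8192-byte probe above fails with "Frag needed and DF set".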
 
Links
Needs attention on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on: -

Status on nerv01:
INFO => Jumbo frames (MTU >= 8192) are not configured for interconnect


DATA FROM NERV01 - JUMBO FRAMES CONFIGURATION FOR INTERCONNECT 



eth1      Link encap:Ethernet  HWaddr 1C:AF:F7:0D:73:C3  
          inet addr:192.168.3.101  Bcast:192.168.3.255  Mask:255.255.255.0
          inet6 addr: fe80::1eaf:f7ff:fe0d:73c3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3648567 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3749569 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1744944875 (1.6 GiB)  TX bytes:1840350452 (1.7 GiB)
          Interrupt:19 


Status on nerv03:
INFO => Jumbo frames (MTU >= 8192) are not configured for interconnect


DATA FROM NERV03 - JUMBO FRAMES CONFIGURATION FOR INTERCONNECT 



eth1      Link encap:Ethernet  HWaddr 00:26:5A:70:F3:FD  
          inet addr:192.168.3.103  Bcast:192.168.3.255  Mask:255.255.255.0
          inet6 addr: fe80::226:5aff:fe70:f3fd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6630272 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6522023 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:4898750515 (4.5 GiB)  TX bytes:4626257493 (4.3 GiB)
          Interrupt:19 


Status on nerv04:
INFO => Jumbo frames (MTU >= 8192) are not configured for interconnect


DATA FROM NERV04 - JUMBO FRAMES CONFIGURATION FOR INTERCONNECT 



eth1      Link encap:Ethernet  HWaddr 1C:AF:F7:0D:73:B5  
          inet addr:192.168.3.104  Bcast:192.168.3.255  Mask:255.255.255.0
          inet6 addr: fe80::1eaf:f7ff:fe0d:73b5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4086809 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3956132 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2164310743 (2.0 GiB)  TX bytes:1901685332 (1.7 GiB)
          Interrupt:19 


Status on nerv05:
INFO => Jumbo frames (MTU >= 8192) are not configured for interconnect


DATA FROM NERV05 - JUMBO FRAMES CONFIGURATION FOR INTERCONNECT 



eth1      Link encap:Ethernet  HWaddr D8:5D:4C:80:25:E7  
          inet addr:192.168.3.105  Bcast:192.168.3.255  Mask:255.255.255.0
          inet6 addr: fe80::da5d:4cff:fe80:25e7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4501083 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4489303 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2145981850 (1.9 GiB)  TX bytes:2180207418 (2.0 GiB)
          Interrupt:18 Base address:0xec00 


Status on nerv02:
INFO => Jumbo frames (MTU >= 8192) are not configured for interconnect


DATA FROM NERV02 - JUMBO FRAMES CONFIGURATION FOR INTERCONNECT 



eth1      Link encap:Ethernet  HWaddr 1C:AF:F7:0D:73:B7  
          inet addr:192.168.3.102  Bcast:192.168.3.255  Mask:255.255.255.0
          inet6 addr: fe80::1eaf:f7ff:fe0d:73b7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3764900 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3860459 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1809754691 (1.6 GiB)  TX bytes:1910909030 (1.7 GiB)
          Interrupt:19 


Status on nerv08:
INFO => Jumbo frames (MTU >= 8192) are not configured for interconnect


DATA FROM NERV08 - JUMBO FRAMES CONFIGURATION FOR INTERCONNECT 



eth1      Link encap:Ethernet  HWaddr 1C:AF:F7:0D:73:B9  
          inet addr:192.168.3.108  Bcast:192.168.3.255  Mask:255.255.255.0
          inet6 addr: fe80::1eaf:f7ff:fe0d:73b9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17188911 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16667237 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:8296104729 (7.7 GiB)  TX bytes:8632777374 (8.0 GiB)
          Interrupt:21 


Status on nerv07:
INFO => Jumbo frames (MTU >= 8192) are not configured for interconnect


DATA FROM NERV07 - JUMBO FRAMES CONFIGURATION FOR INTERCONNECT 



eth1      Link encap:Ethernet  HWaddr 00:26:5A:70:E8:DB  
          inet addr:192.168.3.107  Bcast:192.168.3.255  Mask:255.255.255.0
          inet6 addr: fe80::226:5aff:fe70:e8db/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3984737 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3974377 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1911217808 (1.7 GiB)  TX bytes:1994382063 (1.8 GiB)
          Interrupt:20 


Status on nerv06:
INFO => Jumbo frames (MTU >= 8192) are not configured for interconnect


DATA FROM NERV06 - JUMBO FRAMES CONFIGURATION FOR INTERCONNECT 



eth1      Link encap:Ethernet  HWaddr D8:5D:4C:80:25:E2  
          inet addr:192.168.3.106  Bcast:192.168.3.255  Mask:255.255.255.0
          inet6 addr: fe80::da5d:4cff:fe80:25e2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4187583 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4328200 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1960274525 (1.8 GiB)  TX bytes:2119381729 (1.9 GiB)
          Interrupt:18 Base address:0xec00 

Top

Top

OSWatcher status

Success Factor: INSTALL AND RUN OSWATCHER PROACTIVELY FOR OS RESOURCE UTILIZATION DIAGNOSABILITY
Recommendation
 Operating System Watcher (OSW) is a collection of UNIX shell scripts intended to collect and archive operating system and network metrics to aid in diagnosing performance issues. OSW is designed to run continuously and writes the metrics to ASCII files that are saved to an archive directory. The amount of archived data saved and the frequency of collection are set by user parameters when starting OSW.
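The status check itself is just a process search of the form "ps -ef | grep -i osw | grep -v grep". The trailing "grep -v grep" filters the search command out of its own results, which can be shown with synthetic ps-style input (the sample lines below are made up for illustration):

```shell
# Without 'grep -v grep', the grep process itself would match the pattern.
printf 'oracle 101 1 OSWatcher.sh\nroot 202 2 grep -i osw\n' \
  | grep -i osw | grep -v grep
# prints: oracle 101 1 OSWatcher.sh
```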
 
Links
Needs attention on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on: -

Status on nerv01:
WARNING => OSWatcher is not running as is recommended.


DATA FROM NERV01 - OSWATCHER STATUS 



"ps -ef | grep -i osw|grep -v grep" returned no rows which means OSWatcher is not running

Status on nerv03:
WARNING => OSWatcher is not running as is recommended.


DATA FROM NERV03 - OSWATCHER STATUS 



"ps -ef | grep -i osw|grep -v grep" returned no rows which means OSWatcher is not running

Status on nerv04:
WARNING => OSWatcher is not running as is recommended.


DATA FROM NERV04 - OSWATCHER STATUS 



"ps -ef | grep -i osw|grep -v grep" returned no rows which means OSWatcher is not running

Status on nerv05:
WARNING => OSWatcher is not running as is recommended.


DATA FROM NERV05 - OSWATCHER STATUS 



"ps -ef | grep -i osw|grep -v grep" returned no rows which means OSWatcher is not running

Status on nerv02:
WARNING => OSWatcher is not running as is recommended.


DATA FROM NERV02 - OSWATCHER STATUS 



"ps -ef | grep -i osw|grep -v grep" returned no rows which means OSWatcher is not running

Status on nerv08:
WARNING => OSWatcher is not running as is recommended.


DATA FROM NERV08 - OSWATCHER STATUS 



"ps -ef | grep -i osw|grep -v grep" returned no rows which means OSWatcher is not running

Status on nerv07:
WARNING => OSWatcher is not running as is recommended.


DATA FROM NERV07 - OSWATCHER STATUS 



"ps -ef | grep -i osw|grep -v grep" returned no rows which means OSWatcher is not running

Status on nerv06:
WARNING => OSWatcher is not running as is recommended.


DATA FROM NERV06 - OSWATCHER STATUS 



"ps -ef | grep -i osw|grep -v grep" returned no rows which means OSWatcher is not running

CSS reboot time

Success Factor: UNDERSTAND CSS TIMEOUT COMPUTATION IN RAC 10G
Recommendation
 Reboottime (default 3 seconds) is the amount of time allowed for a node to complete a reboot after the CSS daemon has been evicted.
 
Links
Needs attention on: -
Passed on: nerv01

Status on nerv01:
PASS => CSS reboottime is set to the default value of 3


DATA FROM NERV01 - CSS REBOOT TIME 



CRS-4678: Successful get reboottime 3 for Cluster Synchronization Services.

CSS disktimeout

Success Factor: UNDERSTAND CSS TIMEOUT COMPUTATION IN RAC 10G
Recommendation
 The maximum amount of time allowed for a voting file I/O to complete; if this time is exceeded the voting disk will be marked as offline.  Note that this is also the amount of time that will be required for initial cluster formation, i.e. when no nodes have previously been up and in a cluster.
 
Links
Needs attention on: -
Passed on: nerv01

Status on nerv01:
PASS => CSS disktimeout is set to the default value of 200


DATA FROM NERV01 - CSS DISKTIMEOUT 



CRS-4678: Successful get disktimeout 200 for Cluster Synchronization Services.

VIP NIC bonding config.

Success Factor: CONFIGURE NIC BONDING FOR 10G VIP (LINUX)
Recommendation
 To avoid a single point of failure for VIPs, Oracle highly recommends configuring a redundant network for VIPs using NIC bonding. See the referenced note below for more information on how to configure bonding in Linux.
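As a rough illustration only, an active-backup bond on Linux is typically defined with interface configuration files like the sketch below. The file paths, device names, address, and bonding options here are assumptions; consult your OS documentation and the referenced note before applying anything.

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative)
DEVICE=bond0
IPADDR=192.168.0.101
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0  (one such file per slave NIC)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```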
 
Links
Needs attention on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on: -

Status on nerv01:
WARNING => NIC bonding is NOT configured for public network (VIP)


DATA FROM NERV01 - VIP NIC BONDING CONFIG. 



eth0      Link encap:Ethernet  HWaddr 10:78:D2:B9:29:96  
          inet addr:192.168.0.101  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::1278:d2ff:feb9:2996/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2923480 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2728024 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2974044103 (2.7 GiB)  TX bytes:1344503271 (1.2 GiB)
          Interrupt:43 Base address:0xe000 


Status on nerv03:
WARNING => NIC bonding is NOT configured for public network (VIP)


DATA FROM NERV03 - VIP NIC BONDING CONFIG. 



eth0      Link encap:Ethernet  HWaddr 10:78:D2:B9:27:E0  
          inet addr:192.168.0.103  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::1278:d2ff:feb9:27e0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2636223 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2441109 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2793910984 (2.6 GiB)  TX bytes:1204335397 (1.1 GiB)
          Interrupt:43 Base address:0xe000 


Status on nerv04:
WARNING => NIC bonding is NOT configured for public network (VIP)


DATA FROM NERV04 - VIP NIC BONDING CONFIG. 



eth0      Link encap:Ethernet  HWaddr 10:78:D2:B9:29:54  
          inet addr:192.168.0.104  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::1278:d2ff:feb9:2954/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3091880 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2512461 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:3618382990 (3.3 GiB)  TX bytes:938316610 (894.8 MiB)
          Interrupt:43 Base address:0xe000 


Status on nerv05:
WARNING => NIC bonding is NOT configured for public network (VIP)


DATA FROM NERV05 - VIP NIC BONDING CONFIG. 



eth0      Link encap:Ethernet  HWaddr 00:25:11:DC:9F:62  
          inet addr:192.168.0.105  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::225:11ff:fedc:9f62/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4597176 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3508082 errors:0 dropped:0 overruns:0 carrier:2
          collisions:0 txqueuelen:1000 
          RX bytes:5616581968 (5.2 GiB)  TX bytes:1272663072 (1.1 GiB)
          Memory:feac0000-feb00000 


Status on nerv02:
WARNING => NIC bonding is NOT configured for public network (VIP)


DATA FROM NERV02 - VIP NIC BONDING CONFIG. 



eth0      Link encap:Ethernet  HWaddr 10:78:D2:B9:29:93  
          inet addr:192.168.0.102  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::1278:d2ff:feb9:2993/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3242600 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2634528 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:3797164388 (3.5 GiB)  TX bytes:995317815 (949.2 MiB)
          Interrupt:43 Base address:0xe000 


Status on nerv08:
WARNING => NIC bonding is NOT configured for public network (VIP)


DATA FROM NERV08 - VIP NIC BONDING CONFIG. 



eth0      Link encap:Ethernet  HWaddr C8:9C:DC:C7:32:54  
          inet addr:192.168.0.108  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::ca9c:dcff:fec7:3254/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:36828996 errors:0 dropped:291 overruns:0 frame:0
          TX packets:14709438 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:50725246233 (47.2 GiB)  TX bytes:4412310055 (4.1 GiB)
          Interrupt:42 Base address:0xe000 


Status on nerv07:
WARNING => NIC bonding is NOT configured for public network (VIP)


DATA FROM NERV07 - VIP NIC BONDING CONFIG. 



eth0      Link encap:Ethernet  HWaddr C8:9C:DC:C7:32:A5  
          inet addr:192.168.0.107  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::ca9c:dcff:fec7:32a5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2539629 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1811382 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2911467152 (2.7 GiB)  TX bytes:990498565 (944.6 MiB)
          Interrupt:41 Base address:0xe000 


Status on nerv06:
WARNING => NIC bonding is NOT configured for public network (VIP)


DATA FROM NERV06 - VIP NIC BONDING CONFIG. 



eth0      Link encap:Ethernet  HWaddr 00:25:11:DC:C0:30  
          inet addr:192.168.0.106  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::225:11ff:fedc:c030/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3376050 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2743944 errors:0 dropped:0 overruns:0 carrier:2
          collisions:0 txqueuelen:1000 
          RX bytes:3961454535 (3.6 GiB)  TX bytes:1019306297 (972.0 MiB)
          Memory:feac0000-feb00000 


Interconnect NIC bonding config.

Success Factor: CONFIGURE NIC BONDING FOR 10G VIP (LINUX)
Recommendation
 To avoid a single point of failure for the interconnect, Oracle highly recommends configuring a redundant network for the interconnect using NIC bonding. See the referenced note below for more information on how to configure bonding in Linux.

NOTE: If you are on 11.2.0.2 or above and HAIP is in use with two or more interfaces, this finding can be ignored.
 
Links
Needs attention on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on: -

Status on nerv01:
WARNING => NIC bonding is not configured for interconnect


DATA FROM NERV01 - INTERCONNECT NIC BONDING CONFIG. 



eth1  192.168.3.0  global  cluster_interconnect

Status on nerv03:
WARNING => NIC bonding is not configured for interconnect


DATA FROM NERV03 - INTERCONNECT NIC BONDING CONFIG. 



eth1  192.168.3.0  global  cluster_interconnect

Status on nerv04:
WARNING => NIC bonding is not configured for interconnect


DATA FROM NERV04 - INTERCONNECT NIC BONDING CONFIG. 



eth1  192.168.3.0  global  cluster_interconnect

Status on nerv05:
WARNING => NIC bonding is not configured for interconnect


DATA FROM NERV05 - INTERCONNECT NIC BONDING CONFIG. 



eth1  192.168.3.0  global  cluster_interconnect

Status on nerv02:
WARNING => NIC bonding is not configured for interconnect


DATA FROM NERV02 - INTERCONNECT NIC BONDING CONFIG. 



eth1  192.168.3.0  global  cluster_interconnect

Status on nerv08:
WARNING => NIC bonding is not configured for interconnect


DATA FROM NERV08 - INTERCONNECT NIC BONDING CONFIG. 



eth1  192.168.3.0  global  cluster_interconnect

Status on nerv07:
WARNING => NIC bonding is not configured for interconnect


DATA FROM NERV07 - INTERCONNECT NIC BONDING CONFIG. 



eth1  192.168.3.0  global  cluster_interconnect

Status on nerv06:
WARNING => NIC bonding is not configured for interconnect


DATA FROM NERV06 - INTERCONNECT NIC BONDING CONFIG. 



eth1  192.168.3.0  global  cluster_interconnect

Verify operating system hugepages count satisfies total SGA requirements

Recommendation
 Benefit / Impact:

Properly configuring operating system hugepages on Linux and setting the database initialization parameter "use_large_pages" to "only" results in more efficient use of memory and reduced paging.
The impact of validating that the total current hugepages are greater than or equal to estimated requirements for all currently active SGAs is minimal. The impact of corrective actions will vary depending on the specific configuration, and may require a reboot of the database server.

Risk:

The risk of not correctly configuring operating system hugepages in advance of setting the database initialization parameter "use_large_pages" to "only" is that if not enough huge pages are configured, some databases will not start after you have set the parameter.

Action / Repair:

Pre-requisite: All database instances that are supposed to run concurrently on a database server must be up and running for this check to be accurate.

NOTE: Please refer to below referenced My Oracle Support notes for additional details on configuring hugepages.

NOTE: If you have not reviewed the below referenced My Oracle Support notes and followed their guidance BEFORE setting the database parameter "use_large_pages=only", this check may pass but you will still not be able to start instances once the configured pool of operating system hugepages has been consumed by instance startups. If that happens, change the "use_large_pages" initialization parameter to one of the other values, restart the instance, and follow the instructions in the below referenced My Oracle Support notes. The brute-force alternative is to increase the huge page count until the newest instance starts, and then adjust the huge page count once you can see the estimated requirements for all currently active SGAs.
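The sizing behind this check is simple arithmetic: the configured hugepage pool must cover the sum of all concurrently active SGAs, rounded up to whole pages. A minimal sketch, assuming the common 2048 kB hugepage size (verify Hugepagesize in /proc/meminfo on your system; the SGA sizes are made-up examples):

```shell
# Estimate vm.nr_hugepages for a set of SGAs.
hugepage_kb=2048                        # from Hugepagesize in /proc/meminfo
total_sga_kb=$((3 * 2 * 1024 * 1024))   # e.g. three 2 GB SGAs = 6291456 kB
hugepages=$(( (total_sga_kb + hugepage_kb - 1) / hugepage_kb ))
echo "$hugepages"   # 3072 pages to cover 6 GB of SGA
```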
 
Links
Needs attention on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06
Passed on: -

Status on nerv01:
FAIL => Operating system hugepages count does not satisfy total SGA requirements


DATA FROM NERV01 - VERIFY OPERATING SYSTEM HUGEPAGES COUNT SATISFIES TOTAL SGA REQUIREMENTS 




Total current hugepages (0) are greater than or equal to
estimated requirements for all currently active SGAs (0).


Status on nerv03:
FAIL => Operating system hugepages count does not satisfy total SGA requirements


DATA FROM NERV03 - VERIFY OPERATING SYSTEM HUGEPAGES COUNT SATISFIES TOTAL SGA REQUIREMENTS 




Total current hugepages (0) are greater than or equal to
estimated requirements for all currently active SGAs (0).


Status on nerv04:
FAIL => Operating system hugepages count does not satisfy total SGA requirements


DATA FROM NERV04 - VERIFY OPERATING SYSTEM HUGEPAGES COUNT SATISFIES TOTAL SGA REQUIREMENTS 




Total current hugepages (0) are greater than or equal to
estimated requirements for all currently active SGAs (0).


Status on nerv05:
FAIL => Operating system hugepages count does not satisfy total SGA requirements


DATA FROM NERV05 - VERIFY OPERATING SYSTEM HUGEPAGES COUNT SATISFIES TOTAL SGA REQUIREMENTS 




Total current hugepages (0) are greater than or equal to
estimated requirements for all currently active SGAs (0).


Status on nerv02:
FAIL => Operating system hugepages count does not satisfy total SGA requirements


DATA FROM NERV02 - VERIFY OPERATING SYSTEM HUGEPAGES COUNT SATISFIES TOTAL SGA REQUIREMENTS 




Total current hugepages (0) are greater than or equal to
estimated requirements for all currently active SGAs (0).


Status on nerv08:
FAIL => Operating system hugepages count does not satisfy total SGA requirements


DATA FROM NERV08 - VERIFY OPERATING SYSTEM HUGEPAGES COUNT SATISFIES TOTAL SGA REQUIREMENTS 




Total current hugepages (0) are greater than or equal to
estimated requirements for all currently active SGAs (0).


Status on nerv07:
FAIL => Operating system hugepages count does not satisfy total SGA requirements


DATA FROM NERV07 - VERIFY OPERATING SYSTEM HUGEPAGES COUNT SATISFIES TOTAL SGA REQUIREMENTS 




Total current hugepages (0) are greater than or equal to
estimated requirements for all currently active SGAs (0).


Status on nerv06:
FAIL => Operating system hugepages count does not satisfy total SGA requirements


DATA FROM NERV06 - VERIFY OPERATING SYSTEM HUGEPAGES COUNT SATISFIES TOTAL SGA REQUIREMENTS 




Total current hugepages (0) are greater than or equal to
estimated requirements for all currently active SGAs (0).


Check for parameter memory_target

Recommendation
 It is recommended to use huge pages for efficient use of memory and reduced paging. Huge pages cannot be used if the database is using automatic memory management. To benefit from huge pages, it is recommended to disable automatic memory management by unsetting the following initialization parameters:
MEMORY_TARGET
MEMORY_MAX_TARGET
 
Needs attention on: RAC013, RAC011, RAC012, RAC015, rac014, RAC018, RAC017, RAC016
Passed on: -

Status on RAC013:
WARNING => Database Parameter memory_target is not set to the recommended value

RAC013.memory_target = 536870912                                                

Status on RAC011:
WARNING => Database Parameter memory_target is not set to the recommended value

RAC011.memory_target = 536870912                                                

Status on RAC012:
WARNING => Database Parameter memory_target is not set to the recommended value

RAC012.memory_target = 536870912                                                

Status on RAC015:
WARNING => Database Parameter memory_target is not set to the recommended value

RAC015.memory_target = 536870912                                                

Status on rac014:
WARNING => Database Parameter memory_target is not set to the recommended value

rac014.memory_target = 536870912                                                

Status on RAC018:
WARNING => Database Parameter memory_target is not set to the recommended value

RAC018.memory_target = 536870912                                                

Status on RAC017:
WARNING => Database Parameter memory_target is not set to the recommended value

RAC017.memory_target = 536870912                                                

Status on RAC016:
WARNING => Database Parameter memory_target is not set to the recommended value

RAC016.memory_target = 536870912                                                

CRS and ASM version comparison

Recommendation
 You should always run an equal or higher version of CRS than ASM. Running a higher version of ASM than CRS is an unsupported configuration and may run into issues.
 
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => CRS version is higher or equal to ASM version.


DATA FROM NERV01 - CRS AND ASM VERSION COMPARISON 



CRS_ACTIVE_VERSION = 112040 
ASM Version = 112040

Status on nerv03:
PASS => CRS version is higher or equal to ASM version.


DATA FROM NERV03 - CRS AND ASM VERSION COMPARISON 



CRS_ACTIVE_VERSION = 112040 
ASM Version = 112040

Status on nerv04:
PASS => CRS version is higher or equal to ASM version.


DATA FROM NERV04 - CRS AND ASM VERSION COMPARISON 



CRS_ACTIVE_VERSION = 112040 
ASM Version = 112040

Status on nerv05:
PASS => CRS version is higher or equal to ASM version.


DATA FROM NERV05 - CRS AND ASM VERSION COMPARISON 



CRS_ACTIVE_VERSION = 112040 
ASM Version = 112040

Status on nerv02:
PASS => CRS version is higher or equal to ASM version.


DATA FROM NERV02 - CRS AND ASM VERSION COMPARISON 



CRS_ACTIVE_VERSION = 112040 
ASM Version = 112040

Status on nerv08:
PASS => CRS version is higher or equal to ASM version.


DATA FROM NERV08 - CRS AND ASM VERSION COMPARISON 



CRS_ACTIVE_VERSION = 112040 
ASM Version = 112040

Status on nerv07:
PASS => CRS version is higher or equal to ASM version.


DATA FROM NERV07 - CRS AND ASM VERSION COMPARISON 



CRS_ACTIVE_VERSION = 112040 
ASM Version = 112040

Status on nerv06:
PASS => CRS version is higher or equal to ASM version.


DATA FROM NERV06 - CRS AND ASM VERSION COMPARISON 



CRS_ACTIVE_VERSION = 112040 
ASM Version = 112040
Top


Local listener set to node VIP

Recommendation
 The LOCAL_LISTENER parameter should be set to the node VIP. If you need fully qualified domain names, ensure that LOCAL_LISTENER is set to the fully qualified domain name (node-vip.mycompany.com). By default, a local listener is created during cluster configuration that runs out of the Grid Infrastructure home and listens on the specified port (default 1521) of the node VIP.
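A minimal sketch of this check: extract the HOST value from a LOCAL_LISTENER address descriptor and compare it with the node VIP address. The helper name is hypothetical; the descriptor and VIP value are taken from the NERV01 data in this section.

```python
import re

# Hypothetical helper: pull the HOST out of a TNS address descriptor.
def listener_host(descriptor: str) -> str:
    match = re.search(r"\(HOST=([^)]+)\)", descriptor)
    return match.group(1) if match else ""

# Values from the NERV01 data in this section.
local_listener = "(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.111)(PORT=1521))"
vip_ip = "192.168.0.111"
print(listener_host(local_listener) == vip_ip)  # True
```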
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Local listener init parameter is set to local node VIP


DATA FROM NERV01 - RAC01 DATABASE - LOCAL LISTENER SET TO NODE VIP 



Local Listener= (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.111)(PORT=1521)) VIP Names=nerv01-vip VIP IPs=192.168.0.111

Status on nerv03:
PASS => Local listener init parameter is set to local node VIP


DATA FROM NERV03 - RAC01 DATABASE - LOCAL LISTENER SET TO NODE VIP 



Local Listener= (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.113)(PORT=1521)) VIP Names=nerv03-vip VIP IPs=192.168.0.113

Status on nerv04:
PASS => Local listener init parameter is set to local node VIP


DATA FROM NERV04 - RAC01 DATABASE - LOCAL LISTENER SET TO NODE VIP 



Local Listener= (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.114)(PORT=1521)) VIP Names=nerv04-vip VIP IPs=192.168.0.114

Status on nerv05:
PASS => Local listener init parameter is set to local node VIP


DATA FROM NERV05 - RAC01 DATABASE - LOCAL LISTENER SET TO NODE VIP 



Local Listener= (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.115)(PORT=1521)) VIP Names=nerv05-vip VIP IPs=192.168.0.115

Status on nerv02:
PASS => Local listener init parameter is set to local node VIP


DATA FROM NERV02 - RAC01 DATABASE - LOCAL LISTENER SET TO NODE VIP 



Local Listener= (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.102)(PORT=1521)) VIP Names=nerv02 VIP IPs=192.168.0.102

Status on nerv08:
PASS => Local listener init parameter is set to local node VIP


DATA FROM NERV08 - RAC01 DATABASE - LOCAL LISTENER SET TO NODE VIP 



Local Listener= (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.118)(PORT=1521)) VIP Names=nerv08-vip VIP IPs=192.168.0.118

Status on nerv07:
PASS => Local listener init parameter is set to local node VIP


DATA FROM NERV07 - RAC01 DATABASE - LOCAL LISTENER SET TO NODE VIP 



Local Listener= (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.117)(PORT=1521)) VIP Names=nerv07-vip VIP IPs=192.168.0.117

Status on nerv06:
PASS => Local listener init parameter is set to local node VIP


DATA FROM NERV06 - RAC01 DATABASE - LOCAL LISTENER SET TO NODE VIP 



Local Listener= (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.116)(PORT=1521)) VIP Names=nerv06-vip VIP IPs=192.168.0.116
Top


Number of SCAN listeners

Recommendation
 Benefit / Impact:

Application scalability and/or availability

Risk:

Potential reduced scalability and/or availability of applications

Action / Repair:

The recommended number of SCAN listeners is 3. See the referenced document for more details.
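As an illustration of this check, the SCAN listener lines reported per node can be counted and compared with the recommended number of 3. The sample lines are from the node data in this section; the parsing logic itself is a sketch, not raccheck's implementation.

```python
# Illustrative check: count SCAN listeners in the reported output and
# compare with the recommended number of 3.
scan_output = """SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521"""

scan_count = sum(1 for line in scan_output.splitlines()
                 if line.startswith("SCAN Listener"))
print(scan_count == 3)  # True: matches the recommendation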
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Number of SCAN listeners is equal to the recommended number of 3.


DATA FROM NERV01 - NUMBER OF SCAN LISTENERS 



SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

Status on nerv03:
PASS => Number of SCAN listeners is equal to the recommended number of 3.


DATA FROM NERV03 - NUMBER OF SCAN LISTENERS 



SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

Status on nerv04:
PASS => Number of SCAN listeners is equal to the recommended number of 3.


DATA FROM NERV04 - NUMBER OF SCAN LISTENERS 



SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

Status on nerv05:
PASS => Number of SCAN listeners is equal to the recommended number of 3.


DATA FROM NERV05 - NUMBER OF SCAN LISTENERS 



SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

Status on nerv02:
PASS => Number of SCAN listeners is equal to the recommended number of 3.


DATA FROM NERV02 - NUMBER OF SCAN LISTENERS 



SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

Status on nerv08:
PASS => Number of SCAN listeners is equal to the recommended number of 3.


DATA FROM NERV08 - NUMBER OF SCAN LISTENERS 



SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

Status on nerv07:
PASS => Number of SCAN listeners is equal to the recommended number of 3.


DATA FROM NERV07 - NUMBER OF SCAN LISTENERS 



SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

Status on nerv06:
PASS => Number of SCAN listeners is equal to the recommended number of 3.


DATA FROM NERV06 - NUMBER OF SCAN LISTENERS 



SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
Top


Voting disk status

Success FactorUSE EXTERNAL OR ORACLE PROVIDED REDUNDANCY FOR OCR
Recommendation
 Benefit / Impact:

Stability, Availability

Risk:

Cluster instability

Action / Repair:

Voting disks that are not online would indicate a problem with the clusterware
and should be investigated as soon as possible.  All voting disks are expected to be ONLINE.

Use the following command to list the status of the voting disks

$CRS_HOME/bin/crsctl query css votedisk|sed 's/^ //g'|grep ^[0-9]

The output should look similar to the following, one row per voting disk; all disks should indicate ONLINE

1. ONLINE   192c8f030e5a4fb3bf77e43ad3b8479a (o/192.168.10.102/DBFS_DG_CD_02_sclcgcel01) [DBFS_DG]
2. ONLINE   2612d8a72d194fa4bf3ddff928351c41 (o/192.168.10.104/DBFS_DG_CD_02_sclcgcel03) [DBFS_DG]
3. ONLINE   1d3cceb9daeb4f0bbf23ee0218209f4c (o/192.168.10.103/DBFS_DG_CD_02_sclcgcel02) [DBFS_DG]
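The expected output above can be verified mechanically: every voting disk row should report ONLINE in its second column. This is an illustrative parse of the sample crsctl output shown above, not part of raccheck itself.

```python
# Illustrative parse of the sample crsctl output above: every voting disk
# row should report ONLINE in its second whitespace-separated column.
votedisk_rows = """1. ONLINE   192c8f030e5a4fb3bf77e43ad3b8479a (o/192.168.10.102/DBFS_DG_CD_02_sclcgcel01) [DBFS_DG]
2. ONLINE   2612d8a72d194fa4bf3ddff928351c41 (o/192.168.10.104/DBFS_DG_CD_02_sclcgcel03) [DBFS_DG]
3. ONLINE   1d3cceb9daeb4f0bbf23ee0218209f4c (o/192.168.10.103/DBFS_DG_CD_02_sclcgcel02) [DBFS_DG]"""

all_online = all(row.split()[1] == "ONLINE" for row in votedisk_rows.splitlines())
print(all_online)  # True
```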
 
Needs attention on -
Passed on nerv01

Status on nerv01:
PASS => All voting disks are online


DATA FROM NERV01 - VOTING DISK STATUS 



##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   5479e30c0d714f5abfd4310540a24a25 (/u01/shared_config/rac02/bkp_volting) []
Located 1 voting disk(s).
Top


css misscount

Success FactorUNDERSTAND CSS TIMEOUT COMPUTATION IN RAC 10G
Recommendation
 The CSS misscount parameter represents the maximum time, in seconds, that a network heartbeat can be missed before entering a cluster reconfiguration to evict the node.
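The value can be read back from the CRS-4678 message that `crsctl get css misscount` prints (shown for nerv01 in this section). The parsing below is an illustrative sketch of extracting that number, not raccheck's own code.

```python
import re

# Illustrative parse of the CRS-4678 message reported in this section.
crs_message = "CRS-4678: Successful get misscount 30 for Cluster Synchronization Services."
match = re.search(r"get misscount (\d+)", crs_message)
misscount = int(match.group(1)) if match else None
print(misscount == 30)  # True: 30 seconds is the default value checked here
```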
 
Links
Needs attention on -
Passed on nerv01

Status on nerv01:
PASS => CSS misscount is set to the default value of 30


DATA FROM NERV01 - CSS MISSCOUNT 



CRS-4678: Successful get misscount 30 for Cluster Synchronization Services.
Top


Same size of redo log files

Recommendation
 Having asymmetrically sized redo logs can lead to a database hang, and it is best practice to keep all redo log files the same size. Run the following query to find the size of each member. 
column member format a50
select f.member,l.bytes/1024/1024 as "Size in MB" from v$log l,v$logfile f where l.group#=f.group#;
Resizing redo logs to make them the same size does not require database downtime. 
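The pass/fail condition is simply that every reported size is identical. A minimal sketch, using a subset of the sizes (in MB) reported for this database below:

```python
# Illustrative check: a healthy configuration has every redo log file the
# same size. Sizes in MB, a subset of the values reported in this section.
sizes_mb = [0.048828125, 0.048828125, 0.09765625, 0.048828125]
all_same_size = len(set(sizes_mb)) == 1
print(all_same_size)  # False: mixed sizes, so resizing is recommended
```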
 
Links
Needs attention on RAC01
Passed on -

Status on RAC01:
INFO => All redo log files are not the same size.


DATA FOR RAC01 FOR SAME SIZE OF REDO LOG FILES 




         1           .048828125                                                 
         2           .048828125                                                 
         3           .048828125                                                 
         4           .048828125                                                 
         5            .09765625                                                 
         6           .048828125                                                 
         7           .048828125                                                 
         8            .09765625                                                 
         9           .048828125                                                 
        10           .048828125                                                 
        11            .09765625                                                 
        12           .048828125                                                 
         13           .048828125                                                 
         14            .09765625                                                 
Click for more data
Top


SELinux status

Success FactorRPM THROWS ERROR WITH SELINUX ENABLED
Recommendation
 On RHEL4 U3 x86_64 (2.6.9-34.ELsmp kernel), when SELinux is enabled, rpm
installation gives the error:
'scriptlet failed, exit status 255'

The default SELinux settings are used:
# cat /etc/sysconfig/selinux
SELINUX=enforcing
SELINUXTYPE=targeted
e.g. on installing the ASM rpms:
# rpm -ivh *.rpm
Preparing...                ###########################################
[100%]
  1:oracleasm-support      ########################################### [33%]
  error: %post(oracleasm-support-2.0.2-1.x86_64) scriptlet failed, exit status 255
  2:oracleasm-2.6.9-34.ELsm########################################### [67%]
  error: %post(oracleasm-2.6.9-34.ELsmp-2.0.2-1.x86_64) scriptlet failed, exit status 255
   3:oracleasmlib           ###########################################  [100%]

However, the ASM rpms get installed:
# rpm -qa | grep asm
oracleasm-support-2.0.2-1
oracleasmlib-2.0.2-1
oracleasm-2.6.9-34.ELsmp-2.0.2-1

There is no error during oracleasm configure or createdisk. Also, oracleasm is able to start on reboot, and the tests done around RAC/ASM seem to be fine.

# rpm -q -a | grep -i selinux
selinux-policy-targeted-1.17.30-2.126
selinux-policy-targeted-sources-1.17.30-2.126
libselinux-1.19.1-7
libselinux-1.19.1-7

Solution
--
If the machine is installed with 'selinux --disabled', it is possible that the SELinux-related pre/post activities were not performed during installation, and as a result the extended attribute does not get set for /bin/*sh 

1. Ensure that the kickstart config file does not have 'selinux --disabled'.
Not specifying selinux in the config file will default to 'selinux --enforcing', and the extended attribute will get set for /bin/*sh 
OR
2. If the machine has been installed with 'selinux --disabled', then perform the following step manually:
# setfattr -n security.selinux --value="system_u:object_r:shell_exec_t\000" /bin/*sh 
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => SELinux is not being Enforced.


DATA FROM NERV01 - SELINUX STATUS 



Disabled

Status on nerv03:
PASS => SELinux is not being Enforced.


DATA FROM NERV03 - SELINUX STATUS 



Disabled

Status on nerv04:
PASS => SELinux is not being Enforced.


DATA FROM NERV04 - SELINUX STATUS 



Disabled

Status on nerv05:
PASS => SELinux is not being Enforced.


DATA FROM NERV05 - SELINUX STATUS 



Disabled

Status on nerv02:
PASS => SELinux is not being Enforced.


DATA FROM NERV02 - SELINUX STATUS 



Disabled

Status on nerv08:
PASS => SELinux is not being Enforced.


DATA FROM NERV08 - SELINUX STATUS 



Disabled

Status on nerv07:
PASS => SELinux is not being Enforced.


DATA FROM NERV07 - SELINUX STATUS 



Disabled

Status on nerv06:
PASS => SELinux is not being Enforced.


DATA FROM NERV06 - SELINUX STATUS 



Disabled
Top


Public interface existence

Recommendation
 It is important to ensure that your public interface is properly marked as public and not private. This can be checked with the 'oifcfg getif' command. If it is inadvertently marked private, you can get errors such as "OS system dependent operation:bind failed with status" and "OS failure message: Cannot assign requested address". It can be corrected with a command like: oifcfg setif -global eth0/<public IP address>:public
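A sketch of interpreting 'oifcfg getif' output: each line carries the interface name, subnet, scope, and role, and the public interface must carry the 'public' role. The sample output is from the node data in this section; the parsing is illustrative only.

```python
# Illustrative parse of 'oifcfg getif' output (sample from this section):
# columns are interface, subnet, scope, role.
oifcfg_output = """eth0  192.168.0.0  global  public
eth1  192.168.3.0  global  cluster_interconnect"""

roles = {fields[0]: fields[3]
         for fields in (line.split() for line in oifcfg_output.splitlines())}
print(roles["eth0"] == "public")  # True: public interface correctly marked
```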
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Public interface is configured and exists in OCR


DATA FROM NERV01 - PUBLIC INTERFACE EXISTENCE 



eth0  192.168.0.0  global  public
eth1  192.168.3.0  global  cluster_interconnect

Status on nerv03:
PASS => Public interface is configured and exists in OCR


DATA FROM NERV03 - PUBLIC INTERFACE EXISTENCE 



eth0  192.168.0.0  global  public
eth1  192.168.3.0  global  cluster_interconnect

Status on nerv04:
PASS => Public interface is configured and exists in OCR


DATA FROM NERV04 - PUBLIC INTERFACE EXISTENCE 



eth0  192.168.0.0  global  public
eth1  192.168.3.0  global  cluster_interconnect

Status on nerv05:
PASS => Public interface is configured and exists in OCR


DATA FROM NERV05 - PUBLIC INTERFACE EXISTENCE 



eth0  192.168.0.0  global  public
eth1  192.168.3.0  global  cluster_interconnect

Status on nerv02:
PASS => Public interface is configured and exists in OCR


DATA FROM NERV02 - PUBLIC INTERFACE EXISTENCE 



eth0  192.168.0.0  global  public
eth1  192.168.3.0  global  cluster_interconnect

Status on nerv08:
PASS => Public interface is configured and exists in OCR


DATA FROM NERV08 - PUBLIC INTERFACE EXISTENCE 



eth0  192.168.0.0  global  public
eth1  192.168.3.0  global  cluster_interconnect

Status on nerv07:
PASS => Public interface is configured and exists in OCR


DATA FROM NERV07 - PUBLIC INTERFACE EXISTENCE 



eth0  192.168.0.0  global  public
eth1  192.168.3.0  global  cluster_interconnect

Status on nerv06:
PASS => Public interface is configured and exists in OCR


DATA FROM NERV06 - PUBLIC INTERFACE EXISTENCE 



eth0  192.168.0.0  global  public
eth1  192.168.3.0  global  cluster_interconnect
Top


ip_local_port_range

Recommendation
 Starting with Oracle Clusterware 11gR1, ip_local_port_range should be between 9000 (minimum) and 65500 (maximum).
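The kernel exposes this setting in /proc/sys/net/ipv4/ip_local_port_range as two values separated by whitespace. A minimal sketch of validating that contents against the recommendation:

```python
# Illustrative check of /proc/sys/net/ipv4/ip_local_port_range contents
# (the kernel prints the minimum and maximum separated by a tab).
proc_value = "9000\t65500"
low, high = (int(v) for v in proc_value.split())
print((low, high) == (9000, 65500))  # True: matches the recommendation
```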
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => ip_local_port_range is configured according to recommendation


DATA FROM NERV01 - IP_LOCAL_PORT_RANGE 



minimum port range = 9000
maximum port range = 65500

Status on nerv03:
PASS => ip_local_port_range is configured according to recommendation


DATA FROM NERV03 - IP_LOCAL_PORT_RANGE 



minimum port range = 9000
maximum port range = 65500

Status on nerv04:
PASS => ip_local_port_range is configured according to recommendation


DATA FROM NERV04 - IP_LOCAL_PORT_RANGE 



minimum port range = 9000
maximum port range = 65500

Status on nerv05:
PASS => ip_local_port_range is configured according to recommendation


DATA FROM NERV05 - IP_LOCAL_PORT_RANGE 



minimum port range = 9000
maximum port range = 65500

Status on nerv02:
PASS => ip_local_port_range is configured according to recommendation


DATA FROM NERV02 - IP_LOCAL_PORT_RANGE 



minimum port range = 9000
maximum port range = 65500

Status on nerv08:
PASS => ip_local_port_range is configured according to recommendation


DATA FROM NERV08 - IP_LOCAL_PORT_RANGE 



minimum port range = 9000
maximum port range = 65500

Status on nerv07:
PASS => ip_local_port_range is configured according to recommendation


DATA FROM NERV07 - IP_LOCAL_PORT_RANGE 



minimum port range = 9000
maximum port range = 65500

Status on nerv06:
PASS => ip_local_port_range is configured according to recommendation


DATA FROM NERV06 - IP_LOCAL_PORT_RANGE 



minimum port range = 9000
maximum port range = 65500
Top


kernel.shmmax

Recommendation
 Benefit / Impact:

Optimal system memory management.

Risk:

In an Oracle RDBMS application, setting kernel.shmmax too high is not needed and could enable configurations that may leave inadequate system memory for other necessary functions.

Action / Repair:

Oracle Support officially recommends a "minimum" for SHMMAX of 1/2 of physical RAM. However, many Oracle customers choose a higher fraction, at their discretion.  Setting the kernel.shmmax as recommended only causes a few more shared memory segments to be used for whatever total SGA that you subsequently configure in Oracle.
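The arithmetic behind this check is simply half of physical RAM as the minimum. A sketch using the byte values reported for NERV01 in this section:

```python
# Illustrative arithmetic for the minimum recommended SHMMAX: half of
# physical RAM. Values in bytes, from the NERV01 data in this section.
total_system_memory = 1837289472
minimum_shmmax = total_system_memory // 2
print(minimum_shmmax)  # 918644736

configured_shmmax = 4398046511104
print(configured_shmmax >= minimum_shmmax)  # True: the configured value passes
```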
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => kernel.shmmax parameter is configured according to recommendation


DATA FROM NERV01 - KERNEL.SHMMAX 




NOTE: All results reported in bytes

kernel.shmmax actual = 4398046511104
total system memory = 1837289472
1/2 total system memory = 918644736

Status on nerv03:
PASS => kernel.shmmax parameter is configured according to recommendation


DATA FROM NERV03 - KERNEL.SHMMAX 




NOTE: All results reported in bytes

kernel.shmmax actual = 4398046511104
total system memory = 1837289472
1/2 total system memory = 918644736

Status on nerv04:
PASS => kernel.shmmax parameter is configured according to recommendation


DATA FROM NERV04 - KERNEL.SHMMAX 




NOTE: All results reported in bytes

kernel.shmmax actual = 4398046511104
total system memory = 1837289472
1/2 total system memory = 918644736

Status on nerv05:
PASS => kernel.shmmax parameter is configured according to recommendation


DATA FROM NERV05 - KERNEL.SHMMAX 




NOTE: All results reported in bytes

kernel.shmmax actual = 4398046511104
total system memory = 4142653440
1/2 total system memory = 2071326720

Status on nerv02:
PASS => kernel.shmmax parameter is configured according to recommendation


DATA FROM NERV02 - KERNEL.SHMMAX 




NOTE: All results reported in bytes

kernel.shmmax actual = 4398046511104
total system memory = 1837289472
1/2 total system memory = 918644736

Status on nerv08:
PASS => kernel.shmmax parameter is configured according to recommendation


DATA FROM NERV08 - KERNEL.SHMMAX 




NOTE: All results reported in bytes

kernel.shmmax actual = 4398046511104
total system memory = 2051964928
1/2 total system memory = 1025982464

Status on nerv07:
PASS => kernel.shmmax parameter is configured according to recommendation


DATA FROM NERV07 - KERNEL.SHMMAX 




NOTE: All results reported in bytes

kernel.shmmax actual = 4398046511104
total system memory = 2051969024
1/2 total system memory = 1025984512

Status on nerv06:
PASS => kernel.shmmax parameter is configured according to recommendation


DATA FROM NERV06 - KERNEL.SHMMAX 




NOTE: All results reported in bytes

kernel.shmmax actual = 4398046511104
total system memory = 4142653440
1/2 total system memory = 2071326720
Top


Check for parameter fs.file-max

Recommendation
 - In 11g we introduced automatic memory management which requires more file descriptors than previous versions.

- At a _MINIMUM_ we require 512*PROCESSES (init parameter) file descriptors per database instance + some for the OS and other non-oracle processes

- Since we cannot know at install time how many database instances the customer may run, how many PROCESSES they may configure for those instances, whether they will use automatic memory management, how many non-Oracle processes may be run, or how many file descriptors those will require, we recommend setting the file descriptor limit to a very high number (6553600) to minimize the potential for running out.

- Setting fs.file-max "too high" doesn't hurt anything because file descriptors are allocated dynamically as needed up to the limit of fs.file-max

- Oracle is not aware of any customers having problems from setting fs.file-max "too high" but we have had customers have problems from setting it too low.  A problem from having too few file descriptors is preventable.

- As for a formula, given 512*PROCESSES (as a minimum) fs.file-max should be a sufficiently high number to minimize the chance that ANY customer would suffer an outage from having fs.file-max set too low.  At a limit of 6553600 customers are likely to have other problems to worry about before they hit that limit. 

- If an individual customer wants to deviate from fs.file-max = 6553600 then they are free to do so based on their knowledge of their environment and implementation as long as they make sure they have enough file descriptors to cover all their database instances, other non-oracle processes and the OS.
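The sizing logic above can be sketched numerically. The PROCESSES value and the non-Oracle allowance below are hypothetical, not from this report; only the 512-per-process factor and the 6553600 recommendation come from the text.

```python
# Illustrative sketch of the fs.file-max sizing logic above.
processes = 1000               # hypothetical PROCESSES init parameter
non_oracle_allowance = 8192    # hypothetical room for the OS and other processes
minimum_fd = 512 * processes + non_oracle_allowance
recommended = 6553600
print(minimum_fd)                  # 520192
print(recommended >= minimum_fd)   # True: the recommended value has ample headroom
```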
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Kernel Parameter fs.file-max configuration meets or exceeds recommendation

fs.file-max = 6815744

Status on nerv03:
PASS => Kernel Parameter fs.file-max configuration meets or exceeds recommendation

fs.file-max = 6815744

Status on nerv04:
PASS => Kernel Parameter fs.file-max configuration meets or exceeds recommendation

fs.file-max = 6815744

Status on nerv05:
PASS => Kernel Parameter fs.file-max configuration meets or exceeds recommendation

fs.file-max = 6815744

Status on nerv02:
PASS => Kernel Parameter fs.file-max configuration meets or exceeds recommendation

fs.file-max = 6815744

Status on nerv08:
PASS => Kernel Parameter fs.file-max configuration meets or exceeds recommendation

fs.file-max = 6815744

Status on nerv07:
PASS => Kernel Parameter fs.file-max configuration meets or exceeds recommendation

fs.file-max = 6815744

Status on nerv06:
PASS => Kernel Parameter fs.file-max configuration meets or exceeds recommendation

fs.file-max = 6815744
Top


DB shell limits hard stack

Recommendation
 The hard stack shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 10240.

What's being checked here is the /etc/security/limits.conf file as documented in the 11gR2 Grid Infrastructure Installation Guide, section 2.15.3, Setting Resource Limits for the Oracle Software Installation Users.  

If the /etc/security/limits.conf file is not configured as described in the documentation, then check the hard stack configuration while logged into the software owner account (e.g. oracle):

$ ulimit -Hs
10240

As long as the hard stack limit is 10240 or above then the configuration should be ok.
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Shell limit hard stack for DB is configured according to recommendation


DATA FROM NERV01 - DB SHELL LIMITS HARD STACK 



oracle hard stack 32768

Status on nerv03:
PASS => Shell limit hard stack for DB is configured according to recommendation


DATA FROM NERV03 - DB SHELL LIMITS HARD STACK 



oracle hard stack 32768

Status on nerv04:
PASS => Shell limit hard stack for DB is configured according to recommendation


DATA FROM NERV04 - DB SHELL LIMITS HARD STACK 



oracle hard stack 32768

Status on nerv05:
PASS => Shell limit hard stack for DB is configured according to recommendation


DATA FROM NERV05 - DB SHELL LIMITS HARD STACK 



oracle hard stack 32768

Status on nerv02:
PASS => Shell limit hard stack for DB is configured according to recommendation


DATA FROM NERV02 - DB SHELL LIMITS HARD STACK 



oracle hard stack 32768

Status on nerv08:
PASS => Shell limit hard stack for DB is configured according to recommendation


DATA FROM NERV08 - DB SHELL LIMITS HARD STACK 



oracle hard stack 32768

Status on nerv07:
PASS => Shell limit hard stack for DB is configured according to recommendation


DATA FROM NERV07 - DB SHELL LIMITS HARD STACK 



oracle hard stack 32768

Status on nerv06:
PASS => Shell limit hard stack for DB is configured according to recommendation


DATA FROM NERV06 - DB SHELL LIMITS HARD STACK 



oracle hard stack 32768
Top


/tmp directory free space

Recommendation
 There should be a minimum of 1GB of free space in the /tmp directory
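A sketch of checking this against a df line: take the "Avail" column and compare it with the 1 GB minimum. The sample line comes from the node data in this section; this simple version only handles the 'G' suffix, so it is illustrative rather than general.

```python
# Illustrative parse of a df line (sample from this section) to verify the
# 1 GB minimum free space in /tmp; handles only the 'G' suffix.
df_line = "/dev/sda7             4.9G  142M  4.5G   4% /tmp"
avail_field = df_line.split()[3]            # the "Avail" column, e.g. "4.5G"
avail_gb = float(avail_field.rstrip("G"))
print(avail_gb >= 1.0)  # True: meets the 1 GB recommendation
```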
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Free space in /tmp directory meets or exceeds recommendation of minimum 1GB


DATA FROM NERV01 - /TMP DIRECTORY FREE SPACE 



Filesystem            Size  Used Avail Use% Mounted on
/dev/sda7             4.9G  142M  4.5G   4% /tmp

Status on nerv03:
PASS => Free space in /tmp directory meets or exceeds recommendation of minimum 1GB


DATA FROM NERV03 - /TMP DIRECTORY FREE SPACE 



Filesystem            Size  Used Avail Use% Mounted on
/dev/sda7             4.9G  398M  4.2G   9% /tmp

Status on nerv04:
PASS => Free space in /tmp directory meets or exceeds recommendation of minimum 1GB


DATA FROM NERV04 - /TMP DIRECTORY FREE SPACE 



Filesystem            Size  Used Avail Use% Mounted on
/dev/sda7             4.9G  619M  4.0G  14% /tmp

Status on nerv05:
PASS => Free space in /tmp directory meets or exceeds recommendation of minimum 1GB


DATA FROM NERV05 - /TMP DIRECTORY FREE SPACE 



Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             286G   14G  258G   5% /

Status on nerv02:
PASS => Free space in /tmp directory meets or exceeds recommendation of minimum 1GB


DATA FROM NERV02 - /TMP DIRECTORY FREE SPACE 



Filesystem            Size  Used Avail Use% Mounted on
/dev/sda7             4.9G  142M  4.5G   4% /tmp

Status on nerv08:
PASS => Free space in /tmp directory meets or exceeds recommendation of minimum 1GB


DATA FROM NERV08 - /TMP DIRECTORY FREE SPACE 



Filesystem            Size  Used Avail Use% Mounted on
/dev/sda7             4.9G  145M  4.5G   4% /tmp

Status on nerv07:
PASS => Free space in /tmp directory meets or exceeds recommendation of minimum 1GB


DATA FROM NERV07 - /TMP DIRECTORY FREE SPACE 



Filesystem            Size  Used Avail Use% Mounted on
/dev/sda7             4.9G  142M  4.5G   4% /tmp

Status on nerv06:
PASS => Free space in /tmp directory meets or exceeds recommendation of minimum 1GB


DATA FROM NERV06 - /TMP DIRECTORY FREE SPACE 



Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             286G   13G  259G   5% /
Top


GI shell limits hard nproc

Recommendation
 The hard nproc shell limit for the Oracle GI software install owner should be >= 16384.
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Shell limit hard nproc for GI is configured according to recommendation


DATA FROM NERV01 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13901
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv03:
PASS => Shell limit hard nproc for GI is configured according to recommendation


DATA FROM NERV03 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13878
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv08:
PASS => Shell limit hard nproc for GI is configured according to recommendation


DATA FROM NERV08 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15539
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv07:
PASS => Shell limit hard nproc for GI is configured according to recommendation


DATA FROM NERV07 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15516
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv06:
PASS => Shell limit hard nproc for GI is configured according to recommendation


DATA FROM NERV06 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31469
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data
Top


DB shell limits soft nofile

Recommendation
 The soft nofile shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 1024.
 
Links
Needs attention on-
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Shell limit soft nofile for DB is configured according to recommendation


DATA FROM NERV01 - DB SHELL LIMITS SOFT NOFILE 



oracle soft nofile 65536

Status on nerv03:
PASS => Shell limit soft nofile for DB is configured according to recommendation


DATA FROM NERV03 - DB SHELL LIMITS SOFT NOFILE 



oracle soft nofile 65536

Status on nerv04:
PASS => Shell limit soft nofile for DB is configured according to recommendation


DATA FROM NERV04 - DB SHELL LIMITS SOFT NOFILE 



oracle soft nofile 65536

Status on nerv05:
PASS => Shell limit soft nofile for DB is configured according to recommendation


DATA FROM NERV05 - DB SHELL LIMITS SOFT NOFILE 



oracle soft nofile 65536

Status on nerv02:
PASS => Shell limit soft nofile for DB is configured according to recommendation


DATA FROM NERV02 - DB SHELL LIMITS SOFT NOFILE 



oracle soft nofile 65536

Status on nerv08:
PASS => Shell limit soft nofile for DB is configured according to recommendation


DATA FROM NERV08 - DB SHELL LIMITS SOFT NOFILE 



oracle soft nofile 65536

Status on nerv07:
PASS => Shell limit soft nofile for DB is configured according to recommendation


DATA FROM NERV07 - DB SHELL LIMITS SOFT NOFILE 



oracle soft nofile 65536

Status on nerv06:
PASS => Shell limit soft nofile for DB is configured according to recommendation


DATA FROM NERV06 - DB SHELL LIMITS SOFT NOFILE 



oracle soft nofile 65536
Top


GI shell limits hard nofile

Recommendation
 The hard nofile shell limit for the Oracle GI software install owner should be >= 65536
 
Links
Needs attention on-
Passed on nerv01, nerv03, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Shell limit hard nofile for GI is configured according to recommendation


DATA FROM NERV01 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13901
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv03:
PASS => Shell limit hard nofile for GI is configured according to recommendation


DATA FROM NERV03 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13878
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv08:
PASS => Shell limit hard nofile for GI is configured according to recommendation


DATA FROM NERV08 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15539
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv07:
PASS => Shell limit hard nofile for GI is configured according to recommendation


DATA FROM NERV07 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15516
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv06:
PASS => Shell limit hard nofile for GI is configured according to recommendation


DATA FROM NERV06 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31469
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data
Top


DB shell limits hard nproc

Recommendation
 The hard nproc shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 16384.
 
Links
Needs attention on-
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Shell limit hard nproc for DB is configured according to recommendation


DATA FROM NERV01 - DB SHELL LIMITS HARD NPROC 



oracle hard nproc 16384

Status on nerv03:
PASS => Shell limit hard nproc for DB is configured according to recommendation


DATA FROM NERV03 - DB SHELL LIMITS HARD NPROC 



oracle hard nproc 16384

Status on nerv04:
PASS => Shell limit hard nproc for DB is configured according to recommendation


DATA FROM NERV04 - DB SHELL LIMITS HARD NPROC 



oracle hard nproc 16384

Status on nerv05:
PASS => Shell limit hard nproc for DB is configured according to recommendation


DATA FROM NERV05 - DB SHELL LIMITS HARD NPROC 



oracle hard nproc 16384

Status on nerv02:
PASS => Shell limit hard nproc for DB is configured according to recommendation


DATA FROM NERV02 - DB SHELL LIMITS HARD NPROC 



oracle hard nproc 16384

Status on nerv08:
PASS => Shell limit hard nproc for DB is configured according to recommendation


DATA FROM NERV08 - DB SHELL LIMITS HARD NPROC 



oracle hard nproc 16384

Status on nerv07:
PASS => Shell limit hard nproc for DB is configured according to recommendation


DATA FROM NERV07 - DB SHELL LIMITS HARD NPROC 



oracle hard nproc 16384

Status on nerv06:
PASS => Shell limit hard nproc for DB is configured according to recommendation


DATA FROM NERV06 - DB SHELL LIMITS HARD NPROC 



oracle hard nproc 16384
Top


GI shell limits soft nofile

Recommendation
 The soft nofile shell limit for the Oracle GI software install owner should be >= 1024.
 
Links
Needs attention on-
Passed on nerv01, nerv03, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Shell limit soft nofile for GI is configured according to recommendation


DATA FROM NERV01 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13901
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv03:
PASS => Shell limit soft nofile for GI is configured according to recommendation


DATA FROM NERV03 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13878
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv08:
PASS => Shell limit soft nofile for GI is configured according to recommendation


DATA FROM NERV08 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15539
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv07:
PASS => Shell limit soft nofile for GI is configured according to recommendation


DATA FROM NERV07 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15516
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv06:
PASS => Shell limit soft nofile for GI is configured according to recommendation


DATA FROM NERV06 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31469
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data
Top


GI shell limits soft nproc

Recommendation
 The soft nproc shell limit for the Oracle GI software install owner should be >= 2047.
 
Links
Needs attention on-
Passed on nerv01, nerv03, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Shell limit soft nproc for GI is configured according to recommendation


DATA FROM NERV01 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13901
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv03:
PASS => Shell limit soft nproc for GI is configured according to recommendation


DATA FROM NERV03 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13878
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv08:
PASS => Shell limit soft nproc for GI is configured according to recommendation


DATA FROM NERV08 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15539
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv07:
PASS => Shell limit soft nproc for GI is configured according to recommendation


DATA FROM NERV07 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15516
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data

Status on nerv06:
PASS => Shell limit soft nproc for GI is configured according to recommendation


DATA FROM NERV06 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31469
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
Click for more data
Top


DB shell limits hard nofile

Recommendation
 The hard nofile shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 65536.
 
Links
Needs attention on-
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Shell limit hard nofile for DB is configured according to recommendation


DATA FROM NERV01 - DB SHELL LIMITS HARD NOFILE 



oracle hard nofile 65536

Status on nerv03:
PASS => Shell limit hard nofile for DB is configured according to recommendation


DATA FROM NERV03 - DB SHELL LIMITS HARD NOFILE 



oracle hard nofile 65536

Status on nerv04:
PASS => Shell limit hard nofile for DB is configured according to recommendation


DATA FROM NERV04 - DB SHELL LIMITS HARD NOFILE 



oracle hard nofile 65536

Status on nerv05:
PASS => Shell limit hard nofile for DB is configured according to recommendation


DATA FROM NERV05 - DB SHELL LIMITS HARD NOFILE 



oracle hard nofile 65536

Status on nerv02:
PASS => Shell limit hard nofile for DB is configured according to recommendation


DATA FROM NERV02 - DB SHELL LIMITS HARD NOFILE 



oracle hard nofile 65536

Status on nerv08:
PASS => Shell limit hard nofile for DB is configured according to recommendation


DATA FROM NERV08 - DB SHELL LIMITS HARD NOFILE 



oracle hard nofile 65536

Status on nerv07:
PASS => Shell limit hard nofile for DB is configured according to recommendation


DATA FROM NERV07 - DB SHELL LIMITS HARD NOFILE 



oracle hard nofile 65536

Status on nerv06:
PASS => Shell limit hard nofile for DB is configured according to recommendation


DATA FROM NERV06 - DB SHELL LIMITS HARD NOFILE 



oracle hard nofile 65536
Top


DB shell limits soft nproc

Recommendation
 This recommendation represents a change or deviation from the documented values and should be considered a temporary measure until the code addresses the problem in a more permanent way.

Problem Statement: 
------------------ 
The soft limit of nproc is not adjusted at runtime by the database. As a 
result, if that limit is reached, the database may become unstable since it 
will fail to fork additional processes. 

Workaround: 
----------- 
Ensure that the soft limit for nproc in /etc/security/limits.conf is set high 
enough to accommodate the maximum number of concurrent threads on the system 
for the given workload. If in doubt, set it to the hard limit. For example: 

oracle  soft    nproc   16384 
oracle  hard    nproc   16384

The documented value for the soft nproc shell limit for the Oracle DB software install owner, as defined in /etc/security/limits.conf, is >= 2047. The advice above of setting soft nproc = hard nproc = 16384 therefore exceeds the documented value and should be treated as a temporary, proactive measure to avoid the database being unable to fork enough processes.
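As a quick runtime sanity check of the workaround above, the soft and hard nproc values of the current session can be compared from a shell running as the DB software owner. A sketch, assuming bash-style ulimit flags:

```shell
# Compare the session's soft and hard max-user-processes (nproc) limits.
# Note: ulimit reports the limits of the current shell session, which only
# reflect limits.conf when the session was started through PAM.
soft=$(ulimit -Su)
hard=$(ulimit -Hu)
if [ "$soft" = "$hard" ] || [ "$soft" = "unlimited" ]; then
  echo "OK: soft nproc ($soft) matches hard nproc ($hard)"
else
  echo "CHECK: soft nproc is $soft, hard nproc is $hard"
fi
```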
 
Links
Needs attention on-
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Shell limit soft nproc for DB is configured according to recommendation


DATA FROM NERV01 - DB SHELL LIMITS SOFT NPROC 



oracle soft nproc 16384

Status on nerv03:
PASS => Shell limit soft nproc for DB is configured according to recommendation


DATA FROM NERV03 - DB SHELL LIMITS SOFT NPROC 



oracle soft nproc 16384

Status on nerv04:
PASS => Shell limit soft nproc for DB is configured according to recommendation


DATA FROM NERV04 - DB SHELL LIMITS SOFT NPROC 



oracle soft nproc 16384

Status on nerv05:
PASS => Shell limit soft nproc for DB is configured according to recommendation


DATA FROM NERV05 - DB SHELL LIMITS SOFT NPROC 



oracle soft nproc 16384

Status on nerv02:
PASS => Shell limit soft nproc for DB is configured according to recommendation


DATA FROM NERV02 - DB SHELL LIMITS SOFT NPROC 



oracle soft nproc 16384

Status on nerv08:
PASS => Shell limit soft nproc for DB is configured according to recommendation


DATA FROM NERV08 - DB SHELL LIMITS SOFT NPROC 



oracle soft nproc 16384

Status on nerv07:
PASS => Shell limit soft nproc for DB is configured according to recommendation


DATA FROM NERV07 - DB SHELL LIMITS SOFT NPROC 



oracle soft nproc 16384

Status on nerv06:
PASS => Shell limit soft nproc for DB is configured according to recommendation


DATA FROM NERV06 - DB SHELL LIMITS SOFT NPROC 



oracle soft nproc 16384
Top


Linux Swap Size

Success Factor CORRECTLY SIZE THE SWAP SPACE
Recommendation
 The following table describes the relationship between installed RAM and the configured swap space requirement:

Note:
On Linux, the Hugepages feature allocates non-swappable memory for large page tables using memory-mapped files. If you enable Hugepages, then you should deduct the memory allocated to Hugepages from the available RAM before calculating swap space.

RAM between 1 GB and 2 GB: Swap 1.5 times the size of RAM (minus memory allocated to Hugepages)

RAM between 2 GB and 16 GB: Swap equal to the size of RAM (minus memory allocated to Hugepages)

RAM (minus memory allocated to Hugepages) more than 16 GB: Swap 16 GB

In other words, the maximum swap size Oracle recommends on Linux is 16 GB.
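The sizing rule above can be sketched as a small shell function over the listed RAM ranges. Input is available RAM in kB (physical RAM minus HugePages), matching the "Total memory" figures reported for each node:

```shell
# Recommended swap in kB for a given available RAM in kB, per the rule above.
# Covers only the ranges listed (the rule starts at 1 GB of RAM).
recommended_swap_kb() {
  ram_kb=$1
  gb=1048576                        # 1 GB expressed in kB
  if [ "$ram_kb" -le $((2 * gb)) ]; then
    echo $(( ram_kb * 3 / 2 ))      # up to 2 GB RAM: 1.5 x RAM
  elif [ "$ram_kb" -le $((16 * gb)) ]; then
    echo "$ram_kb"                  # 2 GB to 16 GB RAM: equal to RAM
  else
    echo $((16 * gb))               # above 16 GB RAM: capped at 16 GB
  fi
}

# nerv01's reported 1794228 kB of RAM yields the report's 2691342 kB figure:
recommended_swap_kb 1794228   # -> 2691342
```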
 
Links
Needs attention on-
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Linux Swap Configuration meets or exceeds Recommendation


DATA FROM NERV01 - LINUX SWAP SIZE 



Total memory on system (Physical RAM - Huge Pages Size) = 1794228
Swap memory found on system = 8392700
Recommended Swap = 2691342

Status on nerv03:
PASS => Linux Swap Configuration meets or exceeds Recommendation


DATA FROM NERV03 - LINUX SWAP SIZE 



Total memory on system (Physical RAM - Huge Pages Size) = 1794228
Swap memory found on system = 8392700
Recommended Swap = 2691342

Status on nerv04:
PASS => Linux Swap Configuration meets or exceeds Recommendation


DATA FROM NERV04 - LINUX SWAP SIZE 



Total memory on system (Physical RAM - Huge Pages Size) = 1794228
Swap memory found on system = 8392700
Recommended Swap = 2691342

Status on nerv05:
PASS => Linux Swap Configuration meets or exceeds Recommendation


DATA FROM NERV05 - LINUX SWAP SIZE 



Total memory on system (Physical RAM - Huge Pages Size) = 4045560
Swap memory found on system = 8392700
Recommended Swap = 4045560

Status on nerv02:
PASS => Linux Swap Configuration meets or exceeds Recommendation


DATA FROM NERV02 - LINUX SWAP SIZE 



Total memory on system (Physical RAM - Huge Pages Size) = 1794228
Swap memory found on system = 8392700
Recommended Swap = 2691342

Status on nerv08:
PASS => Linux Swap Configuration meets or exceeds Recommendation


DATA FROM NERV08 - LINUX SWAP SIZE 



Total memory on system (Physical RAM - Huge Pages Size) = 2003872
Swap memory found on system = 8392700
Recommended Swap = 3005808

Status on nerv07:
PASS => Linux Swap Configuration meets or exceeds Recommendation


DATA FROM NERV07 - LINUX SWAP SIZE 



Total memory on system (Physical RAM - Huge Pages Size) = 2003876
Swap memory found on system = 8392700
Recommended Swap = 3005814

Status on nerv06:
PASS => Linux Swap Configuration meets or exceeds Recommendation


DATA FROM NERV06 - LINUX SWAP SIZE 



Total memory on system (Physical RAM - Huge Pages Size) = 4045560
Swap memory found on system = 8392700
Recommended Swap = 4045560
Top


Non-autoextensible data and temp files

Recommendation
 Benefit / Impact:

The benefit of having "AUTOEXTEND" on is that applications may avoid out-of-space errors.
The impact of verifying that the "AUTOEXTEND" attribute is "ON" is minimal. The impact of setting "AUTOEXTEND" to "ON" varies depending upon whether it is done during database creation, during file addition to a tablespace, or on an existing file.

Risk:

The risk of running out of space in either the tablespace or diskgroup varies by application and cannot be quantified here. A tablespace that runs out of space will interfere with an application, and a diskgroup running out of space could impact the entire database as well as ASM operations (e.g., rebalance operations).

Action / Repair:

To obtain a list of tablespaces that are not set to "AUTOEXTEND", enter the following sqlplus command logged into the database as sysdba:
select file_id, file_name, tablespace_name from dba_data_files where autoextensible <>'YES'
union
select file_id, file_name, tablespace_name from dba_temp_files where autoextensible <> 'YES'; 
The output should be:
no rows selected
If any rows are returned, investigate and correct the condition.
NOTE: Configuring "AUTOEXTEND" to "ON" requires comparing space utilization growth projections at the tablespace level to space available in the diskgroups to permit the expected projected growth while retaining sufficient storage space in reserve to account for ASM rebalance operations that occur either as a result of planned operations or component failure. The resulting growth targets are implemented with the "MAXSIZE" attribute that should always be used in conjunction with the "AUTOEXTEND" attribute. The "MAXSIZE" settings should allow for projected growth while minimizing the prospect of depleting a disk group. The "MAXSIZE" settings will vary by customer and a blanket recommendation cannot be given here.

NOTE: When configuring a file for "AUTOEXTEND" to "ON", the size specified for the "NEXT" attribute should cover all disks in the diskgroup to optimize balance. For example, with a 4MB AU size and 168 disks, the size of the "NEXT" attribute should be a multiple of 672M (4*168).
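The "NEXT" sizing note above amounts to rounding a desired increment up to a multiple of (AU size x disk count). A sketch of the arithmetic using the example's figures:

```shell
# Round a desired file-extension increment (MB) up to a multiple of
# (AU size in MB x number of disks in the diskgroup), per the NOTE above.
next_increment_mb() {
  au_mb=$1 disks=$2 desired_mb=$3
  unit=$(( au_mb * disks ))
  echo $(( (desired_mb + unit - 1) / unit * unit ))
}

# With a 4 MB AU and 168 disks, increments are multiples of 672 MB,
# so a desired 1000 MB increment rounds up to 1344 MB:
next_increment_mb 4 168 1000   # -> 1344
```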
 
Needs attention on-
Passed on RAC01

Status on RAC01:
PASS => All data and temp files are autoextensible


DATA FOR RAC01 FOR NON-AUTOEXTENSIBLE DATA AND TEMP FILES 




Query returned no rows which is expected when the SQL check passes.

Top


Non-multiplexed redo logs

Recommendation
 The online redo logs of an Oracle database are critical to availability and recoverability and should always be multiplexed even in cases where fault tolerance is provided at the storage level.
 
Needs attention on-
Passed on RAC01

Status on RAC01:
PASS => Redo logs are multiplexed


DATA FOR RAC01 FOR NON-MULTIPLEXED REDO LOGS 




         1          2                                                           
        22          2                                                           
         6          2                                                           
        11          2                                                           
        13          2                                                           
         2          2                                                           
        14          2                                                           
        20          2                                                           
        21          2                                                           
         4          2                                                           
         5          2                                                           
         8          2                                                           
        17          2                                                           

        23          2                                                           
Click for more data
Top


Multiplexed controlfiles

Recommendation
 The controlfile of an Oracle database is critical to availability and recoverability and should always be multiplexed even in cases where fault tolerance is provided at the storage level.
 
Needs attention on-
Passed on RAC01

Status on RAC01:
PASS => Controlfile is multiplexed


DATA FOR RAC01 FOR MULTIPLEXED CONTROLFILES 




+DATA/rac01/controlfile/current.261.826713231                                   
+DATA/rac01/controlfile/current.260.826713235                                   
Top


Check for parameter remote_login_passwordfile

Recommendation
 For security reasons remote_login_passwordfile should be set to SHARED or EXCLUSIVE. The two are functionally equivalent.
 
Links
Needs attention on-
Passed on RAC013, RAC011, RAC012, RAC015, rac014, RAC018, RAC017, RAC016

Status on RAC013:
PASS => remote_login_passwordfile is configured according to recommendation

RAC013.remote_login_passwordfile = EXCLUSIVE                                    

Status on RAC011:
PASS => remote_login_passwordfile is configured according to recommendation

RAC011.remote_login_passwordfile = EXCLUSIVE                                    

Status on RAC012:
PASS => remote_login_passwordfile is configured according to recommendation

RAC012.remote_login_passwordfile = EXCLUSIVE                                    

Status on RAC015:
PASS => remote_login_passwordfile is configured according to recommendation

RAC015.remote_login_passwordfile = EXCLUSIVE                                    

Status on rac014:
PASS => remote_login_passwordfile is configured according to recommendation

rac014.remote_login_passwordfile = EXCLUSIVE                                    

Status on RAC018:
PASS => remote_login_passwordfile is configured according to recommendation

RAC018.remote_login_passwordfile = EXCLUSIVE                                    

Status on RAC017:
PASS => remote_login_passwordfile is configured according to recommendation

RAC017.remote_login_passwordfile = EXCLUSIVE                                    

Status on RAC016:
PASS => remote_login_passwordfile is configured according to recommendation

RAC016.remote_login_passwordfile = EXCLUSIVE                                    
Top


Check audit_file_dest

Recommendation
 Old audit files should be cleaned out of audit_file_dest regularly; otherwise the ORACLE_BASE mount point may run out of space, and it may not be possible to collect diagnostic information when a failure occurs
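A common way to implement the cleanup above is a find command filtered on file age. A sketch against a hypothetical scratch directory; substitute your actual audit_file_dest, review with -print before using -delete, and note that `touch -d` is GNU-specific and used here only to stage the demo:

```shell
# Stage a hypothetical adump directory with one fresh and one 40-day-old file.
ADUMP=/tmp/adump.demo            # substitute your real audit_file_dest
mkdir -p "$ADUMP"
touch "$ADUMP/ora_123.aud"
touch -d '40 days ago' "$ADUMP/ora_old.aud"   # GNU touch; demo staging only

# Review which audit files are older than 30 days, then remove them.
find "$ADUMP" -name '*.aud' -mtime +30 -print
find "$ADUMP" -name '*.aud' -mtime +30 -delete
```

In practice this would run from cron as the oracle software owner, leaving recent audit files in place.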
 
Needs attention on-
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => audit_file_dest does not have any audit files older than 30 days


DATA FROM NERV01 - RAC01 DATABASE - CHECK AUDIT_FILE_DEST 



Number of audit files last modified over 30 days ago at /u01/app/oracle/admin/RAC01/adump = 0

Status on nerv03:
PASS => audit_file_dest does not have any audit files older than 30 days


DATA FROM NERV03 - RAC01 DATABASE - CHECK AUDIT_FILE_DEST 



Number of audit files last modified over 30 days ago at /u01/app/oracle/admin/RAC01/adump = 0

Status on nerv04:
PASS => audit_file_dest does not have any audit files older than 30 days


DATA FROM NERV04 - RAC01 DATABASE - CHECK AUDIT_FILE_DEST 



Number of audit files last modified over 30 days ago at /u01/app/oracle/admin/RAC01/adump = 0

Status on nerv05:
PASS => audit_file_dest does not have any audit files older than 30 days


DATA FROM NERV05 - RAC01 DATABASE - CHECK AUDIT_FILE_DEST 



Number of audit files last modified over 30 days ago at /u01/app/oracle/admin/RAC01/adump = 0

Status on nerv02:
PASS => audit_file_dest does not have any audit files older than 30 days


DATA FROM NERV02 - RAC01 DATABASE - CHECK AUDIT_FILE_DEST 



Number of audit files last modified over 30 days ago at /u01/app/oracle/admin/RAC01/adump = 0

Status on nerv08:
PASS => audit_file_dest does not have any audit files older than 30 days


DATA FROM NERV08 - RAC01 DATABASE - CHECK AUDIT_FILE_DEST 



Number of audit files last modified over 30 days ago at /u01/app/oracle/admin/RAC01/adump = 0

Status on nerv07:
PASS => audit_file_dest does not have any audit files older than 30 days


DATA FROM NERV07 - RAC01 DATABASE - CHECK AUDIT_FILE_DEST 



Number of audit files last modified over 30 days ago at /u01/app/oracle/admin/RAC01/adump = 0

Status on nerv06:
PASS => audit_file_dest does not have any audit files older than 30 days


DATA FROM NERV06 - RAC01 DATABASE - CHECK AUDIT_FILE_DEST 



Number of audit files last modified over 30 days ago at /u01/app/oracle/admin/RAC01/adump = 0
Top

Avg message sent queue time on ksxp

Recommendation
 Avg message sent queue time on ksxp (ms) should be very low; averages are usually below 2 ms on most systems.  Higher averages usually mean the system is approaching interconnect or CPU capacity, or that there is an interconnect problem.  The further the average rises above 2 ms, the more severe the problem is likely to be.  

Interconnect performance should be investigated further by analysis using AWR and ASH reports and other network diagnostic tools.  
 
Needs attention on: -
Passed on: RAC01

Status on RAC01:
PASS => Avg message sent queue time on ksxp is <= recommended


DATA FOR RAC01 FOR AVG MESSAGE SENT QUEUE TIME ON KSXP 




avg_message_sent_queue_time_on_ksxp_in_ms = 0                                   
Top

Avg message sent queue time (ms)

Recommendation
 Avg message sent queue time (ms) as derived from AWR should be very low; averages are usually below 2 ms on most systems.  Higher averages usually mean the system is approaching interconnect or CPU capacity, or that there is an interconnect problem.  The further the average rises above 2 ms, the more severe the problem is likely to be.  

Interconnect performance should be investigated further by analysis using AWR and ASH reports and other network diagnostic tools. 
 
Needs attention on: -
Passed on: RAC01

Status on RAC01:
PASS => Avg message sent queue time is <= recommended


DATA FOR RAC01 FOR AVG MESSAGE SENT QUEUE TIME (MS) 




avg_message_sent_queue_time_in_ms = 0                                           
Top

Avg message received queue time

Recommendation
 Avg message received queue time (ms) as derived from AWR should be very low; averages are usually below 2 ms on most systems.  Higher averages usually mean the system is approaching interconnect or CPU capacity, or that there is an interconnect problem.  The further the average rises above 2 ms, the more severe the problem is likely to be.  

Interconnect performance should be investigated further by analysis using AWR and ASH reports and other network diagnostic tools. 
 
Needs attention on: -
Passed on: RAC01

Status on RAC01:
PASS => Avg message received queue time is <= recommended


DATA FOR RAC01 FOR AVG MESSAGE RECEIVED QUEUE TIME 




avg_message_received_queue_time_in_ms = 0                                       
Top

GC block lost

Success Factor: GC LOST BLOCK DIAGNOSTIC GUIDE
Recommendation
 The RDBMS reports global cache lost blocks statistics ("gc cr block lost" and/or "gc current block lost") which could indicate a negative impact on interconnect performance and global cache processing. 

The vast majority of escalations attributed to RDBMS global cache lost blocks can be directly related to faulty or misconfigured interconnects. This guide serves as a starting point for evaluating common (and sometimes obvious) causes.

<b> 1. Are Jumbo Frames configured? </b>

A jumbo frame is an Ethernet frame carrying a payload of around 9000 bytes (frames of around 5000 bytes are sometimes called mini jumbo frames). All servers, switches and routers in the interconnect path must be configured to support the same frame size.

Primary Benefit: performance
Secondary Benefit: cluster stability, due to lower IP overhead and fewer missed network heartbeat check-ins.

<b> 2. What is the configured MTU size for each interconnect interface and interconnect switch ports? </b>

The MTU is the "Maximum Transmission Unit" or the frame size.  The default is 1500 bytes for Ethernet.

<b> 3. Do you observe frame loss at the OS, NIC or switch layer? </b> The netstat, ifconfig and ethtool commands, and switch port statistics, can help you determine this.

Using netstat -s look for:
x fragments dropped after timeout
x packet reassembles failed
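Those counters can be pulled out of netstat -s with a short pipeline. A minimal sketch, using a canned sample of netstat -s output so the parsing is reproducible (the counter values are made up); on a live node, pipe the real command output instead:

```shell
# Sample of the relevant `netstat -s` lines (counter values are made up).
netstat_sample='    12 fragments dropped after timeout
    34 packet reassembles failed'

# Extract each counter; non-zero values suggest fragmentation/reassembly loss.
frag_drops=$(printf '%s\n' "$netstat_sample" | awk '/fragments dropped after timeout/ {print $1}')
reasm_fails=$(printf '%s\n' "$netstat_sample" | awk '/packet reassembles failed/ {print $1}')

echo "fragments dropped after timeout: $frag_drops"
echo "packet reassembles failed: $reasm_fails"
```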

<b> 4. Are the network cards forced to full duplex? </b>

<b> 5. Are network card speed and mode (autonegotiate, fixed full duplex, etc) identical on all nodes and switch? </b>

<b> 6. Is the PCI bus that the NICs (Network Interface Cards) use running at the same speed on all nodes?  </b>

<b> 7. Have you modified the ring buffers away from default for the interconnect NIC for all nodes? </b>

<b> 8. Have you measured interconnect capacity and are you saturating available bandwidth? </b>

Remember that all network values are averaged over a time period.  Best to keep the average time period as small as possible so that spikes of activity are not masked out.

<b> 9. Are the CPUs overloaded (i.e. load average > 20 on newer Intel architectures) on the nodes that exhibit block loss?  </b> The uptime command displays load average information on most platforms.

<b> 10. Have you modified transmit and receive (tx/rx) UDP buffer queue size for the OS from recommended settings?  </b>
          Send and receive queues should be the same size. 
          Queue max and default should be the same size. 
          Recommended queue size = 4194304 (4 megabytes). 
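Translated into Linux kernel settings, these queue sizes map onto the net.core socket buffer parameters. A hedged /etc/sysctl.conf fragment following the 4 MB recommendation above; verify the exact values against the install guide for your platform before applying them:

```
# /etc/sysctl.conf fragment -- send/receive and default/max kept equal,
# per the guidance above; apply with `sysctl -p`. Values follow this
# report's recommendation, not a universal mandate.
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 4194304
net.core.wmem_max = 4194304
```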
                  
<b> 11. What is the NIC driver version and is it the same on all nodes? </b>

<b> 12. Is the NIC driver NAPI (New Application Program Interface) enabled on all nodes (recommended)? </b>

<b> 13. What is the % of block loss compared to total gc block processing for that node? </b> View AWR reports for peak load periods.

Total # of blocks lost:
SQL> select INST_ID, NAME, VALUE from gv$sysstat where name like 'global cache %lost%' and value > 0;

<b> 14. Is flow control enabled (tx & rx) for switch and NIC? </b>  It is not just the servers that need the transmission to pause (Xoff); the network equipment does too.

<b> 15. </b> Using QoS (Quality of Service) is not advised on the network segment over which the RAC private interconnect communicates with the other nodes of the cluster.  This includes the server, switch and DNS (or any other device connected on this segment of the network).
In one AIX case the QoS service was turned on, but not configured on the Cisco 3750 switch, causing an excessive amount of gc cr block lost and other GC waits; these waits caused application performance issues. 
 
Links
Needs attention on: -
Passed on: RAC01

Status on RAC01:
PASS => No Global Cache lost blocks detected


DATA FOR RAC01 FOR GC BLOCK LOST 




No of GC lost block in last 24 hours = 5                                        
Top

Session Failover configuration

Success Factor: CONFIGURE ORACLE NET SERVICES LOAD BALANCING PROPERLY TO DISTRIBUTE CONNECTIONS
Recommendation
 Benefit / Impact:

Higher application availability

Risk:

Application availability problems in case of failed nodes or database instances

Action / Repair:

Application connection failover and load balancing are highly recommended for OLTP environments but may not apply to DSS workloads.  DSS application customers may want to ignore this warning.


The following query will identify the application user sessions that do not have basic connection failover configured:

select username, sid, serial#,process,failover_type,failover_method FROM gv$session where upper(failover_method) != 'BASIC' and upper(failover_type) !='SELECT' and upper(username) not in ('SYS','SYSTEM','SYSMAN','DBSNMP');
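The check passes when sessions connect through a TNS alias with Transparent Application Failover configured. A hedged tnsnames.ora sketch showing the FAILOVER_MODE settings with TYPE=SELECT and METHOD=BASIC that this check looks for; the alias and SCAN host name are hypothetical, and RETRIES/DELAY are illustrative:

```
RAC01_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac02-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC01)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 30)
        (DELAY = 5))))
```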

 
Links
Needs attention on: -
Passed on: RAC01

Status on RAC01:
PASS => Failover method (SELECT) and failover mode (BASIC) are configured properly


DATA FOR RAC01 FOR SESSION FAILOVER CONFIGURATION 




Query returned no rows which is expected when the SQL check passes.

Top

User Open File Limit

Recommendation
 Please consult the Oracle Database Installation Guide for Linux, section "Configure Oracle Installation Owner Shell Limits".
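The shell limits in question normally live in /etc/security/limits.conf. A hedged fragment matching the >= 65536 open-files value this check enforces; the software-owner user name "oracle" is an assumption, so adjust it to your installation owner:

```
# /etc/security/limits.conf fragment -- "oracle" is assumed to be the
# installation owner; 65536 matches this check's threshold.
oracle   soft   nofile   65536
oracle   hard   nofile   65536
```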
 
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Open files limit (ulimit -n) for current user is set to recommended value >= 65536 or unlimited


DATA FROM NERV01 - USER OPEN FILE LIMIT 



core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13901
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Status on nerv03:
PASS => Open files limit (ulimit -n) for current user is set to recommended value >= 65536 or unlimited


DATA FROM NERV03 - USER OPEN FILE LIMIT 



core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13878
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Status on nerv04:
PASS => Open files limit (ulimit -n) for current user is set to recommended value >= 65536 or unlimited


DATA FROM NERV04 - USER OPEN FILE LIMIT 



core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13878
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Status on nerv05:
PASS => Open files limit (ulimit -n) for current user is set to recommended value >= 65536 or unlimited


DATA FROM NERV05 - USER OPEN FILE LIMIT 



core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31467
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Status on nerv02:
PASS => Open files limit (ulimit -n) for current user is set to recommended value >= 65536 or unlimited


DATA FROM NERV02 - USER OPEN FILE LIMIT 



core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 13878
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Status on nerv08:
PASS => Open files limit (ulimit -n) for current user is set to recommended value >= 65536 or unlimited


DATA FROM NERV08 - USER OPEN FILE LIMIT 



core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15539
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Status on nerv07:
PASS => Open files limit (ulimit -n) for current user is set to recommended value >= 65536 or unlimited


DATA FROM NERV07 - USER OPEN FILE LIMIT 



core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15516
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Status on nerv06:
PASS => Open files limit (ulimit -n) for current user is set to recommended value >= 65536 or unlimited


DATA FROM NERV06 - USER OPEN FILE LIMIT 



core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31469
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Top

Redo log Checkpoint not complete

Recommendation
 If checkpoints are not completing, the database may hang or experience performance degradation.  In this case the alert log will contain "checkpoint not complete" messages, and it is recommended that the online redo logs be recreated with a larger size.
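The condition this check detects can be spotted with a simple count of the messages in the alert log. A minimal sketch using a throwaway sample file so the command is runnable anywhere; on a real node, point alert_log at the instance's actual alert log:

```shell
# Throwaway sample alert log so the command can be demonstrated;
# on a real node set alert_log to the instance's alert log path.
alert_log=$(mktemp)
cat > "$alert_log" <<'EOF'
Thread 1 advanced to log sequence 101
Checkpoint not complete
Thread 1 advanced to log sequence 102
EOF

# Case-insensitive count of "checkpoint not complete" messages.
ckpt_msgs=$(grep -ci 'checkpoint not complete' "$alert_log" || true)
echo "checkpoint not complete messages: $ckpt_msgs"
rm -f "$alert_log"
```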
 
Links
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => No indication of checkpoints not being completed


DATA FROM NERV01 - RAC01 DATABASE - REDO LOG CHECKPOINT NOT COMPLETE 



checkpoint not complete messages in /u01/app/oracle/diag/rdbms/rac01/RAC013/trace/alert_RAC013.log = 0

Status on nerv03:
PASS => No indication of checkpoints not being completed


DATA FROM NERV03 - RAC01 DATABASE - REDO LOG CHECKPOINT NOT COMPLETE 



checkpoint not complete messages in /u01/app/oracle/diag/rdbms/rac01/RAC011/trace/alert_RAC011.log = 0

Status on nerv04:
PASS => No indication of checkpoints not being completed


DATA FROM NERV04 - RAC01 DATABASE - REDO LOG CHECKPOINT NOT COMPLETE 



checkpoint not complete messages in /u01/app/oracle/diag/rdbms/rac01/RAC012/trace/alert_RAC012.log = 0

Status on nerv05:
PASS => No indication of checkpoints not being completed


DATA FROM NERV05 - RAC01 DATABASE - REDO LOG CHECKPOINT NOT COMPLETE 



checkpoint not complete messages in /u01/app/oracle/diag/rdbms/rac01/RAC015/trace/alert_RAC015.log = 0

Status on nerv02:
PASS => No indication of checkpoints not being completed


DATA FROM NERV02 - RAC01 DATABASE - REDO LOG CHECKPOINT NOT COMPLETE 



checkpoint not complete messages in /u01/app/oracle/diag/rdbms/rac01/rac014/trace/alert_rac014.log = 0

Status on nerv08:
PASS => No indication of checkpoints not being completed


DATA FROM NERV08 - RAC01 DATABASE - REDO LOG CHECKPOINT NOT COMPLETE 



checkpoint not complete messages in /u01/app/oracle/diag/rdbms/rac01/RAC018/trace/alert_RAC018.log = 0

Status on nerv07:
PASS => No indication of checkpoints not being completed


DATA FROM NERV07 - RAC01 DATABASE - REDO LOG CHECKPOINT NOT COMPLETE 



checkpoint not complete messages in /u01/app/oracle/diag/rdbms/rac01/RAC017/trace/alert_RAC017.log = 0

Status on nerv06:
PASS => No indication of checkpoints not being completed


DATA FROM NERV06 - RAC01 DATABASE - REDO LOG CHECKPOINT NOT COMPLETE 



checkpoint not complete messages in /u01/app/oracle/diag/rdbms/rac01/RAC016/trace/alert_RAC016.log = 0
Top

Avg GC Current Block Receive Time

Recommendation
 The average gc current block receive time should typically be less than 15 milliseconds depending on your system configuration and volume.  This is the average latency of a current request round-trip from the requesting instance to the holding instance and back to the requesting instance.

Use the following query to determine the average gc current block receive time for each instance.

set numwidth 20 
column "AVG CURRENT BLOCK RECEIVE TIME (ms)" format 9999999.9 
select b1.inst_id, ((b1.value / decode(b2.value,0,1)) * 10) "AVG CURRENT BLOCK RECEIVE TIME (ms)" 
from gv$sysstat b1, gv$sysstat b2 
where b1.name = 'gc current block receive time' and 
b2.name = 'gc current blocks received' and b1.inst_id = b2.inst_id ;
 
Needs attention on: -
Passed on: RAC01

Status on RAC01:
PASS => Avg GC CURRENT Block Receive Time Within Acceptable Range


DATA FOR RAC01 FOR AVG GC CURRENT BLOCK RECEIVE TIME 




avg_gc_current_block_receive_time_15ms_exceeded = 0                             
Top

Avg GC CR Block Receive Time

Recommendation
 The average gc cr block receive time should typically be less than 15 milliseconds depending on your system configuration and volume.  This is the average latency of a consistent-read request round-trip from the requesting instance to the holding instance and back to the requesting instance.

Use the following query to determine the average gc cr block receive time for each instance.

set numwidth 20 
column "AVG CR BLOCK RECEIVE TIME (ms)" format 9999999.9 
select b1.inst_id, ((b1.value / decode(b2.value,0,1)) * 10) "AVG CR BLOCK RECEIVE TIME (ms)" 
from gv$sysstat b1, gv$sysstat b2 
where b1.name = 'gc cr block receive time' and 
b2.name = 'gc cr blocks received' and b1.inst_id = b2.inst_id ;
 
Needs attention on: -
Passed on: RAC01

Status on RAC01:
PASS => Avg GC CR Block Receive Time Within Acceptable Range


DATA FOR RAC01 FOR AVG GC CR BLOCK RECEIVE TIME 




avg_gc_cr_block_receive_time_15ms_exceeded = 0                                  
Top

Tablespace allocation type

Recommendation
 It is recommended that for all locally managed tablespaces the allocation type specified be SYSTEM to allow Oracle to automatically determine extent size based on the data profile.
 
Links
Needs attention on: -
Passed on: RAC01

Status on RAC01:
PASS => Tablespace allocation type is SYSTEM for all appropriate tablespaces for RAC01


DATA FOR RAC01 FOR TABLESPACE ALLOCATION TYPE 




Query returned no rows which is expected when the SQL check passes.

Top

Old trace files in background dump destination

Recommendation
 Old trace files should be cleaned out of background_dump_destination regularly; otherwise the ORACLE_BASE mount point may run out of space and it may not be possible to collect diagnostic information when a failure occurs.
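Housekeeping like this is typically a find over the dump destination. A minimal sketch against a scratch directory containing one back-dated file so the age filter can be demonstrated (GNU touch/find assumed); substitute the real background dump destination in practice:

```shell
# Scratch directory standing in for background_dump_dest (GNU touch/find assumed).
dump_dest=$(mktemp -d)
touch -d '40 days ago' "$dump_dest/ora_old_1234.trc"   # back-dated trace file
touch "$dump_dest/ora_new_5678.trc"                    # recent trace file

# Count (not yet delete) trace files last modified more than 30 days ago.
old_traces=$(find "$dump_dest" -name '*.trc' -mtime +30 | wc -l)
echo "trace files older than 30 days: $old_traces"

# After reviewing the list, deletion would append -delete to the find command.
rm -rf "$dump_dest"
```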
 
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => background_dump_dest does not have any files older than 30 days


DATA FROM NERV01 - RAC01 DATABASE - OLD TRACE FILES IN BACKGROUND DUMP DESTINATION 



bdump dest files older than 30 days = 0

Status on nerv03:
PASS => background_dump_dest does not have any files older than 30 days


DATA FROM NERV03 - RAC01 DATABASE - OLD TRACE FILES IN BACKGROUND DUMP DESTINATION 



bdump dest files older than 30 days = 0

Status on nerv04:
PASS => background_dump_dest does not have any files older than 30 days


DATA FROM NERV04 - RAC01 DATABASE - OLD TRACE FILES IN BACKGROUND DUMP DESTINATION 



bdump dest files older than 30 days = 0

Status on nerv05:
PASS => background_dump_dest does not have any files older than 30 days


DATA FROM NERV05 - RAC01 DATABASE - OLD TRACE FILES IN BACKGROUND DUMP DESTINATION 



bdump dest files older than 30 days = 0

Status on nerv02:
PASS => background_dump_dest does not have any files older than 30 days


DATA FROM NERV02 - RAC01 DATABASE - OLD TRACE FILES IN BACKGROUND DUMP DESTINATION 



bdump dest files older than 30 days = 0

Status on nerv08:
PASS => background_dump_dest does not have any files older than 30 days


DATA FROM NERV08 - RAC01 DATABASE - OLD TRACE FILES IN BACKGROUND DUMP DESTINATION 



bdump dest files older than 30 days = 0

Status on nerv07:
PASS => background_dump_dest does not have any files older than 30 days


DATA FROM NERV07 - RAC01 DATABASE - OLD TRACE FILES IN BACKGROUND DUMP DESTINATION 



bdump dest files older than 30 days = 0

Status on nerv06:
PASS => background_dump_dest does not have any files older than 30 days


DATA FROM NERV06 - RAC01 DATABASE - OLD TRACE FILES IN BACKGROUND DUMP DESTINATION 



bdump dest files older than 30 days = 0
Top

Alert log file size

Recommendation
 If the alert log file is larger than 50 MB, it should be rolled over to a new file and the old file should be backed up.
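Rolling the alert log over amounts to a copy-then-truncate. A minimal sketch against a small temporary file so it runs anywhere; in practice alert_log would be the instance's alert log, and the 50 MB threshold comes from this check:

```shell
# Temporary stand-in for the alert log; point at the real file in practice.
alert_log=$(mktemp)
echo 'sample alert log content' > "$alert_log"

limit=$((50 * 1024 * 1024))          # 50 MB threshold from this check
size_bytes=$(wc -c < "$alert_log")

if [ "$size_bytes" -gt "$limit" ]; then
    cp "$alert_log" "${alert_log}.$(date +%Y%m%d)"   # back up the old log
    : > "$alert_log"                                 # truncate in place
fi
echo "alert log size: $size_bytes bytes"
rm -f "$alert_log"
```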
 
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Alert log is not too big


DATA FROM NERV01 - RAC01 DATABASE - ALERT LOG FILE SIZE 



-rw-r----- 1 oracle oinstall 185390 Sep 25 04:36 /u01/app/oracle/diag/rdbms/rac01/RAC013/trace/alert_RAC013.log

Status on nerv03:
PASS => Alert log is not too big


DATA FROM NERV03 - RAC01 DATABASE - ALERT LOG FILE SIZE 



-rw-r----- 1 oracle oinstall 389647 Sep 25 04:36 /u01/app/oracle/diag/rdbms/rac01/RAC011/trace/alert_RAC011.log

Status on nerv04:
PASS => Alert log is not too big


DATA FROM NERV04 - RAC01 DATABASE - ALERT LOG FILE SIZE 



-rw-r----- 1 oracle oinstall 361853 Sep 25 02:00 /u01/app/oracle/diag/rdbms/rac01/RAC012/trace/alert_RAC012.log

Status on nerv05:
PASS => Alert log is not too big


DATA FROM NERV05 - RAC01 DATABASE - ALERT LOG FILE SIZE 



-rw-r----- 1 oracle oinstall 165767 Sep 25 02:00 /u01/app/oracle/diag/rdbms/rac01/RAC015/trace/alert_RAC015.log

Status on nerv02:
PASS => Alert log is not too big


DATA FROM NERV02 - RAC01 DATABASE - ALERT LOG FILE SIZE 



-rw-r----- 1 oracle oinstall 171048 Sep 25 04:36 /u01/app/oracle/diag/rdbms/rac01/rac014/trace/alert_rac014.log

Status on nerv08:
PASS => Alert log is not too big


DATA FROM NERV08 - RAC01 DATABASE - ALERT LOG FILE SIZE 



-rw-r----- 1 oracle oinstall 170925 Sep 25 03:01 /u01/app/oracle/diag/rdbms/rac01/RAC018/trace/alert_RAC018.log

Status on nerv07:
PASS => Alert log is not too big


DATA FROM NERV07 - RAC01 DATABASE - ALERT LOG FILE SIZE 



-rw-r----- 1 oracle oinstall 167853 Sep 25 06:00 /u01/app/oracle/diag/rdbms/rac01/RAC017/trace/alert_RAC017.log

Status on nerv06:
PASS => Alert log is not too big


DATA FROM NERV06 - RAC01 DATABASE - ALERT LOG FILE SIZE 



-rw-r----- 1 oracle oinstall 190037 Sep 25 04:36 /u01/app/oracle/diag/rdbms/rac01/RAC016/trace/alert_RAC016.log
Top

Check ORA-07445 errors

Recommendation
 ORA-07445 errors may lead to database block corruption or other serious issues. Please see the trace file referenced next to the ORA-07445 error in the alert log for more information. If you are not able to resolve the problem, please open a service request with Oracle Support.
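Scanning for these errors is a grep over the alert log. A minimal sketch using a throwaway sample file, since real alert log paths vary per instance; the same pattern works for ORA-00600:

```shell
# Throwaway sample alert log; on a real node use the instance's alert log.
alert_log=$(mktemp)
cat > "$alert_log" <<'EOF'
Completed: ALTER DATABASE OPEN
ORA-07445: exception encountered: core dump
EOF

# grep -c prints 0 (and exits non-zero) when nothing matches, hence || true.
ora7445=$(grep -c 'ORA-07445' "$alert_log" || true)
ora600=$(grep -c 'ORA-00600' "$alert_log" || true)
echo "ORA-07445: $ora7445  ORA-00600: $ora600"
rm -f "$alert_log"
```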
 
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => No ORA-07445 errors found in alert log


DATA FROM NERV01 - RAC01 DATABASE - CHECK ORA-07445 ERRORS 




Status on nerv03:
PASS => No ORA-07445 errors found in alert log


DATA FROM NERV03 - RAC01 DATABASE - CHECK ORA-07445 ERRORS 




Status on nerv04:
PASS => No ORA-07445 errors found in alert log


DATA FROM NERV04 - RAC01 DATABASE - CHECK ORA-07445 ERRORS 




Status on nerv05:
PASS => No ORA-07445 errors found in alert log


DATA FROM NERV05 - RAC01 DATABASE - CHECK ORA-07445 ERRORS 




Status on nerv02:
PASS => No ORA-07445 errors found in alert log


DATA FROM NERV02 - RAC01 DATABASE - CHECK ORA-07445 ERRORS 




Status on nerv08:
PASS => No ORA-07445 errors found in alert log


DATA FROM NERV08 - RAC01 DATABASE - CHECK ORA-07445 ERRORS 




Status on nerv07:
PASS => No ORA-07445 errors found in alert log


DATA FROM NERV07 - RAC01 DATABASE - CHECK ORA-07445 ERRORS 




Status on nerv06:
PASS => No ORA-07445 errors found in alert log


DATA FROM NERV06 - RAC01 DATABASE - CHECK ORA-07445 ERRORS 



Top

Check ORA-00600 errors

Recommendation
 ORA-00600 errors may lead to database block corruption or other serious issues. Please see the trace file referenced next to the ORA-00600 error in the alert log for more information. If you are not able to resolve the problem, please open a service request with Oracle Support.
 
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => No ORA-00600 errors found in alert log


DATA FROM NERV01 - RAC01 DATABASE - CHECK ORA-00600 ERRORS 




Status on nerv03:
PASS => No ORA-00600 errors found in alert log


DATA FROM NERV03 - RAC01 DATABASE - CHECK ORA-00600 ERRORS 




Status on nerv04:
PASS => No ORA-00600 errors found in alert log


DATA FROM NERV04 - RAC01 DATABASE - CHECK ORA-00600 ERRORS 




Status on nerv05:
PASS => No ORA-00600 errors found in alert log


DATA FROM NERV05 - RAC01 DATABASE - CHECK ORA-00600 ERRORS 




Status on nerv02:
PASS => No ORA-00600 errors found in alert log


DATA FROM NERV02 - RAC01 DATABASE - CHECK ORA-00600 ERRORS 




Status on nerv08:
PASS => No ORA-00600 errors found in alert log


DATA FROM NERV08 - RAC01 DATABASE - CHECK ORA-00600 ERRORS 




Status on nerv07:
PASS => No ORA-00600 errors found in alert log


DATA FROM NERV07 - RAC01 DATABASE - CHECK ORA-00600 ERRORS 




Status on nerv06:
PASS => No ORA-00600 errors found in alert log


DATA FROM NERV06 - RAC01 DATABASE - CHECK ORA-00600 ERRORS 



Top

Check user_dump_destination

Recommendation
 Old trace files should be cleaned out of user_dump_destination regularly; otherwise the ORACLE_BASE mount point may run out of space and it may not be possible to collect diagnostic information when a failure occurs.
 
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => user_dump_dest does not have trace files older than 30 days


DATA FROM NERV01 - RAC01 DATABASE - CHECK USER_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/RAC013/trace which are older than 30 days

Status on nerv03:
PASS => user_dump_dest does not have trace files older than 30 days


DATA FROM NERV03 - RAC01 DATABASE - CHECK USER_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/RAC011/trace which are older than 30 days

Status on nerv04:
PASS => user_dump_dest does not have trace files older than 30 days


DATA FROM NERV04 - RAC01 DATABASE - CHECK USER_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/RAC012/trace which are older than 30 days

Status on nerv05:
PASS => user_dump_dest does not have trace files older than 30 days


DATA FROM NERV05 - RAC01 DATABASE - CHECK USER_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/RAC015/trace which are older than 30 days

Status on nerv02:
PASS => user_dump_dest does not have trace files older than 30 days


DATA FROM NERV02 - RAC01 DATABASE - CHECK USER_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/rac014/trace which are older than 30 days

Status on nerv08:
PASS => user_dump_dest does not have trace files older than 30 days


DATA FROM NERV08 - RAC01 DATABASE - CHECK USER_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/RAC018/trace which are older than 30 days

Status on nerv07:
PASS => user_dump_dest does not have trace files older than 30 days


DATA FROM NERV07 - RAC01 DATABASE - CHECK USER_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/RAC017/trace which are older than 30 days

Status on nerv06:
PASS => user_dump_dest does not have trace files older than 30 days


DATA FROM NERV06 - RAC01 DATABASE - CHECK USER_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/RAC016/trace which are older than 30 days
Top

Check core_dump_destination

Recommendation
 Old core dump files should be cleaned out of core_dump_destination regularly; otherwise the ORACLE_BASE mount point may run out of space and it may not be possible to collect diagnostic information when a failure occurs.
 
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => core_dump_dest does not have too many older core dump files


DATA FROM NERV01 - RAC01 DATABASE - CHECK CORE_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/RAC013/cdump which are older than 30 days

Status on nerv03:
PASS => core_dump_dest does not have too many older core dump files


DATA FROM NERV03 - RAC01 DATABASE - CHECK CORE_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/RAC011/cdump which are older than 30 days

Status on nerv04:
PASS => core_dump_dest does not have too many older core dump files


DATA FROM NERV04 - RAC01 DATABASE - CHECK CORE_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/RAC012/cdump which are older than 30 days

Status on nerv05:
PASS => core_dump_dest does not have too many older core dump files


DATA FROM NERV05 - RAC01 DATABASE - CHECK CORE_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/RAC015/cdump which are older than 30 days

Status on nerv02:
PASS => core_dump_dest does not have too many older core dump files


DATA FROM NERV02 - RAC01 DATABASE - CHECK CORE_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/rac014/cdump which are older than 30 days

Status on nerv08:
PASS => core_dump_dest does not have too many older core dump files


DATA FROM NERV08 - RAC01 DATABASE - CHECK CORE_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/RAC018/cdump which are older than 30 days

Status on nerv07:
PASS => core_dump_dest does not have too many older core dump files


DATA FROM NERV07 - RAC01 DATABASE - CHECK CORE_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/RAC017/cdump which are older than 30 days

Status on nerv06:
PASS => core_dump_dest does not have too many older core dump files


DATA FROM NERV06 - RAC01 DATABASE - CHECK CORE_DUMP_DESTINATION 



0 files found at /u01/app/oracle/diag/rdbms/rac01/RAC016/cdump which are older than 30 days
Top

Check for parameter semmns

Recommendation
 SEMMNS should be set >= 32000
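On Linux the four System V semaphore parameters are set together through a single kernel.sem line. A hedged /etc/sysctl.conf fragment: SEMMSL, SEMMNS and SEMMNI follow the thresholds in this report's semaphore checks, while SEMOPM=100 is the commonly documented companion value (an assumption, not taken from this report):

```
# kernel.sem fields, in order: SEMMSL SEMMNS SEMOPM SEMMNI.
# 250 / 32000 / 128 match this report's checks; SEMOPM=100 is assumed.
kernel.sem = 250 32000 100 128
```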
 
Links
Needs attention on: -
Passed on: nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Kernel Parameter SEMMNS OK

semmns = 32000

Status on nerv03:
PASS => Kernel Parameter SEMMNS OK

semmns = 32000

Status on nerv04:
PASS => Kernel Parameter SEMMNS OK

semmns = 32000

Status on nerv05:
PASS => Kernel Parameter SEMMNS OK

semmns = 32000

Status on nerv02:
PASS => Kernel Parameter SEMMNS OK

semmns = 32000

Status on nerv08:
PASS => Kernel Parameter SEMMNS OK

semmns = 32000

Status on nerv07:
PASS => Kernel Parameter SEMMNS OK

semmns = 32000

Status on nerv06:
PASS => Kernel Parameter SEMMNS OK

semmns = 32000
Top

Check for parameter kernel.shmmni

Recommendation
 kernel.shmmni should be set >= 4096
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Kernel Parameter kernel.shmmni OK

kernel.shmmni = 4096

Status on nerv03:
PASS => Kernel Parameter kernel.shmmni OK

kernel.shmmni = 4096

Status on nerv04:
PASS => Kernel Parameter kernel.shmmni OK

kernel.shmmni = 4096

Status on nerv05:
PASS => Kernel Parameter kernel.shmmni OK

kernel.shmmni = 4096

Status on nerv02:
PASS => Kernel Parameter kernel.shmmni OK

kernel.shmmni = 4096

Status on nerv08:
PASS => Kernel Parameter kernel.shmmni OK

kernel.shmmni = 4096

Status on nerv07:
PASS => Kernel Parameter kernel.shmmni OK

kernel.shmmni = 4096

Status on nerv06:
PASS => Kernel Parameter kernel.shmmni OK

kernel.shmmni = 4096
Top

Check for parameter semmsl

Recommendation
 SEMMSL should be set >= 250
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Kernel Parameter SEMMSL OK

semmsl = 250

Status on nerv03:
PASS => Kernel Parameter SEMMSL OK

semmsl = 250

Status on nerv04:
PASS => Kernel Parameter SEMMSL OK

semmsl = 250

Status on nerv05:
PASS => Kernel Parameter SEMMSL OK

semmsl = 250

Status on nerv02:
PASS => Kernel Parameter SEMMSL OK

semmsl = 250

Status on nerv08:
PASS => Kernel Parameter SEMMSL OK

semmsl = 250

Status on nerv07:
PASS => Kernel Parameter SEMMSL OK

semmsl = 250

Status on nerv06:
PASS => Kernel Parameter SEMMSL OK

semmsl = 250
Top

Check for parameter semmni

Recommendation
 SEMMNI should be set >= 128
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Kernel Parameter SEMMNI OK

semmni = 128

Status on nerv03:
PASS => Kernel Parameter SEMMNI OK

semmni = 128

Status on nerv04:
PASS => Kernel Parameter SEMMNI OK

semmni = 128

Status on nerv05:
PASS => Kernel Parameter SEMMNI OK

semmni = 128

Status on nerv02:
PASS => Kernel Parameter SEMMNI OK

semmni = 128

Status on nerv08:
PASS => Kernel Parameter SEMMNI OK

semmni = 128

Status on nerv07:
PASS => Kernel Parameter SEMMNI OK

semmni = 128

Status on nerv06:
PASS => Kernel Parameter SEMMNI OK

semmni = 128
Top

Check for parameter semopm

Recommendation
 SEMOPM should be set >= 100 
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Kernel Parameter SEMOPM OK

semopm = 100

Status on nerv03:
PASS => Kernel Parameter SEMOPM OK

semopm = 100

Status on nerv04:
PASS => Kernel Parameter SEMOPM OK

semopm = 100

Status on nerv05:
PASS => Kernel Parameter SEMOPM OK

semopm = 100

Status on nerv02:
PASS => Kernel Parameter SEMOPM OK

semopm = 100

Status on nerv08:
PASS => Kernel Parameter SEMOPM OK

semopm = 100

Status on nerv07:
PASS => Kernel Parameter SEMOPM OK

semopm = 100

Status on nerv06:
PASS => Kernel Parameter SEMOPM OK

semopm = 100
Top

Check for parameter kernel.shmall

Recommendation
 Starting with Oracle 10g, kernel.shmall should be set >= 2097152.
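The shared-memory limits verified in this and the earlier kernel.shmmni check can be sketched as a sysctl.conf fragment; the values are the recommended minimums from these checks (kernel.shmall is expressed in pages):

```
# /etc/sysctl.conf -- shared memory limits for Oracle
kernel.shmmni = 4096
kernel.shmall = 2097152
```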
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Kernel Parameter kernel.shmall OK

kernel.shmall = 1073741824

Status on nerv03:
PASS => Kernel Parameter kernel.shmall OK

kernel.shmall = 1073741824

Status on nerv04:
PASS => Kernel Parameter kernel.shmall OK

kernel.shmall = 1073741824

Status on nerv05:
PASS => Kernel Parameter kernel.shmall OK

kernel.shmall = 1073741824

Status on nerv02:
PASS => Kernel Parameter kernel.shmall OK

kernel.shmall = 1073741824

Status on nerv08:
PASS => Kernel Parameter kernel.shmall OK

kernel.shmall = 1073741824

Status on nerv07:
PASS => Kernel Parameter kernel.shmall OK

kernel.shmall = 1073741824

Status on nerv06:
PASS => Kernel Parameter kernel.shmall OK

kernel.shmall = 1073741824
Top

Verify sys and system users default tablespace is system

Success Factor: DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 Benefit / Impact:

It is recommended to keep the default tablespace for the SYS and SYSTEM schemas set to SYSTEM. All standard dictionary objects, as well as those created by added options, are then located in the same place, with no risk of recording dictionary data in other datafiles.

Risk

If the default tablespace for SYS or SYSTEM is not SYSTEM, data dictionary objects can be created in other locations and cannot be controlled during database maintenance activities. This carries a potential risk of severe data dictionary corruption that may entail time-consuming recovery steps.

Action / Repair:

If the SYS or SYSTEM schema has a default tablespace other than SYSTEM, it is recommended to follow the instructions given in NoteID? : 1111111.2

SQL> SELECT username, default_tablespace
     FROM dba_users
     WHERE username in ('SYS','SYSTEM');

If DEFAULT_TABLESPACE is anything other than the SYSTEM tablespace, modify the default tablespace to SYSTEM using the commands below.

SQL> ALTER USER SYS DEFAULT TABLESPACE SYSTEM;
SQL> ALTER USER SYSTEM DEFAULT TABLESPACE SYSTEM;
 
Links
Needs attention on -
Passed on RAC01

Status on RAC01:
PASS => The SYS and SYSTEM userids have a default tablespace of SYSTEM


DATA FOR RAC01 FOR VERIFY SYS AND SYSTEM USERS DEFAULT TABLESPACE IS SYSTEM 




SYSTEM                                                                          
SYSTEM                                                                          
Top

Check for parameter remote_listener

Recommendation
 Using the remote_listener initialization parameter, instances running on remote nodes register with each node's local listener; this provides connect-time load balancing and failover if a local listener or node goes down.
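A hedged sketch of setting the parameter cluster-wide to the SCAN address reported by this check (syntax assumes the instances use an spfile):

```sql
-- Point all instances at the SCAN listener for connect-time
-- load balancing and failover (address taken from this cluster's data).
ALTER SYSTEM SET remote_listener = 'rac02-scan.localdomain:1521'
  SID = '*' SCOPE = BOTH;
```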
 
Needs attention on -
Passed on RAC013, RAC011, RAC012, RAC015, rac014, RAC018, RAC017, RAC016

Status on RAC013:
PASS => Remote listener parameter is set to achieve load balancing and failover

RAC013.remote_listener = rac02-scan.localdomain:1521                            

Status on RAC011:
PASS => Remote listener parameter is set to achieve load balancing and failover

RAC011.remote_listener = rac02-scan.localdomain:1521                            

Status on RAC012:
PASS => Remote listener parameter is set to achieve load balancing and failover

RAC012.remote_listener = rac02-scan.localdomain:1521                            

Status on RAC015:
PASS => Remote listener parameter is set to achieve load balancing and failover

RAC015.remote_listener = rac02-scan.localdomain:1521                            

Status on rac014:
PASS => Remote listener parameter is set to achieve load balancing and failover

rac014.remote_listener = rac02-scan.localdomain:1521                            

Status on RAC018:
PASS => Remote listener parameter is set to achieve load balancing and failover

RAC018.remote_listener = rac02-scan.localdomain:1521                            

Status on RAC017:
PASS => Remote listener parameter is set to achieve load balancing and failover

RAC017.remote_listener = rac02-scan.localdomain:1521                            

Status on RAC016:
PASS => Remote listener parameter is set to achieve load balancing and failover

RAC016.remote_listener = rac02-scan.localdomain:1521                            
Top

maximum parallel asynch io

Recommendation
 A message in the alert.log similar to the one below indicates that /proc/sys/fs/aio-max-nr is too low. Set it to 1048576 proactively, and increase it further if a similar message still appears; a problem in this area could lead to availability issues.

Warning: OS async I/O limit 128 is lower than recovery batch 1024
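The proactive value above corresponds to this sysctl.conf fragment:

```
# /etc/sysctl.conf -- raise the async I/O descriptor limit
fs.aio-max-nr = 1048576
```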
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)


DATA FROM NERV01 - MAXIMUM PARALLEL ASYNCH IO 



aio-max-nr = 1048576

Status on nerv03:
PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)


DATA FROM NERV03 - MAXIMUM PARALLEL ASYNCH IO 



aio-max-nr = 1048576

Status on nerv04:
PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)


DATA FROM NERV04 - MAXIMUM PARALLEL ASYNCH IO 



aio-max-nr = 1048576

Status on nerv05:
PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)


DATA FROM NERV05 - MAXIMUM PARALLEL ASYNCH IO 



aio-max-nr = 1048576

Status on nerv02:
PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)


DATA FROM NERV02 - MAXIMUM PARALLEL ASYNCH IO 



aio-max-nr = 1048576

Status on nerv08:
PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)


DATA FROM NERV08 - MAXIMUM PARALLEL ASYNCH IO 



aio-max-nr = 1048576

Status on nerv07:
PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)


DATA FROM NERV07 - MAXIMUM PARALLEL ASYNCH IO 



aio-max-nr = 1048576

Status on nerv06:
PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)


DATA FROM NERV06 - MAXIMUM PARALLEL ASYNCH IO 



aio-max-nr = 1048576
Top

Old log files in client directory in crs_home

Recommendation
 Many old log files in the $CRS_HOME/log/hostname/client directory can cause CRS performance issues, so delete log files older than 15 days.
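The cleanup can be sketched as follows. The sketch runs against a temporary directory so the age-based find logic can be exercised safely before pointing it at the real $CRS_HOME/log/hostname/client directory; GNU touch and find are assumed.

```shell
# Hedged sketch: delete log files older than 15 days. CLIENT_DIR here is a
# temporary stand-in for $CRS_HOME/log/<hostname>/client.
CLIENT_DIR=$(mktemp -d)
touch -d '20 days ago' "$CLIENT_DIR/old.log"   # simulate a log older than 15 days
touch "$CLIENT_DIR/new.log"                    # a recent log that must survive
find "$CLIENT_DIR" -maxdepth 1 -type f -mtime +15 -delete
ls "$CLIENT_DIR"                               # only new.log remains
```

Run the find with -print instead of -delete first as a dry run before deleting anything under a real Grid home.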
 
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => $CRS_HOME/log/hostname/client directory does not have too many older log files


DATA FROM NERV01 - OLD LOG FILES IN CLIENT DIRECTORY IN CRS_HOME 



0 files in /u01/app/11.2.0/grid/log/nerv01/client directory are older than 15 days

Status on nerv03:
PASS => $CRS_HOME/log/hostname/client directory does not have too many older log files


DATA FROM NERV03 - OLD LOG FILES IN CLIENT DIRECTORY IN CRS_HOME 



0 files in /u01/app/11.2.0/grid/log/nerv03/client directory are older than 15 days

Status on nerv04:
PASS => $CRS_HOME/log/hostname/client directory does not have too many older log files


DATA FROM NERV04 - OLD LOG FILES IN CLIENT DIRECTORY IN CRS_HOME 



0 files in /u01/app/11.2.0/grid/log/nerv04/client directory are older than 15 days

Status on nerv05:
PASS => $CRS_HOME/log/hostname/client directory does not have too many older log files


DATA FROM NERV05 - OLD LOG FILES IN CLIENT DIRECTORY IN CRS_HOME 



0 files in /u01/app/11.2.0/grid/log/nerv05/client directory are older than 15 days

Status on nerv02:
PASS => $CRS_HOME/log/hostname/client directory does not have too many older log files


DATA FROM NERV02 - OLD LOG FILES IN CLIENT DIRECTORY IN CRS_HOME 



0 files in /u01/app/11.2.0/grid/log/nerv02/client directory are older than 15 days

Status on nerv08:
PASS => $CRS_HOME/log/hostname/client directory does not have too many older log files


DATA FROM NERV08 - OLD LOG FILES IN CLIENT DIRECTORY IN CRS_HOME 



0 files in /u01/app/11.2.0/grid/log/nerv08/client directory are older than 15 days

Status on nerv07:
PASS => $CRS_HOME/log/hostname/client directory does not have too many older log files


DATA FROM NERV07 - OLD LOG FILES IN CLIENT DIRECTORY IN CRS_HOME 



0 files in /u01/app/11.2.0/grid/log/nerv07/client directory are older than 15 days

Status on nerv06:
PASS => $CRS_HOME/log/hostname/client directory does not have too many older log files


DATA FROM NERV06 - OLD LOG FILES IN CLIENT DIRECTORY IN CRS_HOME 



0 files in /u01/app/11.2.0/grid/log/nerv06/client directory are older than 15 days
Top

OCR backup

Success Factor: USE EXTERNAL OR ORACLE PROVIDED REDUNDANCY FOR OCR
Recommendation
 Oracle Clusterware automatically creates OCR backups every four hours and retains the last three of these copies at any one time. The CRSD process that creates the backups also creates and retains an OCR backup for each full day and at the end of each week. The backups shown in the data below can be listed on any node with the ocrconfig -showbackup command.
 
Needs attention on -
Passed on nerv01

Status on nerv01:
PASS => OCR is being backed up daily


DATA FROM NERV01 - OCR BACKUP 




nerv08     2013/09/25 05:53:22     /u01/shared_config/rac02/backup_ocr/backup00.ocr

nerv08     2013/09/25 01:53:15     /u01/shared_config/rac02/backup_ocr/backup01.ocr

nerv08     2013/09/24 21:53:07     /u01/shared_config/rac02/backup_ocr/backup02.ocr

nerv04     2013/09/24 03:51:55     /u01/shared_config/rac02/backup_ocr/day.ocr

nerv03     2013/09/20 21:10:20     /u01/app/11.2.0/grid/cdata/rac02/week.ocr

nerv04     2013/09/24 15:11:55     /u01/shared_config/rac02/backup_ocr/backup_20130924_151155.ocr

nerv03     2013/09/21 17:31:54     /u01/shared_config/rac02/backup_ocr/backup_20130921_173154.ocr

nerv03     2013/09/21 17:31:03     /u01/shared_config/rac02/backup_ocr/backup_20130921_173103.ocr
Click for more data
Top

Check for parameter net.core.rmem_max

Success Factor: VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings:

net.core.rmem_default =262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g and above)  

net.core.wmem_default =262144
net.core.wmem_max =1048576
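For 11g and above, the summary maps to this sysctl.conf fragment:

```
# /etc/sysctl.conf -- UDP buffer sizes for the RAC interconnect (11g+)
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
```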
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => net.core.rmem_max is Configured Properly

net.core.rmem_max = 4194304

Status on nerv03:
PASS => net.core.rmem_max is Configured Properly

net.core.rmem_max = 4194304

Status on nerv04:
PASS => net.core.rmem_max is Configured Properly

net.core.rmem_max = 4194304

Status on nerv05:
PASS => net.core.rmem_max is Configured Properly

net.core.rmem_max = 4194304

Status on nerv02:
PASS => net.core.rmem_max is Configured Properly

net.core.rmem_max = 4194304

Status on nerv08:
PASS => net.core.rmem_max is Configured Properly

net.core.rmem_max = 4194304

Status on nerv07:
PASS => net.core.rmem_max is Configured Properly

net.core.rmem_max = 4194304

Status on nerv06:
PASS => net.core.rmem_max is Configured Properly

net.core.rmem_max = 4194304
Top

Check for parameter spfile

Recommendation
 Oracle recommends using one spfile for all instances of a clustered database. With an spfile, the DBA can change many parameters dynamically.
 
Links
Needs attention on -
Passed on RAC013, RAC011, RAC012, RAC015, rac014, RAC018, RAC017, RAC016

Status on RAC013:
PASS => Instance is using spfile

RAC013.spfile = +DATA/rac01/spfilerac01.ora                                     

Status on RAC011:
PASS => Instance is using spfile

RAC011.spfile = +DATA/rac01/spfilerac01.ora                                     

Status on RAC012:
PASS => Instance is using spfile

RAC012.spfile = +DATA/rac01/spfilerac01.ora                                     

Status on RAC015:
PASS => Instance is using spfile

RAC015.spfile = +DATA/rac01/spfilerac01.ora                                     

Status on rac014:
PASS => Instance is using spfile

rac014.spfile = +DATA/rac01/spfilerac01.ora                                     

Status on RAC018:
PASS => Instance is using spfile

RAC018.spfile = +DATA/rac01/spfilerac01.ora                                     

Status on RAC017:
PASS => Instance is using spfile

RAC017.spfile = +DATA/rac01/spfilerac01.ora                                     

Status on RAC016:
PASS => Instance is using spfile

RAC016.spfile = +DATA/rac01/spfilerac01.ora                                     
Top

Non-routable network for interconnect

Success Factor: USE NON-ROUTABLE NETWORK ADDRESSES FOR PRIVATE INTERCONNECT
Recommendation
 The interconnect should be configured on a non-routable private LAN; the interconnect IP addresses should not be accessible from outside that LAN.
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => Interconnect is configured on non-routable network addresses


DATA FROM NERV01 - NON-ROUTABLE NETWORK FOR INTERCONNECT 



eth1  192.168.3.0  global  cluster_interconnect

Status on nerv03:
PASS => Interconnect is configured on non-routable network addresses


DATA FROM NERV03 - NON-ROUTABLE NETWORK FOR INTERCONNECT 



eth1  192.168.3.0  global  cluster_interconnect

Status on nerv04:
PASS => Interconnect is configured on non-routable network addresses


DATA FROM NERV04 - NON-ROUTABLE NETWORK FOR INTERCONNECT 



eth1  192.168.3.0  global  cluster_interconnect

Status on nerv05:
PASS => Interconnect is configured on non-routable network addresses


DATA FROM NERV05 - NON-ROUTABLE NETWORK FOR INTERCONNECT 



eth1  192.168.3.0  global  cluster_interconnect

Status on nerv02:
PASS => Interconnect is configured on non-routable network addresses


DATA FROM NERV02 - NON-ROUTABLE NETWORK FOR INTERCONNECT 



eth1  192.168.3.0  global  cluster_interconnect

Status on nerv08:
PASS => Interconnect is configured on non-routable network addresses


DATA FROM NERV08 - NON-ROUTABLE NETWORK FOR INTERCONNECT 



eth1  192.168.3.0  global  cluster_interconnect

Status on nerv07:
PASS => Interconnect is configured on non-routable network addresses


DATA FROM NERV07 - NON-ROUTABLE NETWORK FOR INTERCONNECT 



eth1  192.168.3.0  global  cluster_interconnect

Status on nerv06:
PASS => Interconnect is configured on non-routable network addresses


DATA FROM NERV06 - NON-ROUTABLE NETWORK FOR INTERCONNECT 



eth1  192.168.3.0  global  cluster_interconnect
Top

Hostname Formatting

Success Factor: DO NOT USE UNDERSCORE IN HOST OR DOMAIN NAME
Recommendation
 Underscores should not be used in a host or domain name, according to RFC 952, the DoD Internet host table specification. The same restriction applies to net, host, gateway, and domain names.


 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => None of the hostnames contains an underscore character


DATA FROM NERV01 - HOSTNAME FORMATING 



nerv03
nerv04
nerv05
nerv01
nerv02
nerv08
nerv07
nerv06

Status on nerv03:
PASS => None of the hostnames contains an underscore character


DATA FROM NERV03 - HOSTNAME FORMATING 



nerv03
nerv04
nerv05
nerv01
nerv02
nerv08
nerv07
nerv06

Status on nerv04:
PASS => None of the hostnames contains an underscore character


DATA FROM NERV04 - HOSTNAME FORMATING 



nerv03
nerv04
nerv05
nerv01
nerv02
nerv08
nerv07
nerv06

Status on nerv05:
PASS => None of the hostnames contains an underscore character


DATA FROM NERV05 - HOSTNAME FORMATING 



nerv03
nerv04
nerv05
nerv01
nerv02
nerv08
nerv07
nerv06

Status on nerv02:
PASS => None of the hostnames contains an underscore character


DATA FROM NERV02 - HOSTNAME FORMATING 



nerv03
nerv04
nerv05
nerv01
nerv02
nerv08
nerv07
nerv06

Status on nerv08:
PASS => None of the hostnames contains an underscore character


DATA FROM NERV08 - HOSTNAME FORMATING 



nerv03
nerv04
nerv05
nerv01
nerv02
nerv08
nerv07
nerv06

Status on nerv07:
PASS => None of the hostnames contains an underscore character


DATA FROM NERV07 - HOSTNAME FORMATING 



nerv03
nerv04
nerv05
nerv01
nerv02
nerv08
nerv07
nerv06

Status on nerv06:
PASS => None of the hostnames contains an underscore character


DATA FROM NERV06 - HOSTNAME FORMATING 



nerv03
nerv04
nerv05
nerv01
nerv02
nerv08
nerv07
nerv06
Top

Check for parameter net.core.rmem_default

Success Factor: VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings:

net.core.rmem_default =262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g and above)  

net.core.wmem_default =262144
net.core.wmem_max =1048576
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144

Status on nerv03:
PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144

Status on nerv04:
PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144

Status on nerv05:
PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144

Status on nerv02:
PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144

Status on nerv08:
PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144

Status on nerv07:
PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144

Status on nerv06:
PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144
Top

Check for parameter net.core.wmem_max

Success Factor: VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings:

net.core.rmem_default =262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g and above)  

net.core.wmem_default =262144
net.core.wmem_max =1048576
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048576

Status on nerv03:
PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048576

Status on nerv04:
PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048576

Status on nerv05:
PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048576

Status on nerv02:
PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048576

Status on nerv08:
PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048576

Status on nerv07:
PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048576

Status on nerv06:
PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048576
Top

Check for parameter net.core.wmem_default

Success Factor: VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings:

net.core.rmem_default =262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g and above)  

net.core.wmem_default =262144
net.core.wmem_max =1048576
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144

Status on nerv03:
PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144

Status on nerv04:
PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144

Status on nerv05:
PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144

Status on nerv02:
PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144

Status on nerv08:
PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144

Status on nerv07:
PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144

Status on nerv06:
PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144
Top

CRS HOME env variable

Success Factor: AVOID SETTING ORA_CRS_HOME ENVIRONMENT VARIABLE
Recommendation
 Benefit / Impact:

Avoid unexpected results running various Oracle utilities

Risk:

Setting this variable can cause problems for various Oracle components, and it is never necessary for CRS programs because they all have wrapper scripts.

Action / Repair:

Unset ORA_CRS_HOME in the execution environment. If a variable is needed for automation or convenience, use a different variable name (e.g., GI_HOME).
 
Links
Needs attention on -
Passed on nerv01, nerv03, nerv04, nerv05, nerv02, nerv08, nerv07, nerv06

Status on nerv01:
PASS => ORA_CRS_HOME environment variable is not set


DATA FROM NERV01 - CRS HOME ENV VARIABLE 




ORA_CRS_HOME environment variable not set


Status on nerv03:
PASS => ORA_CRS_HOME environment variable is not set


DATA FROM NERV03 - CRS HOME ENV VARIABLE 




ORA_CRS_HOME environment variable not set


Status on nerv04:
PASS => ORA_CRS_HOME environment variable is not set


DATA FROM NERV04 - CRS HOME ENV VARIABLE 




ORA_CRS_HOME environment variable not set


Status on nerv05:
PASS => ORA_CRS_HOME environment variable is not set


DATA FROM NERV05 - CRS HOME ENV VARIABLE 




ORA_CRS_HOME environment variable not set


Status on nerv02:
PASS => ORA_CRS_HOME environment variable is not set


DATA FROM NERV02 - CRS HOME ENV VARIABLE 




ORA_CRS_HOME environment variable not set


Status on nerv08:
PASS => ORA_CRS_HOME environment variable is not set


DATA FROM NERV08 - CRS HOME ENV VARIABLE 




ORA_CRS_HOME environment variable not set


Status on nerv07:
PASS => ORA_CRS_HOME environment variable is not set


DATA FROM NERV07 - CRS HOME ENV VARIABLE 




ORA_CRS_HOME environment variable not set


Status on nerv06:
PASS => ORA_CRS_HOME environment variable is not set


DATA FROM NERV06 - CRS HOME ENV VARIABLE 




ORA_CRS_HOME environment variable not set

Top

AUDSES$ sequence cache size

Success Factor: CACHE APPLICATION SEQUENCES AND SOME SYSTEM SEQUENCES FOR BETTER PERFORMANCE
Recommendation
 Use a large cache value, perhaps 10,000 or more. NOORDER is most effective, but it has an impact on strict ordering: performance improves, but sequence numbers might not be issued in strict time order. Problems have been reported with AUDSES$ and ORA_TQ_BASE$, which are both internal sequences; they are used during the login process and can therefore be involved in a login storm. Sequences that must be presented in a particular order should not be cached, but in the interest of performance, application sequences whose order does not matter should be cached. Contention on uncached sequences also manifests itself as row cache waits on "dc_sequences", the row cache type for sequences.

For Oracle Applications this can cause significant issues, especially with transactional sequences.
Please see the note attached.

Oracle General Ledger - Version: 11.5.0 to 11.5.10
Oracle Payables - Version: 11.5.0 to 11.5.10
Oracle Receivables - Version: 11.5.10.2
Information in this document applies to any platform.
ARXTWAI, ARXRWMAI

Increase IDGEN1$ to a value of 1000, see notes below. This is the default as of 11.2.0.1.
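A hedged sketch of verifying and raising the internal sequence caches discussed above (run as a DBA, and only alter SYS-owned sequences as directed by the referenced notes):

```sql
-- Check current cache sizes of the internal sequences.
SELECT sequence_name, cache_size
  FROM dba_sequences
 WHERE sequence_owner = 'SYS'
   AND sequence_name IN ('AUDSES$', 'IDGEN1$');

-- Raise the caches to the recommended values.
ALTER SEQUENCE SYS.AUDSES$ CACHE 10000;
ALTER SEQUENCE SYS.IDGEN1$ CACHE 1000;
```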
 
Links
Needs attention on -
Passed on RAC01

Status on RAC01:
PASS => SYS.AUDSES$ sequence cache size >= 10,000


DATA FOR RAC01 FOR AUDSES$ SEQUENCE CACHE SIZE 




audses$.cache_size = 10000                                                      
Top

IDGEN$ sequence cache size

Success Factor: CACHE APPLICATION SEQUENCES AND SOME SYSTEM SEQUENCES FOR BETTER PERFORMANCE
Recommendation
 Sequence contention (SQ enqueue) can occur if the SYS.IDGEN1$ sequence cache is not set to 1000. This condition can lead to performance issues in RAC. 1000 is the default starting with version 11.2.0.1.
 
Links
Needs attention on -
Passed on RAC01

Status on RAC01:
PASS => SYS.IDGEN1$ sequence cache size >= 1,000


DATA FOR RAC01 FOR IDGEN$ SEQUENCE CACHE SIZE 




idgen1$.cache_size = 1000                                                       
Top

Skipped Checks

skipping GI shell limits soft nproc(checkid:-841C7DEB776DB4BBE040E50A1EC0782E) because o_crs_user_limits_nerv04.out not found
skipping GI shell limits soft nofile(checkid:-841D87785594F263E040E50A1EC020D6) because o_crs_user_limits_nerv04.out not found
skipping GI shell limits hard nofile(checkid:-841E706550995C68E040E50A1EC05EFB) because o_crs_user_limits_nerv04.out not found
skipping GI shell limits hard nproc(checkid:-841F8C3E78906005E040E50A1EC00357) because o_crs_user_limits_nerv04.out not found
skipping GI shell limits hard stack(checkid:-9DAFD1040CA9389FE040E50A1EC0307C) because o_crs_user_limits_nerv04.out not found
skipping Broadcast Requirements for Networks(checkid:-D112D25A574F13DCE0431EC0E50A55CD) because o_arping_broadcast_nerv04.out not found
skipping OLR Integrity(checkid:-E1500ADF060A3EA2E04313C0E50A3676) because o_olrintegrity_nerv04.out not found
skipping GI shell limits soft nproc(checkid:-841C7DEB776DB4BBE040E50A1EC0782E) because o_crs_user_limits_nerv05.out not found
skipping GI shell limits soft nofile(checkid:-841D87785594F263E040E50A1EC020D6) because o_crs_user_limits_nerv05.out not found
skipping GI shell limits hard nofile(checkid:-841E706550995C68E040E50A1EC05EFB) because o_crs_user_limits_nerv05.out not found
skipping GI shell limits hard nproc(checkid:-841F8C3E78906005E040E50A1EC00357) because o_crs_user_limits_nerv05.out not found
skipping GI shell limits hard stack(checkid:-9DAFD1040CA9389FE040E50A1EC0307C) because o_crs_user_limits_nerv05.out not found
skipping Broadcast Requirements for Networks(checkid:-D112D25A574F13DCE0431EC0E50A55CD) because o_arping_broadcast_nerv05.out not found
skipping OLR Integrity(checkid:-E1500ADF060A3EA2E04313C0E50A3676) because o_olrintegrity_nerv05.out not found
skipping GI shell limits soft nproc(checkid:-841C7DEB776DB4BBE040E50A1EC0782E) because o_crs_user_limits_nerv02.out not found
skipping GI shell limits soft nofile(checkid:-841D87785594F263E040E50A1EC020D6) because o_crs_user_limits_nerv02.out not found
skipping GI shell limits hard nofile(checkid:-841E706550995C68E040E50A1EC05EFB) because o_crs_user_limits_nerv02.out not found
skipping GI shell limits hard nproc(checkid:-841F8C3E78906005E040E50A1EC00357) because o_crs_user_limits_nerv02.out not found
skipping GI shell limits hard stack(checkid:-9DAFD1040CA9389FE040E50A1EC0307C) because o_crs_user_limits_nerv02.out not found
skipping Broadcast Requirements for Networks(checkid:-D112D25A574F13DCE0431EC0E50A55CD) because o_arping_broadcast_nerv02.out not found
skipping OLR Integrity(checkid:-E1500ADF060A3EA2E04313C0E50A3676) because o_olrintegrity_nerv02.out not found