
Adding node 2 to the RAC.

Questions, tips, and updates about the Oracle RAC Training.

Post Wed May 28, 2014 5:38 pm

Posts: 0
Good afternoon.

I am trying to add node 2 to the RAC and I am running into the following problem.

[oracle@servrac01 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={servrac02}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={servrac02-vip}"

Performing pre-checks for node addition
Shared resources check for node addition failed
Cannot identify existing nodes in cluster
The required component "olsnodes" is missing

Verification cannot proceed

[oracle@servrac01 bin]$ olsnodes -n -t
servrac01 1 Pinned

What could I have forgotten to do?

Post Wed May 28, 2014 6:51 pm
portilho Site Admin

Posts: 439
Hello!

Run the command below and post the complete output here:

cluvfy stage -pre nodeadd -n servrac02 -vip servrac02-vip -verbose

Post Wed May 28, 2014 7:34 pm

Posts: 0
[oracle@servrac01 ~]$ cluvfy stage -pre nodeadd -n servrac02 -vip servrac02-vip -verbose

Performing pre-checks for node addition
Shared resources check for node addition failed
Cannot identify existing nodes in cluster
The required component "olsnodes" is missing

Verification cannot proceed

Osvaldo Correa.

Post Wed May 28, 2014 7:38 pm

Posts: 0
Running the command below.

[oracle@servrac01 ~]$ $GRID_HOME/bin/cluvfy stage -post hwos -n servrac01,servrac02 -verbose

Performing post-checks for hardware and operating system setup

Checking node reachability...

Check: Node reachability from node "servrac01"
Destination Node Reachable?
------------------------------------ ------------------------
servrac01 yes
servrac02 yes
Result: Node reachability check passed from node "servrac01"


Checking user equivalence...

Check: User equivalence for user "oracle"
Node Name Status
------------------------------------ ------------------------
servrac01 passed
servrac02 passed
Result: User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
servrac01 passed
servrac02 passed

Verification of the hosts config file successful


Interface information for node "servrac01"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth2 172.16.230.11 172.16.230.0 0.0.0.0 10.10.1.150 6C:AE:8B:23:E3:44 9000
eth2 169.254.38.119 169.254.0.0 0.0.0.0 10.10.1.150 6C:AE:8B:23:E3:44 9000
bond0 10.10.1.201 10.10.1.0 0.0.0.0 10.10.1.150 6C:AE:8B:23:E3:42 1500
bond0 10.10.1.219 10.10.1.0 0.0.0.0 10.10.1.150 6C:AE:8B:23:E3:42 1500
bond0 10.10.1.220 10.10.1.0 0.0.0.0 10.10.1.150 6C:AE:8B:23:E3:42 1500
bond0 10.10.1.218 10.10.1.0 0.0.0.0 10.10.1.150 6C:AE:8B:23:E3:42 1500
bond0 10.10.1.211 10.10.1.0 0.0.0.0 10.10.1.150 6C:AE:8B:23:E3:42 1500


Interface information for node "servrac02"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth2 172.16.230.12 172.16.230.0 0.0.0.0 10.10.1.150 6C:AE:8B:23:EB:84 9000
bond0 10.10.1.202 10.10.1.0 0.0.0.0 10.10.1.150 6C:AE:8B:23:EB:82 1500


Check: Node connectivity of subnet "172.16.230.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
servrac01[172.16.230.11] servrac02[172.16.230.12] yes
Result: Node connectivity passed for subnet "172.16.230.0" with node(s) servrac01,servrac02


Check: TCP connectivity of subnet "172.16.230.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
servrac01:172.16.230.11 servrac02:172.16.230.12 passed
Result: TCP connectivity check passed for subnet "172.16.230.0"


Check: Node connectivity of subnet "169.254.0.0"
Result: Node connectivity passed for subnet "169.254.0.0" with node(s) servrac01


Check: TCP connectivity of subnet "169.254.0.0"
Result: TCP connectivity check passed for subnet "169.254.0.0"


Check: Node connectivity of subnet "10.10.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
servrac01[10.10.1.201] servrac01[10.10.1.219] yes
servrac01[10.10.1.201] servrac01[10.10.1.220] yes
servrac01[10.10.1.201] servrac01[10.10.1.218] yes
servrac01[10.10.1.201] servrac01[10.10.1.211] yes
servrac01[10.10.1.201] servrac02[10.10.1.202] yes
servrac01[10.10.1.219] servrac01[10.10.1.220] yes
servrac01[10.10.1.219] servrac01[10.10.1.218] yes
servrac01[10.10.1.219] servrac01[10.10.1.211] yes
servrac01[10.10.1.219] servrac02[10.10.1.202] yes
servrac01[10.10.1.220] servrac01[10.10.1.218] yes
servrac01[10.10.1.220] servrac01[10.10.1.211] yes
servrac01[10.10.1.220] servrac02[10.10.1.202] yes
servrac01[10.10.1.218] servrac01[10.10.1.211] yes
servrac01[10.10.1.218] servrac02[10.10.1.202] yes
servrac01[10.10.1.211] servrac02[10.10.1.202] yes
Result: Node connectivity passed for subnet "10.10.1.0" with node(s) servrac01,servrac02


Check: TCP connectivity of subnet "10.10.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
servrac01:10.10.1.201 servrac01:10.10.1.219 passed
servrac01:10.10.1.201 servrac01:10.10.1.220 passed
servrac01:10.10.1.201 servrac01:10.10.1.218 passed
servrac01:10.10.1.201 servrac01:10.10.1.211 passed
servrac01:10.10.1.201 servrac02:10.10.1.202 passed
Result: TCP connectivity check passed for subnet "10.10.1.0"


Interfaces found on subnet "10.10.1.0" that are likely candidates for VIP are:
servrac01 bond0:10.10.1.201 bond0:10.10.1.219 bond0:10.10.1.220 bond0:10.10.1.218 bond0:10.10.1.211
servrac02 bond0:10.10.1.202

Interfaces found on subnet "172.16.230.0" that are likely candidates for a private interconnect are:
servrac01 eth2:172.16.230.11
servrac02 eth2:172.16.230.12
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "172.16.230.0".
Subnet mask consistency check passed for subnet "169.254.0.0".
Subnet mask consistency check passed for subnet "10.10.1.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "172.16.230.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "172.16.230.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "169.254.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "169.254.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.10.1.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.1.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Time zone consistency
Result: Time zone consistency check passed

Checking shared storage accessibility...

Disk Sharing Nodes (2 in count)
------------------------------------ ------------------------
/dev/sdak servrac01
/dev/sdao servrac01
/dev/sdam servrac01
/dev/sdan servrac01
/dev/sdap servrac01
/dev/sdal servrac01
/dev/sdas servrac01
/dev/sdaq servrac01
/dev/sdar servrac01
/dev/sdaj servrac01
/dev/sdat servrac01
/dev/sdau servrac01
/dev/sdaw servrac01
/dev/sdax servrac01
/dev/sdav servrac01
/dev/sdaz servrac01
/dev/sdba servrac01
/dev/sdbb servrac01
/dev/sdbe servrac01
/dev/sdbc servrac01
/dev/sdbd servrac01
/dev/sdbf servrac01
/dev/sday servrac01
/dev/sdbg servrac01
/dev/sdbi servrac01
/dev/sdbh servrac01
/dev/sdbj servrac01
/dev/sdbm servrac01
/dev/sdbk servrac01
/dev/sdbl servrac01
/dev/sdbp servrac01
/dev/sdbn servrac01
/dev/sdbq servrac01
/dev/sdbo servrac01
/dev/mapper/disk_ocr_01 servrac01 servrac02
/dev/mapper/disk_ocr_02 servrac01 servrac02
/dev/mapper/disk_ocr_03 servrac01 servrac02
/dev/mapper/disk_bin_01 servrac01
/dev/mapper/disk_01 servrac01 servrac02
/dev/mapper/disk_02 servrac01 servrac02
/dev/mapper/disk_03 servrac01 servrac02
/dev/mapper/disk_04 servrac01 servrac02
/dev/mapper/disk_05 servrac01 servrac02
/dev/mapper/disk_06 servrac01 servrac02
/dev/mapper/disk_07 servrac01 servrac02
/dev/mapper/disk_08 servrac01 servrac02
/dev/mapper/disk_09 servrac01 servrac02
/dev/mapper/disk_10 servrac01 servrac02
/dev/mapper/disk_11 servrac01 servrac02
/dev/mapper/disk_12 servrac01 servrac02
/dev/mapper/disk_13 servrac01 servrac02
/dev/mapper/disk_14 servrac01 servrac02
/dev/mapper/disk_15 servrac01 servrac02
/dev/mapper/disk_16 servrac01 servrac02
/dev/mapper/disk_17 servrac01 servrac02
/dev/mapper/disk_18 servrac01 servrac02
/dev/mapper/disk_19 servrac01 servrac02
/dev/mapper/disk_20 servrac01 servrac02
/dev/mapper/disk_21 servrac01 servrac02
/dev/mapper/disk_22 servrac01 servrac02
/dev/mapper/disk_23 servrac01 servrac02
/dev/mapper/disk_24 servrac01 servrac02
/dev/mapper/disk_25 servrac01 servrac02
/dev/mapper/disk_26 servrac01 servrac02
/dev/mapper/disk_27 servrac01 servrac02
/dev/mapper/disk_28 servrac01 servrac02
/dev/mapper/disk_29 servrac01 servrac02
/dev/sdd servrac02
/dev/sdc servrac02
/dev/sde servrac02
/dev/sdf servrac02
/dev/sdg servrac02
/dev/sdi servrac02
/dev/sdj servrac02
/dev/sdh servrac02
/dev/sdk servrac02
/dev/sdl servrac02
/dev/sdm servrac02
/dev/sdn servrac02
/dev/sdp servrac02
/dev/sdq servrac02
/dev/sdo servrac02
/dev/sdr servrac02
/dev/sdu servrac02
/dev/sdv servrac02
/dev/sdw servrac02
/dev/sdt servrac02
/dev/sds servrac02
/dev/sdx servrac02
/dev/sdz servrac02
/dev/sdy servrac02
/dev/sdab servrac02
/dev/sdaa servrac02
/dev/sdae servrac02
/dev/sdaf servrac02
/dev/sdad servrac02
/dev/sdb servrac02
/dev/sdag servrac02
/dev/sdac servrac02
/dev/sdah servrac02
/dev/sdai servrac02
/dev/mapper/disk_bin_02 servrac02

Disk Sharing Nodes (2 in count)
------------------------------------ ------------------------
/dev/sdb servrac01
/dev/sdd servrac01
/dev/sda servrac01
/dev/sdf servrac01
/dev/sde servrac01
/dev/sdg servrac01
/dev/sdj servrac01
/dev/sdc servrac01
/dev/sdi servrac01
/dev/sdh servrac01
/dev/sdk servrac01
/dev/sdl servrac01
/dev/sdn servrac01
/dev/sdo servrac01
/dev/sdm servrac01
/dev/sdp servrac01
/dev/sdr servrac01
/dev/sds servrac01
/dev/sdu servrac01
/dev/sdq servrac01
/dev/sdt servrac01
/dev/sdz servrac01
/dev/sdv servrac01
/dev/sdx servrac01
/dev/sdw servrac01
/dev/sdy servrac01
/dev/sdab servrac01
/dev/sdac servrac01
/dev/sdaa servrac01
/dev/sdaf servrac01
/dev/sdae servrac01
/dev/sdag servrac01
/dev/sdad servrac01
/dev/sdah servrac01
/dev/sdaj servrac02
/dev/sdal servrac02
/dev/sdan servrac02
/dev/sdak servrac02
/dev/sdap servrac02
/dev/sdao servrac02
/dev/sdar servrac02
/dev/sdam servrac02
/dev/sdat servrac02
/dev/sdaq servrac02
/dev/sdas servrac02
/dev/sdav servrac02
/dev/sdaw servrac02
/dev/sdax servrac02
/dev/sdau servrac02
/dev/sdaz servrac02
/dev/sday servrac02
/dev/sdbd servrac02
/dev/sdbc servrac02
/dev/sdba servrac02
/dev/sdbb servrac02
/dev/sdbf servrac02
/dev/sdbg servrac02
/dev/sdbi servrac02
/dev/sdbe servrac02
/dev/sdbh servrac02
/dev/sdbj servrac02
/dev/sdbl servrac02
/dev/sdbk servrac02
/dev/sdbm servrac02
/dev/sdbn servrac02
/dev/sdbo servrac02
/dev/sdbp servrac02
/dev/sdbq servrac02


Shared storage check was successful on nodes "servrac01,servrac02"

Post-check for hardware and operating system setup was successful.

Osvaldo Correa.

Post Thu May 29, 2014 9:56 am
portilho Site Admin

Posts: 439
Whoa, but you still have not removed the second node.
Even if you have already lost it, you need to tell the RAC about it by removing it logically.

I suggest reading and following these notes, according to what happened to the second node (a sketch of the removal sequence they describe follows below):

Steps to Remove Node from Cluster When the Node Crashes Due to OS/Hardware Failure and cannot boot up [ID 466975.1]
How to remove/delete a node from Grid Infrastructure Clusterware when the node has failed [ID 1262925.1]
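For reference, a minimal sketch of the removal sequence those notes walk through, using the names already in this thread (servrac02 / servrac02-vip); <GRID_HOME> is a placeholder, and every step should be confirmed against the note that matches what happened to the node and your version:

# As root, from <GRID_HOME>/bin on the surviving node:
./srvctl stop vip -i servrac02-vip -f        # stop the failed node's VIP, if still defined
./srvctl remove vip -i servrac02-vip -f      # remove the VIP resource
./crsctl delete node -n servrac02            # remove the node from the OCR

# As the software owner, from <GRID_HOME>/oui/bin, update the inventory so
# only the remaining node is listed (repeat for the database home):
./runInstaller -updateNodeList ORACLE_HOME=<GRID_HOME> "CLUSTER_NODES={servrac01}" CRS=TRUE

# Confirm the result:
olsnodes -s -t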

Post Sun Jun 01, 2014 7:30 pm

Posts: 0
Portilho, I ran all the procedures from note "Doc ID 1262925.1".

The only service I had not yet removed was node two's VIP.
I removed the instance.
[oracle@servrac01 ~]$ srvctl config service -d csorcl
Service name: csintegrador
Service is enabled
Server pool: csorcl_csintegrador
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: true
AQ HA notifications: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 10
TAF failover delay: 1
Connection Load Balancing Goal: SHORT
Runtime Load Balancing Goal: SERVICE_TIME
TAF policy specification: PRECONNECT
Edition:
Preferred instances: csorcl1
Available instances:

I ran the inventory update.
[oracle@servrac01 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES={servrac01}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 17386 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /app/oraInventory
'UpdateNodeList' was successful.
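One extra check that may help at this point (a suggestion, not something from the thread): since addNode.sh takes the cluster node list from the central inventory, it is worth confirming that the Grid home entry in the inventory reported above (/app/oraInventory) still carries CRS="true" and lists only servrac01 in its node list. For example:

[oracle@servrac01 ~]$ grep -E 'HOME NAME|NODE NAME' /app/oraInventory/ContentsXML/inventory.xml
# lists every registered home and the nodes attached to it;
# the Grid home line should keep CRS="true" and only servrac01 should appear under it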

O vip "servrac02-vip" estava parado, somente removi o mesmo.
./srvctl remove vip -i servrac02-vip -f

[root@servrac01 bin]# ./crsctl delete node -n servrac02
CRS-4660: Could not find node servrac02 to delete.
CRS-4000: Command Delete failed, or completed with errors.
[root@servrac01 bin]# ./olsnodes
servrac01


[oracle@servrac01 ContentsXML]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASM.dg
ONLINE ONLINE servrac01
ora.DATA.dg
ONLINE ONLINE servrac01
ora.LISTENER.lsnr
ONLINE ONLINE servrac01
ora.asm
ONLINE ONLINE servrac01 Started
ora.gsd
ONLINE OFFLINE servrac01
ora.net1.network
ONLINE ONLINE servrac01
ora.ons
ONLINE ONLINE servrac01
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE servrac01
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE servrac01
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE servrac01
ora.csorcl.csintegrador.svc
1 ONLINE ONLINE servrac01
ora.csorcl.csintegrador_preconnect.svc
1 ONLINE OFFLINE
ora.csorcl.db
1 ONLINE ONLINE servrac01 Open
ora.cvu
1 ONLINE ONLINE servrac01
ora.oc4j
1 ONLINE ONLINE servrac01
ora.scan1.vip
1 ONLINE ONLINE servrac01
ora.scan2.vip
1 ONLINE ONLINE servrac01
ora.scan3.vip
1 ONLINE ONLINE servrac01
ora.servrac01.vip
1 ONLINE ONLINE servrac01

I still have the same problem.
[oracle@servrac01 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={servrac02}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={servrac02-vip}"

Performing pre-checks for node addition
Shared resources check for node addition failed
Cannot identify existing nodes in cluster
The required component "olsnodes" is missing

Verification cannot proceed

[oracle@servrac01 bin]$ olsnodes
servrac01

Thanks in advance for the help.

Post Tue Jun 03, 2014 2:28 pm
portilho Site Admin

Posts: 439
Is passwordless SSH working between the machines?
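A quick way to confirm this, in both directions and as the oracle user (a suggested check; the hostnames are the ones from this thread), is to make sure each node can run a remote command without being prompted for a password:

[oracle@servrac01 ~]$ ssh servrac02 date    # must print the date with no password prompt
[oracle@servrac02 ~]$ ssh servrac01 date    # and the reverse direction as well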

Post Thu Jun 05, 2014 11:26 am

Posts: 0
Everything is working.

I am going to activate the standby backup and then format and reinstall the environment.

Thank you very much for the help.

Post Fri Jun 06, 2014 8:03 pm
portilho Site Admin

Posts: 439
If the commands "olsnodes -i -v" and "olsnodes -s -v" return no errors and no residual information about node 2, I really do recommend opening an SR, or, if you can, doing the switchover and rebuilding, as you said.

But I would open the SR; it would be interesting to know what happened and how to resolve it.
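For reference, those two checks would be run from the surviving node as shown below (no sample output is given here; the point is simply that servrac02 should not appear anywhere in either listing):

[oracle@servrac01 ~]$ olsnodes -i -v    # node name, number and VIP; no entry for servrac02 expected
[oracle@servrac01 ~]$ olsnodes -s -v    # node status; only servrac01, reported Active, expected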


