Sunday, January 17, 2016

RAC NODE ADDITION

Node addition does not require downtime on the surviving nodes. You have to perform certain prerequisites on the new node and some on the existing nodes.


The process copies the Grid Infrastructure binaries from one of the existing nodes to the new node. Once the copy is complete, it adds information about the third node to the inventories of the first and second nodes, and the third node gets an inventory listing all three nodes.
Finally, make sure the cluster background processes are up and running on the third node.
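
A quick way to confirm this once the addition completes is to query the clusterware from the third node (a minimal sketch; GRID_HOME is /syed/grid_home, as used later in this post):

# run on rac3 as the grid user
/syed/grid_home/bin/crsctl check cluster -all
/syed/grid_home/bin/olsnodes -n -s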

Most of the prerequisite steps happen at the OS level and the storage level.

The storage administrator should provide the third node with access to the shared disks.

Once storage-level access is given, the system administrator should make those disks visible to the operating system.


Then the DBA has to:
1) install the RPMs for Oracle ASMLib, then run scandisks and listdisks
2) make sure the same groups and users are available on the third node
3) make sure the /etc/hosts file is updated with entries for all nodes
4) enable passwordless SSH between all three nodes

Existing nodes:
1) rac1
2) rac2
New node:
3) rac3
Make an entry for the third node in /etc/hosts on all three machines.
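
Before running addNode.sh, it is worth letting the Cluster Verification Utility check the new node from an existing node (a sketch; run as the grid user on rac1):

cd $GRID_HOME/bin
./cluvfy stage -pre nodeadd -n rac3 -verbose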

5) Log in as the grid user on the first node:
cd $GRID_HOME/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"

This is not a GUI installation; it is a silent, command-line installation.

6) Once addNode.sh completes, run orainstRoot.sh and root.sh on the third node.

We have extended the Grid Infrastructure home; now we have to extend the RDBMS home.


7) Log in as the oracle user on the first node:

 cd $ORACLE_HOME/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"

8) Create the new instance on the third node from node1 as the oracle user by running dbca.
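
dbca can also add the instance silently; a sketch (the database name and new SID below are placeholders, substitute your own):

# run as the oracle user on rac1; <db_name> and <db_name>3 are placeholders
dbca -silent -addInstance -nodeList rac3 -gdbName <db_name> -instanceName <db_name>3 -sysDBAUserName sys -sysDBAPassword <sys_password>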
=====================================================================


MAKE SURE USERS AND DIRECTORY STRUCTURE ARE CONSISTENT ACROSS ALL NODES
=====================================================================




1. The oracle user should have oinstall as its primary group and dba, asmdba as secondary groups; the grid user should have oinstall as primary and asmadmin, asmoper, asmdba as secondary groups.

useradd -u 500 -g oinstall -G dba,asmdba oracle
useradd -u 501 -g oinstall -G asmadmin,asmoper,asmdba grid

userdel oracle     # to delete a Unix account

Each user's home directory is /home/<username>.
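
To confirm the accounts really are consistent, run id on every node; the UID, GID and group list should be identical on rac1, rac2 and rac3:

id oracle
id grid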

2. ownership

chown -R grid:oinstall  /syed/11.2.0         # ORACLE_BASE
chown -R grid:oinstall  /syed/oraInventory   # ORAINVENTORY
chown -R grid:oinstall  /syed/grid_home      # GRID_HOME
chown -R oracle:oinstall /syed/dbhome_1      # ORACLE_HOME

3. Permission

chmod -R 775 /syed/11.2.0
chmod -R 775 /syed/oraInventory
chmod -R 775 /syed/grid_home
chmod -R 775 /syed/dbhome_1
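
A quick check that the ownership and permissions took effect:

ls -ld /syed/11.2.0 /syed/oraInventory /syed/grid_home /syed/dbhome_1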


ADD THE 3RD NODE TO THE /ETC/HOSTS FILE; MAKE SURE /ETC/HOSTS IS CONSISTENT ACROSS ALL NODES
=====================================================

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1        localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
#Public IP#################################
192.168.1.203   rac1.gmail.com rac1
192.168.1.205   rac2.gmail.com rac2
192.168.1.206   rac3.gmail.com rac3
#Private IP########
192.168.2.203   rac1-priv.gmail.com rac1-priv
192.168.2.205   rac2-priv.gmail.com rac2-priv
192.168.2.206   rac3-priv.gmail.com rac3-priv

#Virtual IP#################################
192.168.1.22   rac1-vip.gmail.com rac1-vip
192.168.1.21   rac2-vip.gmail.com rac2-vip


#Storage IP#################################
192.168.2.52 openfiler1



CONFIGURING STORAGE ON THE 3RD NODE
=================================


Make an entry for the third node on the storage (Openfiler) and allow the new node access to the available disks.



login as: root
root@192.168.1.206's password:
Last login: Sun Jan 17 17:49:09 2016
[root@rac3 ~]# cd /media
[root@rac3 media]# ls
Enterprise Linux dvd 20090908
[root@rac3 media]# cd Enterprise\ Linux\ dvd\ 20090908/
[root@rac3 Enterprise Linux dvd 20090908]# cd Server/

[root@rac3 Server]# ls -lrt iscsi*
-rw-r--r-- 2 root root 824162 Sep  3  2009 iscsi-initiator-utils-6.2.0.871-0.10.el5.x86_64.rpm

[root@rac3 Server]# rpm -ivh iscsi-initiator-utils-6.2.0.871-0.10.el5.x86_64.rpm
warning: iscsi-initiator-utils-6.2.0.871-0.10.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]

[root@rac3 Server]# service iscsi status
iscsid is stopped

[root@rac3 Server]#  service iscsi start
iscsid is stopped
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]
[root@rac3 Server]#  service iscsi stop
Stopping iSCSI daemon:

[root@rac3 Server]#  service iscsi restart
Stopping iSCSI daemon: iscsiadm: can not connect to iSCSI daemon (111)!
iscsiadm: initiator reported error (20 - could not connect to iscsid)
iscsiadm: Could not stop iscsid. Trying sending iscsid SIGTERM or SIGKILL signals manually


iscsid dead but pid file exists                            [  OK  ]
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]
[root@rac3 Server]#  service iscsi start
iscsid (pid  7003) is running...
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]

[root@rac3 Server]# chkconfig  iscsi on

[root@rac3 Server]# chkconfig  iscsid on

[root@rac3 Server]#  iscsiadm -m discovery -t sendtargets -p openfiler1
192.168.2.52:3260,1 iqn.2006-01.com.openfiler:crs.crs
192.168.2.52:3260,1 iqn.2006-01.com.openfiler:crs.data

[root@rac3 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:crs.crs -p 192.168.2.52 -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:crs.crs, portal: 192.168.2.52,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:crs.crs, portal: 192.168.2.52,3260]: successful
You have mail in /var/spool/mail/root

[root@rac3 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:crs.data -p 192.168.2.52 -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:crs.data, portal: 192.168.2.52,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:crs.data, portal: 192.168.2.52,3260]: successful

[root@rac3 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:crs.crs -p 192.168.2.52 --op update -n node.startup -v automatic
[root@rac3 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:crs.data -p 192.168.2.52 --op update -n node.startup -v automatic
[root@rac3 Server]#
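
At this point the LUNs presented by openfiler1 should be visible to the OS on rac3; a quick check (the device names assigned to the iSCSI LUNs will vary):

iscsiadm -m session
fdisk -l | grep '^Disk /dev/sd'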




Running oracleasm configure as the grid user fails with "Unable to write the driver configuration", so run it as root:
[grid@rac3 ~]$ su - root
Password:
[root@rac3 ~]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

[root@rac3 ~]# /usr/sbin/oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
[root@rac3 ~]#

[root@rac3 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "VOTE1"
Unable to instantiate disk "VOTE1"
Instantiating disk "DATA1"
Unable to instantiate disk "DATA1"


[root@rac3 ~]#  /usr/sbin/oracleasm status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no

[root@rac3 ~]#  /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm

[root@rac3 ~]#  /usr/sbin/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

[root@rac3 ~]#  /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "VOTE1"
Instantiating disk "DATA1"

[root@rac3 ~]#  /usr/sbin/oracleasm listdisks
DATA1
VOTE1
[root@rac3 ~]#





login as: root
root@192.168.1.203's password:
Last login: Sat Jan 16 11:55:10 2016 from 192.168.1.162
[root@rac1 ~]# cd /syed/grid_home
[root@rac1 grid_home]# cd bin
[root@rac1 bin]# ./olsnodes -n -s
rac1    1       Active
rac2    2       Active
[root@rac1 bin]#





[root@rac3 ~]# ping rac1
PING rac1.gmail.com (192.168.1.203) 56(84) bytes of data.
64 bytes from rac1.gmail.com (192.168.1.203): icmp_seq=1 ttl=64 time=0.573 ms
64 bytes from rac1.gmail.com (192.168.1.203): icmp_seq=2 ttl=64 time=0.212 ms
64 bytes from rac1.gmail.com (192.168.1.203): icmp_seq=3 ttl=64 time=0.262 ms

--- rac1.gmail.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.212/0.349/0.573/0.159 ms

[root@rac3 ~]# ping rac2
PING rac2.gmail.com (192.168.1.205) 56(84) bytes of data.
64 bytes from rac2.gmail.com (192.168.1.205): icmp_seq=1 ttl=64 time=4.92 ms
64 bytes from rac2.gmail.com (192.168.1.205): icmp_seq=2 ttl=64 time=0.303 ms

--- rac2.gmail.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.303/2.614/4.925/2.311 ms


[root@rac3 ~]# ping openfiler1
PING openfiler1 (192.168.2.52) 56(84) bytes of data.
64 bytes from openfiler1 (192.168.2.52): icmp_seq=1 ttl=64 time=0.753 ms
64 bytes from openfiler1 (192.168.2.52): icmp_seq=2 ttl=64 time=0.396 ms
64 bytes from openfiler1 (192.168.2.52): icmp_seq=3 ttl=64 time=0.168 ms

--- openfiler1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.168/0.439/0.753/0.240 ms
[root@rac3 ~]#



VIP SHOULD NOT PING
===================
[root@rac3 ~]# ping rac1-vip.gmail.com rac1-vip
PING rac1-vip.gmail.com (192.168.1.22) 56(124) bytes of data.

--- rac1-vip.gmail.com ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3003ms


VERIFY NSLOOKUP
==================


[root@rac3 syed]# vi /etc/resolv.conf
[root@rac3 syed]# nslookup rac-scan
;; connection timed out; no servers could be reached
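
The first lookup times out because /etc/resolv.conf does not yet point at the DNS server; once it names the DNS server used by the cluster (192.168.1.203, as the successful output below shows), the SCAN resolves. A minimal sketch of the file, assuming the gmail.com domain used throughout this post:

# /etc/resolv.conf on rac3
search gmail.com
nameserver 192.168.1.203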

[root@rac3 syed]# nslookup rac-scan
Server:         192.168.1.203
Address:        192.168.1.203#53

Name:   rac-scan.gmail.com
Address: 192.168.1.90
Name:   rac-scan.gmail.com
Address: 192.168.1.91
Name:   rac-scan.gmail.com
Address: 192.168.1.92



[root@rac1 syed]# nslookup rac-scan
Server:         192.168.1.203
Address:        192.168.1.203#53

Name:   rac-scan.gmail.com
Address: 192.168.1.91
Name:   rac-scan.gmail.com
Address: 192.168.1.92
Name:   rac-scan.gmail.com
Address: 192.168.1.90






ENABLE SSH BETWEEN THE NODES
==============================



 [grid@syedsrac9 ~]$ cd /home/grid/.ssh
-bash: cd: /home/grid/.ssh: No such file or directory
[grid@syedsrac9 ~]$ pwd
/home/grid
[grid@syedsrac9 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa):
Created directory '/home/grid/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
4c:6d:8a:2a:e4:39:c5:65:8d:03:42:d5:ac:e9:b7:bb grid@syedsrac9

[root@rac3 syed]# su - grid
[grid@rac3 ~]$ pwd
/home/grid
[grid@rac3 ~]$ ls
[grid@rac3 ~]$ cd /home/grid/.ssh
[grid@rac3 .ssh]$ ls -altr
total 16
-rw-r--r-- 1 grid oinstall  391 Jan 17 20:43 id_rsa.pub
-rw------- 1 grid oinstall 1743 Jan 17 20:43 id_rsa
drwx------ 5 grid oinstall 4096 Jan 17 21:02 ..
drwx------ 2 grid oinstall 4096 Jan 17 21:02 .
[grid@rac3 .ssh]$ vi id_rsa.pub

[grid@rac3 .ssh]$ cat id_rsa.pub
ssh-rsa 24:ca:b6:d7:c9:08:e7:3a:46:5b:02:b5:28:24:be:ad grid@rac3
[grid@rac3 .ssh]$


[root@rac1 syed]# su - grid
[grid@rac1 ~]$ id
uid=501(grid) gid=501(oinstall) groups=501(oinstall),503(asmadmin),504(asmdba),505(asmoper)
[grid@rac1 ~]$ cd /syed
[grid@rac1 syed]$ grid
-bash: grid: command not found
[grid@rac1 syed]$ pwd
/syed
[grid@rac1 syed]$ cd
[grid@rac1 ~]$ cd ssh
-bash: cd: ssh: No such file or directory
[grid@rac1 ~]$ pwd
/home/grid
[grid@rac1 ~]$ cd .ssh
[grid@rac1 .ssh]$ pwd
/home/grid/.ssh
[grid@rac1 .ssh]$ ls -lrt
total 32
-rw-r--r-- 1 grid oinstall   22 Jan 14 16:40 config
-rw-r--r-- 1 grid oinstall  239 Jan 14 16:40 authorized_keys.ri.bak
-rw-r--r-- 1 grid oinstall  229 Jan 14 16:40 id_rsa.pub
-rw------- 1 grid oinstall  883 Jan 14 16:40 id_rsa
-rw-r--r-- 1 grid oinstall  229 Jan 14 16:40 identity.pub
-rw------- 1 grid oinstall  883 Jan 14 16:40 identity
-rw-r--r-- 1 grid oinstall  936 Jan 14 16:40 authorized_keys
-rw-r--r-- 1 grid oinstall 2397 Jan 14 16:40 known_hosts
[grid@rac1 .ssh]$ cat id_rsa.pub


[grid@rac1 .ssh]$ cat id_rsa.pub >> authorized_keys
[grid@rac1 .ssh]$ more authorized_keys
[grid@rac1 .ssh]$ scp authorized_keys grid@rac3:/home/grid/.ssh/
The authenticity of host 'rac3 (192.168.1.206)' can't be established.
RSA key fingerprint is fb:48:1a:03:24:2d:66:b5:86:90:dc:59:df:dd:01:55.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac3,192.168.1.206' (RSA) to the list of known hosts.
grid@rac3's password:
authorized_keys                               100% 1165     1.1KB/s   00:00
[grid@rac1 .ssh]$ ssh rac1


login as: root
root@192.168.1.205's password:
Last login: Sun Jan 17 19:44:53 2016 from 192.168.1.162
[root@rac2 ~]# su - grid
[grid@rac2 ~]$ pwd
/home/grid
[grid@rac2 ~]$ cd .ssh
[grid@rac2 .ssh]$ pwd
/home/grid/.ssh
[grid@rac2 .ssh]$ ls -lrt
total 24
-rw-r--r-- 1 grid oinstall  229 Jan 14 16:40 id_rsa.pub
-rw------- 1 grid oinstall  883 Jan 14 16:40 id_rsa
-rw-r--r-- 1 grid oinstall  229 Jan 14 16:40 identity.pub
-rw------- 1 grid oinstall  883 Jan 14 16:40 identity
-rw-r--r-- 1 grid oinstall 2397 Jan 14 16:40 known_hosts
-rw-r--r-- 1 grid oinstall  936 Jan 14 16:40 authorized_keys
[grid@rac2 .ssh]$ cat id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAqKUa7+ffVT6qX92EZvOA0Qh67eGPG0oAg7dTskud7EFX5hpg1ejvPKfU2aKv7y7c0ChmX56sdWQUAjN7tB/GeXFYqZKk2G6p6E0doUold1trX5qtS084ZtuiM6Slh8X6pLuOD1RieCtzb6IfzZHjOQeEwXO/whio1vJEhGSscF8= grid@rac2.gmail.com
[grid@rac2 .ssh]$ ssh rac1
Last login: Sun Jan 17 21:24:35 2016 from rac1.gmail.com
[grid@rac1 ~]$ id
uid=501(grid) gid=501(oinstall) groups=501(oinstall),503(asmadmin),504(asmdba),505(asmoper)
[grid@rac1 ~]$ exit
logout
Connection to rac1 closed.
[grid@rac2 .ssh]$ pwd
/home/grid/.ssh
[grid@rac2 .ssh]$ ssh rac3
The authenticity of host 'rac3 (192.168.1.206)' can't be established.
RSA key fingerprint is fb:48:1a:03:24:2d:66:b5:86:90:dc:59:df:dd:01:55.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac3,192.168.1.206' (RSA) to the list of known hosts.
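
For addNode.sh to work, every node's public key, for both the grid and oracle users, must appear in authorized_keys on every node. A compact way to do the same thing as the manual steps above (a sketch; it assumes ssh-keygen has already been run for the user on each node, and it will prompt for passwords while the keys are being gathered):

# run as grid on rac1, then repeat as oracle
for node in rac1 rac2 rac3; do ssh $node cat ~/.ssh/id_rsa.pub; done >> ~/.ssh/authorized_keys
for node in rac2 rac3; do scp ~/.ssh/authorized_keys $node:~/.ssh/; done
# verify: none of these should ask for a password
for node in rac1 rac2 rac3; do ssh $node hostname; done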



========================================================================
[root@rac1 ~]# xhost +
access control disabled, clients can connect from any host
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ xhost +
access control disabled, clients can connect from any host
[grid@rac1 ~]$ cd /syed/grid_home
[grid@rac1 grid_home]$ cd oui
[grid@rac1 oui]$ cd bin
[grid@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2697 MB    Passed
Oracle Universal Installer, Version 11.2.0.1.0 Production
Copyright (C) 1999, 2009, Oracle. All rights reserved.


Performing tests to see whether nodes rac2,rac3 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /syed/grid_home
   New Nodes
Space Requirements
   New Nodes
      rac3
         /: Required 4.08GB : Available 57.12GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.1.0 
      Sun JDK 1.5.0.17.0 
      Installer SDK Component 11.2.0.1.0 
      Oracle One-Off Patch Installer 11.2.0.0.2 
      Oracle Universal Installer 11.2.0.1.0 
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0 
      Enterprise Manager Common Core Files 10.2.0.4.2 
      Oracle DBCA Deconfiguration 11.2.0.1.0 
      Oracle RAC Deconfiguration 11.2.0.1.0 
      Oracle Quality of Service Management (Server) 11.2.0.1.0 
      Installation Plugin Files 11.2.0.1.0 
      Universal Storage Manager Files 11.2.0.1.0 
      Oracle Text Required Support Files 11.2.0.1.0 
      Automatic Storage Management Assistant 11.2.0.1.0 
      Oracle Database 11g Multimedia Files 11.2.0.1.0 
      Oracle Multimedia Java Advanced Imaging 11.2.0.1.0 
      Oracle Globalization Support 11.2.0.1.0 
      Oracle Multimedia Locator RDBMS Files 11.2.0.1.0 
      Oracle Core Required Support Files 11.2.0.1.0 
      Bali Share 1.1.18.0.0 
      Oracle Database Deconfiguration 11.2.0.1.0 
      Oracle Quality of Service Management (Client) 11.2.0.1.0 
      Expat libraries 2.0.1.0.1 
      Oracle Containers for Java 11.2.0.1.0 
      Perl Modules 5.10.0.0.1 
      Secure Socket Layer 11.2.0.1.0 
      Oracle JDBC/OCI Instant Client 11.2.0.1.0 
      Oracle Multimedia Client Option 11.2.0.1.0 
      LDAP Required Support Files 11.2.0.1.0 
      Character Set Migration Utility 11.2.0.1.0 
      Perl Interpreter 5.10.0.0.1 
      PL/SQL Embedded Gateway 11.2.0.1.0 
      OLAP SQL Scripts 11.2.0.1.0 
      Database SQL Scripts 11.2.0.1.0 
      Oracle Extended Windowing Toolkit 3.4.47.0.0 
      SSL Required Support Files for InstantClient 11.2.0.1.0 
      SQL*Plus Files for Instant Client 11.2.0.1.0 
      Oracle Net Required Support Files 11.2.0.1.0 
      Oracle Database User Interface 2.2.13.0.0 
      RDBMS Required Support Files for Instant Client 11.2.0.1.0 
      Enterprise Manager Minimal Integration 11.2.0.1.0 
      XML Parser for Java 11.2.0.1.0 
      Oracle Security Developer Tools 11.2.0.1.0 
      Oracle Wallet Manager 11.2.0.1.0 
      Enterprise Manager plugin Common Files 11.2.0.1.0 
      Platform Required Support Files 11.2.0.1.0 
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0 
      RDBMS Required Support Files 11.2.0.1.0 
      Oracle Ice Browser 5.2.3.6.0 
      Oracle Help For Java 4.2.9.0.0 
      Enterprise Manager Common Files 10.2.0.4.2 
      Deinstallation Tool 11.2.0.1.0 
      Oracle Java Client 11.2.0.1.0 
      Cluster Verification Utility Files 11.2.0.1.0 
      Oracle Notification Service (eONS) 11.2.0.1.0 
      Oracle LDAP administration 11.2.0.1.0 
      Cluster Verification Utility Common Files 11.2.0.1.0 
      Oracle Clusterware RDBMS Files 11.2.0.1.0 
      Oracle Locale Builder 11.2.0.1.0 
      Oracle Globalization Support 11.2.0.1.0 
      Buildtools Common Files 11.2.0.1.0 
      Oracle RAC Required Support Files-HAS 11.2.0.1.0 
      SQL*Plus Required Support Files 11.2.0.1.0 
      XDK Required Support Files 11.2.0.1.0 
      Agent Required Support Files 10.2.0.4.2 
      Parser Generator Required Support Files 11.2.0.1.0 
      Precompiler Required Support Files 11.2.0.1.0 
      Installation Common Files 11.2.0.1.0 
      Required Support Files 11.2.0.1.0 
      Oracle JDBC/THIN Interfaces 11.2.0.1.0 
      Oracle Multimedia Locator 11.2.0.1.0 
      Oracle Multimedia 11.2.0.1.0 
      HAS Common Files 11.2.0.1.0 
      Assistant Common Files 11.2.0.1.0 
      PL/SQL 11.2.0.1.0 
      HAS Files for DB 11.2.0.1.0 
      Oracle Recovery Manager 11.2.0.1.0 
      Oracle Database Utilities 11.2.0.1.0 
      Oracle Notification Service 11.2.0.0.0 
      SQL*Plus 11.2.0.1.0 
      Oracle Netca Client 11.2.0.1.0 
      Oracle Net 11.2.0.1.0 
      Oracle JVM 11.2.0.1.0 
      Oracle Internet Directory Client 11.2.0.1.0 
      Oracle Net Listener 11.2.0.1.0 
      Cluster Ready Services Files 11.2.0.1.0 
      Oracle Database 11g 11.2.0.1.0 
-----------------------------------------------------------------------------


Instantiating scripts for add node (Sunday, January 17, 2016 9:34:05 PM EST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Sunday, January 17, 2016 9:34:15 PM EST)
............................................................................................... 96% Done.
Home copied to new nodes

Saving inventory on nodes (Sunday, January 17, 2016 9:45:46 PM EST)
.                                                               100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system. 
To register the new inventory please run the script at '/syed/orainventory/orainstRoot.sh' with root privileges on nodes 'rac3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each cluster node.
/syed/orainventory/orainstRoot.sh #On nodes rac3
/syed/grid_home/root.sh #On nodes rac3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
    
The Cluster Node Addition of /syed/grid_home was successful.
Please check '/tmp/silentInstall.log' for more details.
[grid@rac1 bin]$ 





[root@rac3 ~]# cd /syed/orainventory
[root@rac3 orainventory]# ./orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /syed/orainventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /syed/orainventory to oinstall.
The execution of the script is complete.
[root@rac3 orainventory]#






[root@rac3 orainventory]# cd ..
[root@rac3 syed]# ls
11.2.0    grid_home                          oracleasmlib-2.0.4-1.el5.x86_64.rpm     orainventory
db_home1  oracleasmlib-2.0.4-1.el5.i386.rpm  oracleasm-support-2.1.7-1.el5.i386.rpm
[root@rac3 syed]# cd grid_home

[root@rac3 grid_home]# ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /syed/grid_home

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2016-01-17 22:21:49: Parsing the host name
2016-01-17 22:21:49: Checking for super user privileges
2016-01-17 22:21:49: User has super user privileges
Using configuration parameter file: /syed/grid_home/crs/install/crsconfig_params
Creating trace directory
/syed/grid_home/bin/cluutil -sourcefile /etc/oracle/ocr.loc -sourcenode rac2 -destfile /syed/grid_home/srvm/admin/ocrloc.tmp -nodelist rac2 ... failed
Unable to copy OCR locations
validateOCR failed for +CRS at /syed/grid_home/crs/install/crsconfig_lib.pm line 7979.
[root@rac3 grid_home]#

If root.sh fails, deconfigure it before retrying:

/syed/grid_home/crs/install/rootcrs.pl -deconfig -force
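
After the deconfig completes, fix whatever caused the failure (the error above shows the OCR location could not be copied from rac2, which often points to a connectivity or passwordless-SSH problem between rac3 and rac2; treat that as an assumption to verify) and rerun root.sh on rac3:

# verify rac3 can reach rac2 without a password as grid, then retry as root
su - grid -c "ssh rac2 hostname"
/syed/grid_home/root.sh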
