Saturday, January 19, 2013

Install Oracle 11gR2 RAC with ASM on OEL 6.3

This installation note follows the Oracle-Base guide, supplemented with my own notes.

Software

  • Oracle Enterprise Linux 6.3
  • VirtualBox 4.2.6
  • Oracle 11gR2 (11.2.0.3)
  • ASM
  • Oracle Clusterware: Oracle Grid 11.2.0.3

Environment

The goal is to build a 2-node Oracle RAC with shared disks. Each node has two Ethernet cards: one for the inter-node interconnect (private IP) and one for user connections (public IP). The public Ethernet card carries two IP addresses: the host's static IP and a Virtual IP (VIP). The RAC listener listens on both addresses. The VIP is responsible for transferring user connections when a RAC node becomes unavailable. To keep user connections continuous, Oracle introduced SCAN (Single Client Access Name), a virtual layer on top of the listeners. As long as users connect to the SCAN address, the listener redirects them to an available RAC node.

RAC1
  • Eth0: 10.0.1.171
  • Eth1: 192.168.0.101
RAC2
  • Eth0: 10.0.1.172
  • Eth1: 192.168.0.102
Panda: DNS Server, NTP server
  • Eth0: 10.0.1.101

Install Oracle Linux on VirtualBox

Refer here.

Configure Hosts Name

/etc/hosts and /etc/sysconfig/network must both contain the host name.
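
For example, here is a sketch of /etc/hosts for this setup (the VIP addresses are assumed placeholders; only the public and private addresses come from the environment above):

127.0.0.1      localhost.localdomain  localhost
# Public
10.0.1.171     rac1.localdomain       rac1
10.0.1.172     rac2.localdomain       rac2
# Private
192.168.0.101  rac1-priv.localdomain  rac1-priv
192.168.0.102  rac2-priv.localdomain  rac2-priv
# Virtual IPs (assumed example addresses)
10.0.1.181     rac1-vip.localdomain   rac1-vip
10.0.1.182     rac2-vip.localdomain   rac2-vip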

Create the shared disks

Refer here.
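
As a minimal sketch, a fixed-size shareable VDI can be created from the command line like this (the size and path are illustrative; the path matches the attach commands used later):

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --filename D:\ShareDisk\asm1.vdi --size 5120 --format VDI --variant Fixed
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyhd D:\ShareDisk\asm1.vdi --type shareable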

Install Oracle Database Prerequisites

This package also updates the kernel parameters to Oracle's suggested values. Before modifying them, it backs up the old settings to /etc/sysctl.conf.orabackup. For the kernel parameters themselves, you can refer here.

 yum install oracle-rdbms-server-11gR2-preinstall

Configure Kernel Parameters and the Number of Open File Descriptors

Refer here for the Oracle requirements and here for how to set kernel parameters. oracle-rdbms-server-11gR2-preinstall takes care of these changes; just double-check and adjust if necessary.
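
A quick way to double-check some of the usual 11gR2 settings and the oracle user's open-file limit (the parameter names here are the standard ones; adjust the list as needed):

sysctl -a | grep -E 'fs.file-max|fs.aio-max-nr|kernel.sem|kernel.shm'
su - oracle -c 'ulimit -n'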

Install the Cluster Verification Framework for Oracle Grid (cvuqdisk)

The RPM can only be found on the clusterware download media (part 3).

rpm -Uvh cvuqdisk-1.0.9-1.rpm
# verify the installation
rpm -qi cvuqdisk

Install ASM package

  • ASM kernel driver: part of the Oracle Linux kernel (UEK); it ships on the Oracle Linux installation media and can be upgraded from the Oracle public yum repository.
  • oracleasm-support: ships on the Oracle Linux installation media and can be upgraded from the Oracle public yum repository.
  • oracleasmlib: can only be downloaded from the Oracle site.

Kernel 2.6.39-300.17.3.el6uek will break the VirtualBox Guest Additions.

rpm -Uvh oracleasmlib-2.0.4-1.el6.x86_64.rpm
yum install oracleasm-support


Create user/group

groupadd dba
useradd grid -g dba
useradd oracle -g dba
# set password
passwd grid
passwd oracle

The UID and GID need to be the same across the RAC nodes, so we may need to specify them explicitly when creating the users:

groupadd -g 501 dba
useradd -u 502 -g dba oracle
useradd -u 503 -g dba grid
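
To verify that the IDs match on each node:

id oracle
id grid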

Create Software installation folders

  • /u01/app/oracle –> for $ORACLE_BASE for Oracle Database instance
  • /u01/app/grid –> for $ORACLE_BASE for ASM instance
  • /u01/app/11.2.0/grid –> for $ORACLE_HOME for ASM instance

mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle
 
chown -R grid:dba /u01
chown -R oracle:dba /u01/app/oracle

Configure the oracle User Profile

~oracle/.bash_profile

TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
 
ORACLE_HOSTNAME=RAC1.localdomain; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
DB_HOME=/u01/app/oracle/product/11.2.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
 
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

~oracle/.bashrc

alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'

~oracle/grid_env

ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
 
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

~oracle/db_env

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_SID=RAC1; export ORACLE_SID
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
 
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
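
With these files in place, switching environments in a login shell is just a matter of the two aliases (a quick sanity check, assuming the profile above has been loaded):

grid_env
echo $ORACLE_SID   # +ASM1
db_env
echo $ORACLE_SID   # RAC1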

Configure ASM

/etc/init.d/oracleasm configure
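
The script prompts for the driver owner and group; with the user and group created above, a typical session looks roughly like this (the answers follow this guide's setup):

Default user to own the driver interface []: grid
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y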

Use chkconfig to check whether the oracleasm service is set to start automatically, and also check /etc/rc.*/.
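
For example:

chkconfig --list oracleasm
ls /etc/rc3.d/ | grep -i oracleasm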

Create ASM Disk

Disks need to be partitioned before we can use oracleasm.

# list all the disks
fdisk -l
oracleasm createdisk DISK_NAME DEVICE_NAME
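
For example, after partitioning /dev/sdb with fdisk, label the partition for ASM (the disk name DISK1 is illustrative):

fdisk /dev/sdb
oracleasm createdisk DISK1 /dev/sdb1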

oracleasm listdisks

oracleasm querydisk /dev/sdb1

Remove the ASM disk

oracleasm deletedisk DISK_NAME

Clone the VM

  • Shut down the source VM (RAC1).
  • Use the command line to clone the VDI file (see the sketch after this list).
  • Create the new VM from the cloned VDI file.
  • Attach the shared disks.
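
A minimal sketch of the clone step (source and target paths are illustrative):

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" clonehd D:\VM\RAC1.vdi D:\VM\RAC2.vdi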

Since my source VM had 4 shared disks attached, the cloned VM also got duplicate copies of them. My goal is for those shared disks to be shared between the VMs, so I need to delete the duplicates first and reattach the original shared disks.

"C:\program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach RAC2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium D:\ShareDisk\asm1.vdi --mtype shareable
"C:\program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach RAC2 --storagectl "SATA" --port 2 --device 0 --type hdd --medium D:\ShareDisk\asm2.vdi --mtype shareable
"C:\program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach RAC2 --storagectl "SATA" --port 3 --device 0 --type hdd --medium D:\ShareDisk\asm3.vdi --mtype shareable
"C:\program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach RAC2 --storagectl "SATA" --port 4 --device 0 --type hdd --medium D:\ShareDisk\asm4.vdi --mtype shareable

Update the New VM Host Environment

  • Hostname: /etc/sysconfig/network
  • IP addresses: /etc/sysconfig/network-scripts/ifcfg-eth*
  • Domain name resolution: /etc/hosts –> contains the private IP addresses

The DNS zone files on the panda server carry the public, VIP, and SCAN records.
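
Here is a sketch of what the forward zone for localdomain might look like (the SCAN name and the VIP/SCAN addresses are assumed examples; a SCAN name should resolve to three addresses):

$TTL 86400
@         IN  SOA  panda.localdomain. root.localdomain. (
                   2013011901 ; serial
                   3600       ; refresh
                   1800       ; retry
                   604800     ; expire
                   86400 )    ; minimum
          IN  NS   panda.localdomain.
panda     IN  A    10.0.1.101
rac1      IN  A    10.0.1.171
rac2      IN  A    10.0.1.172
rac1-vip  IN  A    10.0.1.181
rac2-vip  IN  A    10.0.1.182
rac-scan  IN  A    10.0.1.191
rac-scan  IN  A    10.0.1.192
rac-scan  IN  A    10.0.1.193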

Create SSH connection between RAC nodes

Perform the steps below for both the oracle and grid accounts.

mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa

cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys rac2:~/.ssh/authorized_keys

Repeat the above steps on node 2 (RAC2). The idea is that both nodes end up with the full set of authorized keys.

After that, we can ssh to the other node without entering a password.
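
A quick check from each node:

# from rac1
ssh rac2 date
# from rac2
ssh rac1 date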

Check Cluster Requirements

runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

Install the Grid Infrastructure

Start the runInstaller.

Choose “Install and Configure Oracle Grid Infrastructure for a Cluster”.

Change the SCAN Name and click Add to add the other RAC node.

Set up the password for SSH Connectivity.

Click “Identify network interfaces” to verify the network interface settings.

Select ASM.

We need at least 3 disks for the voting files, OCR, etc.

Because I am using VMs, the clock is virtual, so the NTP sync is always off and the time offset is too large for the installer. I decided to ignore this error.

/etc/resolv.conf has the same contents across the nodes, so I am not sure why the installer check failed. Since I can use nslookup to resolve the host names, I decided to ignore this one too.

Here is the sample output from running orainstRoot.sh on rac1.

Here is the sample output from running root.sh on rac1.

Because of the NTP error we ignored earlier, the cluster installation may fail on the NTP time-offset verification. When this happens, stop the ntpd service, use ntpdate to sync the time against the panda server, then start ntpd again:

service ntpd stop
ntpdate panda
service ntpd start

Install the database software

Start the runInstaller.

Run root.sh on both nodes.

Create Oracle Database

Execute dbca

The SID names will be RAC11 and RAC12; RAC1 is the prefix we set up earlier.
