We have an existing two-node CVM cluster consisting of lab1 and lab2. As part of this exercise, we will add a new node, lab3, to the existing cluster.
1. Make sure that you meet the following requirements:
· The node must be connected to the same shared storage devices as the existing nodes.
· The node must have private network connections to two independent switches for the cluster. For more information, see the Veritas Cluster Server Installation Guide.
· The network interface names used for the private interconnects on the new node must be the same as those of the existing nodes in the cluster.
2. Install SFCFS on the new system.
3. Update /etc/hosts on each node so that all nodes can resolve one another by name.
4. Exchange ssh keys between the new node and each existing node so that passwordless ssh works in both directions (a sketch of steps 3 and 4 follows this list).
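A minimal sketch of steps 3 and 4, assuming lab3 uses the address 192.168.1.33 (the address used later in this post for the scp copies) and that root ssh logins are permitted; adjust hostnames, addresses and key types to your environment:
# echo "192.168.1.33  lab3" >> /etc/hosts     (run on lab1 and lab2; lab3 needs entries for all three nodes)
# ssh-keygen -t rsa                           (on lab3, accept the defaults)
# ssh-copy-id root@lab1
# ssh-copy-id root@lab2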
Starting Volume Manager on the new node
Volume Manager uses license keys to control access. As you run the vxinstall utility, answer n to prompts about licensing. You installed the appropriate license when you ran the installsfcfs program.
To start Volume Manager on the new node
1. To start Veritas Volume Manager on the new node, use the vxinstall utility:
# vxinstall
2. Enter n when prompted to set up a system-wide disk group for the system.
The installation completes.
3. Verify that the daemons are up and running. Enter the command:
# vxdisk list
Make sure the output displays the shared disks without errors.
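For illustration only, a healthy listing might resemble the following (the device and disk names here are hypothetical; the key point is that the disks of the shared disk group report a status of online shared):
DEVICE       TYPE            DISK         GROUP        STATUS
sda          auto:none       -            -            online invalid
disk_0       auto:cdsdisk    appdg01      appdg        online shared
disk_1       auto:cdsdisk    appdg02      appdg        online shared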
Cluster status before adding the new node:
[root@lab1 /]# hastatus -summ

-- SYSTEM STATE
-- System               State                Frozen

A  lab1                 RUNNING              0
A  lab2                 RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  app_sg          lab1                 Y          N               ONLINE
B  app_sg          lab2                 Y          N               ONLINE
B  cvm             lab1                 Y          N               ONLINE
B  cvm             lab2                 Y          N               ONLINE
[root@lab1 /]# cfscluster status

  Node             :  lab1
  Cluster Manager  :  running
  CVM state        :  running

  MOUNT POINT        SHARED VOLUME      DISK GROUP       STATUS
  /app/test          appvol01           appdg            MOUNTED
  /app/test1         appvol02           appdg            MOUNTED

  Node             :  lab2
  Cluster Manager  :  running
  CVM state        :  running

  MOUNT POINT        SHARED VOLUME      DISK GROUP       STATUS
  /app/test          appvol01           appdg            MOUNTED
  /app/test1         appvol02           appdg            MOUNTED
Configuring LLT and GAB on the new node (lab3)
Perform the steps in the following procedure to configure LLT and GAB on the new node.
To configure LLT and GAB on the new node (lab3)
1. Edit the /etc/llthosts file on the existing nodes. Using vi or another text editor, add the line for the new node to the file. The file resembles:
[root@lab1 /]# cat /etc/llthosts
0 lab1
1 lab2
[root@lab1 /]# vi /etc/llthosts
[root@lab1 /]# cat /etc/llthosts
0 lab1
1 lab2
2 lab3
[root@lab2 ~]# cat /etc/llthosts
0 lab1
1 lab2
[root@lab2 ~]# vi /etc/llthosts
[root@lab2 ~]# cat /etc/llthosts
0 lab1
1 lab2
2 lab3
2. Copy the /etc/llthosts file from one of the existing systems over to the new system. The /etc/llthosts file must be identical on all nodes in the cluster.
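One way to do the copy (a sketch, assuming passwordless ssh from lab1 to lab3 is already in place):
[root@lab1 /]# scp -p /etc/llthosts lab3:/etc/llthosts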
[root@lab3 ~]# cat /etc/llthosts
0 lab1
1 lab2
2 lab3
3. Create an /etc/llttab file on the new system. For example:
set-node saturn
set-cluster 101
link en1 /dev/dlpi/en:1 - ether - -
link en2 /dev/dlpi/en:2 - ether - -
Except for the first line that refers to the node, the file resembles the /etc/llttab files on the existing nodes. The second line, the cluster ID, must be the same as on the existing nodes.
[root@lab3 ~]# cat /etc/llttab
set-node lab3
set-cluster 28805
link eth4 eth-00:0C:29:22:2D:D7 - ether - -
link eth5 eth-00:0C:29:22:2D:CD - ether - -
link-lowpri eth3 eth-00:0C:29:22:2D:C3 - ether - -
4. Use vi or another text editor to create the file /etc/gabtab on the new node. This file must contain a line that resembles the following example:
/sbin/gabconfig -c -nN
where N represents the number of systems in the cluster. For a three-system cluster, N would equal 3.
[root@lab3 ~]# cat /etc/gabtab
/sbin/gabconfig -c -n3
5. Edit the /etc/gabtab file on each of the existing systems, changing the content to match the file on the new system.
[root@lab1 /]# cat /etc/gabtab
/sbin/gabconfig -c -n2
[root@lab1 /]# vi /etc/gabtab
[root@lab1 /]# cat /etc/gabtab
/sbin/gabconfig -c -n3
[root@lab2 ~]# cat /etc/gabtab
/sbin/gabconfig -c -n2
[root@lab2 ~]# vi /etc/gabtab
[root@lab2 ~]# cat /etc/gabtab
/sbin/gabconfig -c -n3
6. Use vi or another text editor to create the file /etc/VRTSvcs/conf/sysname on the new node. This file must contain the name of the new node added to the cluster.
For example:
lab3
[root@lab3 ~]# cat /etc/VRTSvcs/conf/sysname
lab3
7. Create the Unique Universal Identifier file /etc/vx/.uuids/clusuuid on the new node:
# uuidconfig.pl -rsh -clus -copy -from_sys galaxy -to_sys lab3
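Here galaxy is a placeholder from the product documentation for one of the existing cluster nodes; in this lab that would be lab1 or lab2. A sketch of the equivalent command, assuming the script is in its default location under /opt/VRTSvcs/bin and that ssh (rather than rsh) is used between the nodes:
[root@lab1 /]# /opt/VRTSvcs/bin/uuidconfig.pl -clus -copy -from_sys lab1 -to_sys lab3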
8. Start the LLT, GAB, and ODM drivers on the new node:
# /etc/init.d/llt.rc start
# /etc/init.d/gab.rc start
# /etc/methods/gmskextadm load
# /etc/rc.d/rc2.d/S99odm start
The paths above are the AIX-style examples from the product documentation; on the Linux lab nodes the equivalent init scripts are used, as shown below.
[root@lab3 ~]# /etc/init.d/llt start
Starting LLT:
LLT: loading module...
Loaded 2.6.32-431.el6.x86_64 on kernel 2.6.32-431.el6.x86_64
LLT: configuring module...
[root@lab3 ~]# /etc/init.d/gab start
Starting GAB:
Loaded 2.6.32-431.el6.x86_64 on kernel 2.6.32-431.el6.x86_64
Started gablogd
gablogd: Keeping 20 log files of 8388608 bytes each in |/var/log/gab_ffdc| directory. Daemon log size limit 8388608 bytes
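Optionally, the private links can be sanity-checked on the new node at this point with lltstat; the exact output depends on the hardware, but every configured link should report UP for all three nodes:
[root@lab3 ~]# lltstat -nvv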
9. On the new node, verify that the GAB port memberships are a, b, d, h, v, w and f:
# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen df204 membership 012
Port b gen df20a membership 012
Port d gen df207 membership 012
Port h gen df207 membership 012
[root@lab3 ~]# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen 4d703 membership 012
At this stage only port a (the GAB membership itself) is listed on the new node; the remaining ports appear as the corresponding services (VCS, CVM, CFS and, if configured, fencing and ODM) are started in the following steps.
Configuring CVM and CFS on the new node
Modify the existing cluster configuration to configure CVM and CFS for the new node.
1. Make a backup copy of the main.cf file, if not backed up in previous procedures. For example:
# cd /etc/VRTSvcs/conf/config
# cp main.cf main.cf.2node
[root@lab1 /]# cd /etc/VRTSvcs/conf/config
[root@lab1 config]# cp -p main.cf main.cf.2node
2. On one of the nodes in the existing cluster, set the cluster configuration to read-write mode:
# haconf -makerw
[root@lab1 /]# haconf -makerw
3. Add the new node to the VCS configuration, if not already added:
# hasys -add lab3
[root@lab1 /]# hasys -add lab3
4. To enable the existing cluster to recognize the new node, run the following commands on one of the existing nodes:
# hagrp -modify cvm SystemList -add lab3 2
# hagrp -modify cvm AutoStartList -add lab3
# hares -modify cvm_clus CVMNodeId -add lab3 2
# haconf -dump -makero
# /etc/vx/bin/vxclustadm -m vcs reinit
# /etc/vx/bin/vxclustadm nidmap
In this lab the application service group app_sg and its CFS mount resources are also extended to the new node, in addition to the cvm group changes from the documentation:
[root@lab1 /]# hagrp -modify cvm SystemList -add lab3 2
[root@lab1 /]# hagrp -modify cvm AutoStartList -add lab3
[root@lab1 /]# hares -modify cvm_clus CVMNodeId -add lab3 2
[root@lab1 /]# hagrp -modify app_sg SystemList -add lab3 2
[root@lab1 /]# hagrp -modify app_sg AutoStartList -add lab3
[root@lab1 /]# hares -modify cfsmount1 NodeList -add lab3
[root@lab1 /]# hares -modify cfsmount2 NodeList -add lab3
[root@lab1 /]# hares -list
cfsmount1       lab1
cfsmount1       lab2
cfsmount1       lab3
cfsmount2       lab1
cfsmount2       lab2
cfsmount2       lab3
cvm_clus        lab1
cvm_clus        lab2
cvm_clus        lab3
cvm_vxconfigd   lab1
cvm_vxconfigd   lab2
cvm_vxconfigd   lab3
cvmvoldg1       lab1
cvmvoldg1       lab2
cvmvoldg1       lab3
vxattachd       lab1
vxattachd       lab2
vxattachd       lab3
vxfsckd         lab1
vxfsckd         lab2
vxfsckd         lab3
[root@lab1 /]# /etc/vx/bin/vxclustadm -m vcs reinit
[root@lab1 /]# /etc/vx/bin/vxclustadm nidmap
Name                             CVM Nid    CM Nid     State
lab1                             1          0          Joined: Master
lab2                             0          1          Joined: Slave
lab3                             2          2          Out of Cluster
lab3 still reports Out of Cluster here because VCS has not yet been started on it.
5. On the remaining nodes of the existing cluster, run the following commands:
# /etc/vx/bin/vxclustadm -m vcs reinit
# /etc/vx/bin/vxclustadm nidmap
[root@lab2 ~]# /etc/vx/bin/vxclustadm -m vcs reinit
[root@lab2 ~]# /etc/vx/bin/vxclustadm nidmap
Name                             CVM Nid    CM Nid     State
lab1                             1          0          Joined: Master
lab2                             0          1          Joined: Slave
lab3                             2          2          Out of Cluster
6. Copy the configuration files from one of the nodes in the existing cluster to the new node:
# rcp /etc/VRTSvcs/conf/config/main.cf \
lab3:/etc/VRTSvcs/conf/config/main.cf
# rcp /etc/VRTSvcs/conf/config/CFSTypes.cf \
lab3:/etc/VRTSvcs/conf/config/CFSTypes.cf
# rcp /etc/VRTSvcs/conf/config/CVMTypes.cf \
lab3:/etc/VRTSvcs/conf/config/CVMTypes.cf
[root@lab1 /]# scp /etc/VRTSvcs/conf/config/main.cf 192.168.1.33:/etc/VRTSvcs/conf/config/main.cf
[root@lab1 /]# scp /etc/VRTSvcs/conf/config/CFSTypes.cf 192.168.1.33:/etc/VRTSvcs/conf/config/CFSTypes.cf
[root@lab1 /]# scp /etc/VRTSvcs/conf/config/CVMTypes.cf 192.168.1.33:/etc/VRTSvcs/conf/config/CVMTypes.cf
[root@lab1 /]# hastatus -summ

-- SYSTEM STATE
-- System               State                Frozen

A  lab1                 RUNNING              0
A  lab2                 RUNNING              0
A  lab3                 UNKNOWN              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  app_sg          lab1                 Y          N               ONLINE
B  app_sg          lab2                 Y          N               ONLINE
B  app_sg          lab3                 N          N               OFFLINE
B  cvm             lab1                 Y          N               ONLINE
B  cvm             lab2                 Y          N               ONLINE
B  cvm             lab3                 N          N               OFFLINE

-- RESOURCES NOT PROBED
-- Group           Type            Resource             System

E  app_sg          CFSMount        cfsmount1            lab3
E  app_sg          CFSMount        cfsmount2            lab3
E  app_sg          CVMVolDg        cvmvoldg1            lab3
E  cvm             CFSfsckd        vxfsckd              lab3
E  cvm             CVMCluster      cvm_clus             lab3
E  cvm             CVMVxconfigd    cvm_vxconfigd        lab3
E  cvm             ProcessOnOnly   vxattachd            lab3
Starting VCS after adding the new node
Start VCS on the new node.
1. Start VCS on the new node:
# hastart
VCS brings the CVM group online.
2. Verify that the CVM group is online:
# hagrp -state
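For illustration, hagrp -state output along the following lines would confirm that the cvm group is online on all three nodes (the values shown here are what we expect at this point, not captured output); the hastatus -summ below is the actual verification from the lab:
#Group       Attribute             System     Value
cvm          State                 lab1       |ONLINE|
cvm          State                 lab2       |ONLINE|
cvm          State                 lab3       |ONLINE|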
[root@lab1 Desktop]# hastatus -summ

-- SYSTEM STATE
-- System               State                Frozen

A  lab1                 RUNNING              0
A  lab2                 RUNNING              0
A  lab3                 RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  app_sg          lab1                 Y          N               ONLINE
B  app_sg          lab2                 Y          N               ONLINE
B  app_sg          lab3                 Y          N               ONLINE
B  cvm             lab1                 Y          N               ONLINE
B  cvm             lab2                 Y          N               ONLINE
B  cvm             lab3                 Y          N               ONLINE
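As a final check, the shared CFS mounts can be confirmed from the new node itself; a short sketch (cfscluster status should now list lab3 alongside lab1 and lab2, and both mount points should be visible on lab3):
[root@lab3 ~]# cfscluster status
[root@lab3 ~]# df -h /app/test /app/test1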
Thank you for reading.