To remove an SFCFS node from an existing SFCFS cluster, do the following. In this example we remove node lab3 from an existing CVM cluster made up of lab1, lab2, and lab3.
Current Cluster Status:
[root@lab1 /]# cfscluster status

  Node             :  lab1
  Cluster Manager  :  running
  CVM state        :  running
  MOUNT POINT      SHARED VOLUME   DISK GROUP   STATUS
  /app/test        appvol01        appdg        MOUNTED
  /app/test1       appvol02        appdg        MOUNTED

  Node             :  lab2
  Cluster Manager  :  running
  CVM state        :  running
  MOUNT POINT      SHARED VOLUME   DISK GROUP   STATUS
  /app/test        appvol01        appdg        MOUNTED
  /app/test1       appvol02        appdg        MOUNTED

  Node             :  lab3
  Cluster Manager  :  running
  CVM state        :  running
  MOUNT POINT      SHARED VOLUME   DISK GROUP   STATUS
  /app/test        appvol01        appdg        MOUNTED
  /app/test1       appvol02        appdg        MOUNTED
[root@lab1 /]# hastatus -summ

-- SYSTEM STATE
-- System         State          Frozen
A  lab1           RUNNING        0
A  lab2           RUNNING        0
A  lab3           RUNNING        0

-- GROUP STATE
-- Group      System    Probed    AutoDisabled    State
B  app_sg     lab1      Y         N               ONLINE
B  app_sg     lab2      Y         N               ONLINE
B  app_sg     lab3      Y         N               ONLINE
B  cvm        lab1      Y         N               ONLINE
B  cvm        lab2      Y         N               ONLINE
B  cvm        lab3      Y         N               ONLINE
1. Log in as superuser on a node other than the node to be removed. The node to be removed in this example is lab3.
2. Stop all the cluster components:

# cfscluster stop -f lab3

[root@lab1 /]# cfscluster stop -f lab3
  Trying to stop cluster manager on lab3...
  Shared Volume [/dev/vx/dsk/appdg/appvol01] unmounted from /app/test on node lab3
  Shared Volume [/dev/vx/dsk/appdg/appvol02] unmounted from /app/test1 on node lab3
  Cluster manager stopped successfully on lab3
Cluster status after stopping lab3:

[root@lab1 /]# cfscluster status

  Node             :  lab1
  Cluster Manager  :  running
  CVM state        :  running
  MOUNT POINT      SHARED VOLUME   DISK GROUP   STATUS
  /app/test        appvol01        appdg        MOUNTED
  /app/test1       appvol02        appdg        MOUNTED

  Node             :  lab2
  Cluster Manager  :  running
  CVM state        :  running
  MOUNT POINT      SHARED VOLUME   DISK GROUP   STATUS
  /app/test        appvol01        appdg        MOUNTED
  /app/test1       appvol02        appdg        MOUNTED

  Node             :  lab3
  Cluster Manager  :  not-running
  CVM state        :  not-running
  MOUNT POINT      SHARED VOLUME   DISK GROUP   STATUS
  /app/test        appvol01        appdg        NOT MOUNTED
  /app/test1       appvol02        appdg        NOT MOUNTED
[root@lab1 /]# hastatus -summ

-- SYSTEM STATE
-- System         State          Frozen
A  lab1           RUNNING        0
A  lab2           RUNNING        0
A  lab3           EXITED         0

-- GROUP STATE
-- Group      System    Probed    AutoDisabled    State
B  app_sg     lab1      Y         N               ONLINE
B  app_sg     lab2      Y         N               ONLINE
B  app_sg     lab3      Y         Y               OFFLINE
B  cvm        lab1      Y         N               ONLINE
B  cvm        lab2      Y         N               ONLINE
B  cvm        lab3      Y         Y               OFFLINE
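As an optional extra check (not part of the original steps), GAB membership can be queried from one of the remaining nodes to confirm that lab3 has left the VCS membership; the port list and generation numbers will differ on your cluster:

[root@lab1 /]# gabconfig -a

After cfscluster stop -f lab3, port h (VCS) should no longer include lab3's node ID (2 in this example).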
Checking the service groups in the CFS configuration:

[root@lab1 /]# grep -i group /etc/VRTSvcs/conf/config/main.cf
group app_sg (
        CVMDiskGroup = appdg
        requires group cvm online local firm
        // group app_sg
group cvm (
        // group cvm
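If you prefer to query the running cluster rather than grep main.cf, the configured service groups can also be listed with standard VCS commands (output omitted here):

# hagrp -list
# hagrp -dep cvm

hagrp -list shows the configured service groups; hagrp -dep cvm shows which groups depend on the cvm group.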
3. Open the VCS configuration for writing:

[root@lab1 /]# haconf -makerw
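To confirm that the configuration is actually open for writing (relevant to the VCS:10456 error mentioned below), the cluster's ReadOnly attribute can be checked; it should report 0 after haconf -makerw. This check is an addition to the original procedure:

[root@lab1 /]# haclus -value ReadOnly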
4. Remove lab3 from the SystemList attribute of the CVM and SFCFS service groups:

[root@lab1 /]# hagrp -modify app_sg SystemList -delete lab3
[root@lab1 /]# hagrp -modify cvm SystemList -delete lab3

The service groups that depend on the cvm group can be listed with hagrp -dep cvm.
If either of the above commands displays an error message similar to the following:

VCS:10456:Configuration must be ReadWrite. ('hagrp -modify ... -delete(0x10f)',Sysstate=RUNNING,Channel=IPM,Flags=0x0)

repeat step 3 and then rerun the command that failed in step 4.
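The resource names used in the sub-steps below (cvm_clus, cfsmount1, cfsmount2) are specific to this example cluster. If you are following this on another cluster, you can look up the actual names first with standard VCS commands, for example:

# hagrp -resources cvm
# hares -list Type=CFSMount

The first command lists the resources configured in the cvm service group (the CVMCluster resource, cvm_clus in this example); the second lists every CFSMount resource whose NodeList attribute needs the node removed.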
1. Remove the node from the SystemList attribute of the service groups (done above with the hagrp commands).

2. Remove the node from the CVMNodeId attribute of the CVMCluster resource:

# hares -modify cvm_clus CVMNodeId -delete lab3

3. Remove the deleted node from the NodeList attribute of all CFSMount resources:

# hares -modify cfsmount1 NodeList -delete lab3
# hares -modify cfsmount2 NodeList -delete lab3

4. Remove the deleted node from the cluster system list:

# hasys -delete lab3
Verify that the node is removed from the VCS configuration:

[root@lab1 /]# grep -i lab3 /etc/VRTSvcs/conf/config/main.cf

No output means lab3 is no longer referenced in main.cf. If the node is not removed, use the VCS commands as described in this procedure to remove it.
5. Write the new VCS configuration to disk:

[root@lab1 /]# haconf -dump -makero
6. Edit /etc/llthosts on the remaining nodes of the cluster, and remove the entry corresponding to the node being removed.

[root@lab1 /]# cat /etc/llthosts
0 lab1
1 lab2
2 lab3
[root@lab1 /]# vi /etc/llthosts
[root@lab1 /]# cat /etc/llthosts
0 lab1
1 lab2

[root@lab2 ~]# cat /etc/llthosts
0 lab1
1 lab2
2 lab3
[root@lab2 ~]# vi /etc/llthosts
[root@lab2 ~]# cat /etc/llthosts
0 lab1
1 lab2
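If you prefer to make the /etc/llthosts change non-interactively rather than with vi, something like the following on each remaining node does the same edit (the backup name is just an example):

# cp -p /etc/llthosts /etc/llthosts.bak
# sed -i '/ lab3$/d' /etc/llthosts

The sed expression deletes the line ending in " lab3", leaving the lab1 and lab2 entries untouched.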
7. Edit /etc/gabtab on the remaining nodes of the cluster and change the gabconfig command to reflect the new number of nodes in the cluster.

[root@lab1 /]# cat /etc/gabtab
/sbin/gabconfig -c -n3
[root@lab1 /]# vi /etc/gabtab
[root@lab1 /]# cat /etc/gabtab
/sbin/gabconfig -c -n2

[root@lab2 ~]# cat /etc/gabtab
/sbin/gabconfig -c -n3
[root@lab2 ~]# vi /etc/gabtab
[root@lab2 ~]# cat /etc/gabtab
/sbin/gabconfig -c -n2
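The /etc/gabtab change can likewise be scripted; this simply rewrites the -n argument to the new member count (2 in this example):

# cp -p /etc/gabtab /etc/gabtab.bak
# sed -i 's/-n3/-n2/' /etc/gabtab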
8. Log in to lab3 and remove the following files:

# rm /etc/vxfenmode
# rm /etc/llthosts
# rm /etc/llttab
# rm /etc/gabtab

[root@lab3 ~]# cp -p /etc/vxfenmode /etc/vxfenmode_23072016
[root@lab3 ~]# rm /etc/vxfenmode
rm: remove regular file `/etc/vxfenmode'? y
[root@lab3 ~]# cp -p /etc/llthosts /etc/llthosts_23072016
[root@lab3 ~]# rm /etc/llthosts
rm: remove regular file `/etc/llthosts'? y
[root@lab3 ~]# cp -p /etc/llttab /etc/llttab_23072016
[root@lab3 ~]# rm /etc/llttab
rm: remove regular file `/etc/llttab'? y
[root@lab3 ~]# cp -p /etc/gabtab /etc/gabtab_23072016
[root@lab3 ~]# rm /etc/gabtab
rm: remove regular file `/etc/gabtab'? y
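The back-up-then-remove sequence above can also be done in one small loop on lab3; this is just a sketch equivalent to the manual commands, using the same dated backup suffix:

[root@lab3 ~]# for f in /etc/vxfenmode /etc/llthosts /etc/llttab /etc/gabtab
> do
>   cp -p "$f" "${f}_23072016"   # keep a dated backup, as above
>   rm -f "$f"                   # -f skips the interactive prompt
> done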
9. If fencing was enabled on the cluster, run the following commands:

# rm /etc/vxfentab
# rm /etc/vxfendg
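If you are not sure whether fencing is configured on the cluster, it can be checked before removing these files; vxfenadm is the standard fencing administration command and its output shows the fencing mode and membership:

[root@lab3 ~]# vxfenadm -d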
Optionally, reboot lab3 (not mandatory):

# init 6
10. Change to the install directory:

# cd /opt/VRTS/install

11. From the scripts directory, run the uninstallsfcfs script and remove SFCFS from lab3:

# ./uninstallsfcfs

If you do not want to remove the Veritas Cluster Server software, enter n when prompted to uninstall VCS.
[root@lab2 /]# cfscluster status

  Node             :  lab1
  Cluster Manager  :  running
  CVM state        :  running
  MOUNT POINT      SHARED VOLUME   DISK GROUP   STATUS
  /app/test        appvol01        appdg        MOUNTED
  /app/test1       appvol02        appdg        MOUNTED

  Node             :  lab2
  Cluster Manager  :  running
  CVM state        :  running
  MOUNT POINT      SHARED VOLUME   DISK GROUP   STATUS
  /app/test        appvol01        appdg        MOUNTED
  /app/test1       appvol02        appdg        MOUNTED
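As a final cross-check on the remaining nodes (an optional addition to the original steps), LLT, GAB, and VCS should now report only two members; the exact output depends on your links and port usage:

[root@lab1 /]# lltstat -n
[root@lab1 /]# gabconfig -a
[root@lab1 /]# hastatus -summ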
Thank you for reading.