There are several ways to do this task; the steps below worked well for me, but if anyone has a better set of instructions, please do share them.
Let's start then -
Output of vxfenadm (I/O fencing status):

root@lab1:/opt/SAN # vxfenadm -d

I/O Fencing Cluster Information:
================================

Fencing Protocol Version: 201
Fencing Mode: SCSI3
Fencing SCSI3 Disk Policy: dmp
Cluster Members:

* 0 (webap11p)
  1 (webap12p)

RFSM State Information:
node 0 in state 8 (running)
node 1 in state 8 (running)
Old disks in the fencing DG:

root@lab1:/opt/SAN # vxdisk list | grep fencing_dg
emc0_0e3c    auto:cdsdisk    emc0_0e3c    fencing_dg    online
emc0_0e3d    auto:cdsdisk    emc0_0e3d    fencing_dg    online
emc0_0e3e    auto:cdsdisk    emc0_0e3e    fencing_dg    online
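Before making any changes, it can be useful to record the registration keys currently held on the old coordinator disks. A hedged sketch - the exact vxfenadm option for reading keys varies between Storage Foundation releases:
# vxfenadm -s all -f /etc/vxfentab    (newer SF releases)
# vxfenadm -g all -f /etc/vxfentab    (older SF releases)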
1. If VCS is running, shut it down on all nodes (the -force option stops VCS without stopping the applications it manages):
# hastop -all -force
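To confirm the VCS engine is actually down before touching the fencing configuration, a quick check (hastatus should report that it cannot connect to the engine once HAD has stopped):
# hastatus -sum    (on both cluster nodes)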
2. Stop I/O fencing on all nodes. This removes any registration keys on the disks.
# /etc/init.d/vxfen stop    (on both cluster nodes) (not mandatory)
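To confirm the fencing driver has been unconfigured, GAB port b (the I/O fencing port) should disappear from the membership list:
# gabconfig -a    (port b should no longer be listed once vxfen is stopped)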
3. Import the coordinator disk group. The file /etc/vxfendg contains the name of the disk group (for example, fencing_dg) that holds the coordinator disks, so use the command -
# vxdg -tfC import `cat /etc/vxfendg`
OR
# vxdg -tfC import fencing_dg
Where:
-t specifies that the disk group is imported only until the system restarts.
-f specifies that the import is to be done forcibly, which is necessary if one or more disks are not accessible.
-C specifies that any import locks are removed.
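A quick check that the coordinator disk group really imported (fencing_dg is the example name used throughout):
# vxdg list | grep fencing_dg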
4. Turn off the coordinator attribute value for the coordinator disk group.
# vxdg -g fencing_dg set coordinator=off
5. Add the new disks and remove the old ones.
Adding the new disks to the fencing disk group:
# vxdg -g fencing_dg adddisk emc1_0be1=emc1_0be1
# vxdg -g fencing_dg adddisk emc1_0be2=emc1_0be2
# vxdg -g fencing_dg adddisk emc1_0be3=emc1_0be3
Removing the old disks from the fencing disk group:
# vxdg -g fencing_dg rmdisk emc0_0e3c
# vxdg -g fencing_dg rmdisk emc0_0e3d
# vxdg -g fencing_dg rmdisk emc0_0e3e
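Note: the adddisk commands above assume the new LUNs (emc1_0be1/2/3) are already visible to the host and initialized for VxVM. If they are still uninitialized, a minimal sketch using the default CDS format:
# /etc/vx/bin/vxdisksetup -i emc1_0be1
# /etc/vx/bin/vxdisksetup -i emc1_0be2
# /etc/vx/bin/vxdisksetup -i emc1_0be3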
6. Set the coordinator attribute value back to "on" for the coordinator disk group.
# vxdg -g fencing_dg set coordinator=on
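To double-check the flag, the disk group details should now show "coordinator" in the flags line (exact output varies by version):
# vxdg list fencing_dg | grep flags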
7. Run a disk scan on all nodes.
# vxdisk scandisks    (run on all cluster nodes)
8. Check that the fencing disks are visible on all nodes.
# vxdisk -o alldgs list | grep fen
9. After replacing the disks in the coordinator disk group, deport the disk group:
# vxdg deport `cat /etc/vxfendg`
OR
# vxdg deport fencing_dg
10. Verify that the fencing disk group is deported.
# vxdisk -o alldgs list | grep fen
11. On each node in the cluster, start the I/O fencing driver:
# /etc/init.d/vxfen start    (on both cluster nodes)
12. Start VCS on all cluster nodes.
# hastart    (on both cluster nodes)
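A few sanity checks once everything is back up - fencing should again show both nodes running, GAB port b should be back, and the service groups should be coming online:
# vxfenadm -d
# gabconfig -a
# hastatus -sum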
That's it; these 12 steps take you through migrating disks within the coordinator disk group from one array to another.
- Modify /etc/vxfentab on all the cluster nodes:
From:
root@lab1:/opt/SAN # cat /etc/vxfentab
#
# /etc/vxfentab:
# DO NOT MODIFY this file as it is generated by the
# VXFEN rc script from the file /etc/vxfendg.
#
/dev/vx/rdmp/emc0_0e3c EMC%5FSYMMETRIX%5F000290100769%5F6900E3C000
/dev/vx/rdmp/emc0_0e3d EMC%5FSYMMETRIX%5F000290100769%5F6900E3D000
/dev/vx/rdmp/emc0_0e3e EMC%5FSYMMETRIX%5F000290100769%5F6900E3E000
To:
root@lab1:/opt/SAN # cat /etc/vxfentab
#
# /etc/vxfentab:
# DO NOT MODIFY this file as it is generated by the
# VXFEN rc script from the file /etc/vxfendg.
#
/dev/vx/rdmp/emc1_0be1 EMC%5FSYMMETRIX%5F000298700104%5F0400BE1000
/dev/vx/rdmp/emc1_0be2 EMC%5FSYMMETRIX%5F000298700104%5F0400BE2000
/dev/vx/rdmp/emc1_0be3 EMC%5FSYMMETRIX%5F000298700104%5F0400BE3000
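As the file header itself says, /etc/vxfentab is generated by the VXFEN rc script from /etc/vxfendg, so on most versions it is rebuilt automatically when vxfen is started in step 11; the edit above is mainly a way to verify the entries. A quick hedged check on each node:
# grep emc1 /etc/vxfentab    (run on both cluster nodes)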
Fencing DG with the new disks:

root@lab1:/root # vxdisk -o alldgs list | grep fencing_dg
emc1_0be1    auto:cdsdisk    -    (fencing_dg)    online thinrclm
emc1_0be2    auto:cdsdisk    -    (fencing_dg)    online thinrclm
emc1_0be3    auto:cdsdisk    -    (fencing_dg)    online thinrclm
Thank you for reading.
For other articles, visit "https://sites.google.com/site/unixwikis/"