
Friday, July 25, 2014

Using vxlist and vxinfo for volume information

The vxlist command is useful in summarizing the volume information on the system. You can also use this command to display the disks and the plexes associated with a specific volume, using the following command options:

# vxlist -s disk vol volume_name

# vxlist -s disk vol test_campaign
disks
TY   DEVICE          DISK              NPATHS ENCLR_NAME   ENCLR_SNO      STATUS
disk apevmx09_01d6   apevmx09_01d6          4 apevmx09     000292603749   imported

# vxlist -s plex vol volume_name

# vxlist -s plex vol test_campaign
plexes
TY   NAME               TYPE     STATUS
plex test_campaign-01  simple   attached

The vxinfo command prints the accessibility and the usability information on VxVM volumes. The -p option with vxinfo also reports the name and status of each plex within the volume.

# vxinfo -g testdg -p testvol
vol  testvol        fsgen    Started
plex testvol-01     ACTIVE
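This check is easy to script. Below is a minimal sketch (my own helper, not a VxVM tool) that parses `vxinfo -p` style output on stdin, assuming the column layout shown above, and flags any volume that is not Started or plex that is not ACTIVE:

```shell
#!/bin/sh
# check_vxinfo: read `vxinfo -g <dg> -p` output on stdin and report
# any volume that is not "Started" and any plex that is not "ACTIVE".
# Exits non-zero if something unhealthy was found.
check_vxinfo() {
    awk '
        $1 == "vol"  && $NF != "Started" { print "volume " $2 " is " $NF; bad = 1 }
        $1 == "plex" && $NF != "ACTIVE"  { print "plex "   $2 " is " $NF; bad = 1 }
        END { exit bad }
    '
}
```

Typical use: vxinfo -g testdg -p testvol | check_vxinfo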

Thursday, July 24, 2014

Removing Volume in VxVM

Removing a Volume:

  • Before removing a volume:
    • Applications must be stopped
    • The file system must be unmounted
  • When a volume is removed, its disk space is freed

VEA

  • Select the volume that you want to remove
  • Select Actions -> Delete Volume

Removing Volume using vxassist command


1.  Unmount the file system: # umount /dev/vx/dsk/<diskgroup>/<volume_name>
2.  If the volume is listed in /etc/fstab, remove its entry
3.  Make sure the volume is stopped, then remove it:

# vxvol -g <diskgroup> stop <volume_name>
# vxassist -g <diskgroup> remove volume <volume_name>

Example:
# vxassist -g testdg remove volume testvol
# vxprint -th testvol
VxVM vxprint ERROR V-5-1-924 Record testvol not found

Remove the volume using vxedit

# vxedit -g <diskgroup> -rf rm <volume_name>

-r  recursively removes all plexes associated with the volume and all subdisks associated with those plexes

-f  forces the removal
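The whole removal sequence can be wrapped in a small helper. The sketch below is a hypothetical wrapper with placeholder names: by default it only prints the commands for review, and executes them only when you set RUN=1.

```shell
#!/bin/sh
# remove_volume <diskgroup> <volume>: print (or, with RUN=1, execute)
# the unmount/stop/remove sequence for a VxVM volume.
remove_volume() {
    dg=$1; vol=$2
    for cmd in \
        "umount /dev/vx/dsk/$dg/$vol" \
        "vxvol -g $dg stop $vol" \
        "vxassist -g $dg remove volume $vol"
    do
        if [ "${RUN:-0}" = "1" ]; then
            $cmd                # execute for real
        else
            echo "$cmd"         # dry run: just show the command
        fi
    done
}
```

Dry-run example: remove_volume testdg testvol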


HOT-Relocation Process in VxVM


HOT-Relocation Process:

The hot-relocation feature is enabled by default. No system administrator action is needed to start hot relocation when a failure occurs.

The vxrelocd daemon is started by a VxVM start-up script during system boot and monitors VxVM for failures involving disks, plexes, or RAID-5 subdisks. When a failure occurs, vxrelocd triggers a hot-relocation attempt and notifies the system administrator, through e-mail, of the failure and of any relocation and recovery actions. The argument to vxrelocd is the list of users to notify by e-mail when a relocation occurs (the default is root). To disable vxrelocd, place a "#" in front of the vxrelocd line in the corresponding start-up file.

A successful hot-relocation process involves:
1.  Failure detection: Detecting the failure of a disk, plex, or RAID-5 subdisk (The affected Volume Manager objects are identified and the system administrator and other designated users are notified.)
2.  Relocation: Determining which subdisks can be relocated, finding space for those subdisks, and relocating the subdisks (The system administrator and other designated users are notified of the success or failure of these actions. Hot relocation does not guarantee the same layout of data or the same performance after relocation.)
3.  Recovery: Initiating recovery procedures, if necessary, to restore the volumes and data (Again, the system administrator and other designated users are notified of the recovery attempt.)

How is space selected for relocation?
A spare disk must be initialized and placed in a disk group as a spare before it can be used for replacement purposes.
·          Hot relocation attempts to move all subdisks from a failing drive to a single spare destination disk, if possible.
·          If no disks have been designated as spares, VxVM automatically uses any available free space in the disk group not currently on a disk used by the volume.
·          If there is not enough spare disk space, a combination of spare disk space and free space is used. Free space that you exclude from hot relocation is not used.

In all cases, hot relocation attempts to relocate subdisks to a spare in the same disk group that is physically closest to the failing or failed disk. Note that if there is not enough contiguous free space, a subdisk may be relocated as multiple subdisks scattered across different disks.

When hot relocation occurs, the failed subdisk is removed from the configuration database. The disk space used by the failed subdisk is not recycled as free space.

Marking/Unmarking a Disk as a Hot-Relocation Spare:

Syntax: # vxedit -g diskgroup set spare=on|off dm_name

To designate a disk as a spare, for example:

# vxedit -g rootdg set spare=on rootdg06
# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c2t0d0s2     auto:hpdisk     rootdisk02   rootdg       online
c2t1d0s2     auto:LVM        -            -            LVM
c3t2d0s2     auto:hpdisk     rootdisk01   rootdg       online
c5t0d1       auto:cdsdisk    rootdg02     rootdg       online
c5t0d2       auto:cdsdisk    rootdg03     rootdg       online
c5t0d3       auto:cdsdisk    rootdg04     rootdg       online
c5t0d4       auto:cdsdisk    rootdg05     rootdg       online
c5t0d5       auto:cdsdisk    rootdg06     rootdg       online spare
c5t0d6       auto:cdsdisk    rootdg07     rootdg       online
c5t0d7       auto:cdsdisk    rootdg08     rootdg       online
c5t1d0       auto:cdsdisk    rootdg01     rootdg       online
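A quick way to confirm which disks carry the spare flag is to filter the listing. The sketch below assumes the `vxdisk list` column layout shown above, where spare disks show "online spare" in the STATUS column:

```shell
#!/bin/sh
# list_spares: read `vxdisk list` output on stdin and print the disk
# media name (3rd column) of every disk whose STATUS ends in "spare".
list_spares() {
    awk '$NF == "spare" { print $3 }'
}
```

Typical use: vxdisk list | list_spares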

Removing a Disk from Use as a Hot-Relocation Spare


To make disk rootdg06 available for normal use:
# vxedit -g rootdg set spare=off rootdg06
# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c2t0d0s2     auto:hpdisk     rootdisk02   rootdg       online
c2t1d0s2     auto:LVM        -            -            LVM
c3t2d0s2     auto:hpdisk     rootdisk01   rootdg       online
c5t0d1       auto:cdsdisk    rootdg02     rootdg       online
c5t0d2       auto:cdsdisk    rootdg03     rootdg       online
c5t0d3       auto:cdsdisk    rootdg04     rootdg       online
c5t0d4       auto:cdsdisk    rootdg05     rootdg       online
c5t0d5       auto:cdsdisk    rootdg06     rootdg       online
c5t0d6       auto:cdsdisk    rootdg07     rootdg       online
c5t0d7       auto:cdsdisk    rootdg08     rootdg       online
c5t1d0       auto:cdsdisk    rootdg01     rootdg       online

To Exclude or Include a Disk from Hot Relocation

Syntax: # vxedit -g diskgroup set nohotuse=on|off dm_name

# vxedit -g rootdg set spare=on rootdg06
# vxedit -g rootdg set nohotuse=off rootdg06
# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c2t0d0s2     auto:hpdisk     rootdisk02   rootdg       online
c2t1d0s2     auto:LVM        -            -            LVM
c3t2d0s2     auto:hpdisk     rootdisk01   rootdg       online
c5t0d1       auto:cdsdisk    rootdg02     rootdg       online
c5t0d2       auto:cdsdisk    rootdg03     rootdg       online
c5t0d3       auto:cdsdisk    rootdg04     rootdg       online
c5t0d4       auto:cdsdisk    rootdg05     rootdg       online
c5t0d5       auto:cdsdisk    rootdg06     rootdg       online spare
c5t0d6       auto:cdsdisk    rootdg07     rootdg       online
c5t0d7       auto:cdsdisk    rootdg08     rootdg       online
c5t1d0       auto:cdsdisk    rootdg01     rootdg       online

To force hot relocation to use only spare disks:
Add spare=only to /etc/default/vxassist

Friday, July 11, 2014

Replacing a Boot Mirrored SAS Disk in HP-UX 11.31 (11i v3)

Overview
SAS controllers use a different addressing scheme than parallel SCSI such as U320 or SCSI-2. Where SCSI ID, and therefore disk slot number, used to be important, SAS uses the unique address of the disk itself to identify it as part of a LUN (logical unit) or an HP-UX special device file like /dev/dsk/c0t0d0. You can take a disk out of one slot (referred to as a bay) and put it in a different slot or bay. The controller finds it and presents the disk to the O/S with the same special device file. This does complicate the procedure needed for replacing a failed disk.

When a disk fails and a new disk is put into the same bay that the failed disk came out of, the SAS controller knows it is a different disk by its SAS address. The O/S driver assigns the next available target for the hardware path (viewed with ioscan) and special device file if insf -e is executed.

If the customer used a legacy device file (e.g. /dev/dsk/c2t2d0) in his LVM configuration, you have to make sure that the new disk gets the old legacy file again.
This can be done with sasmgr (http://wtec.cup.hp.com/~cpuhw/storage/sas/sasmgr_man.htm) using the "replace_tgt" option. Even if this results in an error saying the file is still busy, a subsequent "io_redirect_dsf" on the corresponding persistent file will complete the move and restore the old legacy/persistent name pair.

If the customer used a persistent device file (e.g. /dev/disk/disk5) in his LVM configuration, you have to make sure that the new disk uses the same persistent device file again. If you don't care about the newly created legacy DSF and only use the persistent one, just use io_redirect_dsf(1M).

Both methods, sasmgr replace_tgt  and io_redirect_dsf, are very simple as long as there are no I/O's pending or no I/O drivers that have the special device file open for reading/writing.  This is rarely the case however, and LVM will continue to try to access that special device file waiting for the failed disk to return.

You must stop all access to the special device file first by executing “pvchange –a N” to deactivate that physical volume. 

Once the new disk is inserted, create the EFI partitions, and vgcfgrestore the LVM information.

As an alternative, you can unmirror the volume group and vgreduce the bad disk from the volume group.  Following is the procedure using pvchange and vgcfgrestore to replace a mirrored boot disk in vg00.
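Before walking through the details, here is the overall flow condensed into a dry-run sketch; every step is only echoed, never executed, and all device names are the example values from this walk-through, not values to reuse blindly:

```shell
#!/bin/sh
# plan_replacement: echo the mirrored-boot-disk replacement sequence
# described in this post (example device names, dry run only).
plan_replacement() {
    cat <<'EOF'
pvchange -a N /dev/disk/disk3_p2                # 1. detach the failed PV
sasmgr set_attr ... locate_led=on               # 2. light the bay LED, swap the disk
insf -e -C disk                                 # 3. create DSFs for the new disk
idisk -wf /tmp/partitionfile /dev/rdisk/disk5   # 4. recreate the EFI partitions
sasmgr replace_tgt / io_redirect_dsf            # 5. move the old DSF names to the new disk
mkboot -e -l /dev/rdisk/disk3                   # 6. rebuild the EFI/LIF boot areas
vgcfgrestore -n /dev/vg00 /dev/rdisk/disk3_p2   # 7. restore the LVM metadata
pvchange -a y /dev/disk/disk3_p2                # 8. reattach; mirrors resync
EOF
}
```

Each numbered line corresponds to a step explained below.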

1.  Check which disk has failed and which devicefiles are in use

# vgdisplay -v vg00
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write
VG Status                   available
………
   --- Logical volumes ---
   LV Name                     /dev/vg00/lvol1
   LV Status                   available/stale
       ………
   LV Name                     /dev/vg00/lvol8
   LV Status                   available/stale
   LV Size (Mbytes)            5696
   Current LE                  712
   Allocated PE                712
   Used PV                     2
   --- Physical volumes ---
   PV Name                     /dev/dsk/disk2_p2
   PV Status                   available
   Total PE                    4228
   Free PE                     1224
   Autoswitch                  On

   PV Name                     /dev/dsk/disk3_p2
   PV Status                   unavailable
   Total PE                    4228
   Free PE                     1224
   Autoswitch                  On


In this case, the disk using the persistent device file /dev/dsk/disk3_p2  is unavailable.

# ioscan -m lun
Class     I  Lun H/W Path  Driver  S/W State   H/W Type     Health  Description
======================================================================
disk      2  64000/0xfa00/0x0   esdisk  CLAIMED     DEVICE       online  HP    
             0/2/1/0.0x5000c50008210fa5.0x0
                      /dev/disk/disk2      /dev/rdisk/disk2  
                      /dev/disk/disk2_p1   /dev/rdisk/disk2_p1
                      /dev/disk/disk2_p2   /dev/rdisk/disk2_p2
                      /dev/disk/disk2_p3   /dev/rdisk/disk2_p3
disk      3  64000/0xfa00/0x1   esdisk  NO_HW     DEVICE       online  HP    
             0/2/1/0.0x5000c5000820ee41.0x0
                      /dev/disk/disk3      /dev/rdisk/disk3
                      /dev/disk/disk3_p1   /dev/rdisk/disk3_p1
                      /dev/disk/disk3_p2   /dev/rdisk/disk3_p2
                      /dev/disk/disk3_p3   /dev/rdisk/disk3_p3

But also make a note of the failed disk's legacy device files; you will need them for the sasmgr commands.

# ioscan -fnH 0/2
Class        I  H/W Path       Driver    S/W State   H/W Type     Description
==============================================================================
ba           2  0/2            lba         CLAIMED     BUS_NEXUS    Local PCI-X)
escsi_ctlr   0  0/2/1/0        sasd        CLAIMED     INTERFACE    HP  PCI/PCIr
                              /dev/sasd0
ext_bus      0  0/2/1/0.0.0    sasd_vbus   CLAIMED     INTERFACE    SAS Device e
target       0  0/2/1/0.0.0.0  tgt         CLAIMED     DEVICE      
disk         4  0/2/1/0.0.0.0.0  sdisk       CLAIMED     DEVICE       HP      D4
                              /dev/dsk/c0t0d0     /dev/rdsk/c0t0d0 
                              /dev/dsk/c0t0d0s1   /dev/rdsk/c0t0d0s1
                              /dev/dsk/c0t0d0s2   /dev/rdsk/c0t0d0s2
                              /dev/dsk/c0t0d0s3   /dev/rdsk/c0t0d0s3
target       1  0/2/1/0.0.0.1  tgt         NO_HW       DEVICE      
disk         0  0/2/1/0.0.0.1.0  sdisk       NO_HW       DEVICE       HP      D4
                              /dev/dsk/c0t1d0     /dev/rdsk/c0t1d0 
                              /dev/dsk/c0t1d0s1   /dev/rdsk/c0t1d0s1
                              /dev/dsk/c0t1d0s2   /dev/rdsk/c0t1d0s2
                              /dev/dsk/c0t1d0s3   /dev/rdsk/c0t1d0s3
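
Spotting the failed target in a long `ioscan -fn` listing can be scripted. A sketch of my own, assuming the column positions shown above (class in column 1, H/W path in column 3, S/W state in column 5):

```shell
#!/bin/sh
# failed_disk_paths: read `ioscan -fn` output on stdin and print the
# hardware path of every disk whose S/W state is NO_HW.
failed_disk_paths() {
    awk '$1 == "disk" && $5 == "NO_HW" { print $3 }'
}
```

Typical use: ioscan -fnH 0/2 | failed_disk_paths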

Is the disk c0t1d0 (alias disk3) an IR volume or a JBOD? (Look for a disk WWID ending in ...ee41.)

# sasmgr get_info -D /dev/sasd0 -q raid
Thu Dec 14 14:59:28 2006
---------- PHYSICAL DRIVES ----------
LUN dsf              SAS Address          Enclosure    Bay      Size(MB)

/dev/rdsk/c0t0d0     0x5000c50008210fa5     1            1      140014   
/dev/rdsk/c0t1d0     0x5000c5000820ee41     1            2      140014   
 

The SAS controller still sees the disk and has no RAIDs configured. We have a JBOD disk that, when replaced with a new one, will result in new legacy and persistent device files.


                        Failed disk                      New disk
Type                    SAS-JBOD                         SAS-JBOD
HW Path                 0/2/1/0.0.0.1.0                  T.B.D.
LunPath                 0/2/1/0.0x5000c5000820ee41.0x0   T.B.D.
Legacy Devicefile       /dev/dsk/c0t1d0                  T.B.D.
Persistent Devicefile   /dev/disk/disk3                  T.B.D.

2.  Halt LVM access to the defective disk (be sure to use a capital N with pvchange):

# pvchange -a N /dev/disk/disk3_p2
Warning: Detaching a physical volume reduces the availability of data
within the logical volumes residing on that disk.
Prior to detaching a physical volume or the last available path to it,
verify that there are alternate copies of the data
available on other disks in the volume group.
If necessary, use pvchange(1M) to reverse this operation.
Physical volume "/dev/disk/disk3_p2" has been successfully changed.

3. Turn on the locator LED for the disk in bay 2 to ensure the correct disk is removed:
# sasmgr set_attr -D /dev/sasd0 -q lun=/dev/rdisk/disk3 -q locate_led=on
Locate LED set to ON.
# sasmgr get_info -D /dev/sasd0 -q lun=all -q lun_locate
/dev/rdsk/c0t0d0          0/2/1/0.0.0.0.0           1     1     OFF
/dev/rdsk/c0t1d0          0/2/1/0.0.0.1.0           1     2     ON
 
4. Replace the Disk
At this point the disk in bay 2 is pulled out of the server.
A new disk is inserted in the same bay. The server should not be rebooted or taken down between the time the disk fails and the time the new disk is inserted.
In the ioscan output below, the failed disk is in state NO_HW, its mirror is still operating (CLAIMED and untouched), and the new replacement disk has been assigned a new target ID "2" by the SAS controller.
5. Check for newly created device files
In this case the customer uses persistent device files, so we don't care about the new legacy device file, but instead check which new persistent device the system has created for it:
# ioscan -fnH 0/2
Class        I  H/W Path       Driver    S/W State   H/W Type     Description
==============================================================================
ba           2  0/2            lba         CLAIMED     BUS_NEXUS    Local PCI-X
Bus Adapter (122e)
escsi_ctlr   0  0/2/1/0        sasd        CLAIMED     INTERFACE    HP  PCI/PCI-
X SAS MPT Adapter
                              /dev/sasd0
ext_bus      0  0/2/1/0.0.0    sasd_vbus   CLAIMED     INTERFACE    SAS Device I
nterface
target       1  0/2/1/0.0.0.0  tgt         CLAIMED     DEVICE      
disk         1  0/2/1/0.0.0.0.0  sdisk       CLAIMED     DEVICE       HP      DG
146ABAB4
                              /dev/dsk/c0t0d0     /dev/rdsk/c0t0d0 
                              /dev/dsk/c0t0d0s1   /dev/rdsk/c0t0d0s1
                              /dev/dsk/c0t0d0s2   /dev/rdsk/c0t0d0s2
                              /dev/dsk/c0t0d0s3   /dev/rdsk/c0t0d0s3
target       0  0/2/1/0.0.0.1    tgt         NO_HW     DEVICE      
disk         0  0/2/1/0.0.0.1.0  sdisk       NO_HW     DEVICE       HP      DG
146ABAB4
                              /dev/dsk/c0t1d0     /dev/rdsk/c0t1d0 
                              /dev/dsk/c0t1d0s1   /dev/rdsk/c0t1d0s1
                              /dev/dsk/c0t1d0s2   /dev/rdsk/c0t1d0s2
                              /dev/dsk/c0t1d0s3   /dev/rdsk/c0t1d0s3
target       1  0/2/1/0.0.2.0    tgt         CLAIMED     DEVICE      
disk         1  0/2/1/0.0.2.0.0  sdisk       CLAIMED     DEVICE       HP      DG
146ABAB4
                              /dev/dsk/c0t2d0     /dev/rdsk/c0t2d0 

# sasmgr get_info -D /dev/sasd0 -q raid
Thu Dec 14 14:59:28 2006
---------- PHYSICAL DRIVES ----------
LUN dsf              SAS Address          Enclosure    Bay      Size(MB)

/dev/rdsk/c0t0d0     0x5000c50008210fa5     1            1      140014   
/dev/rdsk/c0t2d0     0x5000cca000101799     1            2      140014   



# ioscan -m dsf
Persistent DSF           Legacy DSF(s)
========================================
/dev/rdisk/disk2         /dev/rdsk/c0t0d0
/dev/rdisk/disk2_p1      /dev/rdsk/c0t0d0s1
/dev/rdisk/disk2_p2      /dev/rdsk/c0t0d0s2
/dev/rdisk/disk2_p3      /dev/rdsk/c0t0d0s3
/dev/rdisk/disk3         /dev/rdsk/c0t1d0
/dev/rdisk/disk3_p1      /dev/rdsk/c0t1d0s1
/dev/rdisk/disk3_p2      /dev/rdsk/c0t1d0s2
/dev/rdisk/disk3_p3      /dev/rdsk/c0t1d0s3
/dev/rdisk/disk5         /dev/rdsk/c0t2d0       <-  new disk !

# ioscan -m lun
Class     I  Lun H/W Path  Driver  S/W State   H/W Type     Health  Description
======================================================================
disk      2  64000/0xfa00/0x0   esdisk  CLAIMED   DEVICE       online  HP    
             0/2/1/0.0x5000c50008210fa5.0x0
                      /dev/disk/disk2      /dev/rdisk/disk2  
                      /dev/disk/disk2_p1   /dev/rdisk/disk2_p1
                      /dev/disk/disk2_p2   /dev/rdisk/disk2_p2
                      /dev/disk/disk2_p3   /dev/rdisk/disk2_p3
disk      3  64000/0xfa00/0x1   esdisk  NO_HW     DEVICE       online  HP    
             0/2/1/0.0x5000c5000820ee41.0x0
                      /dev/disk/disk3      /dev/rdisk/disk3  
                      /dev/disk/disk3_p1   /dev/rdisk/disk3_p1
                      /dev/disk/disk3_p2   /dev/rdisk/disk3_p2
                      /dev/disk/disk3_p3   /dev/rdisk/disk3_p3
disk      5  64000/0xfa00/0x2   esdisk  CLAIMED   DEVICE       online  HP    
             0/2/1/0.0x5000cca000101799.0x0
                      /dev/disk/disk5      /dev/rdisk/disk5  

# diskinfo /dev/disk/disk5
diskinfo: Character device required
# diskinfo /dev/rdisk/disk3
SCSI describe of /dev/rdisk/disk3:
             vendor: HP     
         product id: DG146ABAB4     
               type: direct access
               size: 143374744 Kbytes
   bytes per sector: 512

Collect your data now:

                        Failed disk                      New disk
Type                    SAS-JBOD                         SAS-JBOD
HW Path                 0/2/1/0.0.0.1.0                  0/2/1/0.0.0.2.0
LunPath                 0/2/1/0.0x5000c5000820ee41.0x0   0/2/1/0.0x5000cca000101799.0x0
Legacy Devicefile       /dev/dsk/c0t1d0                  /dev/dsk/c0t2d0
Persistent Devicefile   /dev/disk/disk3                  /dev/disk/disk5

6. Stop the LED (now using the new legacy device file)
# sasmgr get_info -D /dev/sasd0 -q lun=all -q lun_locate
/dev/rdsk/c0t0d0          0/2/1/0.0.0.0.0           1     1     OFF
/dev/rdsk/c0t2d0          0/2/1/0.0.0.2.0           1     2     ON 
# sasmgr set_attr -D /dev/sasd0 -q lun=/dev/rdsk/c0t2d0 -q locate_led=off

7. Restore the IA64 partitioning scheme on the new boot disk
Note: Because the tools that retain the old device files after a disk replacement only allow replacing a disk with one that has an identical number of device files, you now have to make sure that the new disk has the same partitioning scheme as the failed one.
If you try to move disk5 to the old disk3 name before partitioning, you get an error message:
Example:
# io_redirect_dsf -d /dev/rdisk/disk3 -n /dev/rdisk/disk5
Number of old DSFs=8.
Number of new DSFs=2.
The number of old and new DSFs must be the same.
Be aware that you are using the newly created device files at this point.
- Create a partition description file:

# vi /tmp/partitionfile
3
EFI 500MB
HPUX 100%
HPSP 400MB

Use the idisk(1M) command to partition the disk according to this file:

# idisk -wf /tmp/partitionfile /dev/rdisk/disk5
idisk version: 1.2
********************** WARNING ***********************
If you continue you may destroy all data on this disk.
Do you wish to continue(yes/no)? yes

Create the new device files for the new partitions (disk5_p1, _p2, _p3):

# insf -e -C disk
# ioscan -m lun
Class     I  Lun H/W Path  Driver  S/W State   H/W Type     Health  Description
======================================================================
disk      2  64000/0xfa00/0x0   esdisk  CLAIMED   DEVICE       online  HP    
             0/2/1/0.0x5000c50008210fa5.0x0
                      /dev/disk/disk2      /dev/rdisk/disk2  
                      /dev/disk/disk2_p1   /dev/rdisk/disk2_p1
                      /dev/disk/disk2_p2   /dev/rdisk/disk2_p2
                      /dev/disk/disk2_p3   /dev/rdisk/disk2_p3
disk      3  64000/0xfa00/0x1   esdisk  NO_HW     DEVICE       online  HP    
             0/2/1/0.0x5000c5000820ee41.0x0
                      /dev/disk/disk3      /dev/rdisk/disk3  
                      /dev/disk/disk3_p1   /dev/rdisk/disk3_p1
                      /dev/disk/disk3_p2   /dev/rdisk/disk3_p2
                      /dev/disk/disk3_p3   /dev/rdisk/disk3_p3
disk      5  64000/0xfa00/0x2   esdisk  CLAIMED   DEVICE       online  HP    
             0/2/1/0.0x5000cca000101799.0x0
                      /dev/disk/disk5      /dev/rdisk/disk5  
                      /dev/disk/disk5_p1   /dev/rdisk/disk5_p1
                      /dev/disk/disk5_p2   /dev/rdisk/disk5_p2
                      /dev/disk/disk5_p3   /dev/rdisk/disk5_p3
# ioscan -fnH 0/2
Class        I  H/W Path       Driver    S/W State   H/W Type     Description
==============================================================================
ba           2  0/2            lba         CLAIMED     BUS_NEXUS    Local PCI-X
escsi_ctlr   0  0/2/1/0        sasd        CLAIMED     INTERFACE    HP  PCI/PCI-X SAS MPT Adapter
                              /dev/sasd0
ext_bus      0  0/2/1/0.0.0    sasd_vbus   CLAIMED     INTERFACE    SAS Device I
nterface
target       1  0/2/1/0.0.0.0  tgt         CLAIMED     DEVICE      
disk         1  0/2/1/0.0.0.0.0  sdisk       CLAIMED     DEVICE       HP      DG
146ABAB4
                              /dev/dsk/c0t0d0     /dev/rdsk/c0t0d0 
                              /dev/dsk/c0t0d0s1   /dev/rdsk/c0t0d0s1
                              /dev/dsk/c0t0d0s2   /dev/rdsk/c0t0d0s2
                              /dev/dsk/c0t0d0s3   /dev/rdsk/c0t0d0s3
target       0  0/2/1/0.0.0.1    tgt         NO_HW     DEVICE      
disk         0  0/2/1/0.0.0.1.0  sdisk       NO_HW     DEVICE       HP      DG
146ABAB4
                              /dev/dsk/c0t1d0     /dev/rdsk/c0t1d0 
                              /dev/dsk/c0t1d0s1   /dev/rdsk/c0t1d0s1
                              /dev/dsk/c0t1d0s2   /dev/rdsk/c0t1d0s2
                              /dev/dsk/c0t1d0s3   /dev/rdsk/c0t1d0s3
target       1  0/2/1/0.0.2.0    tgt         CLAIMED     DEVICE      
disk         1  0/2/1/0.0.2.0.0  sdisk       CLAIMED     DEVICE       HP      DG
146ABAB4
                              /dev/dsk/c0t2d0     /dev/rdsk/c0t2d0 
                              /dev/dsk/c0t2d0s1   /dev/rdsk/c0t2d0s1
                              /dev/dsk/c0t2d0s2   /dev/rdsk/c0t2d0s2
                              /dev/dsk/c0t2d0s3   /dev/rdsk/c0t2d0s3

# ioscan -m dsf
Persistent DSF           Legacy DSF(s)
========================================
/dev/rdisk/disk2         /dev/rdsk/c0t0d0
/dev/rdisk/disk2_p1      /dev/rdsk/c0t0d0s1
/dev/rdisk/disk2_p2      /dev/rdsk/c0t0d0s2
/dev/rdisk/disk2_p3      /dev/rdsk/c0t0d0s3
/dev/rdisk/disk3         /dev/rdsk/c0t1d0       <-  failed disk !
/dev/rdisk/disk3_p1      /dev/rdsk/c0t1d0s1
/dev/rdisk/disk3_p2      /dev/rdsk/c0t1d0s2
/dev/rdisk/disk3_p3      /dev/rdsk/c0t1d0s3
/dev/rdisk/disk5         /dev/rdsk/c0t2d0       <-  new disk !
/dev/rdisk/disk5_p1      /dev/rdsk/c0t2d0s1
/dev/rdisk/disk5_p2      /dev/rdsk/c0t2d0s2
/dev/rdisk/disk5_p3      /dev/rdsk/c0t2d0s3
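The legacy-to-persistent pairing can be looked up mechanically from this two-column `ioscan -m dsf` listing. A small helper of my own:

```shell
#!/bin/sh
# persistent_for_legacy <legacy-dsf>: read `ioscan -m dsf` output on
# stdin and print the persistent DSF paired with the given legacy DSF.
persistent_for_legacy() {
    legacy=$1
    awk -v l="$legacy" '$2 == l { print $1 }'
}
```

Typical use: ioscan -m dsf | persistent_for_legacy /dev/rdsk/c0t2d0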


8.  Redirect the I/O from the new device to the old legacy device file:

# sasmgr replace_tgt -D /dev/sasd0 -q old_dev=/dev/dsk/c0t1d0 -q new_tgt_hwpath=0/2/1/0.0.0.2.0

WARNING: This is a DESTRUCTIVE operation.
No CRA is done by the driver for this command.
Please ensure that there are no outstanding I/O requests
on the old_dev and the LUN's attached to the
new_tgt_hwpath before proceeding with this command.
Do you want to continue ?(y/n) [n]...
ERROR: Unable to remove mapping of the persistent LUN with the legacy dsf.
Possibly the LUN is open. Please execute scsimgr replace_leg_dsf command
Please see the scsimgr(1M) manpage for more details.
What happened? Since there is still an active mapping from "c0t2d0" to the persistent DSF "disk5", the system complains that you renamed one part of it. But it has already deleted the new legacy DSFs:
# ioscan -fnH 0/2
Class        I  H/W Path       Driver    S/W State   H/W Type     Description
==============================================================================
ba           2  0/2            lba         CLAIMED     BUS_NEXUS    Local PCI-X
escsi_ctlr   0  0/2/1/0        sasd        CLAIMED     INTERFACE    HP  PCI/PCI-X SAS MPT Adapter
                              /dev/sasd0
ext_bus      0  0/2/1/0.0.0    sasd_vbus   CLAIMED     INTERFACE    SAS Device I
nterface
target       1  0/2/1/0.0.0.0  tgt         CLAIMED     DEVICE      
disk         1  0/2/1/0.0.0.0.0  sdisk       CLAIMED     DEVICE       HP      DG
146ABAB4
                              /dev/dsk/c0t0d0     /dev/rdsk/c0t0d0 
                              /dev/dsk/c0t0d0s1   /dev/rdsk/c0t0d0s1
                              /dev/dsk/c0t0d0s2   /dev/rdsk/c0t0d0s2
                              /dev/dsk/c0t0d0s3   /dev/rdsk/c0t0d0s3
target       0  0/2/1/0.0.0.1    tgt         NO_HW     DEVICE      
disk         0  0/2/1/0.0.0.1.0  sdisk       NO_HW     DEVICE       HP      DG
146ABAB4
                              /dev/dsk/c0t1d0     /dev/rdsk/c0t1d0 
                              /dev/dsk/c0t1d0s1   /dev/rdsk/c0t1d0s1
                              /dev/dsk/c0t1d0s2   /dev/rdsk/c0t1d0s2
                              /dev/dsk/c0t1d0s3   /dev/rdsk/c0t1d0s3
target       1  0/2/1/0.0.2.0    tgt         NO_HW       DEVICE   deleted !
Now, let's re-animate the persistent device file as well:
# io_redirect_dsf -d /dev/disk/disk3 -n /dev/disk/disk5
- Verify it:
# ioscan -m lun
Class     I  Lun H/W Path  Driver  S/W State   H/W Type     Health  Description
======================================================================
disk      2  64000/0xfa00/0x0   esdisk  CLAIMED     DEVICE       online  HP    
             0/2/1/0.0x5000c50008210fa5.0x0
                      /dev/disk/disk2      /dev/rdisk/disk2  
                      /dev/disk/disk2_p1   /dev/rdisk/disk2_p1
                      /dev/disk/disk2_p2   /dev/rdisk/disk2_p2
                      /dev/disk/disk2_p3   /dev/rdisk/disk2_p3
disk      5  64000/0xfa00/0x2   esdisk  CLAIMED   DEVICE       online  HP    
             0/2/1/0.0x5000cca000101799.0x0
                      /dev/disk/disk3      /dev/rdisk/disk3  
                      /dev/disk/disk3_p1   /dev/rdisk/disk3_p1
                      /dev/disk/disk3_p2   /dev/rdisk/disk3_p2
                      /dev/disk/disk3_p3   /dev/rdisk/disk3_p3
# ioscan -fnH 0/2
Class        I  H/W Path       Driver    S/W State   H/W Type     Description
==============================================================================
ba           2  0/2            lba         CLAIMED     BUS_NEXUS    Local PCI-X
escsi_ctlr   0  0/2/1/0        sasd        CLAIMED     INTERFACE    HP  PCI/PCI-X SAS MPT Adapter
                              /dev/sasd0
ext_bus      0  0/2/1/0.0.0    sasd_vbus   CLAIMED     INTERFACE    SAS Device I
nterface
target       1  0/2/1/0.0.0.0  tgt         CLAIMED     DEVICE      
disk         1  0/2/1/0.0.0.0.0  sdisk       CLAIMED     DEVICE       HP      DG
146ABAB4
                              /dev/dsk/c0t0d0     /dev/rdsk/c0t0d0 
                              /dev/dsk/c0t0d0s1   /dev/rdsk/c0t0d0s1
                              /dev/dsk/c0t0d0s2   /dev/rdsk/c0t0d0s2
                              /dev/dsk/c0t0d0s3   /dev/rdsk/c0t0d0s3
target       0  0/2/1/0.0.0.1    tgt         CLAIMED     DEVICE      
disk         0  0/2/1/0.0.0.1.0  sdisk       CLAIMED     DEVICE       HP      DG
146ABAB4
                              /dev/dsk/c0t1d0     /dev/rdsk/c0t1d0 
                              /dev/dsk/c0t1d0s1   /dev/rdsk/c0t1d0s1
                              /dev/dsk/c0t1d0s2   /dev/rdsk/c0t1d0s2
                              /dev/dsk/c0t1d0s3   /dev/rdsk/c0t1d0s3
target       1  0/2/1/0.0.2.0    tgt         NO_HW       DEVICE   deleted !
# ioscan -m dsf
Persistent DSF           Legacy DSF(s)
========================================
/dev/rdisk/disk2         /dev/rdsk/c0t0d0
/dev/rdisk/disk2_p1      /dev/rdsk/c0t0d0s1
/dev/rdisk/disk2_p2      /dev/rdsk/c0t0d0s2
/dev/rdisk/disk2_p3      /dev/rdsk/c0t0d0s3
/dev/rdisk/disk3         /dev/rdsk/c0t1d0
/dev/rdisk/disk3_p1      /dev/rdsk/c0t1d0s1
/dev/rdisk/disk3_p2      /dev/rdsk/c0t1d0s2
/dev/rdisk/disk3_p3      /dev/rdsk/c0t1d0s3

Bingo! The old DSFs "c0t1d0" and "disk3" are operational again!

9. Initialize the EFI FAT Partition and fill boot areas:

Use efi_fsinit(1M) to initialize the FAT filesystem on the EFI partition:

# efi_fsinit -d /dev/rdisk/disk3_p1

NOTE: This step is not necessary if it can be guaranteed that the mirror disk does not contain a valid EFI filesystem. In that case efi_fsinit(1M) is done automatically by the subsequent mkboot(1M) command. But if you take, for example, an old HP-UX 11.22 boot disk as the mirror disk, mkboot will not automatically run efi_fsinit. As a result only 100MB of the 500MB EFI partition (s1) can be used.

Use mkboot(1M) to format the EFI partition (s1) and populate it with the EFI files below /usr/lib/efi/, and to format the LIF volume (part of s2) and populate it with the LIF files (ISL, AUTO, HPUX, LABEL) below /usr/lib/uxbootlf:

# mkboot -e -l /dev/rdisk/disk3
# efi_ls -d /dev/rdisk/disk3_p1       (to check EFI)

FileName      Last Modified        Size
EFI/          11/ 5/2003           0
STARTUP.NSH 11/ 5/2003            296
total space 523251712 bytes,
free space 520073216 bytes

# lifls -l /dev/rdisk/disk3_p2 (to check LIF)
volume ISL10 data size 7984 directory size 8 07/01/10 17:13:32
filename   type   start   size     implement  created
===============================================================
ISL        -12800 584     242      0          07/01/10 17:13:32
AUTO       -12289 832     1        0          07/01/10 17:13:32
HPUX       -12928 840     1024     0          07/01/10 17:13:32
PAD        -12290 1864    1468     0          07/01/10 17:13:33
LABEL      BIN    3336    8        0          08/08/26 20:07:24

Check the content of the AUTO file on the original boot disk's EFI partition:

# efi_cp -d /dev/rdisk/disk2_p1 -u /EFI/HPUX/AUTO /tmp/x
# cat /tmp/x
boot vmunix

NOTE: Specify the -lq option if you prefer that your system boots up without interruption in case of a disk failure:

on the original boot disk:

# mkboot -a "boot vmunix -lq" /dev/rdisk/disk2

and on the replaced new mirror disk:

# mkboot -a "boot vmunix -lq" /dev/rdisk/disk3

Copy the HP service partition (skip this, if you don't have a service partition)

# dd if=/dev/rdisk/disk2_p3 of=/dev/rdisk/disk3_p3 bs=1024k

10. Restore LVM Configuration
Now that the new disk is partitioned and equipped with boot headers, you can restore the LVM data to the OS partition "p2":

# vgcfgrestore -n /dev/vg00 /dev/rdisk/disk3_p2
Restore LVM access to the disk.
If you did not reboot the system in Step 2 (see http://docs.hp.com/en/5992-3385/ch04s08.html), reattach the disk as follows:
# pvchange -a y /dev/disk/disk3_p2
If you did reboot the system, reattach the disk by reactivating the volume group as follows:
# vgchange -a y /dev/vg00
NOTE: The vgchange command with the -a y option can be run on a volume group that is deactivated or already activated. It attaches all paths for all disks in the volume group and resumes automatically recovering any disks in the volume group that had been offline or any disks in the volume group that were replaced. Therefore, run vgchange only after all work has been completed on all disks and paths in the volume group, and it is necessary to attach them all.
        
Synchronize the volume group data (only if the sync does not start automatically):

# cd /tmp
# nohup vgsync /dev/vg00 &      (output see /tmp/nohup.out)
Initialize/check the boot information on the disk.
Check whether the content of the LABEL file (i.e. the root, boot, swap and dump device definitions) has been initialized (done by lvextend) on the mirror disk:

# lvlnboot -v
Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
        /dev/disk/disk2_p2 -- Boot Disk
        /dev/disk/disk3_p2 -- Boot Disk
Boot: lvol1     on:     /dev/disk/disk2_p2
                        /dev/disk/disk3_p2
Root: lvol3     on:     /dev/disk/disk2_p2
                        /dev/disk/disk3_p2
Swap: lvol2     on:     /dev/disk/disk2_p2
                        /dev/disk/disk3_p2
Dump: lvol2     on:     /dev/disk/disk2_p2, 0

If not, then set it:

# lvlnboot -r /dev/vg00/lvol3
# lvlnboot -b /dev/vg00/lvol1
# lvlnboot -s /dev/vg00/lvol2
# lvlnboot -d /dev/vg00/lvol2
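To sanity-check the result, you can count how many physical volumes lvlnboot -v reports as boot disks; after a successful rebuild of a two-way boot mirror this should be 2. A minimal sketch, assuming the output format shown above:

```shell
#!/bin/sh
# count_boot_disks: read `lvlnboot -v` output on stdin and count the
# physical volumes flagged as "-- Boot Disk".
count_boot_disks() {
    awk '/-- Boot Disk/ { n++ } END { print n + 0 }'
}
```

Typical use: lvlnboot -v | count_boot_disks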

Update the EFI boot configuration to recognize the new disk ID:

# setboot
Primary bootpath      : 0/2/1/0.0x5000c50008210fa5.0x0 (/dev/rdisk/disk2)
HA Alternate bootpath : 0/1/1/1 (LAN Interface)
Alternate bootpath    : 0/1/1/1 (LAN Interface)

Autoboot is ON (enabled)
Hyperthreading : ON
               : ON (next boot)
# setboot -h /dev/disk/disk3     (HA alternate)
# setboot -a /dev/disk/disk3     (alternate)
# setboot     (to check it)
# setboot
Primary bootpath      : 0/2/1/0.0x5000c50008210fa5.0x0 (/dev/rdisk/disk2)
HA Alternate bootpath : 0/2/1/0.0x5000cca000101799.0x0 (/dev/rdisk/disk3)
Alternate bootpath    : 0/2/1/0.0x5000cca000101799.0x0 (/dev/rdisk/disk3)

Autoboot is ON (enabled)
Hyperthreading : ON
               : ON (next boot)

This will write new entries into the EFI Boot Menu in the order:

       - Primary Path
       - HA Alternate Path
       - Alternate Path