Saturday, July 30, 2016

How to add a node to a (CVM) SFCFS Cluster



We have an existing two-node CVM cluster consisting of lab1 and lab2. As part of this exercise, we will add a third node, lab3, to the existing cluster.



1.      Make sure that you meet the following requirements:
·         The node must be connected to the same shared storage devices as the existing nodes.
·         The node must have private network connections to two independent switches for the cluster.
For more information, see the Veritas Cluster Server Installation Guide.
·         The network interface names used for the private interconnects on the new node must be the same as those of the existing nodes in the cluster.
2.      Install SFCFS on the new system.
3.      Update /etc/hosts on all nodes so that every node can resolve the others.
4.      Set up passwordless SSH between the new node and the existing nodes (a sketch follows this list).
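A minimal sketch of the key exchange with OpenSSH, assuming root-to-root SSH is acceptable in your environment (ssh-copy-id appends the public key to the remote node's authorized_keys):

# ssh-keygen -t rsa
# ssh-copy-id root@lab1
# ssh-copy-id root@lab2

Run this on lab3, then repeat ssh-copy-id root@lab3 from lab1 and lab2 so that every node can reach every other node without a password.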

Starting Volume Manager on the new node
Volume Manager uses license keys to control access. As you run the vxinstall utility, answer n to prompts about licensing. You installed the appropriate license when you ran the installsfcfs program.
To start Volume Manager on the new node
1.        To start Veritas Volume Manager on the new node, use the vxinstall utility:
# vxinstall
2.      Enter n when prompted to set up a system-wide disk group for the system.
The installation completes.
3.        Verify that the daemons are up and running. Enter the command:
# vxdisk list
Make sure the output displays the shared disks without errors.
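As an optional extra check from the new node (not part of the original procedure), vxdisk can also list all disk groups, including shared ones not yet imported locally; disk group names shown in parentheses are visible but not imported:

# vxdisk -o alldgs list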


Cluster status before adding the new node:
[root@lab1 /]# hastatus -summ

-- SYSTEM STATE
-- System               State                Frozen

A  lab1                 RUNNING              0
A  lab2                 RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  app_sg          lab1                 Y          N               ONLINE
B  app_sg          lab2                 Y          N               ONLINE
B  cvm             lab1                 Y          N               ONLINE
B  cvm             lab2                 Y          N               ONLINE


[root@lab1 /]# cfscluster status

  Node             :  lab1
  Cluster Manager  :  running
  CVM state        :  running
  MOUNT POINT    SHARED VOLUME  DISK GROUP        STATUS
  /app/test      appvol01       appdg             MOUNTED
  /app/test1     appvol02       appdg             MOUNTED

  Node             :  lab2
  Cluster Manager  :  running
  CVM state        :  running
  MOUNT POINT    SHARED VOLUME  DISK GROUP        STATUS
  /app/test      appvol01       appdg             MOUNTED
  /app/test1     appvol02       appdg             MOUNTED


Configuring LLT and GAB on the new node (lab3)
Perform the steps in the following procedure to configure LLT and GAB on the new node.
To configure LLT and GAB on the new node (lab3)
1.      Edit the /etc/llthosts file on the existing nodes. Using vi or another text editor, add the line for the new node to the file. The file resembles:
[root@lab1 /]# cat /etc/llthosts
0 lab1
1 lab2
[root@lab1 /]# vi /etc/llthosts
[root@lab1 /]# cat /etc/llthosts
0 lab1
1 lab2
2 lab3

[root@lab2 ~]# cat /etc/llthosts
0 lab1
1 lab2
[root@lab2 ~]# vi /etc/llthosts
[root@lab2 ~]# cat /etc/llthosts
0 lab1
1 lab2
2 lab3


2.      Copy the /etc/llthosts file from one of the existing systems over to the new system. The /etc/llthosts file must be identical on all nodes in the cluster.
[root@lab3 ~]# cat /etc/llthosts
0 lab1
1 lab2
2 lab3

3.      Create an /etc/llttab file on the new system. For example (this generic example from the Veritas documentation uses AIX-style device names; the link syntax is platform-specific, as the Linux file below shows):
set-node saturn
set-cluster 101
link en1 /dev/dlpi/en:1 - ether - -
link en2 /dev/dlpi/en:2 - ether - -
Except for the first line, which refers to the node, the file resembles the /etc/llttab files on the existing nodes. The second line, the cluster ID, must be the same as on the existing nodes.
[root@lab3 ~]# cat /etc/llttab
set-node lab3
set-cluster 28805
link eth4 eth-00:0C:29:22:2D:D7 - ether - -
link eth5 eth-00:0C:29:22:2D:CD - ether - -
link-lowpri eth3 eth-00:0C:29:22:2D:C3 - ether - -
4.      Use vi or another text editor to create the file /etc/gabtab on the new node. This file must contain a line that resembles the following example:
/sbin/gabconfig -c -nN
where N represents the number of systems in the cluster. For a three-system cluster, N equals 3.
[root@lab3 ~]# cat /etc/gabtab
/sbin/gabconfig -c -n3

5.      Edit the /etc/gabtab file on each of the existing systems, changing the content to match the file on the new system.
[root@lab1 /]# cat /etc/gabtab
/sbin/gabconfig -c -n2
[root@lab1 /]# vi /etc/gabtab
[root@lab1 /]# cat /etc/gabtab
/sbin/gabconfig -c -n3

[root@lab2 ~]# cat /etc/gabtab
/sbin/gabconfig -c -n2
[root@lab2 ~]# vi /etc/gabtab
[root@lab2 ~]# cat /etc/gabtab
/sbin/gabconfig -c -n3
6.        Use vi or another text editor to create the file /etc/VRTSvcs/conf/sysname on the new node. This file must contain the name of the new node being added to the cluster.
For example:
lab3
[root@lab3 ~]# cat /etc/VRTSvcs/conf/sysname
lab3
7.     Create the Universally Unique Identifier file /etc/vx/.uuids/clusuuid on the new node by copying it from one of the existing nodes, for example lab1:
# uuidconfig.pl -rsh -clus -copy -from_sys lab1 -to_sys lab3

8.        Start the LLT, GAB, and ODM drivers on the new node (Linux init scripts; the ODM script name can vary by release):
# /etc/init.d/llt start
# /etc/init.d/gab start
# /etc/init.d/vxodm start
[root@lab3 ~]# /etc/init.d/llt start
Starting LLT:
LLT: loading module...
Loaded 2.6.32-431.el6.x86_64 on kernel 2.6.32-431.el6.x86_64
LLT: configuring module...

[root@lab3 ~]# /etc/init.d/gab start
Starting GAB:
Loaded 2.6.32-431.el6.x86_64 on kernel 2.6.32-431.el6.x86_64
Started gablogd
gablogd: Keeping 20 log files of 8388608 bytes each in |/var/log/gab_ffdc| directory. Daemon log size limit 8388608 bytes

9.        On the new node, check the GAB port memberships. Additional ports (b, d, h, v, w) join as I/O fencing, ODM, VCS, and CVM/CFS come online; right after starting GAB, only port a (GAB itself) may show membership, as in the lab output further below:
# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen df204 membership 012
Port b gen df20a membership 012
Port d gen df207 membership 012
Port h gen df207 membership 012



[root@lab3 ~]# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen    4d703 membership 012
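
You can also confirm the private links from the new node; lltstat reports each configured link and its state (a quick optional check):

# lltstat -nvv | more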

Configuring CVM and CFS on the new node

Modify the existing cluster configuration to configure CVM and CFS for the new node.
1.      Make a backup copy of the main.cf file, if not backed up in previous procedures. For example:
# cd /etc/VRTSvcs/conf/config
# cp main.cf main.cf.2node
[root@lab1 /]# cd /etc/VRTSvcs/conf/config
[root@lab1 config]# cp -p main.cf main.cf.2node
2.      On one of the nodes in the existing cluster, set the cluster configuration to read-write mode:
# haconf -makerw
[root@lab1 /]# haconf -makerw
3.      Add the new node to the VCS configuration, if not already added:
# hasys -add lab3
[root@lab1 /]# hasys -add lab3
4.      To enable the existing cluster to recognize the new node, run the following commands on one of the existing nodes:
# hagrp -modify cvm SystemList -add lab3 2
# hagrp -modify cvm AutoStartList -add lab3
# hares -modify cvm_clus CVMNodeId -add lab3 2
# haconf -dump -makero
# /etc/vx/bin/vxclustadm -m vcs reinit
# /etc/vx/bin/vxclustadm nidmap

[root@lab1 /]# hagrp -modify cvm SystemList -add lab3 2
[root@lab1 /]# hagrp -modify cvm AutoStartList -add lab3
[root@lab1 /]# hares -modify cvm_clus CVMNodeId -add lab3 2

[root@lab1 /]# hagrp -modify app_sg SystemList -add lab3 2
[root@lab1 /]# hagrp -modify app_sg AutoStartList -add lab3
[root@lab1 /]# hares -modify cfsmount1 NodeList -add lab3
[root@lab1 /]# hares -modify cfsmount2 NodeList -add lab3

[root@lab1 /]# hares -list
cfsmount1                    lab1
cfsmount1                    lab2
cfsmount1                    lab3
cfsmount2                    lab1
cfsmount2                    lab2
cfsmount2                    lab3
cvm_clus                     lab1
cvm_clus                     lab2
cvm_clus                     lab3
cvm_vxconfigd                lab1
cvm_vxconfigd                lab2
cvm_vxconfigd                lab3
cvmvoldg1                    lab1
cvmvoldg1                    lab2
cvmvoldg1                    lab3
vxattachd                    lab1
vxattachd                    lab2
vxattachd                    lab3
vxfsckd                      lab1
vxfsckd                      lab2
vxfsckd                      lab3

[root@lab1 /]# /etc/vx/bin/vxclustadm -m vcs reinit
[root@lab1 /]# /etc/vx/bin/vxclustadm nidmap
Name                             CVM Nid    CM Nid     State
lab1                             1          0          Joined: Master
lab2                             0          1          Joined: Slave
lab3                             2          2          Out of Cluster

5.      On the remaining nodes of the existing cluster, run the following commands:
# /etc/vx/bin/vxclustadm -m vcs reinit
# /etc/vx/bin/vxclustadm nidmap

[root@lab2 ~]# /etc/vx/bin/vxclustadm -m vcs reinit
[root@lab2 ~]# /etc/vx/bin/vxclustadm nidmap
Name                             CVM Nid    CM Nid     State
lab1                             1          0          Joined: Master
lab2                             0          1          Joined: Slave
lab3                             2          2          Out of Cluster
6.      Copy the configuration files from one of the nodes in the existing cluster to the new node:
# rcp /etc/VRTSvcs/conf/config/main.cf \
lab3:/etc/VRTSvcs/conf/config/main.cf
# rcp /etc/VRTSvcs/conf/config/CFSTypes.cf \
lab3:/etc/VRTSvcs/conf/config/CFSTypes.cf
# rcp /etc/VRTSvcs/conf/config/CVMTypes.cf \
lab3:/etc/VRTSvcs/conf/config/CVMTypes.cf

[root@lab1 /]# scp /etc/VRTSvcs/conf/config/main.cf 192.168.1.33:/etc/VRTSvcs/conf/config/main.cf
[root@lab1 /]# scp /etc/VRTSvcs/conf/config/CFSTypes.cf 192.168.1.33:/etc/VRTSvcs/conf/config/CFSTypes.cf
[root@lab1 /]# scp /etc/VRTSvcs/conf/config/CVMTypes.cf 192.168.1.33:/etc/VRTSvcs/conf/config/CVMTypes.cf


[root@lab1 /]# hastatus -summ

-- SYSTEM STATE
-- System               State                Frozen

A  lab1                 RUNNING              0
A  lab2                 RUNNING              0
A  lab3                 UNKNOWN              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  app_sg          lab1                 Y          N               ONLINE
B  app_sg          lab2                 Y          N               ONLINE
B  app_sg          lab3                 N          N               OFFLINE
B  cvm             lab1                 Y          N               ONLINE
B  cvm             lab2                 Y          N               ONLINE
B  cvm             lab3                 N          N               OFFLINE

-- RESOURCES NOT PROBED
-- Group           Type                 Resource             System

E  app_sg          CFSMount             cfsmount1            lab3
E  app_sg          CFSMount             cfsmount2            lab3
E  app_sg          CVMVolDg             cvmvoldg1            lab3
E  cvm             CFSfsckd             vxfsckd              lab3
E  cvm             CVMCluster           cvm_clus             lab3
E  cvm             CVMVxconfigd         cvm_vxconfigd        lab3
E  cvm             ProcessOnOnly        vxattachd            lab3

Starting VCS after adding the new node

Start VCS on the new node.
1.    Start VCS on the new node:
# hastart
VCS brings the CVM group online.
2.    Verify that the CVM group is online:
# hagrp -state

[root@lab1 Desktop]# hastatus -summ

-- SYSTEM STATE
-- System               State                Frozen

A  lab1                 RUNNING              0
A  lab2                 RUNNING              0
A  lab3                 RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  app_sg          lab1                 Y          N               ONLINE
B  app_sg          lab2                 Y          N               ONLINE
B  app_sg          lab3                 Y          N               ONLINE
B  cvm             lab1                 Y          N               ONLINE
B  cvm             lab2                 Y          N               ONLINE
B  cvm             lab3                 Y          N               ONLINE


Thank you for reading.

To read other articles, visit https://sites.google.com/site/unixwikis/

Thursday, May 26, 2016

Why the ifconfig command should not be used any more in RHEL 7

You can view the IP addresses configured on any network interface with the ip addr show command.

Here you can see that the IP address 192.168.0.12 is already configured on the network interface eno16777736:

[root@rhelserver /]# ip addr  show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:3b:3c:a0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.12/24 brd 192.168.0.255 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe3b:3ca0/64 scope link
       valid_lft forever preferred_lft forever
[root@rhelserver /]#

You can check the routing information with ip route show:

[root@rhelserver /]# ip route show
default via 192.168.0.1 dev eno16777736  proto static  metric 1024
192.168.0.0/24 dev eno16777736  proto kernel  scope link  src 192.168.0.12


Now let's add one more IP address, 10.0.0.10, to the interface eno16777736:

# ip addr add dev eno16777736 10.0.0.10/24

[root@rhelserver /]# ip addr add dev eno16777736 10.0.0.10/24
[root@rhelserver /]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:3b:3c:a0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.12/24 brd 192.168.0.255 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet 10.0.0.10/24 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe3b:3ca0/64 scope link
       valid_lft forever preferred_lft forever


Both IP addresses, 192.168.0.12 and 10.0.0.10, are now configured on the interface eno16777736.

Now let's use ifconfig, as many of us are still in the habit of using it.

[root@rhelserver /]# ifconfig
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.12  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe3b:3ca0  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:3b:3c:a0  txqueuelen 1000  (Ethernet)
        RX packets 627  bytes 74074 (72.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 708  bytes 45361 (44.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 461  bytes 45712 (44.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 461  bytes 45712 (44.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


As you can see, ifconfig shows the first IP address, 192.168.0.12, but it does not display the secondary address 10.0.0.10. ifconfig predates the modern Linux addressing model and shows only one IPv4 address per interface unless the extra addresses were added as labeled aliases.


Even if you use ifconfig -a, it will not show the IP address 10.0.0.10:

[root@rhelserver /]# ifconfig -a
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.12  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe3b:3ca0  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:3b:3c:a0  txqueuelen 1000  (Ethernet)
        RX packets 627  bytes 74074 (72.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 708  bytes 45361 (44.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 461  bytes 45712 (44.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 461  bytes 45712 (44.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
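
For completeness: ifconfig only displays secondary IPv4 addresses that carry an interface label (the old alias convention). If you deleted the address and re-added it with a label, ifconfig would then list it as an alias interface (the label name here is arbitrary):

# ip addr del 10.0.0.10/24 dev eno16777736
# ip addr add 10.0.0.10/24 dev eno16777736 label eno16777736:1

ifconfig would then show an eno16777736:1 entry carrying 10.0.0.10. This is only a compatibility crutch; the ip command remains the reliable view.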


This may come as a shock to many sysadmins who are fond of the ifconfig command, but the man page of ifconfig itself says:

# man ifconfig

IFCONFIG(8)                                                                          Linux System Administrator's Manual                                                                          IFCONFIG(8)

NAME
       ifconfig - configure a network interface

SYNOPSIS
       ifconfig [-v] [-a] [-s] [interface]
       ifconfig [-v] interface [aftype] options | address ...

NOTE
       This program is obsolete!  For replacement check ip addr and ip link.  For statistics use ip -s link.

So do not use the ifconfig command any more; you may miss addresses and other network configuration present on an interface. Use the ip command instead.
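
For quick reference, the iproute2 replacements the man page points to:

# ip addr show       (addresses, primary and secondary)
# ip link show       (link state and hardware addresses)
# ip -s link         (per-interface packet and byte statistics)
# ip route show      (routing table)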


Thank you for reading.
To read other articles, visit https://sites.google.com/site/unixwikis/


Wednesday, May 25, 2016

How to check RAID Information in Linux

Have you ever tried to check, from your Linux shell, how a hardware RAID array is configured on a server? Have you ever wanted to change or modify your hardware RAID configuration without rebooting the server and without leaving your Linux shell?


If your server is HP hardware, the hpacucli utility is there to help you. hpacucli (HP Array Configuration Utility CLI) is a command-line disk configuration program for Smart Array controllers and RAID array controllers. You can download and install the hpacucli tool from the HP website.

Quick Abbreviations:

chassisname = ch
controller = ctrl
logicaldrive = ld
physicaldrive = pd
drivewritecache = dwc

As root, just type "hpacucli" and you will drop into the hpacucli command-line interface. Let me give you a quick example of what you can do with hpacucli.
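
In interactive mode, the abbreviations above keep commands short; a small illustrative session (the slot number comes from this server's controller, shown in the output below):

testnode:root:/ # hpacucli
=> ctrl all show status
=> ctrl slot=9 ld all show
=> exit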

To get quick details about the RAID controller and its health:
Syntax: ctrl all show

testnode:root:/ # hpacucli ctrl all show

Smart Array P400 in Slot 9                (sn: PAFGL0P9SWY39U)

Syntax: ctrl all show status

testnode:root:/ # hpacucli ctrl all show status

Smart Array P400 in Slot 9
   Controller Status: OK
   Cache Status: OK
   Battery/Capacitor Status: OK

To get a quick idea of how the disks are grouped and which RAID level is used:

Syntax: ctrl all show config

testnode:root:/ # hpacucli ctrl all show config

Smart Array P400 in Slot 9                (sn: PAFGL0P9SWY39U)

   array A (SAS, Unused Space: 0  MB)


      logicaldrive 1 (136.7 GB, RAID 1, OK)

      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)

   array B (SAS, Unused Space: 0  MB)


      logicaldrive 2 (136.7 GB, RAID 1, OK)

      physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK)
      physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK)

To get complete details of how RAID is configured on the server:

Syntax: ctrl all show config detail

testnode:root:/ # hpacucli ctrl all show config detail

Smart Array P400 in Slot 9
   Bus Interface: PCI
   Slot: 9
   Serial Number: PAFGL0P9SWY39U
   Cache Serial Number: PA2270J9SWXBAV
   RAID 6 (ADG) Status: Enabled
   Controller Status: OK
   Hardware Revision: E
   Firmware Version: 7.24
   Rebuild Priority: Medium
   Expand Priority: Medium
   Surface Scan Delay: 15 secs
   Surface Scan Mode: Idle
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 0 secs
   Cache Board Present: True
   Cache Status: OK
   Cache Ratio: 25% Read / 75% Write
   Drive Write Cache: Disabled
   Total Cache Size: 512 MB
   Total Cache Memory Available: 464 MB
   No-Battery Write Cache: Disabled
   Cache Backup Power Source: Batteries
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK
   SATA NCQ Supported: True

   Array: A
      Interface Type: SAS
      Unused Space: 0  MB
      Status: OK
      Array Type: Data



      Logical Drive: 1
         Size: 136.7 GB
         Fault Tolerance: 1
         Heads: 255
         Sectors Per Track: 32
         Cylinders: 35132
         Strip Size: 128 KB
         Full Stripe Size: 128 KB
         Status: OK
         Caching:  Enabled
         Unique Identifier: 600508B1001050395357593339550006
         Disk Name: /dev/cciss/c0d0
         Mount Points: /boot 196 MB
         OS Status: LOCKED
         Logical Drive Label: A09C2125PAFGL0P9SWY39U6F89
         Mirror Group 0:
            physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
         Mirror Group 1:
            physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
         Drive Type: Data

      physicaldrive 1I:1:1
         Port: 1I
         Box: 1
         Bay: 1
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 146 GB
         Rotational Speed: 10000
         Firmware Revision: HPDC
         Serial Number: 3NM8D3HA00009922LSYG
         Model: HP      DG146BB976
         Current Temperature (C): 28
         Maximum Temperature (C): 42
         PHY Count: 2
         PHY Transfer Rate: Unknown, Unknown

      physicaldrive 1I:1:2
         Port: 1I
         Box: 1
         Bay: 2
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 146 GB
         Rotational Speed: 10000
         Firmware Revision: HPDC
         Serial Number: 3NM8FY4M00009922R00K
         Model: HP      DG146BB976
         Current Temperature (C): 28
         Maximum Temperature (C): 43
         PHY Count: 2
         PHY Transfer Rate: Unknown, Unknown


   Array: B
      Interface Type: SAS
      Unused Space: 0  MB
      Status: OK
      Array Type: Data



      Logical Drive: 2
         Size: 136.7 GB
         Fault Tolerance: 1
         Heads: 255
         Sectors Per Track: 32
         Cylinders: 35132
         Strip Size: 128 KB
         Full Stripe Size: 128 KB
         Status: OK
         Caching:  Enabled
         Unique Identifier: 600508B1001050395357593339550007
         Disk Name: /dev/cciss/c0d1
         Mount Points: None
         OS Status: LOCKED
         Logical Drive Label: A09E85F0PAFGL0P9SWY39U8E5C
         Mirror Group 0:
            physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK)
         Mirror Group 1:
            physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK)
         Drive Type: Data

      physicaldrive 1I:1:3
         Port: 1I
         Box: 1
         Bay: 3
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 146 GB
         Rotational Speed: 10000
         Firmware Revision: HPDC
         Serial Number: 3NM89MV900009922LUZX
         Model: HP      DG146BB976
         Current Temperature (C): 28
         Maximum Temperature (C): 43
         PHY Count: 2
         PHY Transfer Rate: Unknown, 3.0Gbps

      physicaldrive 1I:1:4
         Port: 1I
         Box: 1
         Bay: 4
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 146 GB
         Rotational Speed: 10000
         Firmware Revision: HPDC
         Serial Number: 3NM8DAC800009922QE1N
         Model: HP      DG146BB976
         Current Temperature (C): 26
         Maximum Temperature (C): 41
         PHY Count: 2
         PHY Transfer Rate: 3.0Gbps, Unknown


Another method to check the hardware mirror status of the array:
testnode:root:/ # fdisk -l

Disk /dev/cciss/c0d0: 146.7 GB, 146778685440 bytes
255 heads, 63 sectors/track, 17844 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
/dev/cciss/c0d0p1   *           1          25      200781   83  Linux
/dev/cciss/c0d0p2              26       17844   143131117+  8e  Linux LVM

Disk /dev/cciss/c0d1: 146.7 GB, 146778685440 bytes
255 heads, 32 sectors/track, 35132 cylinders
Units = cylinders of 8160 * 512 = 4177920 bytes

           Device Boot      Start         End      Blocks   Id  System
/dev/cciss/c0d1p2               1       35132   143338544   8e  Linux LVM

testnode:root:/ # hpacucli ctrl all show config detail | grep -i /dev/cciss/c0d0
         Disk Name: /dev/cciss/c0d0

testnode:root:/ # cat /proc/driver/cciss/cciss0
cciss0: HP Smart Array P400 Controller
Board ID: 0x3234103c
Firmware Version: 7.24
IRQ: 74
Logical drives: 2
Sector size: 8192
Current Q depth: 0
Current # commands on controller: 0
Max Q depth since init: 161
Max # commands on controller since init: 343
Max SG entries since init: 128
Sequential access devices: 0

cciss/c0d0:      146.77GB       RAID 1(1+0)
cciss/c0d1:      146.77GB       RAID 1(1+0)

Always verify your version of hpacucli and refer to its README before you try to modify the configuration of RAID or Smart Array controllers.

Thank you for reading.

To read other articles, visit https://sites.google.com/site/unixwikis/

Sunday, April 10, 2016

RHEL 7 Boot Process in 4 Phases

    The Firmware Phase
    1.            The firmware runs the Power-On Self-Test (POST), checking all hardware; while doing this, it installs the appropriate driver for the video hardware and begins displaying system messages on screen.
    2.            The firmware, the BIOS or UEFI (Unified Extensible Firmware Interface) code stored in flash memory on the x86 system board, then scans the available storage devices to locate a boot device.
    3.            As soon as it discovers a usable boot device, it loads the boot loader program, GRUB2, into memory and passes control over to it.

    The GRUB Phase
    1.            After GRUB2 is loaded into memory and takes control, it searches for the kernel in the /boot file system. It extracts the kernel code from /boot into memory, decompresses it, and loads it based on the configuration defined in the /boot/grub2/grub.cfg file.
    For UEFI-based systems, GRUB2 looks in the EFI system partition mounted at /boot/efi instead, and runs the kernel based on the configuration defined in the /boot/efi/EFI/redhat/grub.cfg file.
    2.            Once the kernel is loaded, GRUB2 transfers control to it to carry the boot process further.
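
    A practical corollary: grub.cfg is generated, not hand-edited. If you change GRUB2 settings in /etc/default/grub, regenerate the configuration (BIOS path shown; on UEFI systems the output goes under /boot/efi/EFI/redhat/ instead):

    # grub2-mkconfig -o /boot/grub2/grub.cfg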

    The Kernel Phase
    1.            After getting control from GRUB2, the kernel loads the initial RAM disk (initrd) image from the /boot file system into memory, decompressing and extracting it.
    2.            The kernel then mounts this image read-only to serve as a temporary root file system. This allows the kernel to become fully functional without mounting the actual physical root file system yet.
    3.            The kernel loads the necessary modules from the initrd image to allow access to the physical disks and the partitions and file systems on them. It also loads any drivers required to support the boot process.
    4.            Later, the kernel unmounts the initrd image and mounts the actual root file system in read/write mode. At this point, the necessary foundation has been built for the boot process to carry on and start loading the enabled services.
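
    You can inspect the initrd/initramfs image the kernel boots with; on RHEL 7 the dracut package provides lsinitrd for this:

    # lsinitrd /boot/initramfs-$(uname -r).img | less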

    The Initialization Phase
    1.            Now systemd takes over control from the kernel and continues the boot process. In RHEL 7, systemd has replaced both SysVinit and Upstart as the default system initialization scheme.
    2.            systemd starts all enabled userspace system and network services and brings the system up to the preset boot target.
    3.            The system boot process is considered complete when all services enabled for the boot target are operational and users are able to log in to the system.
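
    Two standard systemctl commands show the preset boot target and what it pulls in:

    # systemctl get-default
    # systemctl list-dependencies multi-user.target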

    The boot phases stacked from the bottom up:

    4. systemd   (initialization: boot target and services)
    3. Kernel    (initramfs, then the real root file system)
    2. GRUB2     (boot loader)
    1. Firmware  (BIOS/UEFI, POST)

    Thank you for reading.

    To read other articles, visit https://sites.google.com/site/unixwikis/