
Wednesday, November 23, 2016

Part 1 - AWS Overview

Why Learn AWS?

·       Fastest growing cloud computing platform on the planet
·        Largest public cloud computing platform on the planet
·        More and more organizations are outsourcing their IT to AWS
·        The AWS certifications are among the most popular IT certifications right now
·        The safest place to be in IT right now
·        http://bit.ly/1hzUUXI

How the Exams Fit Together


 What you will learn




The Exam Blue Print

·        80 Minutes in Length
·        60 Questions (this can change)
·        Multiple Choice
·        Pass Mark based on a bell curve (it moves around)
·        Aim for 70%
·        Qualification is valid for 2 years
·        Scenario based questions

What you will need

·       An AWS Free Tier Account
·        A Computer with an SSH terminal
·        A domain name (optional)

History of AWS

·        2003 - Chris Pinkham & Benjamin Black present a paper on what Amazon's own internal infrastructure should look like
·        Suggested selling it as a service and prepared a business case.
·        SQS officially launched in 2004.
·        AWS officially launched in 2006
·        2007 over 180,000 developers on the platform
·        2010 all of amazon.com moved over
·        2012 First Re-Invent Conference
·        2013 Certification launched
·        2014 Committed to achieve 100% renewable energy usage for its global footprint
·        2015 AWS breaks out its revenue $6 Billion USD per annum and growing close to 90% year on year.

Concepts & Components Part 1

What is a Region? What is an AZ?



·        AWS is divided into geographic regions and availability zones
·        Region
o   A Region is a geographical area. Each Region consists of 2 (or more) Availability Zones.
§  Network connections between zones in the same region have low latency
o   Each region is guaranteed to be completely isolated from any other region
o   Several regions are available
§  US East—Virginia (us-east-1)
§  US West—California (us-west-1)
§  US West—Oregon (us-west-2)
§  Asia Pacific—Tokyo (ap-northeast-1)
§  Asia Pacific—Singapore (ap-southeast-1)
§  Asia Pacific—Sydney (ap-southeast-2)
§  Asia Pacific—Seoul (ap-northeast-2)
§  Asia Pacific—Mumbai (ap-south-1)
§  EU West—Ireland (eu-west-1)
§  EU-Central—Germany (eu-central-1)
§  South America—Sao Paulo (sa-east-1)
§  China—Beijing (cn-north-1)
§  US Government (us-gov-west-1)
·        Availability Zone (AZ)
o   An Availability Zone (AZ) is simply a data center: a group of servers
o   An AZ is an individual data center within an AWS region. A region is made up of multiple data centers, and a fundamental property of AWS is building across different availability zones and regions.
·        Two zones are guaranteed not to share any common points of failure
·        Edge Locations:
o   Locations built to deliver cached data across the world. The CloudFront CDN utilizes these locations for faster delivery to countries without AWS regions.
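As an aside, the commercial region codes listed above all follow the same geography-direction-number naming pattern, which can be sanity-checked with standard shell tools (no AWS account needed; us-gov-west-1, which carries an extra component, is left out of the list):

```shell
# Region codes follow a <geo>-<direction>-<number> pattern.
# The list below mirrors the commercial regions named in these notes.
regions="us-east-1 us-west-1 us-west-2 ap-northeast-1 ap-southeast-1 \
ap-southeast-2 ap-northeast-2 eu-west-1 eu-central-1 sa-east-1 cn-north-1"

ok_count=0
for r in $regions; do
  if echo "$r" | grep -Eq '^[a-z]{2}-[a-z]+-[0-9]$'; then
    echo "$r OK"
    ok_count=$((ok_count + 1))
  fi
done
echo "$ok_count region codes matched the pattern"
```

Knowing the pattern makes it easier to remember which code belongs to which geography on the exam.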


Implications of Regions and Zones

·        AWS provides the capability to deploy resources in specific regions and availability zones
o   Pricing may depend on region
·        Can utilize a region geographically close to users
o   To potentially improve performance
·        Can comply with legal and regulatory requirements
o   Restrictions on where data is hosted
§  For example, within U.S. borders
·        Using multiple regions/availability zones can reduce downtime
o   Mitigate effects of a failure of a single location
o   Can actually improve the availability promised by Amazon
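To make the last point concrete, here is a back-of-the-envelope sketch (the 99.9% figure is purely illustrative, not an AWS SLA number): if one location is up with probability p and failures are independent, at least one of two locations is up with probability 1 - (1 - p)^2.

```shell
# Illustrative only: availability of one location vs. two independent ones.
p=0.999   # hypothetical single-AZ availability (not an official figure)
two=$(awk -v p="$p" 'BEGIN { printf "%.6f", 1 - (1 - p) ^ 2 }')
echo "one AZ:  $p"
echo "two AZs: $two"
```

Under these assumptions, two independent zones take the expected downtime from roughly nine hours a year to well under a minute.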

Service Level Agreements

·        Amazon publishes SLAs
o   The terms of the service
o   Defines Amazon’s commitment in providing the service
§  And ramifications of not meeting the service level
·        Different AWS services have different SLAs
·        Understanding the SLAs is critical when hosting a solution on AWS
o   Essentially a contract between you and Amazon
·        Differences may exist between the design specification and the SLA
o   Be sure you know what is actually being promised

What is An Edge Location?



·        Edge Locations are CDN (Content Delivery Network) endpoints for CloudFront.
·        There are many more Edge Locations than Regions.
·        There are over 50 Edge Locations.
·        Edge Locations help lower latency and improve performance for end users.

A Brief Look at Today's Regions

The AWS platform consists of how many regions currently?
A.     10
B.     11
C.    12
D.    13
Answer: D
Explanation: This number changes with reasonable frequency; check the AWS documentation from time to time.

Which is the default region in AWS?
A.     eu-west-1
B.    us-east-1
C.    us-east-2
D.    ap-southeast-1
Answer: B

Thursday, September 1, 2016

Migrating Fencing diskgroup to new array


There can be several methods to do this task; the steps below worked well for me. If anyone has a better set of instructions, please share them.

Let's start then -

Output of vxfencing status:
root@lab1:/opt/SAN #  vxfenadm -d

I/O Fencing Cluster Information:
================================

 Fencing Protocol Version: 201
 Fencing Mode: SCSI3
 Fencing SCSI3 Disk Policy: dmp
 Cluster Members:

        * 0 (webap11p)
          1 (webap12p)

 RFSM State Information:
        node   0 in state  8 (running)
        node   1 in state  8 (running)


Old Disk in Fencing DG:
root@lab1:/opt/SAN # vxdisk list|grep fencing_dg
emc0_0e3c    auto:cdsdisk    emc0_0e3c    fencing_dg   online
emc0_0e3d    auto:cdsdisk    emc0_0e3d    fencing_dg   online
emc0_0e3e    auto:cdsdisk    emc0_0e3e    fencing_dg   online



1. If VCS is running, shut it down on all nodes:

# hastop -all -force

2. Stop I/O fencing on all nodes. This removes any registration keys on the disks.

# /etc/init.d/vxfen stop   (on both cluster nodes; not mandatory)

3. Import the coordinator disk group. The file /etc/vxfendg includes the name of the disk group (for example, fencing_dg) that contains the coordinator disks, so use the command -

# vxdg -tfC import `cat /etc/vxfendg`
                                       OR
# vxdg -tfC import fencing_dg

Where:

-t specifies that the disk group is imported only until the system restarts.
-f specifies that the import is to be done forcibly, which is necessary if one or more disks are not accessible.
-C specifies that any import blocks are removed.

4. Turn off the coordinator attribute value for the coordinator disk group.

# vxdg -g fencing_dg set coordinator=off

5. Add new and Remove old disks

Add disk for fence disk group

# vxdg -g fencing_dg adddisk emc1_0be1=emc1_0be1
# vxdg -g fencing_dg adddisk emc1_0be2=emc1_0be2
# vxdg -g fencing_dg adddisk emc1_0be3=emc1_0be3

Removing old disk from fence disk group

    # vxdg -g fencing_dg rmdisk emc0_0e3c
    # vxdg -g fencing_dg rmdisk emc0_0e3d
    # vxdg -g fencing_dg rmdisk emc0_0e3e
               


6. Set the coordinator attribute value as "on" for the coordinator disk group.

# vxdg -g fencing_dg set coordinator=on

7. Run disk scan on all nodes

# vxdisk scandisks   (Run on all cluster nodes)

8. Check if fencing disks are visible on all nodes

# vxdisk -o alldgs list | grep fen

9. After replacing disks in a coordinator disk group, deport the disk group:

# vxdg deport `cat /etc/vxfendg`
                                 OR
# vxdg deport fencing_dg

10. Verify if the fencing diskgroup is deported

# vxdisk -o alldgs list | grep fen

11. On each node in the cluster, start the I/O fencing driver:

# /etc/init.d/vxfen start  (on both cluster nodes)

12. Start VCS on all cluster nodes.

# hastart  (on both cluster nodes)

That's it: these 12 steps take you through migrating the disks in a coordinator disk group from one array to another.
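As a small sanity-check sketch to go with step 8: VCS SCSI-3 fencing expects an odd number of coordinator disks (typically three), so it is worth counting them. The snippet below runs against saved `vxdisk` output; the here-string stands in for running the command on a live node.

```shell
# Sample output captured earlier; on a real node, replace the variable with
#   vxdisk_out=$(vxdisk -o alldgs list | grep fen)
vxdisk_out='emc1_0be1    auto:cdsdisk    -            (fencing_dg) online thinrclm
emc1_0be2    auto:cdsdisk    -            (fencing_dg) online thinrclm
emc1_0be3    auto:cdsdisk    -            (fencing_dg) online thinrclm'

count=$(printf '%s\n' "$vxdisk_out" | grep -c 'fencing_dg')
if [ "$count" -ge 3 ] && [ $((count % 2)) -eq 1 ]; then
  echo "fencing_dg has $count coordinator disks (odd - OK)"
else
  echo "WARNING: $count coordinator disks; fencing expects an odd number" >&2
fi
```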


Verify /etc/vxfentab on each cluster node: after vxfen restarts, the file should be regenerated with the new disks (it is created by the VXFEN rc script from /etc/vxfendg, so do not edit it by hand).

From:
root@lab1:/opt/SAN # cat /etc/vxfentab
#
# /etc/vxfentab:
# DO NOT MODIFY this file as it is generated by the
# VXFEN rc script from the file /etc/vxfendg.
#
/dev/vx/rdmp/emc0_0e3c EMC%5FSYMMETRIX%5F000290100769%5F6900E3C000
/dev/vx/rdmp/emc0_0e3d EMC%5FSYMMETRIX%5F000290100769%5F6900E3D000
/dev/vx/rdmp/emc0_0e3e EMC%5FSYMMETRIX%5F000290100769%5F6900E3E000

To:
root@lab1:/opt/SAN # cat /etc/vxfentab
#
# /etc/vxfentab:
# DO NOT MODIFY this file as it is generated by the
# VXFEN rc script from the file /etc/vxfendg.
#
/dev/vx/rdmp/emc1_0be1  EMC%5FSYMMETRIX%5F000298700104%5F0400BE1000
/dev/vx/rdmp/emc1_0be2  EMC%5FSYMMETRIX%5F000298700104%5F0400BE2000
/dev/vx/rdmp/emc1_0be3  EMC%5FSYMMETRIX%5F000298700104%5F0400BE3000

Fencing DG with the new disks:
root@lab1:/root # vxdisk -o alldgs list|grep fencing_dg
emc1_0be1    auto:cdsdisk    -            (fencing_dg) online thinrclm
emc1_0be2    auto:cdsdisk    -            (fencing_dg) online thinrclm
emc1_0be3    auto:cdsdisk    -            (fencing_dg) online thinrclm


Thank you for reading.

For reading other articles, visit https://sites.google.com/site/unixwikis/

Saturday, July 30, 2016

HOW TO CONVERT A LOCAL VXFS INTO CFS



Ensure all disks in the diskgroup are visible from all nodes:

[root@lab1 /]# vxdisk -oalldgs list|grep appdg

disk_3       auto:cdsdisk    appdg02      appdg        online

disk_7       auto:cdsdisk    appdg01      appdg        online




[root@lab2 ~]# vxdisk -oalldgs list|grep appdg

disk_0       auto:cdsdisk    -            (appdg)      online

disk_2       auto:cdsdisk    -            (appdg)      online





[root@lab3 ~]# vxdisk -oalldgs list|grep appdg

disk_3       auto:cdsdisk    -            (appdg)      online

disk_7       auto:cdsdisk    -            (appdg)      online

Unmount the FS:

[root@lab1 /]# df -Ph|grep appdg

/dev/vx/dsk/appdg/appvol01   100M   18M   78M  19% /app/test

/dev/vx/dsk/appdg/appvol02   100M   18M   78M  19% /app/test1

[root@lab1 /]# umount /app/test1

[root@lab1 /]# umount /app/test

Also remove the entries from /etc/fstab:
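One cautious way to do this (a sketch; the device-path prefix matches the appdg volumes used in this example, so adjust it for your own disk group) is to comment the entries out with a backup copy rather than deleting them, which keeps the change easy to roll back:

```shell
# comment_out_appdg FILE: back up FILE, then comment out appdg volume lines.
comment_out_appdg() {
  fstab="$1"
  cp "$fstab" "$fstab.bak"                        # rollback copy
  sed -i 's|^\(/dev/vx/dsk/appdg/\)|#\1|' "$fstab"
}
```

Run as `comment_out_appdg /etc/fstab` on each node; restore from the `.bak` copy if the conversion has to be rolled back.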

Deport the Diskgroup:
[root@lab1 /]# vxdg deport appdg

Import the Diskgroup with shared Flag On:
[root@lab1 /]# vxdg -s import appdg



[root@lab1 /]# vxdisk -oalldgs list|grep appdg

disk_3       auto:cdsdisk    appdg02      appdg        online shared

disk_7       auto:cdsdisk    appdg01      appdg        online shared



[root@lab2 ~]# vxdisk -oalldgs list|grep appdg

disk_0       auto:cdsdisk    appdg01      appdg        online shared

disk_2       auto:cdsdisk    appdg02      appdg        online shared



[root@lab3 ~]# vxdisk -oalldgs list|grep appdg

disk_3       auto:cdsdisk    appdg02      appdg        online shared

disk_7       auto:cdsdisk    appdg01      appdg        online shared


Now we can see the shared flag on the disk group.
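A quick way to confirm the flag on every node is to grep for it. The sketch below runs against saved output; on a live node, substitute `vxdisk -o alldgs list | grep appdg` for the here-string.

```shell
# Saved output from one node; replace with the live command on a real system.
vxdisk_out='disk_3       auto:cdsdisk    appdg02      appdg        online shared
disk_7       auto:cdsdisk    appdg01      appdg        online shared'

not_shared=$(printf '%s\n' "$vxdisk_out" | grep -cv 'shared$')
if [ "$not_shared" -eq 0 ]; then
  echo "all appdg disks carry the shared flag"
else
  echo "WARNING: $not_shared appdg disk(s) missing the shared flag" >&2
fi
```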

Now we will add each volume to the CFS cluster. First, check the cluster status:

[root@lab1 /]# cfscluster status



  Node             :  lab1

  Cluster Manager  :  running

  CVM state        :  running

  No mount point registered with cluster configuration





  Node             :  lab2

  Cluster Manager  :  running

  CVM state        :  running

  No mount point registered with cluster configuration





  Node             :  lab3

  Cluster Manager  :  running

  CVM state        :  running
  No mount point registered with cluster configuration

[root@lab1 /]# hastatus -summ



-- SYSTEM STATE

-- System               State                Frozen             



A  lab1                 RUNNING              0                   

A  lab2                 RUNNING              0                   

A  lab3                 RUNNING              0                   



-- GROUP STATE

-- Group           System               Probed     AutoDisabled    State         



B  cvm             lab1                 Y          N               ONLINE        

B  cvm             lab2                 Y          N               ONLINE        

B  cvm             lab3                 Y          N               ONLINE


Adding volumes appvol01 & appvol02 to CVM:
[root@lab1 /]# cfsmntadm add appdg appvol01 /app/test app_sg all=rw

  Mount Point is being added...

  /app/test added to the cluster-configuration





[root@lab1 /]# cfsmntadm add appdg appvol02 /app/test1 app_sg all=rw

  Mount Point is being added...

  /app/test1 added to the cluster-configuration


Create a mount point on hosts lab1, lab2 & lab3 for mounting each volume:
[root@lab1 /]# mkdir -p /app/test

[root@lab1 /]# mkdir -p /app/test1

Do the same on lab2 & lab3:


Checking Cluster Status:
[root@lab1 /]# cfscluster status



  Node             :  lab1

  Cluster Manager  :  running

  CVM state        :  running

  MOUNT POINT    SHARED VOLUME  DISK GROUP        STATUS

  /app/test      appvol01       appdg             NOT MOUNTED  

  /app/test1     appvol02       appdg             NOT MOUNTED  





  Node             :  lab2

  Cluster Manager  :  running

  CVM state        :  running

  MOUNT POINT    SHARED VOLUME  DISK GROUP        STATUS

  /app/test      appvol01       appdg             NOT MOUNTED  

  /app/test1     appvol02       appdg             NOT MOUNTED  





  Node             :  lab3

  Cluster Manager  :  running

  CVM state        :  running

  MOUNT POINT    SHARED VOLUME  DISK GROUP        STATUS

  /app/test      appvol01       appdg             NOT MOUNTED  

  /app/test1     appvol02       appdg             NOT MOUNTED  



[root@lab1 /]# hastatus -summ



-- SYSTEM STATE

-- System               State                Frozen             



A  lab1                 RUNNING              0                   

A  lab2                 RUNNING              0                   

A  lab3                 RUNNING              0                   



-- GROUP STATE

-- Group           System               Probed     AutoDisabled    State         



B  app_sg          lab1                 Y          N               PARTIAL       

B  app_sg          lab2                 Y          N               PARTIAL       

B  app_sg          lab3                 Y          N               PARTIAL       

B  cvm             lab1                 Y          N               ONLINE        

B  cvm             lab2                 Y          N               ONLINE        

B  cvm             lab3                 Y          N               ONLINE


Mounting each volume:
[root@lab1 /]# cfsmount /app/test

  Mounting...

  [/dev/vx/dsk/appdg/appvol01] mounted successfully at /app/test on lab1

  [/dev/vx/dsk/appdg/appvol01] mounted successfully at /app/test on lab2

  [/dev/vx/dsk/appdg/appvol01] mounted successfully at /app/test on lab3



[root@lab1 /]# cfsmount /app/test1

  Mounting...

  [/dev/vx/dsk/appdg/appvol02] mounted successfully at /app/test1 on lab1

  [/dev/vx/dsk/appdg/appvol02] mounted successfully at /app/test1 on lab2

  [/dev/vx/dsk/appdg/appvol02] mounted successfully at /app/test1 on lab3



[root@lab1 /]# df -Ph /app/test*

Filesystem                  Size  Used Avail Use% Mounted on

/dev/vx/dsk/appdg/appvol01  100M   18M   78M  19% /app/test

/dev/vx/dsk/appdg/appvol02  100M   18M   78M  19% /app/test1



[root@lab2 ~]# df -Ph /app/test*

Filesystem                  Size  Used Avail Use% Mounted on

/dev/vx/dsk/appdg/appvol01  100M   18M   78M  19% /app/test

/dev/vx/dsk/appdg/appvol02  100M   18M   78M  19% /app/test1



[root@lab3 ~]# df -Ph /app/test*

Filesystem                  Size  Used Avail Use% Mounted on

/dev/vx/dsk/appdg/appvol01  100M   18M   78M  19% /app/test

/dev/vx/dsk/appdg/appvol02  100M   18M   78M  19% /app/test1


We can see the FS is mounted properly.

Checking Cluster Status:

[root@lab1 /]# cfscluster status



  Node             :  lab1

  Cluster Manager  :  running

  CVM state        :  running

  MOUNT POINT    SHARED VOLUME  DISK GROUP        STATUS

  /app/test      appvol01       appdg             MOUNTED      

  /app/test1     appvol02       appdg             MOUNTED      





  Node             :  lab2

  Cluster Manager  :  running

  CVM state        :  running

  MOUNT POINT    SHARED VOLUME  DISK GROUP        STATUS

  /app/test      appvol01       appdg             MOUNTED      

  /app/test1     appvol02       appdg             MOUNTED      





  Node             :  lab3

  Cluster Manager  :  running

  CVM state        :  running

  MOUNT POINT    SHARED VOLUME  DISK GROUP        STATUS

  /app/test      appvol01       appdg             MOUNTED      

  /app/test1     appvol02       appdg             MOUNTED      

[root@lab1 /]# hastatus -summ



-- SYSTEM STATE

-- System               State                Frozen             



A  lab1                 RUNNING              0                   

A  lab2                 RUNNING              0                   

A  lab3                 RUNNING              0                   



-- GROUP STATE

-- Group           System               Probed     AutoDisabled    State         



B  app_sg          lab1                 Y          N               ONLINE        

B  app_sg          lab2                 Y          N               ONLINE        

B  app_sg          lab3                 Y          N               ONLINE        

B  cvm             lab1                 Y          N               ONLINE        

B  cvm             lab2                 Y          N               ONLINE        

B  cvm             lab3                 Y          N               ONLINE        

Please note: I have tested the conversion of VxFS to CFS and vice versa with data in place, and it was successful, but the general convention is to take a full backup of the data before such a change.

Thank you for reading.

For reading other articles, visit https://sites.google.com/site/unixwikis/