Steps to configure HACMP

1. Install the nodes, making sure redundancy is maintained for power supplies, networks and fiber (SAN) paths. Then install AIX on the nodes.

2. Install all the HACMP filesets except HAview and HATivoli.
3. Install all the RSCT filesets from the AIX base CD.
Make sure that the AIX and HACMP patches and the server code (firmware) are at the latest recommended levels.

4. Check that the fileset bos.clvm is present on both the nodes. It is required to make the VGs enhanced concurrent capable.
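
A quick way to check on each node (the exact fileset name, e.g. bos.clvm.enh, can vary with the HACMP level):

lslpp -l "bos.clvm*"   --- should show the fileset in COMMITTED state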

5. V.IMP: Reboot both the nodes after installing the HACMP filesets.

6. Configure shared storage on both the nodes. If a disk heartbeat will be used, also assign a 1 GB shared LUN that is visible to both nodes.

7. Create the required VGs on the first node only. The VGs can be either normal VGs or enhanced concurrent VGs. Assign a specific major number to each VG while creating it, and record the major number information.

To check the major no. use the command:
ls -lrt /dev | grep <vgname>

Activate volume group automatically at system restart should be set to NO.
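
A command-line sketch of this step, with placeholder disk, VG name and major number (smitty mkvg can be used instead):

lvlstmajor                              --- lists the major numbers free on this node
mkvg -y <vgname> -V <major#> <hdisk#>   --- creates the VG with the chosen major number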

8. Vary on the VGs that were just created.
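
For example (repeat for each VG):

varyonvg <vgname>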

9. V.IMP: Create a log LV (jfslog/jfs2log) on each VG before creating any other LV. Give the log LV a unique name.

Format (initialize) the log LV with: logform /dev/<loglvname>

Repeat this step for all VGs that were created.
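
A sketch of creating and formatting a jfs2 log LV, with placeholder names (use type jfslog if the filesystems will be jfs):

mklv -y <loglvname> -t jfs2log <vgname> 1   --- one PP is normally enough for a log LV
logform /dev/<loglvname>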

10. Create all the necessary LVs on each VG.
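
For example, a jfs2 data LV with placeholder names and size:

mklv -y <lvname> -t jfs2 <vgname> <number_of_PPs>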

11. Create all the necessary filesystems on each LV created. You can create the mount points as per the requirement of the customer (see the example command after this step).

Mount automatically at system restart should be set to NO.
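
A command-line sketch of this step, with placeholder names (smitty crfs can be used instead):

crfs -v jfs2 -d <lvname> -m /<mountpoint> -A no   --- -A no keeps the filesystem from mounting automatically at restart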

12. Unmount all the filesystems and vary off all the VGs.
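
For example (placeholder names, repeat for each filesystem and VG):

umount /<mountpoint>
varyoffvg <vgname>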

13. chvg -an <vgname>   --- sets the VG to not activate (vary on) automatically at system restart. Run this for all the VGs.

14. Go to node 2 and run cfgmgr -v so that it detects the shared disks.

15. Import all the VGs on node 2.
Use smitty importvg ----- import each VG with the same major number as assigned on node 1.
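
The equivalent command line, with placeholders (the hdisk numbers may differ from node 1):

importvg -V <major#> -y <vgname> <hdisk#>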

16. Run chvg -an <vgname> for all VGs on node 2.

17. V.IMP: Identify the boot1, boot2, service IP and persistent IP for both the nodes and make the entries in /etc/hosts.



Make sure that the /etc/hosts file is identical on both nodes; the entries must be the same and consistent across nodes.
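
A hypothetical /etc/hosts layout (all addresses and labels below are placeholders, not values from this setup):

10.10.1.1    node1_boot1
10.10.2.1    node1_boot2
10.10.1.2    node2_boot1
10.10.2.2    node2_boot2
10.10.3.1    node1_svc    # service IP
10.10.3.2    node2_svc    # service IP
10.10.4.1    node1_pers   # persistent IP
10.10.4.2    node2_pers   # persistent IP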

18. Assign the boot1 and boot2 IPs to the Ethernet interfaces (en#) on both the nodes.

Use smitty chinet ----- Assign boot ips to 2 interfaces on each node.

19. Here the planning ends. Now we can start with the actual HACMP setup:

20. Define the name for the cluster:

smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure an HACMP Cluster -> Add an HACMP Cluster:
Give the name of the cluster and press enter.


21. Define the cluster nodes.
smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure an HACMP Node -> Add a Node to an HACMP Cluster

Define both the nodes one after the other.

22. Discover HACMP config: this imports, for both nodes, all the node information, boot IPs and service IPs from /etc/hosts.

smitty hacmp -> Extended Configuration -> Discover HACMP-related Information




23. Add the HACMP networks (the ether network and the diskhb network).

smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP networks -> Add a network to the HACMP cluster.

Select ether and press enter.

Then select diskhb and press enter. diskhb is the non-TCP/IP heartbeat network.


24. Include the interfaces/devices in the ether network and the diskhb network already defined.

smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP communication interfaces/devices -> Add communication interfaces/devices.


Include all four boot IPs (two for each node) in the ether network already defined.
Then include the heartbeat disk on both the nodes in the diskhb network already defined.
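
To verify that the disk heartbeat path works, the dhb_read utility shipped with RSCT can be used (hdisk names are placeholders and may differ on each node):

/usr/sbin/rsct/bin/dhb_read -p <hdisk#> -r   --- run on node 1 (receive mode)
/usr/sbin/rsct/bin/dhb_read -p <hdisk#> -t   --- run on node 2 (transmit mode)

Both sides should report that the link is operating normally.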



25. Add the persistent IPs:

smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP Persistent Node IP Labels/Addresses



Add a persistent IP label for each node.



26. Define the service IP labels for both nodes.

smitty hacmp -> Extended Configuration -> Extended Resource Configuration -> HACMP extended resource configuration -> Configure HACMP service IP label





27. Add Resource Groups:

smitty hacmp -> Extended Configuration -> Extended Resource Configuration -> HACMP extended resource group configuration





Continue similarly for all the resource groups.
The node selected first while defining the resource group will be the primary owner of that resource group; the node listed after it is the secondary node.
Make sure you set primary node correctly for each resource group.
Also set the fallover/fallback policies as per the requirement of the setup.


28. Set the attributes of the resource groups already defined:
Here you have to actually assign the resources to the resource groups.

smitty hacmp -> Extended Configuration -> Extended Resource Configuration -> HACMP extended resource group configuration



Add the service IP label for the owner node and also the VGs owned by the owner node of this resource group.


Continue similarly for all the resource groups.
29. Synchronize the cluster:
This will sync the configuration from one node to the other.

smitty cl_sync


30. That’s it. Now you are ready to start the cluster.

smitty clstart

You can start the cluster together on both nodes or start individually on each node.
31. Wait for the cluster to stabilize. You can check whether the cluster is up with the following commands:

a. netstat -i


b. ifconfig -a : look out for the service IP. It will show up on each node once the cluster is up.



c. Check whether the VGs under the cluster's RGs are varied on and the filesystems in the VGs are mounted after the cluster start (see the example commands after this list).


In this setup, test1vg and test2vg are the VGs that are varied on when the cluster is started, and the filesystems /test2 and /test3 are mounted when the cluster starts.

/test2 and /test3 are in test2vg which is part of the RG which is owned by this node.
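
A minimal set of commands for these checks (test2vg, /test2 and /test3 come from the example above; the lssrc check is an additional, commonly used status check):

lsvg -o                               --- lists the VGs that are currently varied on
df -g                                 --- confirms that /test2 and /test3 are mounted
lssrc -ls clstrmgrES | grep state     --- cluster manager reports ST_STABLE once the cluster is stable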

32. Perform all the tests, such as resource takeover, node failure and n/w failure, and verify the cluster before releasing the system to the customer.

Thanks and Have fun with HACMP!!!!

P.S.: Only one piece of advice: DO YOUR PLANNING THOROUGHLY and DOCUMENT THE CONFIGURATION.
