GPFS File System Cluster Installation Step by Step

Chapter 1: Installation planning
Installing and configuring GPFS involves several steps that you must complete in the appropriate sequence. Review the pre-installation and installation roadmaps before you begin the installation process.
General Parallel File System (GPFS)
The General Parallel File System (GPFS) is a high-performance clustered file system developed by IBM. It can be deployed in shared-disk or shared-nothing distributed parallel modes, and it is used by many of the world's largest commercial companies.
GPFS lets you configure a highly available file system that allows concurrent access from a cluster of nodes.
Cluster nodes can be servers running the AIX, Linux, or Windows operating systems.
GPFS provides high performance by striping blocks of data over multiple disks and reading and writing these blocks in parallel. It also offers block replication across different disks to keep the file system available during disk failures.

Pre-installation roadmap:
Table 1: Pre-installation roadmap
Actions Description
1. Plan your cluster. Review and plan your cluster setup.
Refer to "pre-installation setup" step 1.
2. Review the GPFS requirements. Make sure the minimum requirements are met, including:
• Hardware requirements
• Software requirements
Refer to "pre-installation setup" step 2.
3. Configure the interfaces. Make sure you have 3 interface cards in every node.
Refer to "pre-installation setup" step 3.
4. Plan your network configuration. Before proceeding with the installation, plan your network configuration, including the IP addresses for each interface.
Refer to "pre-installation setup" step 4.
5. Check the firewalls. Before proceeding with the installation, make sure that all firewalls are stopped.
Refer to "pre-installation setup" step 5.
6. Disable SELinux. Disable SELinux on all nodes.
Refer to "pre-installation setup" step 6.
7. Create FQDNs and hostnames. Create the hostnames and Fully Qualified Domain Names.
Refer to "pre-installation setup" step 7.
8. Configure remote login. Make sure that passwordless remote login works between all nodes.
Refer to "pre-installation setup" step 8.
9. Configure storage. Attach disks to the nodes.
Refer to "pre-installation setup" step 9.

Installation roadmap
Actions Description
1. Get nodes properly installed. Before installing GPFS, make sure that all nodes are installed properly.
2. Install the GPFS code.
3. Create the GPFS cluster.
4. Assign licenses to the cluster nodes.
5. Start GPFS and verify the status of all nodes.
6. Create NSDs.
7. Create file systems.

Module 2: Planning
Before you install GPFS and deploy the system, you must decide on your network topology and system configuration.

Planning your system configuration
Understand the roles of the quorum, quorum-manager, nsd-node, and compute nodes. GPFS is installed on all of the nodes once they meet the requirements.
The quorum and quorum-manager node is responsible for the following functions:
• Administration, management, and monitoring of the cluster.
• Creating the cluster.

The nsd-node nodes are responsible for the following functions:
• Creating the NSD devices.
• Creating the stanza files.
• Creating the file systems on the cluster.
• Sharing the file systems with the other nodes of the cluster.
• Sharing the disks across the cluster.

The compute nodes are responsible for the following functions:
• Accessing the shared file systems.

Module 3: Pre-installation setup

Step 1: Plan your cluster
Before setting up the GPFS environment, we need to decide on the quorum, compute, and nsd-nodes. Here we create two quorum nodes, four compute nodes, and three nsd-nodes, all running RHEL 6.5 x86 (64-bit).
• Use the FAQ section of the GPFS documentation to verify that:
o The OS being installed is supported.
o The prerequisite software is installed.
head01
head02
compute01
compute02
compute03
compute04
nsdnode01
nsdnode02
nsdnode03

Step 2: GPFS requirements
You must make sure that the minimum hardware and software requirements are met.
Hardware requirements
Before you install GPFS, you must make sure that the minimum hardware requirements are met.
Minimum hardware requirements for the quorum and quorum-manager nodes:
• 120 GB of free disk space.
• 4 GB of physical memory (RAM).
• 3 statically configured Ethernet interfaces.
Minimum hardware requirements for the compute nodes:
• 50 GB of free disk space.
• 2 GB of physical memory (RAM).
• 3 statically configured Ethernet interfaces.
Minimum hardware requirements for the nsd-node nodes:
• Two free disks, of 50 GB and 20 GB.
• 2 GB of physical memory (RAM).
• 3 statically configured Ethernet interfaces.

Software requirements
One of the following operating systems is required:
• Red Hat Enterprise Linux (RHEL) 6.5 x86 (64-bit).
• CentOS 6.8 x86 (64-bit).
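A quick way to confirm the installed release and kernel level on each node before proceeding (the exact output depends on your system):

#cat /etc/redhat-release
#uname -r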

Step 3: Configure the interfaces
Every node in the cluster needs three Network Interface Cards (NICs); they are usually created while installing the node. Verify each interface:
#ifcfg-eth0

[root@node01 ~]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:0C:29:C6:CA:2A
inet addr:192.168.1.21 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fec6:ca2a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:84 errors:0 dropped:0 overruns:0 frame:0
TX packets:43 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:7891 (7.7 KiB) TX bytes:8067 (7.8 KiB)

#ifcfg-eth1
[root@node01 ~]# ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:0C:29:C6:CA:34
inet addr:192.168.2.21 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fec6:ca34/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:38 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2651 (2.5 KiB) TX bytes:636 (636.0 b)

#ifcfg-eth2
[root@node01 ~]# ifconfig eth2
eth2 Link encap:Ethernet HWaddr 00:0C:29:C6:CA:3E
inet addr:192.168.3.21 Bcast:192.168.3.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fec6:ca3e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:29 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2111 (2.0 KiB) TX bytes:636 (636.0 b)

Step 4: Network configuration

Here we assign a static IP address to each interface.
#vim /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
HWADDR=00:0c:29:c6:ca:2a
TYPE=Ethernet
UUID=f9ca22db-53c2-415f-9b20-979176564aa9
ONBOOT=no
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.1.21
NETMASK=255.255.255.0
IPV6INIT=no
USERCTL=no

#vim /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
HWADDR=00:0c:29:c6:ca:34
TYPE=Ethernet
UUID=b65b93be-7086-40bf-8ddc-3515464de764
ONBOOT=no
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.2.21
NETMASK=255.255.255.0
IPV6INIT=no
USERCTL=no

#vim /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2
HWADDR=00:0c:29:c6:ca:3e
TYPE=Ethernet
UUID=0bc1eb27-5647-4319-ad0d-b5eb5597f658
ONBOOT=no
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.3.21
NETMASK=255.255.255.0
IPV6INIT=no
USERCTL=no

Step 5: Stop the firewalls
#service iptables stop
#chkconfig iptables off
#iptables -F
#service ip6tables stop
#chkconfig ip6tables off
#ip6tables -F
#service NetworkManager stop
#chkconfig NetworkManager off
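To confirm that the firewall services are stopped and will not start again at boot, a quick check (output omitted here, as it varies by system):

#service iptables status
#chkconfig --list iptables
#chkconfig --list ip6tables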

Step 6: Disable SELinux
Set SELINUX=disabled in /etc/selinux/config on every node:
[root@node01 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
#SELINUXTYPE=targeted
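If SELinux is still set to enforcing on a node, one way to switch it off is sketched below; the sed edit makes the change persistent in /etc/selinux/config, while setenforce 0 only lasts until the next reboot (a reboot is needed for the disabled state to fully take effect):

#sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
#setenforce 0
#getenforce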

Step 7: Set up hostnames and FQDNs
In this step we assign hostnames to all nodes and create the Fully Qualified Domain Names (FQDNs).
• Get the nodes properly on the networks.
o Ensure that the hostnames and IP addresses are correct and correctly recorded in DNS entries and/or in hosts entries.

[root@node01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.21 node01.unixadmin.in node01
192.168.1.22 node02.unixadmin.in node02
192.168.1.91 bridge.unixadmin.in
192.168.1.23 nsdnode01.unixadmin.in nsdnode01
192.168.1.24 compute01.unixadmin.in compute01
192.168.1.25 compute02.unixadmin.in compute02
192.168.1.26 compute03.unixadmin.in compute03
192.168.1.27 nsdnode02.unixadmin.in nsdnode02
192.168.1.28 compute04.unixadmin.in compute04
192.168.1.29 nsdnode03.unixadmin.in nsdnode03
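On RHEL 6 the persistent hostname is set in /etc/sysconfig/network; a minimal sketch for node01 (repeat on each node with its own name):

#vim /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=node01.unixadmin.in

#hostname node01.unixadmin.in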

Step 8: Configure SSH auto login
• Verify node-to-node rsh/ssh communication using each node's hostname as well as its fully qualified domain name (FQDN), including rsh/ssh to self.

[root@node01 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
10:a9:10:7e:50:3c:db:f6:10:43:17:87:de:9b:18:cc root@bridge.unixadmin.in
The key's randomart image is:
+--[ RSA 2048]----+
| o+..o.oo. |
| ...o +o.. |
| ...=.* . |
| .o +.E . |
| . oSo o |
| o o |
| |
| |
| |
+-----------------+

[root@node01 ~]# ssh-copy-id localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is cf:a1:ab:1c:be:a6:2b:ba:94:64:db:df:bd:52:a5:67.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
root@localhost's password:
Now try logging into the machine, with "ssh 'localhost'", and check in:

.ssh/authorized_keys

to make sure we haven’t added extra keys that you weren’t expecting.

[root@node01 ~]# ssh-copy-id 192.168.1.21
root@192.168.1.21's password:
Now try logging into the machine, with "ssh '192.168.1.21'", and check in:

.ssh/authorized_keys
to make sure we haven’t added extra keys that you weren’t expecting.
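The same key can be pushed to every node in one pass; a minimal sketch using the hostnames from /etc/hosts above (each host still prompts once for the root password):

[root@node01 ~]# for host in node01 node02 nsdnode01 nsdnode02 nsdnode03 compute01 compute02 compute03 compute04; do ssh-copy-id root@$host; done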

Step 9: Storage configuration
• Attach disks to the nodes (a quick check follows this list).
o SAN model: all nodes are attached to the disks.
o NSD model: only the NSD servers have disk access.
o Mixed model: some NSD servers and some directly attached application nodes.
• Verify that the drivers, firmware, and other software levels recommended by the storage vendor are used.
o Verify the array configuration.
o Plan the RAID stripe size versus the file system block size.
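A quick check that an attached disk is visible on an NSD node; the device name /dev/sdb is an assumption that matches the stanza files used later in this guide:

[root@nsdnode01 ~]# cat /proc/partitions
[root@nsdnode01 ~]# fdisk -l /dev/sdb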

Module 4: Performing an Installation

In this module we will:
• List the basic software packages required.
• Create a cluster.
• Create NSDs.
• Create a file system.
• Mount a file system.
Components of a cluster
• A server architecture to install on
• A storage architecture to create the file system on
• A network architecture for cluster inter-communication
• A network architecture to support client access to file system data
• An authentication mechanism to manage access and authorization to data
• An OS to run the Spectrum Scale software on
• Spectrum Scale software
• Spectrum Scale server licenses
• Spectrum Scale cluster quorum devices
• Spectrum Scale client licenses (optional)
Installation checklist
• Server installation is complete.
• Network installation is complete.
• Storage hardware is installed and accessible by the servers.
• The OS is registered and configured.
• The GPFS software is available.
• The GPFS licenses are acquired and registered.
• The installation guide is available for reference.

Get nodes properly installed
Before installing the GPFS code, we must make sure that all nodes are installed properly.
#vim /etc/hosts

192.168.1.21 node01.unixadmin.in node01
192.168.1.22 node02.unixadmin.in node02
192.168.1.23 nsdnode01.unixadmin.in nsdnode01
192.168.1.24 compute01.unixadmin.in compute01
192.168.1.25 compute02.unixadmin.in compute02
192.168.1.26 compute03.unixadmin.in compute03
192.168.1.27 nsdnode02.unixadmin.in nsdnode02
192.168.1.28 compute04.unixadmin.in compute04
192.168.1.29 nsdnode03.unixadmin.in nsdnode03

Install the GPFS code
Step 1:

Extract the GPFS tar file (GPFS_4.1_STD_LSX_QSG.tar.gz) using tar -xvzf GPFS_4.1_STD_LSX_QSG.tar.gz. After extracting the GPFS tar file, run the installer with ./gpfs_install-4.1.0-0_x86_64.

[root@node01 ~]# tar -xvzf GPFS_4.1_STD_LSX_QSG.tar.gz
gpfs.base.README.Linux
gpfs_install-4.1.0-0_x86_64
[root@node01 ~]# ./gpfs_install-4.1.0-0_x86_64

Step 2:
Add the GPFS binary directory to the PATH in /root/.bashrc, and do the same on all nodes.
[root@node01 4.1]# cat /root/.bashrc

export PATH=/usr/lpp/mmfs/bin:$PATH

[root@node01 4.1]# source /root/.bashrc

Step 3: Install the GPFS packages
• Use RPM to install the following packages (an example command follows the list).

gpfs.base-4.1.0-0.x86_64.rpm
gpfs.docs-4.1.0-0.noarch.rpm
gpfs.ext-4.1.0-0.x86_64.rpm
gpfs.gpl-4.1.0-0.noarch.rpm
gpfs.gskit-8.0.50-16.x86_64.rpm
gpfs.msg.en_US-4.1.0-0.noarch.rpm
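For example, from the directory where the installer extracted the packages (by default /usr/lpp/mmfs/4.1 for this release; the path is an assumption based on the prompts shown later in this guide):

[root@node01 ~]# cd /usr/lpp/mmfs/4.1
[root@node01 4.1]# rpm -ivh gpfs.base-4.1.0-0.x86_64.rpm gpfs.docs-4.1.0-0.noarch.rpm \
gpfs.ext-4.1.0-0.x86_64.rpm gpfs.gpl-4.1.0-0.noarch.rpm \
gpfs.gskit-8.0.50-16.x86_64.rpm gpfs.msg.en_US-4.1.0-0.noarch.rpm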

• Build the GPFS portability layer.
o The portability layer allows you to run GPFS with multiple Linux kernel levels.
o Linux environments only.
o It enables communication between the GPFS kernel modules and the Linux kernel.

Step 4: Build the Linux portability layer (GPL)
• Install the GPFS RPMs.
• Build the GPFS portability layer.

[root@node01 4.1]# cd /usr/lpp/mmfs/src/

[root@node01 src]# make clean
[root@node01 src]# make Autoconfig
[root@node01 src]# make World
[root@node01 src]# make rpm

Step 5: Install the gpfs.gplbin package
• Use this RPM to install the portability layer binaries on other nodes with the same machine type and kernel.
[root@node01 4.1]# cd /root/rpmbuild/RPMS/x86_64/

[root@node01 x86_64]# rpm -ivh gpfs.gplbin-2.6.32-431.el6.x86_64-4.1.0-0.x86_64.rpm
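One way to push the freshly built portability-layer package to the remaining nodes is sketched below; it assumes the node names from /etc/hosts above and that the base GPFS RPMs are already installed on each of them:

[root@node01 x86_64]# for host in node02 nsdnode01 nsdnode02 nsdnode03 compute01 compute02 compute03 compute04; do scp gpfs.gplbin-2.6.32-431.el6.x86_64-4.1.0-0.x86_64.rpm $host:/tmp/ && ssh $host rpm -ivh /tmp/gpfs.gplbin-2.6.32-431.el6.x86_64-4.1.0-0.x86_64.rpm; done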

Module 5: GPFS cluster
Create a GPFS cluster
Usage:
mmcrcluster -N {NodeDesc[,NodeDesc…] | NodeFile}
-P PrimaryServer
[-s SecondaryServer]
[-r RemoteShellCommand]
[-R RemoteFileCopyCommand]
[-C clusterName]
[-A] [-c configFile]
• Nodefile – an input file with a list of node names and designations.

Node descriptor:
• Node designation format:
NodeName:NodeDesignations:AdminNodeName

• Can be specified on the command line or placed in a file.
• NodeName is the hostname or IP address to be used for node-to-node communication.
• NodeDesignations is an optional "-"-separated list of node roles:
o quorum or nonquorum
o client or manager
Example: head01:quorum
• There must be at least one quorum node and one manager node in the cluster.
• The AdminNodeName is optional.
o A DNS name to be used by GPFS administration commands.
o The default value of AdminNodeName is NodeName.
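As an illustration only, a node file for the cluster in this guide might look like the sketch below (the designations are assumptions, and the file would be passed to mmcrcluster with -N /root/nodefile; nodes listed without a designation default to nonquorum clients):

node01:quorum-manager
node02:quorum-manager
nsdnode01
nsdnode02
nsdnode03
compute01
compute02
compute03
compute04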
Configuration Files:
• The config file
o Contains a subset of the parameters in the sample file:
/usr/lpp/mmfs/samples/mmfs.cfg.sample
For example pagepool, maxblocksize, maxFilesToCache, and maxStatCache.
• Used for:
o Creating similar clusters.
o Test clusters.
o Disaster recovery.
/var/mmfs/gen/mmfsNodeData Contains GPFS cluster configuration data pertaining to the node.
/var/mmfs/gen/mmsdrfs Contains a local copy of the mmsdrfs file found on the primary and secondary GPFS cluster configuration server nodes.
/var/mmfs/gen/mmfs.cfg Contains GPFS daemon startup parameters.

System directories used by GPFS
• /usr/lpp/mmfs/ – GPFS installation directory.
o /usr/lpp/mmfs/bin – location of the GPFS executables (binaries and scripts).
o /usr/lpp/mmfs/src – location of the GPFS portability layer source.
• /var/mmfs – GPFS configuration data directory.
o /var/mmfs/gen – location of critical GPFS configuration data.
o /var/mmfs/etc – location of GPFS-specific user scripts and cluster-specific custom configuration files (mmfs.cfg and cluster.preferences) [mmchconfig].
• /var/adm/ras – system logs.

To create a GPFS cluster
o Install GPFS on one or more nodes
o Run the mmcrcluster command

[root@node01 ~]# mmcrcluster -N node01:quorum,node02:quorum -C gpfscluster -r /usr/bin/ssh -R /usr/bin/scp

mmcrcluster: Warning: Not all nodes have proper GPFS license designations.
Use the mmchlicense command to designate licenses as needed.
mmcrcluster: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

Change GPFS cluster options
• Use the mmchcluster command to change cluster-wide settings.
Examples:
o Cluster name.
o RemoteShellCommand and/or RemoteFileCopyCommand settings.
• Use the mmchconfig command to set or change other GPFS daemon options (see the sketch below).
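For instance, a couple of commonly used settings (the values are purely illustrative and should be sized for your own nodes):

[root@node01 ~]# mmchconfig autoload=yes    # start the GPFS daemon automatically at boot
[root@node01 ~]# mmchconfig pagepool=1G     # illustrative value for the GPFS buffer cache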
Adding nodes to the cluster
• Command:
mmaddnode -N {NodeDesc[,NodeDesc...] | NodeFile}
• Must have root authority.
• May be run from any node in the GPFS cluster.
• Ensure proper authentication (.rhosts or ssh key exchange).
• Install GPFS onto the new node.
• Decide the designations for the new node, for example manager and quorum.
• Run the mmaddnode command.

Adding compute nodes
• Adding compute nodes to an existing GPFS cluster.
o Add the nodes to the already created GPFS cluster.

[root@node01 ~]# mmaddnode -N compute01,compute02,compute03,compute04

mmaddnode: Command successfully completed

Adding nsd-nodes
• Adding nsd-node nodes to an existing GPFS cluster.
o Run the mmaddnode command.

[root@node01 ~]# mmaddnode -N nsdnode01,nsdnode02,nsdnode03

• To view the status of the cluster, use the mmlscluster command.
[root@node01 x86_64]# mmlscluster

GPFS cluster information
========================
GPFS cluster name: xxxxxx.unixadmin.in
GPFS cluster id: xxxxxxxx
GPFS UID domain: xxxxxxxxx.unixadmin.in
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
Repository type: CCR

Node Daemon node name IP address Admin node name Designation
---------------------------------------------------------------------------------
1 head01.unixadmin.in 192.168.1.21 node01.unixadmin.in quorum
2 head02.unixadmin.in 192.168.1.22 node02.unixadmin.in quorum
3 nsdnode01.unixadmin.in 192.168.1.23 nsdnode01.unixadmin.in
4 nsdnode02.unixadmin.in 192.168.1.27 nsdnode02.unixadmin.in
5 nsdnode03.unixadmin.in 192.168.1.29 nsdnode03.unixadmin.in
6 compute01.unixadmin.in 192.168.1.24 compute01.unixadmin.in
7 compute02.unixadmin.in 192.168.1.25 compute02.unixadmin.in
8 compute03.unixadmin.in 192.168.1.26 compute03.unixadmin.in
9 compute04.unixadmin.in 192.168.1.28 compute04.unixadmin.in

GPFS License Types
• Server
o Shares data
o Cluster administration functions
o Serves data: GPFS and NFS
GPFS Server Roles
o Cluster Manager Node: supports various management tasks
o Quorum Node: a cluster mechanism to maintain data consistency in the event of node failure. A node quorum is the minimum number of nodes that must be running in order for the file system to be available.

• Client
o Consumes data
• FPO
o Performs the NSD server function for sharing data with other nodes that have a GPFS server or FPO license

Assigning the licenses
• Assign licenses to the GPFS cluster nodes.
o Assign the server license to the head and nsdnode nodes.
o Run the mmchlicense command.

[root@node01 ~]# mmchlicense server --accept -N head01,head02,nsdnode01,nsdnode02,nsdnode03

The following nodes will be designated as possessing GPFS server licenses:
head02.unixadmin.in
head01.unixadmin.in
nsdnode01.unixadmin.in
nsdnode02.unixadmin.in
nsdnode03.unixadmin.in
mmchlicense: Command successfully completed
mmchlicense: Warning: Not all nodes have proper GPFS license designations.
Use the mmchlicense command to designate licenses as needed.
mmchlicense: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

• Assign licenses to the GPFS cluster nodes.
o Assign the client license to the compute nodes.
o Run the mmchlicense command.

[root@node01 ~]# mmchlicense client --accept -N compute01,compute02,compute03,compute04

[root@node01 x86_64]# mmlslicense

Summary information
———————

[root@node01 x86_64]# mmlslicense -L
Node name Required license Designated license
---------------------------------------------------------------------

Starting a GPFS cluster
• Start GPFS and verify that all nodes join.
• Usage:
mmstartup [-a | -N {Node[,Node...] | NodeFile | NodeClass}] [-E EnvVar=value ...]
• Three scopes for the command:
o Individual node: mmstartup -N head01
o Some nodes: mmstartup -N {Node[,Node...] | NodeFile | NodeClass}
o Entire cluster: mmstartup -a

• Run the mmstartup -a command
[root@node01 ~]# mmstartup -a
Thu Dec 8 22:40:21 IST 2016: mmstartup: Starting GPFS ...

[root@node01 x86_64]# mmlscluster

Check the status of the cluster
• Verify the cluster state using the mmgetstate command.
o Run the mmgetstate -a command.
• Node states are:
o Active: up and ready for business.
o Arbitrating: the node is trying to form quorum with the other available nodes.
o Down: the GPFS daemon is not running on the node.
o Unknown: unknown value; the node cannot be reached or some other error occurred.

[root@node01 x86_64]# mmgetstate

Node number Node name GPFS state
------------------------------------------
1 head01 active
[root@node01 x86_64]# mmgetstate -a

Working with the Cluster
Command Description
mmcrcluster Creates a GPFS cluster

Usage: mmcrcluster -N {NodeDesc[,NodeDesc...] | NodeFile} [-P PrimaryServer] [-s SecondaryServer] [-r RemoteShellCommand] [-R RemoteFileCopyCommand] [-C ClusterName] [-A] [-c ConfigFile]
mmlscluster Displays GPFS cluster configuration information

Usage: mmlscluster [--cnfs]
mmlsconfig Displays the configuration data for a GPFS cluster

Usage: mmlsconfig
mmgetstate Displays the state of the GPFS daemon on one or more nodes

Usage: mmgetstate [-L] [-s] [-v] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
mmaddnode Adds nodes to a GPFS cluster

Usage: mmaddnode -N {NodeDesc[,NodeDesc...] | NodeFile}
mmchlicense Designates appropriate GPFS licenses

Usage: mmchlicense {server | client} [--accept] -N {Node[,Node...] | NodeFile | NodeClass}
mmlslicense Displays information about the GPFS node licensing designation.

Usage: mmlslicense [-L]
mmchnode Changes node attributes

Usage: mmchnode change-options -N{Node[,Node...] | NodeFile | NodeClass} Or,
Usage: mmchnode {-S Filename | --spec-file=Filename}

mmdelnode Deletes nodes from a GPFS cluster

Usage: mmdelnode {-a | -N Node[,Node...] | NodeFile | NodeClass}
mmchcluster or mmchconfig Changes the GPFS cluster configuration data
mmlsmgr Displays the file system manager node

Usage: mmlsmgr [Device [Device...]] Or,Usage: mmlsmgr -C ClusterName Or,Usage: mmlsmgr -c
mmchmgr Changes the file system manager node

Usage: mmchmgr {Device | -c} [Node]
mmstartup Starts the GPFS cluster

Usage: mmstartup [-a | -N {Node[,Node...] | NodeFile | NodeClass}] [-E EnvVar=value ...]
mmshutdown Stops GPFS cluster

Usage: mmshutdown [-t UnmountTimeout ] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
mmrefresh Places the most recent GPFS cluster configuration data files on the specified nodes
mmsdrrestore Restores the latest GPFS system files on the specified nodes.
mmsdrbackup Performs a backup of the GPFS configuration data.
mmbackupconfig Backing up file system configuration information

Usage: mmbackupconfig Device -o OutputFile
mmrestoreconfig Restores file system configuration information

Usage: mmrestoreconfig Device -i InputFile [-I {test | yes | continue }] [-Q {yes | no}] [-W NewDeviceName] [-z {yes | no}] Or,Usage: mmrestoreconfig Device -i InputFile -F configFile Or,
Usage: mmrestoreconfig Device -i InputFile -I continue

Module 6: Creating NSDs
Network Shared Disk
• NSD stands for Network Shared Disk.
• All members of a GPFS cluster see the logical file system as if it were locally mounted.
• The placement of data is managed by GPFS.
• The term NSD is used in two places in GPFS:
o NSD for defining a disk.
o The NSD protocol for network access to a LUN (more on this later).

Steps to define an NSD
• Choose a node that has direct access to the storage.
• Identify the disks to be used by GPFS.
o On AIX you can run lscfg and look for hdisks.
o On Linux you can run cat /proc/partitions.
• Choose a storage type for each disk:
o dataOnly, metadataOnly, dataAndMetadata, descOnly, or localCache.
Create NSDs
• Use the mmcrnsd command to create the NSDs.
• Create an NSD stanza file (the latest format).
o All of the disk create commands use a StanzaFile as an input file; this file describes disk/node/usage relationships.
• Create a backup copy of the StanzaFile file, for example as shown below.
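Keeping a spare copy is as simple as, for example (nsd-device1 is the stanza file created in the next step):

[root@nsdnode01 4.1]# cp nsd-device1 nsd-device1.orig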
Execute the mmcrnsd command
• Usage:
mmcrnsd -F StanzaFile [-v {yes | no}]
Example stanza:

%nsd:
device=DiskName
nsd=NsdName
servers=ServerName
usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}
failureGroup=FailureGroup
pool=StoragePool

[root@nsdnode01 4.1]# touch nsd-device1
[root@nsdnode01 4.1]# vim nsd-device1
[root@nsdnode01 4.1]# mmcrnsd -F nsd-device1
mmcrnsd: Processing disk sdb

mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@nsdnode01 4.1]# cat nsd-device1
%nsd:
device=/dev/sdb
nsd=nsd01
servers=nsdnode01
usage=dataAndMetadata

[root@nsdnode02 4.1]# touch nsd-device2
[root@nsdnode02 4.1]# vim nsd-device2
[root@nsdnode02 4.1]# mmcrnsd -F nsd-device2
mmcrnsd: Processing disk sdb
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@nsdnode02 4.1]# cat nsd-device2
%nsd:
device=/dev/sdb
nsd=nsd02
servers=nsdnode02
usage=dataAndMetadata

[root@nsdnode03 4.1]# touch nsd-device3
[root@nsdnode03 4.1]# vim nsd-device3
[root@nsdnode03 4.1]# mmcrnsd -F nsd-device3
mmcrnsd: Processing disk sdb
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@nsdnode03 4.1]# cat nsd-device3
%nsd:
device=/dev/sdb
nsd=nsd03
servers=nsdnode03
usage=dataAndMetadata
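As an alternative sketch, all three NSDs can be described in one stanza file and created in a single mmcrnsd run from any node that can reach the NSD servers over ssh; the file name nsd-all is hypothetical, and the device and server names are taken from the examples above:

[root@node01 ~]# cat nsd-all
%nsd:
device=/dev/sdb
nsd=nsd01
servers=nsdnode01
usage=dataAndMetadata
%nsd:
device=/dev/sdb
nsd=nsd02
servers=nsdnode02
usage=dataAndMetadata
%nsd:
device=/dev/sdb
nsd=nsd03
servers=nsdnode03
usage=dataAndMetadata
[root@node01 ~]# mmcrnsd -F nsd-all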

NSD information:
• The mmlsnsd command.
• Usage:
mmlsnsd [-a | -F | -f Device | -d "DiskName[;DiskName...]"]
[-L | -m | -M | -X] [-v]

• Some option descriptions:
-m: Maps the NSD names to their disk device names in /dev on the local node and, if applicable, on the primary and backup NSD server nodes.
-M: Maps the NSD names to their disk device names in /dev on all nodes. This is a slow operation and its use is suggested for problem determination only.
-X: Maps the NSD names to their disk device names in /dev on the local node and, if applicable, on the NSD server nodes, showing extended information in the NSD volume ID and remarks fields. This is a slow operation and is suggested only for problem determination.
• Verify that the NSDs were created using the mmlsnsd command.

[root@nsdnode03 ~]# mmlsnsd

File system Disk name NSD servers
—————————————————————————

Working with Disks:
Command Description

mmcrnsd Creates Network Shared Disks (NSDs)

Usage: mmcrnsd -F DescFile [-v {yes |no}]
mmlsnsd Displaying disks in a GPFS cluster

Usage: mmlsnsd [-a | -F | -f Device | -d ″DiskName[;DiskName...]″ ] [-L | -m | -M | -X] [-v]
mmdelnsd Deletes NSDs from the GPFS cluster

Usage: mmdelnsd {″DiskName[;DiskName...]″ | -F DiskFile}
mmadddisk Adding disks to a file system

Usage: mmadddisk Device {″DiskDesc[;DiskDesc...]″ | -F DescFile} [-a] [-r] [-v {yes | no}] [-N {Node[,Node...] | NodeFile | NodeClass}]
mmdeldisk Deleting disks from a file system

Usage: mmdeldisk Device {″DiskName[;DiskName...]″ | -F DescFile} [-a] [-c] [-m | -r | -b] [-N {Node[,Node...] | NodeFile | NodeClass}]
mmrpldisk Replacing disks in a GPFS file system

Usage: mmrpldisk Device DiskName {DiskDesc | -F DescFile} [-v {yes | no}] [-N {Node[,Node...] | NodeFile | NodeClass}]
mmlsdisk Displaying GPFS disk states

Usage: mmlsdisk Device [-d ″DiskName[;DiskName...]″] [-e] [-L] Or,Usage: mmlsdisk Device [-d ″DiskName[;DiskName...]″] {-m | -M}
mmchdisk Changing GPFS disk states and parameters

Usage: mmchdisk Device {suspend | resume | stop | start | change} -d ″DiskDesc [;DiskDesc...]″ | -F {DescFile} [-N {Node [,Node...] | NodeFile | NodeClass}] Or,
Usage: mmchdisk Device {resume | start} -a [[-N {Node[,Node...] | NodeFile | NodeClass}]

mmchnsd Changing your NSD configuration

Usage: mmchnsd {″DiskDesc[;DiskDesc...]″ | -F DescFile}
mmnsddiscover Rediscovers paths to the specified network shared disks

Usage: mmnsddiscover [-a | -d ″Disk[;Disk...]″ | -F DiskFile] [-C ClusterName] [-N {Node[,Node...] | NodeFile | NodeClass}]
mmcrvsd Creates virtual shared disks for use by GPFS

Usage: mmcrvsd [-f FanoutNumber] [-y] [-c] -F DescFile

Module 7: Creating File Systems

• Before creating a file system you need to determine:
o Which NSDs are to be used for the file system.
o The storage pool configuration.
o The file system block size.
o The replication plan and failure group configuration.
• Use the Stanza File (after mmcrnsd) as input to the mmcrfs command.
• The mmcrfs command.
• Usage:
mmcrfs Device {"DiskDesc[;DiskDesc...]" | -F StanzaFile} [-A {yes | no | automount}]
[-B BlockSize] [-D {nfs4 | posix}] [-E {yes | no}]
[-i InodeSize] [-j {cluster | scatter}] [-k {posix | nfs4 | all}]
[-K {no | whenpossible | always}] [-L LogFileSize]
[-m DefaultMetadataReplicas] [-M MaxMetadataReplicas]
[-n NumNodes] [-Q {yes | no}] [-r DefaultDataReplicas]
[-R MaxDataReplicas] [-S {yes | no}]
[-T MountPoint] [-t DriveLetter] [-v {yes | no}]
• Decide on the mount point and the file system block size.
• Parameters that cannot be changed once the file system is created:
o File system block size (-B).
o Maximum data and metadata replication settings (-R, -M).
o The number of nodes (-n) cannot be changed for a storage pool.
• Issue the mmcrfs command, combining options.
Example:
mmcrfs fs1 -F StanzaFile -B 1M -T /fs1
This command will format the disks for use by GPFS and write a disk descriptor to each disk in the file system.
[root@nsdnode01 4.1]# mmcrfs gpfs1 nsd01 -A yes -T /data

The following disks of gpfs1 will be formatted on node node01.unixadmin.in:
nsd01: size 20480 MB

[root@nsdnode02 4.1]# mmcrfs gpfs2 nsd02 -A yes -T /app

[root@nsdnode03 4.1]# mmcrfs gpfs3 nsd03 -A yes -T /db

o To view the file systems, use the mmlsnsd command:
[root@nsdnode03 ~]# mmlsnsd

File system Disk name NSD servers
—————————————————————————

• To see the file system characteristics:
o Mount the file systems on all nodes:
mmmount all

[root@nsdnode03 ~]# mmmount all

o View the parameters for a file system.
mmlsfs gpfs1

[root@node01 x86_64]# mmlsfs gpfs1

File system attributes for /dev/gpfs1:
======================================
flag value description
——————- ———————— ———————————–
-f 8192 Minimum fragment size in bytes
-i 4096 Inode size in bytes
-I 16384 Indirect block size in bytes
-m 1 Default number of metadata replicas
-M 2 Maximum number of metadata replicas
-r 1 Default number of data replicas
-R 2 Maximum number of data replicas
-j cluster Block allocation type
-D nfs4 File locking semantics in effect
-k all ACL semantics in effect
-n 32 Estimated number of nodes that will mount file system
-B 262144 Block size
-Q none Quotas accounting enabled
none Quotas enforced
none Default quotas enabled
--perfileset-quota No Per-fileset quota enforcement
--filesetdf No Fileset df enabled?
-V 14.04 (4.1.0.0) File system version
--create-time Thu Dec 8 21:11:48 2016 File system creation time
-z No Is DMAPI enabled?
-L 4194304 Logfile size
-E Yes Exact mtime mount option
-S No Suppress atime mount option
-K whenpossible Strict replica allocation option
--fastea Yes Fast external attributes enabled?
--encryption No Encryption enabled?
--inode-limit 65792 Maximum number of inodes
--log-replicas 0 Number of log replicas
--is4KAligned Yes is4KAligned?
-P system Disk storage pools in file system
--rapid-repair Yes rapidRepair enabled?
-d nsd01 Disks in file system
-A yes Automatic mount option
-o none Additional mount options
-T /data Default mount point
--mount-priority 0 Mount priority

mmlsfs gpfs2
[root@node01 x86_64]# mmlsfs gpfs2

File system attributes for /dev/gpfs2:
======================================
flag value description
——————- ———————— ———————————–
-f 8192 Minimum fragment size in bytes
-i 4096 Inode size in bytes
-I 16384 Indirect block size in bytes
-m 1 Default number of metadata replicas
-M 2 Maximum number of metadata replicas
-r 1 Default number of data replicas
-R 2 Maximum number of data replicas
-j cluster Block allocation type
-D nfs4 File locking semantics in effect
-k all ACL semantics in effect
-n 32 Estimated number of nodes that will mount file system
-B 262144 Block size
-Q none Quotas accounting enabled
none Quotas enforced
none Default quotas enabled
--perfileset-quota No Per-fileset quota enforcement
--filesetdf No Fileset df enabled?
-V 14.04 (4.1.0.0) File system version
--create-time Thu Dec 8 21:08:57 2016 File system creation time
-z No Is DMAPI enabled?
-L 4194304 Logfile size
-E Yes Exact mtime mount option
-S No Suppress atime mount option
-K whenpossible Strict replica allocation option
--fastea Yes Fast external attributes enabled?
--encryption No Encryption enabled?
--inode-limit 65792 Maximum number of inodes
--log-replicas 0 Number of log replicas
--is4KAligned Yes is4KAligned?
-P system Disk storage pools in file system
--rapid-repair Yes rapidRepair enabled?
-d nsd02 Disks in file system
-A yes Automatic mount option
-o none Additional mount options
-T /app Default mount point
--mount-priority 0 Mount priority

mmlsfs gpfs3
[root@node01 x86_64]# mmlsfs gpfs3

File system attributes for /dev/gpfs3:
======================================
flag value description
——————- ———————— ———————————–
-f 8192 Minimum fragment size in bytes
-i 4096 Inode size in bytes
-I 16384 Indirect block size in bytes
-m 1 Default number of metadata replicas
-M 2 Maximum number of metadata replicas
-r 1 Default number of data replicas
-R 2 Maximum number of data replicas
-j cluster Block allocation type
-D nfs4 File locking semantics in effect
-k all ACL semantics in effect
-n 32 Estimated number of nodes that will mount file system
-B 262144 Block size
-Q none Quotas accounting enabled
none Quotas enforced
none Default quotas enabled
--perfileset-quota No Per-fileset quota enforcement
--filesetdf No Fileset df enabled?
-V 14.04 (4.1.0.0) File system version
--create-time Wed Dec 14 14:55:10 2016 File system creation time
-z No Is DMAPI enabled?
-L 4194304 Logfile size
-E Yes Exact mtime mount option
-S No Suppress atime mount option
-K whenpossible Strict replica allocation option
--fastea Yes Fast external attributes enabled?
--encryption No Encryption enabled?
--inode-limit 65792 Maximum number of inodes
--log-replicas 0 Number of log replicas
--is4KAligned Yes is4KAligned?
-P system Disk storage pools in file system
--rapid-repair Yes rapidRepair enabled?
-d nsd03 Disks in file system
-A yes Automatic mount option
-o none Additional mount options
-T /db Default mount point
--mount-priority 0 Mount priority

• View the disks in a file system.
mmlsdisk gpfs1
[root@node01 x86_64]# mmlsdisk gpfs1
disk driver sector failure holds holds storage
name type size group metadata data status availability pool
———— ——– —— ———– ——– —– ————- ———— ————

mmlsdisk gpfs2
[root@node01 x86_64]# mmlsdisk gpfs2
disk driver sector failure holds holds storage
name type size group metadata data status availability pool
———— ——– —— ———– ——– —– ————- ———— ————

mmlsdisk gpfs3
[root@node01 x86_64]# mmlsdisk gpfs3
disk driver sector failure holds holds storage
name type size group metadata data status availability pool
———— ——– —— ———– ——– —– ————- ———— ————


• View the mounted disks on all nodes

[root@nsdnode03 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/osvg-rootlv 45356080 6674664 36377416 16% /
tmpfs 1962860 224 1962636 1% /dev/shm
/dev/sda1 198337 35011 153086 19% /boot
/dev/sr0 3762278 3762278 0 100% /media/RHEL_6.5 x86_64 Disc 1
/dev/gpfs3 20971520 20509952 461568 98% /db
/dev/gpfs1 20971520 20536320 435200 98% /data
/dev/gpfs2 20971520 20971520 0 100% /app

Working with File Systems :

Command Description
mmcrfs Creates a file system

Usage: mmcrfs Device {″DiskDesc[;DiskDesc...]″ | -F DescFile} [-A {yes | no | automount}] [-D {nfs4 | posix}] [-B BlockSize] [-E {yes | no}] [-j {cluster | scatter}] [-k {posix | nfs4 | all}] [-K {no | whenpossible | always}] [-L LogFileSize] [-m DefaultMetadataReplicas] [-M MaxMetadataReplicas] [-n NumNodes] [-N NumInodes[:NumInodesToPreallocate]] [-Q {yes | no}] [-r DefaultDataReplicas] [-R MaxDataReplicas] [-S {yes | no}] [-t DriveLetter] [-T MountPoint] [-v {yes | no}] [-z {yes | no}] [--version Version]

mmmount Mounts a file system

Usage: mmmount {Device | DefaultMountPoint | DefaultDriveLetter | all | all_local | all_remote} [-o MountOptions] [-a | -N {Node[,Node...] | NodeFile | NodeClass}] Or,Usage: mmmount Device {MountPoint | DriveLetter } [-o MountOptions] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
mmumount Umounts a file system

Usage: mmumount {Device | MountPoint | DriveLetter | all | all_local | all_remote} [-f ] [-a | -N {Node[,Node...]| NodeFile | NodeClass}] Or,
Usage: mmumount Device -f -C {all_remote | ClusterName} [-N Node[,Node...]]

mmdelfs Deletes a file system

Usage: mmdelfs Device [-p]
mmdf Queries available file space on a GPFS file system

Usage: mmdf Device [-d | -F | -m] [-P PoolName]
mmlsmount Determines which nodes have a file system mounted

Usage: mmlsmount {Device | all | all_local | all_remote} [-L ] [-C {all | all_remote | ClusterName[,ClusterName...] } ]
mmfsck Checks and repairs a file system

Usage: mmfsck Device [-n | -y] [-c | -o] [-t Directory] [ -v | -V] [-N {Node[,Node...] | NodeFile | NodeClass}]
mmlsfs Listing file system attributes

Usage: mmlsfs {Device | all | all_local | all_remote} [-A] [-a] [-B] [-D] [-d] [-E] [-F] [-f] [-I] [-i] [-j] [-k] [-K][-L] [-M] [-m] [-n] [-o] [-P] [-Q] [-R] [-r] [-S] [-t] [-T] [-u] [-V] [-z]
mmchfs Modifies file system attributes

Usage: mmchfs Device [-A {yes | no | automount}] [-E {yes | no}] [-D {nfs4 | posix}] [-F MaxNumInodes[:NumInodesToPreallocate]] [-k {posix | nfs4 | all}] [-K {no | whenpossible | always}] [-m DefaultMetadataReplicas] [-n NumberOfNodes] [-o MountOptions] [-Q {yes | no}] [-r DefaultDataReplicas] [-S {yes | no} ] [-T Mountpoint] [-t DriveLetter] [-V {full | compat}] [-z {yes | no}] Or,
Usage: mmchfs Device [-W NewDeviceName]

mmlsattr Querying file replication attributes

Usage: mmlsattr [-l] [-L] FileName [FileName...]
mmchattr Changing file replication attributes

Usage: mmchattr [-m MetadataReplicas] [-M MaxMetadataReplicas] [-r DataReplicas] [-R MaxDataReplicas] [-P DataPoolName] [-D {yes | no} ] [-I {yes | defer}] [-i {yes | no}] [-l] Filename [Filename...]
mmrestripefs Restriping a GPFS file system

Usage: mmrestripefs Device {-b | -m | -p | -r | -R} [-N {Node[,Node...] | NodeFile | NodeClass}]
mmdefragfs Querying and reducing file system fragmentation

Usage: mmdefragfs Device [-i] [-u BlkUtilPct] [-P PoolName] [-N {Node[,Node...] | NodeFile | NodeClass}]
mmbackup Backing up a file system

Usage: mmbackup {Device | Directory} [-f] [-g GlobalWorkDirectory] [-N {Node[,Node...] | NodeFile | NodeClass}] [-s LocalWorkDirectory] [-S SnapshotName] [-t {full | incremental}]

Review:
• We have learned:
o How to verify that the environment is ready for GPFS.
o What software is required to install GPFS.
o How to create a GPFS cluster using mmcrcluster.
o How to create a Network Shared Disk (NSD).
o How to create a GPFS file system.
