NFSv4

This topic explains how to set up an NFSv4 client-server infrastructure on CentOS 7.

Basic network:
Server: 172.16.0.1
Clients: 172.16.0.2, 172.16.0.3

Server

Install a minimal CentOS system. Remove firewalld and go back to iptables (a personal preference, not a requirement, but this tutorial uses iptables for the server part).

yum install iptables-services
systemctl mask firewalld.service
systemctl enable iptables.service
systemctl stop firewalld.service
systemctl start iptables.service

Install the NFSv4 packages:

yum install nfs-utils nfs4-acl-tools

Configure the network interface (the address and interface name may differ on your configuration): vi /etc/sysconfig/network-scripts/ifcfg-enp0s8

TYPE="Ethernet"
BOOTPROTO="static"
NAME="enp0s8"
NETMASK=255.255.255.0
NM_CONTROLLED=no
ONBOOT="yes"
IPADDR0="172.16.0.1"
HWADDR=08:00:27:50:76:ac

Restart the network service to apply the change (or reboot the node if restarting the service is not enough):

systemctl restart network.service 

Check that the IP address is set on the interface: ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:22:d3:e5 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 86400sec preferred_lft 86400sec
    inet6 fe80::a00:27ff:fe22:d3e5/64 scope link 
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:50:76:ac brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.1/16 brd 172.16.255.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe50:76ac/64 scope link 
       valid_lft forever preferred_lft forever

OK. Now, create the directory that will be shared and add something inside:

mkdir /data
echo "Hello World" > /data/hello
mkdir /data/sphen
echo "Hello World" > /data/sphen/hello
chown -R sphen /data/sphen
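
Note: this assumes a sphen user already exists on the server. If it does not, create it first (the UID below is just an example; any value works as long as it matches the one on the clients):

useradd -u 1001 sphen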

Configure the export of this directory: vi /etc/exports

/data                       172.16.0.2(rw,sync,fsid=0) 172.16.0.3(rw,sync,fsid=0)

Here, clients 172.16.0.2 and .3 get read and write access. fsid=0 marks /data as the NFSv4 pseudo-root: clients will see the server's /data as /, so the address for the mount point will be 172.16.0.1:/ and not 172.16.0.1:/data/. Now enable the NFSv4 service at startup and start it.

systemctl enable nfs-server.service
systemctl start nfs-server.service
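
You can check what is currently exported with exportfs. If you edit /etc/exports later, you can re-export without restarting the service:

exportfs -v
exportfs -ra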

Configure iptables to open port 2049 for the desired network: vi /etc/sysconfig/iptables

# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -s 172.16.0.0/24 -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Then restart iptables:

systemctl restart iptables.service
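
You can verify that the rule is active by listing the INPUT chain:

iptables -L INPUT -n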

That's all for server side.

Client

Install a minimal CentOS 7 (or something else). Install the NFSv4 packages:

yum install nfs-utils nfs4-acl-tools

Configure the network, like on the server side but with a different IP and MAC: vi /etc/sysconfig/network-scripts/ifcfg-enp0s8

TYPE="Ethernet"
BOOTPROTO="static"
NAME="enp0s8"
NETMASK=255.255.255.0
NM_CONTROLLED=no
ONBOOT="yes"
IPADDR0="172.16.0.2"
HWADDR=08:00:27:96:79:db

Restart the network service to apply the change (or reboot the node if restarting the service is not enough):

systemctl restart network.service 

Create the mount point where the NFSv4 /data directory will be mounted:

mkdir -p /nfs/data

Now mount the server directory:

mount -t nfs4 172.16.0.1:/ /nfs/data/

You can now check that the directory is mounted, using df:

Filesystem              1K-blocks   Used Available Use% Mounted on
/dev/mapper/centos-root  39265556 969324  38296232   3% /
devtmpfs                   241500      0    241500   0% /dev
tmpfs                      250700      0    250700   0% /dev/shm
tmpfs                      250700   4364    246336   2% /run
tmpfs                      250700      0    250700   0% /sys/fs/cgroup
/dev/sda1                  508588 163632    344956  33% /boot
172.16.0.1:/             39265600 969088  38296512   3% /nfs/data

Check that the data is available and that you can write in the sphen directory using the sphen user (be careful: the user must have the same UID and GID as on the server).

# cd /nfs/data
# ls
hello
# cat hello 
Hello World

Try to remove the file:

# rm hello
rm: remove regular file ‘hello’? y
rm: cannot remove ‘hello’: Permission denied

The file was created by root on the server, so you cannot delete it. Try with the sphen user, on sphen's files in its dedicated directory:

# su sphen
# cd /nfs/data/sphen
# ls
hello  
# rm hello 

It worked here, as planned.
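
Note: if files show up as owned by nobody instead of the expected user, NFSv4 ID mapping is probably inconsistent between the nodes. Server and clients should share the same Domain in /etc/idmapd.conf (example.com below is just a placeholder):

[General]
Domain = example.com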

To make the mount permanent, add it to the fstab: vi /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue May  5 10:54:04 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=a287097a-94f9-4e91-959a-3483b4c1001a /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
172.16.0.1:/    /nfs/data  nfs4   soft,intr,rsize=8192,wsize=8192,nosuid  0 0
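
You can test the new fstab entry without rebooting: unmount the share, then let mount read the fstab:

umount /nfs/data
mount -a
df -h /nfs/data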

iSCSI

Or: how to build a storage bay with no money… You can use iSCSI to turn any server, even a Core 2 workstation, into a storage bay. This is very useful, as you can then gather targets and create a huge storage space. Of course, this is not the main purpose of iSCSI, which can be used with many kinds of equipment.

I will explain here how I configured an iSCSI server/client infrastructure. Beware, I used different IPs than the ones we are using for our cluster; this is just an example.

  • Server 1 : 172.16.0.31, target, with one more disk to share as /dev/sdb
  • Server 2 : 172.16.0.32, initiator
  • Server 3 : 172.16.0.33, initiator

Target = name given to an iSCSI server
Initiator = name given to an iSCSI client

Target

Note: the RHEL configuration differs between 7.2 and versions earlier than 7.2. In this example, 7.2 is used; expect slightly different behavior with previous versions like 7.0 or 7.1.

We want to share the /dev/sdb disk.

yum install targetcli
systemctl restart target
systemctl enable target

Then start the administration shell recommended by Red Hat:

targetcli

This shell provides a tree to navigate the resources, and you can use the ls and cd commands (tab completion is also very useful).

At the beginning, it is empty:

/> ls
o- / ................................................................................. [...]
  o- backstores ...................................................................... [...]
  | o- block .......................................................... [Storage Objects: 0]
  | o- fileio ......................................................... [Storage Objects: 0]
  | o- pscsi .......................................................... [Storage Objects: 0]
  | o- ramdisk ........................................................ [Storage Objects: 0]
  o- iscsi .................................................................... [Targets: 0]
  o- loopback ................................................................. [Targets: 0]
/>

In backstores, we will have the disk resources, and in iscsi the sharing configuration (IQN, LUNs, ACLs, portal).

Start by configuring the disk:

cd backstores/block/
create disktest /dev/sdb

You can append write_back=false at the end of this last command, which results in lower performance but better data safety; useful for important data like configuration files.

The disk is ready; now create the IQN:

cd /iscsi
create iqn.2014-08.com.example:t1

Because we are using CentOS 7.2 / RHEL 7.2, a portal was created automatically, open to all IPs. It is possible to delete it and recreate a more restricted one, as shown below. Here we will keep the default: 0.0.0.0:3260.
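
If you prefer to restrict the portal to the storage network, a possible way (still from tpg1, as in the other commands) is to delete the default portal and recreate it bound to the server IP:

portals/ delete ip_address=0.0.0.0 ip_port=3260
portals/ create 172.16.0.31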

Now create a LUN:

luns/ create /backstores/block/disktest

Then create ACLs for both clients (initiators):

acls/ create iqn.2014-08.com.example:client
acls/ create iqn.2014-09.com.example:client

Here is the configuration once done:

/> ls
o- / ................................................................................. [...]
  o- backstores ...................................................................... [...]
  | o- block .......................................................... [Storage Objects: 1]
  | | o- disktest ................................. [/dev/sdb (8.0GiB) write-thru activated]
  | o- fileio ......................................................... [Storage Objects: 0]
  | o- pscsi .......................................................... [Storage Objects: 0]
  | o- ramdisk ........................................................ [Storage Objects: 0]
  o- iscsi .................................................................... [Targets: 1]
  | o- iqn.2014-08.com.example:t1 ................................................ [TPGs: 1]
  |   o- tpg1 ....................................................... [no-gen-acls, no-auth]
  |     o- acls .................................................................. [ACLs: 2]
  |     | o- iqn.2014-08.com.example:client ............................... [Mapped LUNs: 1]
  |     | | o- mapped_lun0 ...................................... [lun0 block/disktest (rw)]
  |     | o- iqn.2014-09.com.example:client ............................... [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ...................................... [lun0 block/disktest (rw)]
  |     o- luns .................................................................. [LUNs: 1]
  |     | o- lun0 .............................................. [block/disktest (/dev/sdb)]
  |     o- portals ............................................................ [Portals: 1]
  |       o- 0.0.0.0:3260 ............................................................. [OK]
  o- loopback ................................................................. [Targets: 0]
/>

Then use “exit” to quit; the configuration is saved automatically.

Initiators

The discovered target configuration is stored in /var/lib/iscsi/nodes on the initiators (useful to know if you want to wipe it and restart the discovery from scratch).

Install the rpm:

yum install iscsi-initiator-utils

Give an IQN name to the initiator in /etc/iscsi/initiatorname.iscsi:

InitiatorName=iqn.2014-08.com.example:client

Then restart services:

systemctl restart iscsid
systemctl restart iscsi

Now discover the target:

iscsiadm --mode discovery --type sendtargets --portal 172.16.0.31
> 172.16.0.31:3260,1 iqn.2014-08.com.example:t1

And “mount” the volume (log in to the target):

iscsiadm --mode node --targetname iqn.2014-08.com.example:t1 --portal 172.16.0.31 --login
> Logging in to [iface: default, target: iqn.2014-08.com.example:t1, portal: 172.16.0.31,3260] (multiple)
> Login to [iface: default, target: iqn.2014-08.com.example:t1, portal: 172.16.0.31,3260] successful.
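
Whether this session is restored automatically at boot depends on the node.startup setting (see /etc/iscsi/iscsid.conf). You can force it for this target with:

iscsiadm --mode node --targetname iqn.2014-08.com.example:t1 --portal 172.16.0.31 --op update -n node.startup -v automatic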

Check that everything is OK:

lsblk --scsi
> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
> sda  2:0:0:0    disk ATA      VBOX HARDDISK    1.0  sata
> sdb  3:0:0:0    disk LIO-ORG  disktest         4.0  iscsi
> sr0  1:0:0:0    rom  VBOX     CD-ROM           1.0  ata

Now /dev/sdb is available here, and you can run mkfs on it from any initiator.
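
For example, a minimal test from one initiator (the /mnt/iscsi mount point is arbitrary):

mkfs.xfs /dev/sdb
mkdir -p /mnt/iscsi
mount /dev/sdb /mnt/iscsi

Beware: a plain filesystem like XFS or ext4 must not be mounted read-write by several initiators at the same time; that requires a cluster filesystem such as GFS2.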

Note: in case of an error like this one:

Logging in to [iface: default, target: iqn.2014-08.com.example:t1, portal: 172.16.0.31,3260] (multiple)
iscsiadm: Could not login to [iface: default, target: iqn.2014-08.com.example:t1, portal: 172.16.0.31,3260].
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not log into all portals

Then check that the initiator IQN name is correct. If it is, restart the services on the client. If the error persists, check the ACLs on the target and make sure the client's IQN name is the right one.
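
A quick way to compare both sides, using the names from this example: display the IQN on the initiator, and list the configured ACLs on the target:

cat /etc/iscsi/initiatorname.iscsi
targetcli ls /iscsi/iqn.2014-08.com.example:t1/tpg1/acls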

Documentation used:

https://www.certdepot.net/rhel7-configure-iscsi-target-initiator-persistently/
https://wiki.archlinux.org/index.php/ISCSI_Target
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/osm-create-iscsi-initiator.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch25.html#target-setup-configure-luns