This article walks through an example of iSCSI device planning and installation, host-side configuration of both iSCSI and multipath, followed by tuning and debugging.

Hardware
In this case, the iSCSI target is an IBM DS3524 with 24 SAS drives (10k rpm, 600GB each) and 8 1Gb host interfaces. Four hosts will use the iSCSI storage: vm1, vm2, adm and backup.
Setup

On the iSCSI target (DS3524) side, two arrays are created. The first holds two luns, both with a 32KB segment size: lun imirror, 300GB, designed to be used by the adm node, and lun iraid, 8.6TB, to be used by the backup node. The second array, vms, has 6 disk drives in raid10, 1.67TB in total, carved into 25 luns with a 128KB segment size (7*100GB, 14*20GB and 4*40GB), leaving 675GB of free space; luns vm1-vm10 are for vm1/vm2 to use.

Two host groups are set up, with the host type set to LNXALUA on all hosts: backupgroup holds hosts backup and adm, and vmgroup holds hosts vm1 and vm2. Within a host group each host can see the other hosts' luns, but one lun can only be mounted on one host at a time.
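Because the host type on the target is LNXALUA, the host-side multipath configuration should use ALUA-aware path priorities. The stanza below is a minimal sketch only: the vendor/product strings and handler settings follow common DS3500-series examples and are assumptions here, so verify them against the output of multipath -ll and IBM's documentation for your firmware level.

# /etc/multipath.conf -- sketch for a DS3500-series target with LNXALUA host type
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor                  "IBM"
                product                 "1746*"
                path_grouping_policy    group_by_prio
                prio                    alua
                path_checker            rdac
                hardware_handler        "1 rdac"
                failback                immediate
                no_path_retry           30
        }
}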
Network

In this case, the iSCSI network is isolated from the rest of the network infrastructure, so we use neither CHAP authentication nor an iSNS server for discovery.
Host side installation:
Configuring Open-iSCSI initiator utilities

The iSCSI initiator configuration file is /etc/iscsi/iscsid.conf. You can use it as shipped, or tune it according to your environment; tuning is covered in a later section. Here is the original content of the file:

iscsid.startup = /etc/rc.d/init.d/iscsid force-start
node.startup = automatic
node.leading_login = No
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 30
node.session.err_timeo.tgt_reset_timeout = 30
node.session.initial_login_retry_max = 8
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.xmit_thread_priority = -20
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
node.conn[0].iscsi.HeaderDigest = None
node.session.nr_sessions = 1
node.session.iscsi.FastAbort = Yes

The initiator name is set separately, in /etc/iscsi/initiatorname.iscsi:

InitiatorName=iqn.1994-05.com.redhat:vm1

Tuning

Regarding the iSCSI host-side configuration, some parameters in /etc/iscsi/iscsid.conf have been changed from the defaults shown above. On top of the multipath device, block readahead is set to 16584 (the best value in our tests). On the iSCSI target side, the cache block size is set to 32KB and the path fail alert is set to 60 minutes.
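As an illustration, readahead on the multipath device can be set with blockdev. The device name mpatha below is a placeholder for whatever alias multipath assigns on your system:

# check the current readahead value (in 512-byte sectors)
blockdev --getra /dev/mapper/mpatha
# set readahead to the value that tested best here
blockdev --setra 16584 /dev/mapper/mpatha

Note that this setting does not persist across reboots; a udev rule or an rc.local entry is needed to reapply it.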
Connecting to the iSCSI array

service iscsid start

Once the iscsid service is running and the client's initiator name is configured on the iSCSI array, you may proceed with the following command to discover the available targets.
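With iscsiadm, discovery and login look like the following; the portal address 192.168.1.10 is a placeholder for one of the DS3524 host interface IPs:

# discover targets advertised by the array (sendtargets discovery, since there is no iSNS)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260
# log in to all discovered targets; with node.startup = automatic,
# sessions are also restored automatically after reboot
iscsiadm -m node --login
# verify the active sessions
iscsiadm -m session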
Make file system

Use the _netdev mount option for iSCSI devices in fstab, so the filesystem is only mounted after the network, and thus the iSCSI session, is up.
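For example, creating a filesystem on one of the multipath devices and mounting it via fstab could look like this; the ext4 choice, the mpatha alias and the /data mount point are placeholders:

mkfs.ext4 /dev/mapper/mpatha
mkdir -p /data

# /etc/fstab entry: _netdev delays the mount until networking is available
/dev/mapper/mpatha  /data  ext4  _netdev  0 0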
According to the results above, the NIC channels are pretty much saturated. In the smaller file tests, reads could reach 450MB/sec, benefiting from caching in memory. This could be useful for virtual machines.
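For reference, sequential read throughput of this kind is often measured with dd; the contrast between a direct and a cached read below illustrates why small-file reads can exceed NIC line rate. The device and file names are placeholders:

# direct read, bypassing the page cache -- measures the iSCSI path itself
dd if=/dev/mapper/mpatha of=/dev/null bs=1M count=4096 iflag=direct
# cached re-read of a file already in memory -- can far exceed 1Gb wire speed
dd if=/data/testfile of=/dev/null bs=1M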