Application Scenarios
- A Linux client needs to connect to volumes served by an HBlock cluster edition deployment.
- The HBlock cluster volumes to be connected are lun6a and lun7a; lun7a has CHAP authentication enabled.
Prerequisites
- On the client that will connect to the HBlock cluster, the preparation described in the prerequisites of the client configuration section has been completed.
- On the HBlock server side, the volumes lun6a and lun7a have been created successfully.
Procedure
HBlock server side
Query the details of the LUNs to be connected and of the iSCSI targets they are exported through. The WWID reported for each LUN is used later to identify the corresponding multipath device on the client.
[root@hblockserver CTYUN_HBlock_Plus_3.7.0_x64]# ./stor lun ls -n lun6a
LUN Name: lun6a (LUN 0)
Storage Mode: Cache
Capacity: 500 GiB
Status: Enabled
iSCSI Target: iqn.2012-08.cn.ctyunapi.oos:target6.12(192.168.0.192:3260,Active)
iqn.2012-08.cn.ctyunapi.oos:target6.11(192.168.0.110:3260,Standby)
iqn.2012-08.cn.ctyunapi.oos:target6.13(192.168.0.102:3260,ColdStandby)
Create Time: 2024-05-21 14:14:48
Local Storage Class: EC 2+1+16KiB
Minimum Replica Number: 2
Local Sector Size: 4096 bytes
Storage Pool: default
High Availability: ActiveStandby
Write Policy: WriteBack
WWID: 33fffffffc69cbabb
UUID: lun-uuid-40731bfd-d0e5-49fb-9784-1d825635daf8
Object Storage Info:
+-------------------+----------------------------+
| Bucket Name | hblocktest3 |
| Prefix | stor2 |
| Endpoint | https://oos-cn.ctyunapi.cn |
| Signature Version | v2 |
| Region | |
| Storage Class | STANDARD |
| Access Key | cb22b08b1f9229f85874 |
| Object Size | 1024 KiB |
| Compression | Enabled |
+-------------------+----------------------------+
[root@hblockserver CTYUN_HBlock_Plus_3.7.0_x64]# ./stor target ls -n target6
Target Name: target6
Max Sessions: 2
Create Time: 2024-05-21 14:12:44
Number of Servers: 3
iSCSI Target: iqn.2012-08.cn.ctyunapi.oos:target6.11(192.168.0.110:3260)
iqn.2012-08.cn.ctyunapi.oos:target6.12(192.168.0.192:3260)
iqn.2012-08.cn.ctyunapi.oos:target6.13(192.168.0.102:3260)
LUN: lun6a(LUN 0)
ServerID: hblock_1,hblock_2,hblock_3
[root@hblockserver CTYUN_HBlock_Plus_3.7.0_x64]# ./stor lun ls -n lun7a
LUN Name: lun7a (LUN 0)
Storage Mode: Local
Capacity: 500 GiB
Status: Enabled
iSCSI Target: iqn.2012-08.cn.ctyunapi.oos:target7.14(192.168.0.110:3260,Active)
iqn.2012-08.cn.ctyunapi.oos:target7.15(192.168.0.192:3260,Standby)
Create Time: 2024-05-21 14:15:22
Local Storage Class: EC 2+1+16KiB
Minimum Replica Number: 2
Local Sector Size: 4096 bytes
Storage Pool: default
High Availability: ActiveStandby
Write Policy: WriteBack
WWID: 330000000727497eb
UUID: lun-uuid-3429b79f-cd7d-47cb-9fb6-c79136deb237
[root@hblockserver CTYUN_HBlock_Plus_3.7.0_x64]# ./stor target ls -n target7
Target Name: target7
Max Sessions: 1
Create Time: 2024-05-21 14:13:27
Number of Servers: 2
iSCSI Target: iqn.2012-08.cn.ctyunapi.oos:target7.14(192.168.0.110:3260)
iqn.2012-08.cn.ctyunapi.oos:target7.15(192.168.0.192:3260)
LUN: lun7a(LUN 0)
CHAP: test2,T12345678912,Enabled
ServerID: hblock_1,hblock_2
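If you only need the WWIDs for the later client-side matching step, they can be pulled out of the same "./stor lun ls -n <name>" command shown above. A minimal sketch; the loop and the grep filter are illustrative additions, not part of the HBlock CLI:
# Print only the WWID line for each volume, using the command already shown above.
for lun in lun6a lun7a; do
    printf '%s -> ' "$lun"
    ./stor lun ls -n "$lun" | grep 'WWID'
done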
Linux client
- Discover the targets for lun6a and lun7a:
[root@client ~]# iscsiadm -m discovery -t st -p 192.168.0.110
192.168.0.110:3260,1 iqn.2012-08.cn.ctyunapi.oos:target7.14
192.168.0.110:3260,1 iqn.2012-08.cn.ctyunapi.oos:target02.3
192.168.0.110:3260,1 iqn.2012-08.cn.ctyunapi.oos:target04.7
192.168.0.110:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.11
[root@client ~]# iscsiadm -m discovery -t st -p 192.168.0.192
192.168.0.192:3260,1 iqn.2012-08.cn.ctyunapi.oos:target7.15
192.168.0.192:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.12
192.168.0.192:3260,1 iqn.2012-08.cn.ctyunapi.oos:test.10
192.168.0.192:3260,1 iqn.2012-08.cn.ctyunapi.oos:target04.8
[root@client ~]# iscsiadm -m discovery -t st -p 192.168.0.102
192.168.0.102:3260,1 iqn.2012-08.cn.ctyunapi.oos:target02.4
192.168.0.102:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.13
192.168.0.102:3260,1 iqn.2012-08.cn.ctyunapi.oos:test.9
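Discovery has to be run once per portal IP. The same step can be written as a loop; a minimal sketch, assuming the three portal addresses listed in the target information above, with "iscsiadm -m node" used afterwards to review the node records that discovery created:
# Run SendTargets discovery against each HBlock portal shown above.
for portal in 192.168.0.110 192.168.0.192 192.168.0.102; do
    iscsiadm -m discovery -t st -p "$portal"
done
# Review the node records created by discovery.
iscsiadm -m node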
- Log in to the iSCSI storage.
- Log in to the iSCSI storage for lun6a (connect in the order Active target, Standby target, ColdStandby target):
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target6.12 -p 192.168.0.192:3260 -l
Logging in to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target6.12, portal: 192.168.0.192,3260] (multiple)
Login to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target6.12, portal: 192.168.0.192,3260] successful.
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target6.11 -p 192.168.0.110:3260 -l
Logging in to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target6.11, portal: 192.168.0.110,3260] (multiple)
Login to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target6.11, portal: 192.168.0.110,3260] successful.
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target6.13 -p 192.168.0.102:3260 -l
Logging in to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target6.13, portal: 192.168.0.102,3260] (multiple)
Login to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target6.13, portal: 192.168.0.102,3260] successful.
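These sessions are not restored automatically after a client reboot unless the node records are switched to automatic startup. An optional sketch, assuming a default open-iscsi installation:
# Optional: have the lun6a sessions log in automatically at boot.
for tgt in iqn.2012-08.cn.ctyunapi.oos:target6.11 \
           iqn.2012-08.cn.ctyunapi.oos:target6.12 \
           iqn.2012-08.cn.ctyunapi.oos:target6.13; do
    iscsiadm -m node -T "$tgt" -o update --name node.startup --value=automatic
done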
- Log in to the iSCSI storage for lun7a; this requires CHAP authentication:
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.14 -o update --name node.session.auth.authmethod --value=CHAP
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.14 -o update --name node.session.auth.username --value=test2
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.14 -o update --name node.session.auth.password --value=*************
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.14 -p 192.168.0.110:3260 -l
Logging in to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target7.14, portal: 192.168.0.110,3260] (multiple)
Login to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target7.14, portal: 192.168.0.110,3260] successful.
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.15 -o update --name node.session.auth.authmethod --value=CHAP
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.15 -o update --name node.session.auth.username --value=test2
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.15 -o update --name node.session.auth.password --value=*************
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.15 -p 192.168.0.192:3260 -l
Logging in to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target7.15, portal: 192.168.0.192,3260] (multiple)
Login to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target7.15, portal: 192.168.0.192,3260] successful.
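The CHAP user name (test2) comes from the server-side "./stor target ls -n target7" output; the secret is masked here. The same settings can be applied to both target7 node records in one pass; a minimal sketch, with the secret left as a placeholder you must replace:
# Apply the CHAP credentials configured on target7 to both of its node records.
CHAP_USER=test2
CHAP_SECRET='<CHAP secret configured on target7>'   # placeholder, replace before running
for tgt in iqn.2012-08.cn.ctyunapi.oos:target7.14 iqn.2012-08.cn.ctyunapi.oos:target7.15; do
    iscsiadm -m node -T "$tgt" -o update --name node.session.auth.authmethod --value=CHAP
    iscsiadm -m node -T "$tgt" -o update --name node.session.auth.username --value="$CHAP_USER"
    iscsiadm -m node -T "$tgt" -o update --name node.session.auth.password --value="$CHAP_SECRET"
done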
- Display the session status to check the current iSCSI connections:
[root@client ~]# iscsiadm -m session
tcp: [3] 192.168.0.192:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.12 (non-flash)
tcp: [4] 192.168.0.110:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.11 (non-flash)
tcp: [5] 192.168.0.102:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.13 (non-flash)
tcp: [6] 192.168.0.110:3260,1 iqn.2012-08.cn.ctyunapi.oos:target7.14 (non-flash)
tcp: [7] 192.168.0.192:3260,1 iqn.2012-08.cn.ctyunapi.oos:target7.15 (non-flash)
[root@client ~]# lsscsi
[4:0:0:0] disk CTYUN iSCSI LUN Device 1.00 /dev/sdc
[5:0:0:0] disk CTYUN iSCSI LUN Device 1.00 /dev/sdd
[6:0:0:0] disk CTYUN iSCSI LUN Device 1.00 /dev/sde
[7:0:0:0] disk CTYUN iSCSI LUN Device 1.00 /dev/sdf
[8:0:0:0] disk CTYUN iSCSI LUN Device 1.00 /dev/sdg
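To see directly which session maps to which /dev/sdX device, the session details can be printed at verbosity level 3. A minimal sketch; the grep filter is only there to shorten the output:
# Show each target together with the SCSI disk attached to its session.
iscsiadm -m session -P 3 | grep -E 'Target:|Attached scsi disk'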
- Check the WWIDs of the LUNs behind the MPIO devices and disks:
[root@client ~]# multipath -ll
mpathc (0x30000000727497eb) dm-1 CTYUN ,iSCSI LUN Device
size=500G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 7:0:0:0 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 8:0:0:0 sdg 8:96 active ghost running
mpathb (0x3fffffffc69cbabb) dm-0 CTYUN ,iSCSI LUN Device
size=500G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 4:0:0:0 sdc 8:32 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 5:0:0:0 sdd 8:48 active ghost running
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 6:0:0:0 sde 8:64 failed faulty running
[root@client ~]# ll /dev/mapper/mpathc
lrwxrwxrwx 1 root root 7 May 21 15:03 /dev/mapper/mpathc -> ../dm-1
[root@client ~]# ll /dev/mapper/mpathb
lrwxrwxrwx 1 root root 7 May 21 14:57 /dev/mapper/mpathb -> ../dm-0
[root@client ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
33fffffffc69cbabb
[root@client ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdd
33fffffffc69cbabb
[root@client ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sde
33fffffffc69cbabb
[root@client ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdf
330000000727497eb
[root@client ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdg
330000000727497eb
Note: As shown above, /dev/mapper/mpathb (/dev/sdc, /dev/sdd, /dev/sde) corresponds to the HBlock volume lun6a (WWID 33fffffffc69cbabb), and /dev/mapper/mpathc (/dev/sdf, /dev/sdg) corresponds to the HBlock volume lun7a (WWID 330000000727497eb).
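If stable, meaningful names are preferred over mpathb/mpathc, the WWIDs above can be bound to aliases in /etc/multipath.conf. A sketch under the assumption that multipathd uses the scsi_id-based WWIDs shown above; check the value that "multipath -ll" reports on your client before copying it, and reload multipathd (for example with "multipath -r") after editing:
# /etc/multipath.conf fragment (optional): bind the two HBlock LUN WWIDs to aliases.
multipaths {
    multipath {
        wwid  33fffffffc69cbabb
        alias hblock_lun6a
    }
    multipath {
        wwid  330000000727497eb
        alias hblock_lun7a
    }
}
After the reload, the devices would appear as /dev/mapper/hblock_lun6a and /dev/mapper/hblock_lun7a.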
- Work with the MPIO devices. Create a file system on each iSCSI disk and mount it on a local directory; once mounted, data can be written to it.
- Mount the iSCSI disk /dev/mapper/mpathb:
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath
sdd 8:48 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath
sde 8:64 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath
sdf 8:80 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath
sdg 8:96 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath
vda 253:0 0 40G 0 disk
├─vda1 253:1 0 4G 0 part
└─vda2 253:2 0 36G 0 part /
vdb 253:16 0 100G 0 disk
└─vdb1 253:17 0 100G 0 part /mnt/storage01
vdc 253:32 0 100G 0 disk
vdd 253:48 0 100G 0 disk
[root@client ~]# mkfs -t ext4 /dev/mapper/mpathb
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
32768000 inodes, 131072000 blocks
6553600 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2279604224
4000 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@client ~]# mkdir /mnt/disk_mpathb
[root@client ~]# mount /dev/mapper/mpathb /mnt/disk_mpathb
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sdd 8:48 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sde 8:64 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sdf 8:80 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath
sdg 8:96 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath
vda 253:0 0 40G 0 disk
├─vda1 253:1 0 4G 0 part
└─vda2 253:2 0 36G 0 part /
vdb 253:16 0 100G 0 disk
└─vdb1 253:17 0 100G 0 part /mnt/storage01
vdc 253:32 0 100G 0 disk
vdd 253:48 0 100G 0 disk
- Mount the iSCSI disk /dev/mapper/mpathc:
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sdd 8:48 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sde 8:64 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sdf 8:80 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath
sdg 8:96 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath
vda 253:0 0 40G 0 disk
├─vda1 253:1 0 4G 0 part
└─vda2 253:2 0 36G 0 part /
vdb 253:16 0 100G 0 disk
└─vdb1 253:17 0 100G 0 part /mnt/storage01
vdc 253:32 0 100G 0 disk
vdd 253:48 0 100G 0 disk
[root@client ~]# mkfs -t ext4 /dev/mapper/mpathc
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
32768000 inodes, 131072000 blocks
6553600 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2279604224
4000 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@client ~]# mkdir /mnt/disk_mpathc
[root@client ~]# mount /dev/mapper/mpathc /mnt/disk_mpathc
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sdd 8:48 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sde 8:64 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sdf 8:80 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath /mnt/disk_mpathc
sdg 8:96 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath /mnt/disk_mpathc
vda 253:0 0 40G 0 disk
├─vda1 253:1 0 4G 0 part
└─vda2 253:2 0 36G 0 part /
vdb 253:16 0 100G 0 disk
└─vdb1 253:17 0 100G 0 part /mnt/storage01
vdc 253:32 0 100G 0 disk
vdd 253:48 0 100G 0 disk
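As a quick check that the mounted volumes are writable, you can write and read back a small file on each mount point. A minimal sketch; the file names are arbitrary examples:
# Write and read back a test file on each mounted multipath device.
echo "hblock write test" > /mnt/disk_mpathb/testfile.txt
echo "hblock write test" > /mnt/disk_mpathc/testfile.txt
cat /mnt/disk_mpathb/testfile.txt /mnt/disk_mpathc/testfile.txt
df -h /mnt/disk_mpathb /mnt/disk_mpathc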
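The mounts above do not survive a reboot. If persistent mounts are wanted, /etc/fstab entries with the _netdev option (so mounting waits until the network and iSCSI sessions are up) are the usual approach; this also assumes the node records are set to automatic startup as sketched earlier. A minimal sketch:
# /etc/fstab fragment (optional): mount the multipath devices at boot.
# _netdev defers mounting until the network is up; nofail keeps boot from
# failing if the storage is temporarily unreachable.
/dev/mapper/mpathb  /mnt/disk_mpathb  ext4  _netdev,nofail  0 0
/dev/mapper/mpathc  /mnt/disk_mpathc  ext4  _netdev,nofail  0 0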