Configuration Example
Last updated: 2025-04-23 15:02:33
This section provides an example of mounting volumes of HBlock Cluster Edition from a Linux client.
Scenario
- A Linux client needs to connect to volumes on HBlock Cluster Edition.
- The volumes to connect are lun6a and lun7a; lun7a has CHAP authentication enabled.
Prerequisites
- On the client that needs to connect to HBlock Cluster Edition, the preparation described in the prerequisites of the client configuration section has been completed.
- On the HBlock server side, the volumes lun6a and lun7a have been created successfully.
Procedure
HBlock server side
Query the details of the LUNs to be connected and of their corresponding iSCSI targets.
[root@hblockserver CTYUN_HBlock_Plus_3.9.0_x64]# ./stor lun ls -n lun6a
LUN Name: lun6a (LUN 0)
Storage Mode: Cache
Capacity: 500 GiB
Status: Normal
Auto Failback: Enabled
iSCSI Target: iqn.2012-08.cn.ctyunapi.oos:target6.12(192.168.0.192:3260,Active)
iqn.2012-08.cn.ctyunapi.oos:target6.11(192.168.0.110:3260,Standby)
iqn.2012-08.cn.ctyunapi.oos:target6.13(192.168.0.102:3260,ColdStandby)
Create Time: 2024-05-21 14:14:48
Local Storage Class: EC 2+1+16KiB
Minimum Replica Number: 2
Redundancy Overlap: 1
Local Sector Size: 4096 bytes
Storage Pool: default
High Availability: ActiveStandby
Write Policy: WriteBack
WWID: 33fffffffc69cbabb
UUID: lun-uuid-40731bfd-d0e5-49fb-9784-1d825635daf8
Object Storage Info:
+-------------------+----------------------------+
| Provider | OOS |
| Bucket Name | hblocktest3 |
| Prefix | stor2 |
| Endpoint | //oos-cn.ctyunapi.cn |
| Signature Version | v2 |
| Region | |
| Storage Class | STANDARD |
| Access Key | cb22b08b1f9229f85874 |
| Object Size | 1024 KiB |
| Compression | Enabled |
+-------------------+----------------------------+
[root@hblockserver CTYUN_HBlock_Plus_3.9.0_x64]# ./stor target ls -n target6
Target Name: target6
Max Sessions: 2
Create Time: 2024-05-21 14:12:44
Number of Servers: 3
iSCSI Target: iqn.2012-08.cn.ctyunapi.oos:target6.11(192.168.0.110:3260)
iqn.2012-08.cn.ctyunapi.oos:target6.12(192.168.0.192:3260)
iqn.2012-08.cn.ctyunapi.oos:target6.13(192.168.0.102:3260)
LUN: lun6a(LUN 0)
Reclaim Policy: Retain
ServerID: hblock_1,hblock_2,hblock_3
[root@hblockserver CTYUN_HBlock_Plus_3.9.0_x64]# ./stor lun ls -n lun7a
LUN Name: lun7a (LUN 0)
Storage Mode: Local
Capacity: 500 GiB
Status: Normal
Auto Failback: Enabled
iSCSI Target: iqn.2012-08.cn.ctyunapi.oos:target7.14(192.168.0.110:3260,Active)
iqn.2012-08.cn.ctyunapi.oos:target7.15(192.168.0.192:3260,Standby)
Create Time: 2024-05-21 14:15:22
Local Storage Class: EC 2+1+16KiB
Minimum Replica Number: 2
Redundancy Overlap: 1
Local Sector Size: 4096 bytes
Storage Pool: default
High Availability: ActiveStandby
Write Policy: WriteBack
WWID: 330000000727497eb
UUID: lun-uuid-3429b79f-cd7d-47cb-9fb6-c79136deb237
Snapshot Numbers: 0
[root@hblockserver CTYUN_HBlock_Plus_3.9.0_x64]# ./stor target ls -n target7
Target Name: target7
Max Sessions: 1
Create Time: 2024-05-21 14:13:27
Number of Servers: 2
iSCSI Target: iqn.2012-08.cn.ctyunapi.oos:target7.14(192.168.0.110:3260)
iqn.2012-08.cn.ctyunapi.oos:target7.15(192.168.0.192:3260)
LUN: lun7a(LUN 0)
Reclaim Policy: Retain
CHAP: test2,T12345678912,Enabled
ServerID: hblock_1,hblock_2
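When scripting the client-side setup, the IQN and portal pairs can be extracted from the listings above with a standard text filter. A minimal sketch, assuming only the iqn. prefix and the parenthesized portal format shown in the stor output above:
[root@hblockserver CTYUN_HBlock_Plus_3.9.0_x64]# ./stor target ls -n target6 | grep -o 'iqn\.[^(]*([^)]*)'
iqn.2012-08.cn.ctyunapi.oos:target6.11(192.168.0.110:3260)
iqn.2012-08.cn.ctyunapi.oos:target6.12(192.168.0.192:3260)
iqn.2012-08.cn.ctyunapi.oos:target6.13(192.168.0.102:3260)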
Linux client
- Discover the targets for lun6a and lun7a:
[root@client ~]# iscsiadm -m discovery -t st -p 192.168.0.110
192.168.0.110:3260,1 iqn.2012-08.cn.ctyunapi.oos:target7.14
192.168.0.110:3260,1 iqn.2012-08.cn.ctyunapi.oos:target02.3
192.168.0.110:3260,1 iqn.2012-08.cn.ctyunapi.oos:target04.7
192.168.0.110:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.11
[root@client ~]# iscsiadm -m discovery -t st -p 192.168.0.192
192.168.0.192:3260,1 iqn.2012-08.cn.ctyunapi.oos:target7.15
192.168.0.192:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.12
192.168.0.192:3260,1 iqn.2012-08.cn.ctyunapi.oos:test.10
192.168.0.192:3260,1 iqn.2012-08.cn.ctyunapi.oos:target04.8
[root@client ~]# iscsiadm -m discovery -t st -p 192.168.0.102
192.168.0.102:3260,1 iqn.2012-08.cn.ctyunapi.oos:target02.4
192.168.0.102:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.13
192.168.0.102:3260,1 iqn.2012-08.cn.ctyunapi.oos:test.9
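Discovery also records each target/portal pair in the local open-iscsi node database; before logging in, the recorded entries can be listed to confirm them (output filtered to the target6 records for brevity):
[root@client ~]# iscsiadm -m node | grep target6
192.168.0.110:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.11
192.168.0.192:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.12
192.168.0.102:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.13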
- Log in to the iSCSI storage.
  - Log in to the iSCSI storage for lun6a (connect in the order Active Target, Standby Target, ColdStandby Target):
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target6.12 -p 192.168.0.192:3260 -l
Logging in to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target6.12, portal: 192.168.0.192,3260] (multiple)
Login to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target6.12, portal: 192.168.0.192,3260] successful.
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target6.11 -p 192.168.0.110:3260 -l
Logging in to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target6.11, portal: 192.168.0.110,3260] (multiple)
Login to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target6.11, portal: 192.168.0.110,3260] successful.
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target6.13 -p 192.168.0.102:3260 -l
Logging in to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target6.13, portal: 192.168.0.102,3260] (multiple)
Login to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target6.13, portal: 192.168.0.102,3260] successful.
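If the lun6a sessions should be re-established automatically after a client reboot, the node records can be switched from manual to automatic startup. A minimal sketch for one portal (repeat for the other two; this assumes the iscsi service is enabled on the client):
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target6.12 -p 192.168.0.192:3260 -o update -n node.startup -v automatic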
  - Log in to the iSCSI storage for lun7a; CHAP authentication is required:
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.14 -o update --name node.session.auth.authmethod --value=CHAP
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.14 -o update --name node.session.auth.username --value=test2
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.14 -o update --name node.session.auth.password --value=*************
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.14 -p 192.168.0.110:3260 -l
Logging in to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target7.14, portal: 192.168.0.110,3260] (multiple)
Login to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target7.14, portal: 192.168.0.110,3260] successful.
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.15 -o update --name node.session.auth.authmethod --value=CHAP
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.15 -o update --name node.session.auth.username --value=test2
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.15 -o update --name node.session.auth.password --value=*************
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.15 -p 192.168.0.192:3260 -l
Logging in to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target7.15, portal: 192.168.0.192,3260] (multiple)
Login to [iface: default, target: iqn.2012-08.cn.ctyunapi.oos:target7.15, portal: 192.168.0.192,3260] successful.
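To confirm that the CHAP settings were written to the node record before logging in, the record can be printed and filtered (the password is stored in the node database as well, so restrict access to its directory, commonly /var/lib/iscsi/nodes or /etc/iscsi/nodes):
[root@client ~]# iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target7.14 -p 192.168.0.110:3260 | grep -E 'authmethod|username'
node.session.auth.authmethod = CHAP
node.session.auth.username = test2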
- Display the sessions to check the current iSCSI connections:
[root@client ~]# iscsiadm -m session
tcp: [3] 192.168.0.192:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.12 (non-flash)
tcp: [4] 192.168.0.110:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.11 (non-flash)
tcp: [5] 192.168.0.102:3260,1 iqn.2012-08.cn.ctyunapi.oos:target6.13 (non-flash)
tcp: [6] 192.168.0.110:3260,1 iqn.2012-08.cn.ctyunapi.oos:target7.14 (non-flash)
tcp: [7] 192.168.0.192:3260,1 iqn.2012-08.cn.ctyunapi.oos:target7.15 (non-flash)
[root@client ~]# lsscsi
[4:0:0:0] disk CTYUN iSCSI LUN Device 1.00 /dev/sdc
[5:0:0:0] disk CTYUN iSCSI LUN Device 1.00 /dev/sdd
[6:0:0:0] disk CTYUN iSCSI LUN Device 1.00 /dev/sde
[7:0:0:0] disk CTYUN iSCSI LUN Device 1.00 /dev/sdf
[8:0:0:0] disk CTYUN iSCSI LUN Device 1.00 /dev/sdg
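The mapping from each session to its sdX device can also be read from the detailed session view; printlevel 3 lists the attached SCSI devices under each target (output abbreviated to the relevant lines):
[root@client ~]# iscsiadm -m session -P 3 | grep -E 'Target:|Attached scsi disk'
Target: iqn.2012-08.cn.ctyunapi.oos:target6.12 (non-flash)
Attached scsi disk sdc State: running
...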
- Check the MPIO devices and the WWID of the LUN behind each disk:
[root@client ~]# multipath -ll
mpathc (0x30000000727497eb) dm-1 CTYUN ,iSCSI LUN Device
size=500G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 7:0:0:0 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 8:0:0:0 sdg 8:96 active ghost running
mpathb (0x3fffffffc69cbabb) dm-0 CTYUN ,iSCSI LUN Device
size=500G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 4:0:0:0 sdc 8:32 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 5:0:0:0 sdd 8:48 active ghost running
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 6:0:0:0 sde 8:64 failed faulty running
[root@client ~]# ll /dev/mapper/mpathc
lrwxrwxrwx 1 root root 7 May 21 15:03 /dev/mapper/mpathc -> ../dm-1
[root@client ~]# ll /dev/mapper/mpathb
lrwxrwxrwx 1 root root 7 May 21 14:57 /dev/mapper/mpathb -> ../dm-0
[root@client ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
33fffffffc69cbabb
[root@client ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdd
33fffffffc69cbabb
[root@client ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sde
33fffffffc69cbabb
[root@client ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdf
330000000727497eb
[root@client ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdg
330000000727497eb
Note: As the output shows, /dev/mapper/mpathb (/dev/sdc, /dev/sdd, /dev/sde) corresponds to HBlock volume lun6a (volume WWID 33fffffffc69cbabb), and /dev/mapper/mpathc (/dev/sdf, /dev/sdg) corresponds to HBlock volume lun7a (volume WWID 330000000727497eb).
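To keep device names stable across reboots, the WWIDs above can be bound to fixed aliases in /etc/multipath.conf. A minimal sketch, assuming the default multipathd setup; the alias names hblock_lun6a and hblock_lun7a are hypothetical:
multipaths {
    multipath {
        wwid  33fffffffc69cbabb
        alias hblock_lun6a
    }
    multipath {
        wwid  330000000727497eb
        alias hblock_lun7a
    }
}
After editing the file, reload the daemon (for example, systemctl reload multipathd) so the devices reappear as /dev/mapper/hblock_lun6a and /dev/mapper/hblock_lun7a.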
- Operate the MPIO devices.
  Mount the iSCSI disks to local directories; after mounting, data can be written to them.
  - Mount the iSCSI disk /dev/mapper/mpathb:
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath
sdd 8:48 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath
sde 8:64 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath
sdf 8:80 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath
sdg 8:96 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath
vda 253:0 0 40G 0 disk
├─vda1 253:1 0 4G 0 part
└─vda2 253:2 0 36G 0 part /
vdb 253:16 0 100G 0 disk
└─vdb1 253:17 0 100G 0 part /mnt/storage01
vdc 253:32 0 100G 0 disk
vdd 253:48 0 100G 0 disk
[root@client ~]# mkfs -t ext4 /dev/mapper/mpathb
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
32768000 inodes, 131072000 blocks
6553600 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2279604224
4000 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@client ~]# mkdir /mnt/disk_mpathb
[root@client ~]# mount /dev/mapper/mpathb /mnt/disk_mpathb
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sdd 8:48 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sde 8:64 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sdf 8:80 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath
sdg 8:96 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath
vda 253:0 0 40G 0 disk
├─vda1 253:1 0 4G 0 part
└─vda2 253:2 0 36G 0 part /
vdb 253:16 0 100G 0 disk
└─vdb1 253:17 0 100G 0 part /mnt/storage01
vdc 253:32 0 100G 0 disk
vdd 253:48 0 100G 0 disk
  - Mount the iSCSI disk /dev/mapper/mpathc:
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sdd 8:48 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sde 8:64 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sdf 8:80 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath
sdg 8:96 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath
vda 253:0 0 40G 0 disk
├─vda1 253:1 0 4G 0 part
└─vda2 253:2 0 36G 0 part /
vdb 253:16 0 100G 0 disk
└─vdb1 253:17 0 100G 0 part /mnt/storage01
vdc 253:32 0 100G 0 disk
vdd 253:48 0 100G 0 disk
[root@client ~]# mkfs -t ext4 /dev/mapper/mpathc
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
32768000 inodes, 131072000 blocks
6553600 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2279604224
4000 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@client ~]# mkdir /mnt/disk_mpathc
[root@client ~]# mount /dev/mapper/mpathc /mnt/disk_mpathc
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sdd 8:48 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sde 8:64 0 500G 0 disk
└─mpathb 252:0 0 500G 0 mpath /mnt/disk_mpathb
sdf 8:80 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath /mnt/disk_mpathc
sdg 8:96 0 500G 0 disk
└─mpathc 252:1 0 500G 0 mpath /mnt/disk_mpathc
vda 253:0 0 40G 0 disk
├─vda1 253:1 0 4G 0 part
└─vda2 253:2 0 36G 0 part /
vdb 253:16 0 100G 0 disk
└─vdb1 253:17 0 100G 0 part /mnt/storage01
vdc 253:32 0 100G 0 disk
vdd 253:48 0 100G 0 disk
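To make the mounts persist across reboots, corresponding entries can be added to /etc/fstab. A minimal sketch; the _netdev option defers mounting until the network and iSCSI sessions are up, and using the multipath devices (or WWID-based aliases as sketched earlier) keeps the entries stable:
/dev/mapper/mpathb  /mnt/disk_mpathb  ext4  defaults,_netdev  0 0
/dev/mapper/mpathc  /mnt/disk_mpathc  ext4  defaults,_netdev  0 0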