Try to use iSCSI volume in Kubernetes Cluster but got "wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program"
Because of potential problems with NFS (ref), I tried to set up an iSCSI volume mount in my K8S cluster, but I get this error:
MountVolume.MountDevice failed for volume "iscsipd-rw" : mount failed: exit status 32
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/iscsi/iface-default/192.168.20.100:3260-iqn.2020-09.com.xxxx:yyyy.testtarget-lun-1 --scope -- mount -t ext4 -o defaults /dev/disk/by-path/ip-192.168.20.100:3260-iscsi-iqn.2020-09.com.xxxx:yyyy.testtarget-lun-1 /var/lib/kubelet/plugins/kubernetes.io/iscsi/iface-default/192.168.20.100:3260-iqn.2020-09.com.xxxx:yyyy.testtarget-lun-1
mount: /var/lib/kubelet/plugins/kubernetes.io/iscsi/iface-default/192.168.20.100:3260-iqn.2020-09.com.xxxx:yyyy.testtarget-lun-1: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error.
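For debugging, the mount kubelet attempts can also be reproduced by hand on the node (a sketch, reusing the by-path link from the log above; the kernel log usually tells the real reason):
$ sudo mkdir -p /mnt/iscsi-test
$ sudo mount -t ext4 -o defaults /dev/disk/by-path/ip-192.168.20.100:3260-iscsi-iqn.2020-09.com.xxxx:yyyy.testtarget-lun-1 /mnt/iscsi-test
$ dmesg | tail   # e.g. "VFS: Can't find ext4 filesystem" would point at a missing or foreign superblock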
I initially created the iSCSI initiator following this document, and after errors in different situations I retried various settings many times.
The iSCSI initiator connection looks fine:
Command (m for help): p
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 2CDE61DE-F57A-4C0B-AFB6-9DD7040A8BBD
Tue Apr 13 15:41:57 i@kt04:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
├─sda3 8:3 0 49G 0 part
│ └─ubuntu--vg-ubuntu--lv 253:0 0 63G 0 lvm /
└─sda4 8:4 0 14G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 63G 0 lvm /
sdb 8:16 0 1G 0 disk
sr0 11:0 1 1024M 0 rom
Tue Apr 13 15:45:33 i@kt04:~$ sudo ls -l /dev/disk/by-path/
total 0
lrwxrwxrwx 1 root root 9 Apr 13 15:41 ip-192.168.20.100:3260-iscsi-iqn.2020-09.com.xxxx:yyyy.testtarget-lun-1 -> ../../sdb
lrwxrwxrwx 1 root root 9 Apr 13 11:07 pci-0000:00:10.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 13 11:07 pci-0000:00:10.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 13 11:07 pci-0000:00:10.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 13 11:07 pci-0000:00:10.0-scsi-0:0:0:0-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Apr 13 11:07 pci-0000:00:10.0-scsi-0:0:0:0-part4 -> ../../sda4
lrwxrwxrwx 1 root root 9 Apr 13 01:55 pci-0000:02:01.0-ata-1 -> ../../sr0
Tue Apr 13 15:46:18 i@kt04:~$ sudo iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 2.0-874
Target: iqn.2020-09.com.xxxx:yyyy.testtarget (non-flash)
Current Portal: 192.168.20.100:3260,1
Persistent Portal: 192.168.20.100:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.2020-09.com.xxxx:yyyy.testtarget
Iface IPaddress: 192.168.30.24
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
username: <empty>
password: ********
username_in: <empty>
password_in: ********
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 33 State: running
scsi33 Channel 00 Id 0 Lun: 1
Attached scsi disk sdb State: running
Tue Apr 13 15:57:55 i@kt04:~$ sudo systemctl status open-iscsi
● open-iscsi.service - Login to default iSCSI targets
Loaded: loaded (/lib/systemd/system/open-iscsi.service; enabled; vendor preset: enabled)
Active: active (exited) since Tue 2021-04-13 11:03:20 CST; 5h 6min ago
Docs: man:iscsiadm(8)
man:iscsid(8)
Process: 1352 ExecStop=/lib/open-iscsi/logout-all.sh (code=exited, status=0/SUCCESS)
Process: 1351 ExecStop=/bin/sync (code=exited, status=0/SUCCESS)
Process: 1301 ExecStop=/lib/open-iscsi/umountiscsi.sh (code=exited, status=0/SUCCESS)
Process: 1416 ExecStart=/lib/open-iscsi/activate-storage.sh (code=exited, status=0/SUCCESS)
Process: 1383 ExecStart=/sbin/iscsiadm -m node --loginall=automatic (code=exited, status=0/SUCCESS)
Main PID: 1416 (code=exited, status=0/SUCCESS)
Apr 13 11:03:20 kt04 systemd[1]: Starting Login to default iSCSI targets...
Apr 13 11:03:20 kt04 iscsiadm[1383]: Logging in to [iface: default, target: iqn.2020-09.com.xxxx:yyyy.testtarget, portal: 192.168.20.100,3260] (multiple)
Apr 13 11:03:20 kt04 iscsiadm[1383]: Login to [iface: default, target: iqn.2020-09.com.xxxx:yyyy.testtarget, portal: 192.168.20.100,3260] successful.
Apr 13 11:03:20 kt04 systemd[1]: Started Login to default iSCSI targets.
Tue Apr 13 16:09:28 i@kt04:~$ sudo systemctl status iscsid
● iscsid.service - iSCSI initiator daemon (iscsid)
Loaded: loaded (/lib/systemd/system/iscsid.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-04-13 11:03:20 CST; 5h 6min ago
Docs: man:iscsid(8)
Process: 1374 ExecStart=/sbin/iscsid (code=exited, status=0/SUCCESS)
Process: 1364 ExecStartPre=/lib/open-iscsi/startup-checks.sh (code=exited, status=0/SUCCESS)
Main PID: 1377 (iscsid)
Tasks: 2 (limit: 4915)
CGroup: /system.slice/iscsid.service
├─1376 /sbin/iscsid
└─1377 /sbin/iscsid
Apr 13 11:03:20 kt04 systemd[1]: Starting iSCSI initiator daemon (iscsid)...
Apr 13 11:03:20 kt04 iscsid[1374]: iSCSI logger with pid=1376 started!
Apr 13 11:03:20 kt04 systemd[1]: Started iSCSI initiator daemon (iscsid).
Apr 13 11:03:21 kt04 iscsid[1376]: iSCSI daemon with pid=1377 started!
Apr 13 11:03:21 kt04 iscsid[1376]: Connection2:0 to [target: iqn.2020-09.com.xxxx:yyyy.testtarget, portal: 192.168.20.100,3260] through [iface: default] is operational now
Tue Apr 13 16:21:21 i@kt04:~$ cat /proc/scsi/scsi
Attached devices:
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: NECVMWar Model: VMware SATA CD00 Rev: 1.00
Type: CD-ROM ANSI SCSI revision: 05
Host: scsi32 Channel: 00 Id: 00 Lun: 00
Vendor: VMware Model: Virtual disk Rev: 2.0
Type: Direct-Access ANSI SCSI revision: 06
Host: scsi33 Channel: 00 Id: 00 Lun: 01
Vendor: SYNOLOGY Model: iSCSI Storage Rev: 4.0
Type: Direct-Access ANSI SCSI revision: 05
I have tried using sdb as a raw disk as shown above; I also created an sdb1 partition, another time put an ext4 filesystem on it (and once even created an LVM volume), which led to the error "mount failed: exit status 32 ... /dev/sdb already mounted or mount point busy".
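Worth checking at this point is which on-disk signatures the node actually sees on the LUN, since leftovers from earlier attempts survive on the target (a sketch):
$ sudo blkid -p /dev/sdb   # low-level probe: prints the filesystem type, or PTTYPE="gpt" for a partition table
$ sudo wipefs /dev/sdb     # without -a it only lists the signatures it finds, erasing nothing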
The Pod YAML I used:
apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  nodeName: kt04
  containers:
  - name: iscsipd-rw
    image: kubernetes/pause
    volumeMounts:
    - mountPath: "/mnt/iscsipd"
      name: iscsipd-rw
  restartPolicy: Always
  volumes:
  - name: iscsipd-rw
    iscsi:
      targetPortal: 192.168.20.100:3260
      iqn: iqn.2020-09.com.xxxx:yyyy.testtarget
      lun: 1
      fsType: ext4
      readOnly: false
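As far as I understand the in-tree iscsi plugin, kubelet hands the whole device to a "safe format and mount" step: it runs mkfs with the given fsType only when the device carries no signature at all, and it never touches partitions such as sdb1. So a GPT label, or an ext4 inside a partition, makes the plain ext4 mount of /dev/sdb fail exactly as in the log. If one wanted to pre-format the LUN instead of letting kubelet do it, it would have to be done on the whole device (a sketch):
$ sudo mkfs.ext4 /dev/disk/by-path/ip-192.168.20.100:3260-iscsi-iqn.2020-09.com.xxxx:yyyy.testtarget-lun-1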
In my last attempt I used fdisk to create an sdb1 partition with ext4 on it, but did not mount it under /mnt; the result is below, and I still get the same "wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program" error:
Wed Apr 14 11:25:06 ice@kt04:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
├─sda3 8:3 0 49G 0 part
│ └─ubuntu--vg-ubuntu--lv 253:0 0 63G 0 lvm /
└─sda4 8:4 0 14G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 63G 0 lvm /
sdb 8:16 0 1G 0 disk
└─sdb1 8:17 0 1023M 0 part
sr0 11:0 1 1024M 0 rom
In the NAS (a Synology RS1221) iSCSI configuration panel, the Target shows as connected (the LUN is Thick Provisioned).
Bare-metal K8S version: 1.19.6
iscsiadm version 2.0-874
open-iscsi version 2.0.874-5ubuntu2.10
Can anyone suggest anything I could try to make this work, or point out what I am doing wrong?
Problem solved. Thanks to [Long Wu Yuan] in #kubernetes-users on Slack.
Information collected before the problem was solved:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.9G 0 7.9G 0% /dev
tmpfs 1.6G 3.5M 1.6G 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 62G 10G 49G 18% /
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda2 976M 146M 764M 16% /boot
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
├─sda3 8:3 0 49G 0 part
│ └─ubuntu--vg-ubuntu--lv 253:0 0 63G 0 lvm /
└─sda4 8:4 0 14G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 63G 0 lvm /
sdb 8:16 0 1G 0 disk
└─sdb1 8:17 0 1023M 0 part
sr0 11:0 1 1024M 0 rom
Then I deleted the pod:
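(i.e., using the pod name from the manifest above:)
$ kubectl delete pod iscsipd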
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.9G 0 7.9G 0% /dev
tmpfs 1.6G 3.5M 1.6G 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 62G 10G 49G 18% /
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda2 976M 146M 764M 16% /boot
I then ran dd to zero out the beginning of the disk:
$ sudo dd if=/dev/zero of=/dev/sdb bs=1M count=512 status=progress
512+0 records in
512+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 4.85588 s, 111 MB/s
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
├─sda3 8:3 0 49G 0 part
│ └─ubuntu--vg-ubuntu--lv 253:0 0 63G 0 lvm /
└─sda4 8:4 0 14G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 63G 0 lvm /
sdb 8:16 0 1G 0 disk
sr0 11:0 1 1024M 0 rom
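Zeroing the first 512 MiB destroyed the GPT label and any stale superblocks, so the node now sees a completely blank device and kubelet can create the ext4 filesystem itself before mounting. A more targeted way to get the same result (a suggestion, not what I actually ran) would be:
$ sudo wipefs -a /dev/sdb    # erase every signature wipefs detects (GPT label, old superblocks)
$ sudo partprobe /dev/sdb    # have the kernel re-read the now-empty partition table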
Then I applied the pod again, and it worked! df -h and lsblk after the pod is running:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.9G 0 7.9G 0% /dev
tmpfs 1.6G 3.7M 1.6G 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 62G 10G 49G 17% /
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda2 976M 146M 764M 16% /boot
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
├─sda3 8:3 0 49G 0 part
│ └─ubuntu--vg-ubuntu--lv 253:0 0 63G 0 lvm /
└─sda4 8:4 0 14G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 63G 0 lvm /
sdb 8:16 0 1G 0 disk
sr0 11:0 1 1024M 0 rom
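To verify the volume from the node, the kubelet mount can also be listed directly (a sketch; the path pattern is the one from the original error message):
$ findmnt -t ext4 | grep iscsi
$ mount | grep iqn.2020-09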
As Long said, I should have understood the "bad superblock" error message better and worked out the fix myself, or figured out what was misconfigured in my environment for this iSCSI volume case.