Ceph EC2 install failed to create OSD

I am trying to install Ceph on two EC2 instances, following the guide, but I cannot create the OSDs. My cluster has only two servers, and creating the partitions with this command fails:

ceph-deploy osd create  host:xvdb:/dev/xvdb1 host:xvdf:/dev/xvdf1

[WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -K -f -- /dev/xvdf1
[WARNIN] can't get size of data subvolume
[WARNIN] Usage: mkfs.xfs
[WARNIN] /* blocksize */        [-b log=n|size=num]
[WARNIN] /* metadata */     [-m crc=0|1,finobt=0|1,uuid=xxx]
[WARNIN] /* data subvol */  [-d agcount=n,agsize=n,file,name=xxx,size=num,
[WARNIN]                (sunit=value,swidth=value|su=num,sw=num|noalign),
[WARNIN]                sectlog=n|sectsize=num
[WARNIN] /* force overwrite */  [-f]
[WARNIN] /* inode size */   [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
[WARNIN]                projid32bit=0|1]
[WARNIN] /* no discard */   [-K]
[WARNIN] /* log subvol */   [-l agnum=n,internal,size=num,logdev=xxx,version=n
[WARNIN]                sunit=value|su=num,sectlog=n|sectsize=num,
[WARNIN]                lazy-count=0|1]
[WARNIN] /* label */        [-L label (maximum 12 characters)]
[WARNIN] /* naming */       [-n log=n|size=num,version=2|ci,ftype=0|1]
[WARNIN] /* no-op info only */  [-N]
[WARNIN] /* prototype file */   [-p fname]
[WARNIN] /* quiet */        [-q]
[WARNIN] /* realtime subvol */  [-r extsize=num,size=num,rtdev=xxx]
[WARNIN] /* sectorsize */   [-s log=n|size=num]
[WARNIN] /* version */      [-V]
[WARNIN]            devicename
[WARNIN] <devicename> is required unless -d name=xxx is given.
[WARNIN] <num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
[WARNIN]       xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
[WARNIN] <value> is xxx (512 byte blocks).
[WARNIN] '/sbin/mkfs -t xfs -K -f -- /dev/xvdf1' failed with status code 1
[ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/xvdf /dev/xvdf1
[ceph_deploy][ERROR ] GenericError: Failed to create 2 OSDs

The same error occurs on both disks where I am trying to create the OSDs. This is the ceph.conf file I am using:

fsid = b3901613-0b17-47d2-baaa-26859c457737
mon_initial_members = host1,host2
mon_host = host1,host2
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd mkfs options xfs = -K
public network = ip.ip.ip.0/24, ip.ip.ip.0/24
cluster network = ip.ip.0.0/24
osd pool default size = 2 # Write an object 2 times
osd pool default min size = 1 # Allow writing 1 copy in a degraded state
osd pool default pg num = 256
osd pool default pgp num = 256
osd crush chooseleaf type = 3

Does anyone know how to solve this issue?

>>ceph-deploy osd create host:xvdb:/dev/xvdb1 host:xvdf:/dev/xvdf1

You need to use the data partition's device name and the journal partition's device name, like this:

ceph-deploy osd create host:/dev/xvdb1:/dev/xvdb2 host:/dev/xvdf1:/dev/xvdf2

Also, since you created these partitions manually, you need to change the ownership of the devices to ceph:ceph for ceph-deploy to work. For example:

chown ceph:ceph /dev/xvdb*
chown ceph:ceph /dev/xvdf*

Note: if you do not specify a journal disk, i.e. [/dev/xvdb2 or /dev/xvdf2], ceph-deploy will use a file instead of a disk partition to store the journal.
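If you do let ceph-deploy fall back to a file-based journal, its size is controlled by the osd journal size option in ceph.conf; a sketch (the 5120 MB value is illustrative, not taken from the question):

```
[osd]
osd journal size = 5120    # journal size in MB; value is illustrative
```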

-- Deepak