bluestore_min_alloc_size not getting picked up by ceph daemons
I am trying to set bluestore_min_alloc_size to 4096, but no matter how I apply the setting it is not picked up by the daemons. I have also tried restarting all the daemon pods after applying the setting, with no effect.
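For reference, the values below were written into the monitor config database; from the Rook toolbox pod that would look something like the following (the exact commands originally used are not shown, so this is illustrative):

ceph config set osd bluestore_min_alloc_size 4096
ceph config set osd bluestore_min_alloc_size_hdd 4096
ceph config set osd bluestore_min_alloc_size_ssd 4096

The config database reports the values as set: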
[root@rook-ceph-tools-55c94c6786-x88d2 /]# ceph config get osd bluestore_min_alloc_size
4096
[root@rook-ceph-tools-55c94c6786-x88d2 /]# ceph config get osd.0 bluestore_min_alloc_size
4096
[root@rook-ceph-tools-55c94c6786-x88d2 /]# ceph config get osd.0 bluestore_min_alloc_size_hdd
4096
[root@rook-ceph-tools-55c94c6786-x88d2 /]# ceph config get osd.0 bluestore_min_alloc_size_ssd
4096
[root@rook-ceph-tools-55c94c6786-x88d2 /]# ceph config dump
WHO MASK LEVEL OPTION VALUE RO
global basic log_to_file false
global advanced mon_allow_pool_delete true
global advanced mon_cluster_log_file
global advanced mon_pg_warn_min_per_osd 0
global advanced osd_pool_default_pg_autoscale_mode on
global advanced osd_scrub_auto_repair true
global advanced rbd_default_features 3
mon advanced auth_allow_insecure_global_id_reclaim false
mgr advanced mgr/balancer/active true
mgr advanced mgr/balancer/mode upmap
mgr.a advanced mgr/dashboard/server_port 8443 *
mgr.a advanced mgr/dashboard/ssl true *
mgr.a advanced mgr/dashboard/ssl_server_port 8443 *
osd advanced bluestore_min_alloc_size 4096 *
osd.0 advanced bluestore_min_alloc_size_hdd 4096 *
osd.0 advanced bluestore_min_alloc_size_ssd 4096 *
mds.iondfs-a basic mds_join_fs iondfs
mds.iondfs-b basic mds_join_fs iondfs
[root@rook-ceph-tools-55c94c6786-x88d2 /]# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 43 GiB 39 GiB 4.0 GiB 5.0 GiB 11.45
TOTAL 43 GiB 39 GiB 4.0 GiB 5.0 GiB 11.45
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 0 B 0 0 B 0 12 GiB
iondfs-metadata 2 32 240 MiB 128 241 MiB 0.64 36 GiB
iondfs-data0 3 32 209 MiB 60.80k 3.8 GiB 9.41 36 GiB
As you can see, 60.80k objects store only 209 MiB but use 3.8 GiB: 64 KB × 60,800 objects ≈ 3.89 GB, which matches the USED figure.
This indicates the OSDs are still allocating in 64 KiB blocks rather than 4 KiB.
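A more direct check is the OSD metadata: on recent Ceph releases (Pacific and later) each OSD reports the allocation size it was actually formatted with, independent of what the config database says (osd id 0 is just an example):

ceph osd metadata 0 | grep min_alloc

If this prints something like "bluestore_min_alloc_size": "65536", the OSD is still using 64 KiB on disk.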
The problem is that the option bluestore_min_alloc_size cannot be changed after an OSD has been created: the value is baked into BlueStore when the OSD is formatted, so updating the monitor config database afterwards has no effect on existing OSDs. You need to have the configuration in place before the cluster (and therefore the OSDs) is created.
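If the cluster already exists, each OSD has to be rebuilt to pick up the new value. A rough, destructive sketch for a Rook cluster (osd id 0 and the device path are placeholders; make sure the data is safely replicated elsewhere first):

# stop the OSD pod
kubectl -n rook-ceph scale deployment rook-ceph-osd-0 --replicas=0
# from the toolbox pod: remove the OSD from the cluster
ceph osd purge 0 --yes-i-really-mean-it
# on the host: wipe the old BlueStore data so the operator can re-provision it
sgdisk --zap-all /dev/sdX

For a fresh deployment, put the override in place before creating the CephCluster: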
kubectl create namespace rook-ceph
Save the following as ceph-conf.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [osd]
    bluestore_min_alloc_size = 4096
    bluestore_min_alloc_size_hdd = 4096
    bluestore_min_alloc_size_ssd = 4096
kubectl apply -f ceph-conf.yaml
Now create the ceph cluster.
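Once the OSDs come up, you can confirm the override reached the daemons: Rook propagates the rook-config-override ConfigMap into /etc/ceph/ceph.conf inside the daemon pods (the deployment name below is an example):

kubectl -n rook-ceph exec deploy/rook-ceph-osd-0 -- cat /etc/ceph/ceph.conf

With 4 KiB allocation in effect, ceph df should show USED roughly tracking STORED for small objects, instead of the 64 KiB-per-object inflation shown above.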