It seems that mongoDB compression does not work properly
Nginx + php-fpm + mongoDB(+mongodb php-lib)
I tried to compare mongoDB's compression ratios, but the results were not what I expected.
Here is my experiment.
/etc/mongod.conf
# mongod.conf //default setting

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

#security:
#operationProfiling:
#replication:
#sharding:
## Enterprise-Only Options:
#auditLog:
#snmp:
Setting compression when creating the collection in the mongoDB shell:
mongoDB shell> db.createCollection( "test",{storageEngine:{wiredTiger:{configString:'block_compressor=none, prefix_compression=false'}}})
There are six compression option combinations in total:
block_compressor = none or snappy or zlib // prefix_compression = false or true
When checked with db.printCollectionStats(), the options were applied correctly.
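For reference, the same check can be scripted per collection; this sketch (assuming the "test" collection above) pulls the effective WiredTiger creation string out of db.collection.stats():
mongoDB shell> var stats = db.test.stats()
mongoDB shell> print(stats.wiredTiger.creationString)  // contains block_compressor=... and prefix_compression=...
mongoDB shell> print(stats.storageSize)                // on-disk bytes, the same number db.test.storageSize() returns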
The inserted data is 100KB * 100,000 documents ≈ 9.5GB (102400 bytes each).
But db.test.storageSize() returns:
block_compressor none = 10653536256 (bytes)
block_compressor snappy = 10653405184 (bytes)
block_compressor zlib = 6690177024 (bytes)
Compared with none, zlib compresses by roughly 40% (6690177024 / 10653536256 ≈ 0.63, i.e. about a 37% reduction).
However, none and snappy are virtually identical.
(prefix_compression makes no difference either.)
What settings should I add?
+ Update
snappy + false
"compression" : {
"compressed pages read" : 0,
"compressed pages written" : 0,
"page written failed to compress" : 100007,
"page written was too small to compress" : 1025
}
zlib + false
"compression" : {
"compressed pages read" : 0,
"compressed pages written" : 98881,
"page written failed to compress" : 0,
"page written was too small to compress" : 924
}
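These counters come straight out of the per-collection WiredTiger statistics; you can pull just this block in the shell (again assuming the "test" collection):
mongoDB shell> printjson(db.test.stats().wiredTiger.compression)  // the "compression" document quoted above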
What does "page written failed to compress" mean?
How can I fix it?
+ Update 2
mongoDB server version: 4.0.9
Insert data document (PHP, mongodb php-lib):
$result = $collection->insertOne([
    'num'   => (int)$i,   // 1 .. 100,000
    'title' => "$i",
    'main'  => "$i",
    'img'   => "$t",      // 100KB random string -- the bulk of each document
    'user'  => "$users",
    'like'  => 0,
    'time'  => "$date",
]);
--- Variable description ---
$i = 1 ~ 100,000 (incremented by 1)
$t = 100KB (102400 bytes) random string
$users = 10 random characters from "12134567890abcdefghij"
$date = real-time server date (e.g. 2019:05:18 xx.xx.xx)
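For anyone without the PHP stack, here is a rough mongoDB shell equivalent of this workload; the field names, sizes, and alphabet come from the description above, while the randomString helper is my own (and the PHP version stores the date as a formatted string rather than a Date):
// generate documents of the same shape and size as the PHP inserts
function randomString(len, chars) {
    var s = "";
    for (var j = 0; j < len; j++) {
        s += chars.charAt(Math.floor(Math.random() * chars.length));
    }
    return s;
}
for (var i = 1; i <= 100000; i++) {
    db.test.insertOne({
        num:   i,
        title: "" + i,
        main:  "" + i,
        img:   randomString(102400, "12134567890abcdefghij"),  // ~100KB payload
        user:  randomString(10, "12134567890abcdefghij"),
        like:  0,
        time:  new Date()
    });
}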
Indexes:
db.test.createIndex( { "num":1 } )
db.test.createIndex( { "title":1 } )
db.test.createIndex( { "user":1 } )
db.test.createIndex( { "like":1 } )
db.test.createIndex( { "time":1 } )
The collection stats are too long, so I will only include two of them.
snappy + false
"creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
snappy + true
"creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=true,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
Thank you for your attention.
One thing that jumps out is that you are using allocation_size=4KB. With this allocation size, your disk blocks are too small to compress, so they are not compressed. Increase allocation_size to get compression going.
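A minimal sketch of how that advice could be applied when recreating the collection; the 16KB value here is purely an illustrative choice, not part of the answer, and internal_page_max is raised alongside it since WiredTiger expects the page maxima to be multiples of the allocation size:
mongoDB shell> db.createCollection( "test2",{storageEngine:{wiredTiger:{configString:'block_compressor=snappy,allocation_size=16KB,internal_page_max=16KB'}}})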