MySQL INSERT too slow, with high IO/CPU usage at times
The table has roughly 100 million rows; at times IO throughput ("bps") is around 150, with IOPS around 4k.
- OS version: CentOS Linux 7
- MySQL version: mysql:5.6 (Docker)
- my.cnf settings:
server_id=3310
skip-host-cache
skip-name-resolve
max_allowed_packet=20G
innodb_log_file_size=1G
init-connect='SET NAMES utf8mb4'
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=5120M
expire-logs-days=7
log_bin=webser
binlog_format=ROW
back_log=1024
slow_query_log
slow_query_log_file=slow-log
tmpdir=/var/log/mysql
sync_binlog=1000
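Not part of the original post, but one quick way to see how the 5 GB buffer pool compares with the table's footprint is to ask information_schema for the data and index sizes of device_record (sketch only; adjust the schema name to your own):

SELECT table_name,
       ROUND(data_length  / 1024 / 1024) AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb
FROM information_schema.TABLES
WHERE table_schema = DATABASE()
  AND table_name = 'device_record';

SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';

If index_mb alone is well above the pool size, every uniqueness check on an insert risks a disk read, which would fit the high IOPS described above.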
- CREATE TABLE statement:
CREATE TABLE `device_record` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`os` tinyint(9) DEFAULT NULL,
`uid` int(11) DEFAULT '0',
`idfa` varchar(50) DEFAULT NULL,
`adv` varchar(8) DEFAULT NULL,
`oaid` varchar(100) DEFAULT NULL,
`appId` tinyint(4) DEFAULT NULL,
`agent` varchar(100) DEFAULT NULL,
`channel` varchar(20) DEFAULT NULL,
`callback` varchar(1500) DEFAULT NULL,
`activeAt` datetime DEFAULT NULL,
`chargeId` int(11) DEFAULT '0',
`createAt` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `idfa_record_index_oaid` (`oaid`),
UNIQUE KEY `index_record_index_agent` (`agent`) USING BTREE,
UNIQUE KEY `idfa_record_index_idfa_appId` (`idfa`) USING BTREE,
KEY `index_record_index_uid` (`uid`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1160240883 DEFAULT CHARSET=utf8mb4
- INSERT statement (MyBatis mapper):
@Insert(
"insert into idfa_record (os,idfa,oaid,appId,agent,channel,callback,adv,createAt) "
+ "values(#{os},#{idfa},#{oaid},#{appId},#{agent},#{channel},#{callback},#{adv},now()) on duplicate key "
+ "update createAt=if(uid<=0,now(),createAt),activeAt=if(uid<=0 and channel != #{channel},null,activeAt),channel=if(uid<=0,#{channel},channel),"
+ "adv=if(uid<=0,#{adv},adv),callback=if(uid<=0,#{callback},callback),appId=if(uid<=0,#{appId},appId)")
100M rows, but AUTO_INCREMENT is already at 1160M? That is entirely possible, but...
- Most importantly, the table is already more than halfway to overflowing a signed INT (see the arithmetic sketched after this list).
- Is something about your inserts "burning" ids?
- Does having 4 unique keys cause a lot of rows to be skipped?
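On the overflow point, the arithmetic is easy to check, and widening the key is the usual remedy (the ALTER below is only a sketch; on a ~100M-row table under MySQL 5.6 it rebuilds the whole table):

-- signed INT tops out at 2,147,483,647; the counter is already at 1,160,240,883
SELECT 1160240883 / 2147483647;   -- about 0.54, i.e. past the halfway mark

-- sketch of one possible fix: widen the primary key
ALTER TABLE device_record
    MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;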
This seems excessive: max_allowed_packet=20G.
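For what it is worth, MySQL caps max_allowed_packet at 1G no matter what the config asks for, so 20G silently becomes 1G; a value in the tens of megabytes is more typical (the figure below is only an example):

SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;   -- 64M; mirror it in my.cnf so it survives a restart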
- How much RAM is available?
- Is any swapping going on?
How many rows per second are being inserted? And what exactly is the "bps" figure? (I am trying to account for the 4K writes. I would expect roughly 2 IOPS per unique key per INSERT, but that does not add up to 4K unless you are doing around 500 inserts/sec.)
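One way to get that inserts-per-second figure is to sample the server counters twice, some seconds apart, and divide the difference by the interval (sketch):

SHOW GLOBAL STATUS LIKE 'Com_insert';            -- first snapshot
SHOW GLOBAL STATUS LIKE 'Innodb_rows_inserted';
-- wait N seconds, repeat the two statements, then:
--   inserts/sec = (second Com_insert - first Com_insert) / N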
Do the inserts come from different clients? (That can lead to "burned" ids, sluggishness, etc.)
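To make the "burned ids" point concrete: with InnoDB's default auto-increment handling, INSERT ... ON DUPLICATE KEY UPDATE reserves an id even when the row ends up being updated rather than inserted, so duplicate hits leave gaps. A minimal reproduction on a toy table (not the real schema):

CREATE TABLE burn_demo (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  k  VARCHAR(10) NOT NULL,
  n  INT,
  UNIQUE KEY (k)
) ENGINE=InnoDB;

INSERT INTO burn_demo (k, n) VALUES ('a', 1) ON DUPLICATE KEY UPDATE n = VALUES(n);  -- inserts id 1
INSERT INTO burn_demo (k, n) VALUES ('a', 2) ON DUPLICATE KEY UPDATE n = VALUES(n);  -- update; the reserved id is typically lost
INSERT INTO burn_demo (k, n) VALUES ('b', 3) ON DUPLICATE KEY UPDATE n = VALUES(n);  -- usually gets id 3, not 2

SELECT * FROM burn_demo;   -- typically (1, 'a', 2) and (3, 'b', 3)

With three unique keys and inserts arriving from many clients, the same mechanism can push AUTO_INCREMENT far ahead of the actual row count, which matches the 1160M-vs-100M gap above.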