Interpret ElasticSearch Out of Memory error
First off, this is a two-node cluster, and each node runs with "-Xms256m -Xmx1g -Xss256k" (which is really bad considering the machines have 8G).
[2015-04-07 16:19:58,235][INFO ][monitor.jvm ] [NODE1] [gc][ParNew][3246454][64605] duration [822ms], collections [1]/[4.3s], total [822ms]/[21m], memory [966.1mb]->[766.9mb]/[990.7mb], all_pools {[Code Cache] [13.1mb]->[13.1mb]/[48mb]}{[Par Eden Space] [266.2mb]->[75.6mb]/[266.2mb]}{[Par Survivor Space] [8.9mb]->[0b]/[33.2mb]}{[CMS Old Gen] [690.8mb]->[691.2mb]/[691.2mb]}{[CMS Perm Gen] [33.6mb]->[33.6mb]/[82mb]}
[2015-04-07 16:28:02,550][WARN ][transport.netty ] [NODE1] exception caught on netty layer [[id: 0x03d14f1c, /10.0.6.100:36055 => /10.0.6.105:9300]]
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.PriorityQueue.initialize(PriorityQueue.java:108)
at org.elasticsearch.search.controller.ScoreDocQueue.<init>(ScoreDocQueue.java:32)
....
[2015-04-07 21:55:54,743][WARN ][transport.netty ] [NODE1] exception caught on netty layer [[id: 0xeea0018c, /10.0.6.100:36059 => /10.0.6.105:9300]]
java.lang.OutOfMemoryError: Java heap space
[2015-04-07 21:59:26,774][WARN ][transport.netty ] [NODE1] exception caught on netty layer [[id: 0x576557fa, /10.0.6.100:36054 => /10.0.6.105:9300]]
...
[2015-04-07 22:51:05,890][WARN ][transport.netty ] [NODE1] exception caught on netty layer [[id: 0x67f11ffe, /10.0.6.100:36052 => /10.0.6.105:9300]]
org.elasticsearch.common.netty.handler.codec.frame.TooLongFrameException: transport content length received [1.5gb] exceeded [891.6mb]
[2015-04-07 22:51:05,973][WARN ][cluster.action.shard ] [NODE1] sending failed shard for [test_index][15], node[xvpLmlJkRSmZNj-pa_xUNA], [P], s[STARTED], reason [engine failure, message [OutOfMemoryError[Java heap space]]]
Then, after it rejoined (I restarted node 105):
[2015-04-07 22:59:11,095][INFO ][cluster.service ] [NODE1] removed {[NODE2][GMBDo5K7RMGSgiIwZE7H8w][inet[/10.0.6.105:9300]],}, reason: zen-disco-node_failed([NODE7][GMBDo5K7RMGSgiIwZE7H8w][inet[/10.0.6.105:9300]]), reason transport disconnected (with verified connect)
[2015-04-07 22:59:30,954][INFO ][cluster.service ] [NODE1] added {[NODE2][mMWcFGhVQY-aBR2r9DO3_A][inet[/10.0.6.105:9300]],}, reason: zen-disco-receive(join from node[[NODE7][mMWcFGhVQY-aBR2r9DO3_A][inet[/10.0.6.105:9300]]])
[2015-04-07 23:11:39,717][WARN ][transport.netty ] [NODE1] exception caught on netty layer [[id: 0x14a605ce, /10.0.6.100:36201 => /10.0.6.105:9300]]
java.lang.OutOfMemoryError: Java heap space
[2015-04-07 23:16:04,963][WARN ][transport.netty ] [NODE1] exception caught on netty layer [[id: 0x5a6d934d, /10.0.6.100:36196 => /10.0.6.105:9300]]
java.lang.OutOfMemoryError: Java heap space
So I don't know how to interpret the "=>" part. Who exactly ran out of memory? NODE1 (10.0.6.100)? And why port 9300? My API initially talks to NODE1, so does this mean NODE1 was sending bulk data requests to NODE2? Here is what happened the next day.
From the NODE1 log:
[2015-04-08 09:02:46,410][INFO ][cluster.service ] [NODE1] removed {[NODE2][mMWcFGhVQY-aBR2r9DO3_A][inet[/10.0.6.105:9300]],}, reason: zen-disco-node_failed([NODE2][mMWcFGhVQY-aBR2r9DO3_A][inet[/10.0.6.105:9300]]), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2015-04-08 09:03:27,554][WARN ][search.action ] [NODE1] Failed to send release search context
org.elasticsearch.transport.NodeDisconnectedException: [NODE2][inet[/10.0.6.105:9300]][search/freeContext] disconnected
....
Caused by: org.elasticsearch.transport.NodeNotConnectedException: [NODE2][inet[/10.0.6.105:9300]] Node not connected
But in the NODE2 log there are only a few lines for 04-08, and they look like this:
[2015-04-08 09:09:13,797][INFO ][discovery.zen ] [NODE2] master_left [[NDOE1][xvpLmlJkRSmZNj-pa_xUNA][inet[/10.0.6.100:9300]]], reason [do not exists on master, act as master failure]
So which node actually failed? I'm quite confused here :| sorry. Any help is appreciated. I do know NODE1 had a very, very long GC (MarkSweep of 3+ hours), until my two-node cluster got a full restart last night.
The first part of your log is Elasticsearch's garbage-collection logging format:
[2015-04-07 16:19:58,235][INFO][monitor.jvm][NODE1]
A garbage-collection run
[gc]
The parallel new-generation ("ParNew") garbage collector
[ParNew]
The GC took 822 ms
duration [822ms],
One collection run, 4.3 seconds in total
collections [1]/[4.3s]
Usage of the 'memory' pool: 966.1mb before, 766.9mb now, out of a total pool size of 990.7mb
memory [966.1mb]->[766.9mb]/[990.7mb],
Usage of the 'Code Cache' pool
[Code Cache] [13.1mb]->[13.1mb]/[48mb]
Usage of the 'Par Eden Space' pool
[Par Eden Space] [266.2mb]->[75.6mb]/[266.2mb]
Usage of the 'Par Survivor Space' pool
[Par Survivor Space] [8.9mb]->[0b]/[33.2mb]
Usage of the 'CMS Old Gen' pool
[CMS Old Gen] [690.8mb]->[691.2mb]/[691.2mb]
Usage of the 'CMS Perm Gen' pool
[CMS Perm Gen] [33.6mb]->[33.6mb]/[82mb]
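To make the format concrete, here is a minimal, illustrative Python sketch (nothing Elasticsearch ships; it just runs a regex over the exact line from your log) that pulls out the fields described above:

import re

# Illustrative only: extract the main fields from one monitor.jvm GC line.
line = ("[2015-04-07 16:19:58,235][INFO ][monitor.jvm ] [NODE1] [gc][ParNew][3246454][64605] "
        "duration [822ms], collections [1]/[4.3s], total [822ms]/[21m], "
        "memory [966.1mb]->[766.9mb]/[990.7mb]")

pattern = re.compile(
    r"\[gc\]\[(?P<collector>[^\]]+)\].*?"                        # collector, e.g. ParNew
    r"duration \[(?P<duration>[^\]]+)\], "                       # time spent in this run
    r"collections \[(?P<count>\d+)\]/\[(?P<window>[^\]]+)\].*?"  # runs / time window
    r"memory \[(?P<before>[^\]]+)\]->\[(?P<after>[^\]]+)\]/\[(?P<total>[^\]]+)\]"  # before -> after / pool size
)

match = pattern.search(line)
print(match.groupdict() if match else "no GC line found")
# {'collector': 'ParNew', 'duration': '822ms', 'count': '1', 'window': '4.3s',
#  'before': '966.1mb', 'after': '766.9mb', 'total': '990.7mb'}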
Notice that your memory pool is almost at 1G. I hope this gives you a hint!
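The usual fix is simply to give Elasticsearch a bigger heap. As a rough sketch, assuming an Elasticsearch 1.x node started via the stock scripts on your 8G machines (the 4g value is an example, not your current setup):

export ES_HEAP_SIZE=4g   # the startup script expands this to -Xms4g -Xmx4g

Keeping the heap at or below about half of physical RAM leaves the rest for the filesystem cache, and setting Xms equal to Xmx up front avoids heap-resize pauses.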