Unable to start logstash against elastic search (org.elasticsearch.transport.ReceiveTimeoutTransportException)

I am following the getting-started guide at http://logstash.net/docs/1.4.2/tutorials/getting-started-with-logstash, but I cannot get it to work with elasticsearch.

My environment is Linux Fedora, with logstash 1.4.2 and elasticsearch 1.1.1.

I start elasticsearch and verify that it comes up fine:

[2015-01-16 11:12:33,039][INFO ][transport                ] [Adonis] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.13.47:9300]}
[2015-01-16 11:12:36,171][INFO ][cluster.service          ] [Adonis] new_master [Adonis][SzTj0QJNSVOweE9Dd630BQ][arq.mycompany.org][inet[/192.168.13.47:9300]], reason: zen-disco-join (elected_as_master)
[2015-01-16 11:12:36,190][INFO ][discovery                ] [Adonis] elasticsearch/SzTj0QJNSVOweE9Dd630BQ
[2015-01-16 11:12:36,208][INFO ][http                     ] [Adonis] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.13.47:9200]}
[2015-01-16 11:12:36,252][INFO ][gateway                  ] [Adonis] recovered [0] indices into cluster_state
[2015-01-16 11:12:36,252][INFO ][node                     ] [Adonis] started

curl 'http://localhost:9200/_search?pretty'

{
  "took" : 0,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}

Checking the ports with netstat:

netstat -na | grep LIST | grep 93

tcp        0      0 0.0.0.0:59693               0.0.0.0:*                   LISTEN      
tcp        0      0 :::9300                     :::*                        LISTEN      
tcp        0      0 :::9301                     :::*                        LISTEN      
tcp        0      0 :::9302                     :::*                        LISTEN      

Testing logstash with stdout as the output works fine:

bin/logstash -e 'input { stdin { } } output { stdout {} }'

But when I try to set the output to elasticsearch, I get an exception.

./logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'

Note the sequence of events: first I see an "added" entry in the elasticsearch log, then logstash fails, and then a "removed" entry shows up in the elasticsearch log.

elasticsearch log:

[2015-01-16 11:18:06,345][INFO ][cluster.service          ] [Adonis] added {[logstash-arq.mycompany.org-30982-2010][RaaZaGBwRcuVo4h48eD_yw][arq.mycompany.org][inet[/192.168.13.47:9304]]{data=false, client=true},}, reason: zen-disco-receive(join from node[[logstash-arq.mycompany.org-30982-2010][RaaZaGBwRcuVo4h48eD_yw][arq.mycompany.org][inet[/192.168.13.47:9304]]{data=false, client=true}])
[2015-01-16 11:18:10,453][INFO ][cluster.service          ] [Adonis] removed {[logstash-arq.mycompany.org-30982-2010][RaaZaGBwRcuVo4h48eD_yw][arq.mycompany.org][inet[/192.168.13.47:9304]]{data=false, client=true},}, reason: zen-disco-node_failed([logstash-arq.mycompany.org-30982-2010][RaaZaGBwRcuVo4h48eD_yw][arq.mycompany.org][inet[/192.168.13.47:9304]]{data=false, client=true}), reason transport disconnected (with verified connect)

It looks like the client node joins the cluster and is then disconnected (???)

logstash log:

./logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'

log4j, [2015-01-16T11:25:40.750]  WARN: org.elasticsearch.discovery.zen.ping.unicast: [logstash-arq.mycompany.org-31286-2010] failed to send ping to [[#zen_unicast_3#][arq.mycompany.org][inet[localhost/127.0.0.1:9302]]]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9302]][discovery/zen/unicast] request_id [0] timed out after [3751ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
    at java.lang.Thread.run(Thread.java:736)
log4j, [2015-01-16T11:25:40.750]  WARN: org.elasticsearch.discovery.zen.ping.unicast: [logstash-arq.mycompany.org-31286-2010] failed to send ping to [[#zen_unicast_2#][arq.mycompany.org][inet[localhost/127.0.0.1:9301]]]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9301]][discovery/zen/unicast] request_id [3] timed out after [3751ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
    at java.lang.Thread.run(Thread.java:736)
Unhandled exception
Type=Segmentation error vmState=0x00000000
J9Generic_Signal_Number=00000004 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000001
Handler1=F771949B Handler2=F76F2915 InaccessibleAddress=00000012
EDI=F7777560 ESI=D2B42846 EAX=00000012 EBX=00000000
ECX=D545AE34 EDX=0000FFFF
EIP=F6578E1D ES=002B DS=002B ESP=D545ADF0
EFlags=00210206 CS=0023 SS=002B EBP=D12A8700
Module=/opt/IBM/SDP/jdk/jre/lib/i386/libjclscar_24.so
Module_base_address=F6533000 Symbol=sun_misc_Unsafe_getLong__Ljava_lang_Object_2J
Symbol_address=F6578DCC
Target=2_40_20110726_087724 (Linux 3.6.11-4.fc16.x86_64)
CPU=x86 (8 logical CPUs) (0x3e051c000 RAM)
----------- Stack Backtrace -----------
(0xF76E6752 [libj9prt24.so+0xb752])
(0xF76F1F60 [libj9prt24.so+0x16f60])
(0xF76E67E5 [libj9prt24.so+0xb7e5])
(0xF76E6908 [libj9prt24.so+0xb908])
(0xF76E6584 [libj9prt24.so+0xb584])
(0xF76F1F60 [libj9prt24.so+0x16f60])
(0xF76E65F8 [libj9prt24.so+0xb5f8])
(0xF771A1D3 [libj9vm24.so+0xf1d3])
(0xF7719E53 [libj9vm24.so+0xee53])
(0xF76F1F60 [libj9prt24.so+0x16f60])
(0xF771963B [libj9vm24.so+0xe63b])
(0xF76F2A8D [libj9prt24.so+0x17a8d])
(0xF77BE410)
---------------------------------------
JVMDUMP006I Processing dump event "gpf", detail "" - please wait.
JVMDUMP032I JVM requested System dump using '/home/MYUSER/Software/logstash-1.4.2/bin/core.20150116.112541.31286.0001.dmp' in response to an event
JVMPORT030W /proc/sys/kernel/core_pattern setting "|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e" specifies that the core dump is to be piped to an external program.  Attempting to rename either core or core.31370.

JVMDUMP010I System dump written to /home/MYUSER/Software/logstash-1.4.2/bin/core.20150116.112541.31286.0001.dmp
JVMDUMP032I JVM requested Java dump using '/home/MYUSER/Software/logstash-1.4.2/bin/javacore.20150116.112541.31286.0002.txt' in response to an event
JVMDUMP010I Java dump written to /home/MYUSER/Software/logstash-1.4.2/bin/javacore.20150116.112541.31286.0002.txt
JVMDUMP032I JVM requested Snap dump using '/home/MYUSER/Software/logstash-1.4.2/bin/Snap.20150116.112541.31286.0003.trc' in response to an event
JVMDUMP010I Snap dump written to /home/MYUSER/Software/logstash-1.4.2/bin/Snap.20150116.112541.31286.0003.trc
JVMDUMP013I Processed dump event "gpf", detail "".
[MYUSER@cl004300l bin]$ 

If I change the protocol with protocol => http, elasticsearch crashes:

Unhandled exception
Type=Segmentation error vmState=0x00000000
J9Generic_Signal_Number=00000004 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000001
Handler1=F76B549B Handler2=F768E915 InaccessibleAddress=000001E6
EDI=F7713560 ESI=B38E163A EAX=0000001C EBX=B3526A00
ECX=B3F1F9CC EDX=000001B2
EIP=F64D1A40 ES=002B DS=002B ESP=B3F1F98C
EFlags=00210286 CS=0023 SS=002B EBP=B3D24B00
Module=/opt/IBM/SDP/jdk/jre/lib/i386/libjclscar_24.so
Module_base_address=F648A000 Symbol=sun_misc_Unsafe_putLong__Ljava_lang_Object_2JJ

JVMDUMP006I Processing dump event "gpf", detail "" - please wait.
JVMDUMP032I JVM requested System dump using '/home/MYUSER/Software/elasticsearch-1.1.1/bin/core.20150119.095615.5602.0001.dmp' in response to an event
JVMPORT030W /proc/sys/kernel/core_pattern setting "|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e" specifies that the core dump is to be piped to an external program.  Attempting to rename either core or core.5723.

*** glibc detected *** /opt/IBM/SDP/jdk/bin/java: malloc(): memory corruption: 0xb3f19da0 ***

I have been struggling with this for days, so any help or hints toward a solution would be greatly appreciated.

There are sometimes problems getting logstash to connect to older versions of elasticsearch. Your best bet is to add protocol => http to your elasticsearch output; that should resolve your problem.
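For reference, the equivalent output section could look like the sketch below (based on the command in the question; port => 9200 is added here only for illustration, since the http protocol talks to elasticsearch's REST port 9200 rather than the 9300 transport port the default node protocol uses):

```
input { stdin { } }
output {
  elasticsearch {
    host => "localhost"
    port => 9200
    protocol => "http"
  }
}
```

With protocol => http, logstash no longer joins the cluster as a transport client, which sidesteps the zen-discovery ping timeouts shown in the logs above.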