Storm UI improper values and Capillary tool
I'm new to Apache Storm and have been trying out a Trident topology for Kafka, i.e. TransactionalTridentKafkaSpout. Everything works fine except the Storm UI. Even though I haven't produced any data to my topic, the Storm UI keeps showing invalid emitted/transferred values — that is, the counts keep increasing even when there is no data in the topic. I've tried deleting the data/logs stored by ZooKeeper, Storm, and Kafka and recreating the Kafka topic, and I've also set
topology.stats.sample.rate: 1.0
but the problem persists.
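For reference, `topology.stats.sample.rate` controls what fraction of tuples Storm samples for the UI counters; 1.0 makes them exact rather than estimated. A minimal sketch of the equivalent programmatic setting, using the raw config key (Storm's `Config` extends `HashMap`, so a plain map illustrates the same put):

```java
import java.util.HashMap;
import java.util.Map;

public class StatsSampleRate {
    public static void main(String[] args) {
        // Equivalent of "topology.stats.sample.rate: 1.0" in storm.yaml.
        // With 1.0 every tuple is counted (the default samples a fraction),
        // so the UI numbers are exact but carry some measurement overhead.
        Map<String, Object> conf = new HashMap<>();
        conf.put("topology.stats.sample.rate", 1.0);
        System.out.println(conf.get("topology.stats.sample.rate")); // prints 1.0
    }
}
```

Note that sampling only affects how counts are *measured*; it cannot explain counters growing with no input data.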
I also came across a tool called Capillary for monitoring Storm clusters. I'm using the following properties:
capillary.zookeepers="192.168.125.20:2181"
capillary.kafka.zkroot="192.168.125.20:/home/storm/kafka_2.11-0.8.2.0"
capillary.storm.zkroot="192.168.125.20:/home/storm/apache-storm-0.9.3"
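One thing worth double-checking: the two `zkroot` values above point at filesystem install directories, but judging from the property names they are meant to be the ZooKeeper chroot paths under which Kafka and Storm store their state. A hedged sketch of that shape (the actual values depend on your broker and Storm configuration; an empty string would mean the ZooKeeper root):

```
capillary.zookeepers="192.168.125.20:2181"
# ZooKeeper paths, not install directories: Kafka's chroot (often none for an
# embedded ZK) and Storm's storm.zookeeper.root (default "/storm")
capillary.kafka.zkroot=""
capillary.storm.zkroot="storm"
```

If Capillary reads a ZK node that isn't where it expects JSON, a parse error like the one below would be a plausible symptom.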
I'm using the ZooKeeper embedded in Kafka here. Even so, I still get the following exception:
! @6mbg4bp7l - Internal server error, for (GET) [/] ->
play.api.Application$$anon: Execution exception[[JsonParseException: Unexpected character ('.' (code 46)): Expected space separating root-level values
at [Source: java.io.StringReader@24adb083; line: 1, column: 9]]]
at play.api.Application$class.handleError(Application.scala:296) ~[com.typesafe.play.play_2.10-2.3.4.jar:2.3.4]
at play.api.DefaultApplication.handleError(Application.scala:402) [com.typesafe.play.play_2.10-2.3.4.jar:2.3.4]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$$anonfun$apply.applyOrElse(PlayDefaultUpstreamHandler.scala:205) [com.typesafe.play.play_2.10-2.3.4.jar:2.3.4]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$$anonfun$apply.applyOrElse(PlayDefaultUpstreamHandler.scala:202) [com.typesafe.play.play_2.10-2.3.4.jar:2.3.4]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [org.scala-lang.scala-library-2.10.4.jar:na]
Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('.' (code 46)): Expected space separating root-level values
at [Source: java.io.StringReader@24adb083; line: 1, column: 9]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1524) ~[com.fasterxml.jackson.core.jackson-core-2.3.2.jar:2.3.2]
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:557) ~[com.fasterxml.jackson.core.jackson-core-2.3.2.jar:2.3.2]
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:475) ~[com.fasterxml.jackson.core.jackson-core-2.3.2.jar:2.3.2]
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportMissingRootWS(ParserMinimalBase.java:495) ~[com.fasterxml.jackson.core.jackson-core-2.3.2.jar:2.3.2]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._verifyRootSpace(ReaderBasedJsonParser.java:1178) ~[com.fasterxml.jackson.core.jackson-core-2.3.2.jar:2.3.2]
Any help would be great. Thanks in advance.
Config and source code snippets:
// Imports for Storm 0.9.x (storm-core + storm-kafka package names)
import java.util.Arrays;
import backtype.storm.Config;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.tuple.Fields;
import storm.kafka.BrokerHosts;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;
import storm.kafka.trident.TransactionalTridentKafkaSpout;
import storm.kafka.trident.TridentKafkaConfig;
import storm.trident.TridentTopology;

final Config config = new Config();
// Emit a Trident batch every 3 s (the default is 500 ms)
config.put(Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS, 3000);
config.setNumWorkers(2);
config.put(Config.NIMBUS_HOST, "192.168.125.20");
config.put(Config.NIMBUS_THRIFT_PORT, 6627);
config.put(Config.STORM_ZOOKEEPER_PORT, 2181);
config.put(Config.STORM_ZOOKEEPER_SERVERS, Arrays.asList("192.168.125.20"));
config.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE, 16384);
config.put(Config.TOPOLOGY_ACKER_EXECUTORS, 1);
config.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 10);
config.put(Config.DRPC_SERVERS, Arrays.asList("192.168.125.20"));
config.put(Config.DRPC_PORT, 3772);

// Discover Kafka brokers via ZooKeeper; "" is the client id
final BrokerHosts zkHosts = new ZkHosts("192.168.125.20");
final TridentKafkaConfig kafkaConfig = new TridentKafkaConfig(zkHosts, "Test_Topic", "");
kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme()); // deserialize payloads as strings
kafkaConfig.bufferSizeBytes = 1024 * 1024 * 4;
kafkaConfig.fetchSizeBytes = 1024 * 1024 * 4;
kafkaConfig.forceFromStart = false; // resume from the offsets stored in ZooKeeper

final TransactionalTridentKafkaSpout kafkaSpout = new TransactionalTridentKafkaSpout(kafkaConfig);
final TridentTopology topology = new TridentTopology();
// TestFunction and PrintFilter are my own classes (not shown)
topology.newStream("spout", kafkaSpout)
    .each(new Fields("str"), new TestFunction(), new Fields("test"))
    .each(new Fields("str"), new PrintFilter());
Topology summary image:
Are you perhaps seeing what I'd call the UI metric artifacts of Trident? These tuples also show up in the Storm UI counters:
Trident executes a batch every 500ms (by default). A batch involves a bunch
of coordination messages going out to all the bolts to coordinate the batch
(even if the batch is empty). So that's what you're seeing.
(source: Trident Kafka Spout - Ack Count Increasing Even Though No Messages Are Processed)
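Since the counters grow once per coordination batch, their growth rate is governed by the batch emit interval: roughly 1000 / interval batches per second. A small sketch of that arithmetic, contrasting the 500 ms default with the 3000 ms value already set in the question's config:

```java
public class CoordinationRate {
    public static void main(String[] args) {
        // Trident fires a batch on every interval tick even when the spout
        // has nothing to emit, so ack/emit counters climb at ~1000/interval per second.
        int defaultIntervalMs = 500;     // Storm's default batch emit interval
        int configuredIntervalMs = 3000; // TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS above
        System.out.printf("default: %.1f batches/sec%n", 1000.0 / defaultIntervalMs);
        System.out.printf("configured: %.2f batches/sec%n", 1000.0 / configuredIntervalMs);
    }
}
```

So with a 3 s interval the idle counters should climb about six times more slowly than with the default; raising the interval further reduces the coordination traffic but adds latency to real batches.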