Streaming K-means Spark Scala: Getting java.lang.NumberFormatException for input string
I am reading CSV data from a directory containing double values and applying a streaming K-means model to it as follows:
//CSV file
40.729,-73.9422
40.7476,-73.9871
40.7424,-74.0044
40.751,-73.9869
40.7406,-73.9902
.....
//SBT dependencies (build.sbt):
name := "Application name"
version := "0.1"
scalaVersion := "2.11.12"
val sparkVersion = "2.3.1"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-streaming" % sparkVersion,
  "org.apache.spark" %% "spark-mllib" % sparkVersion)
//Import statements
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.streaming.OutputMode
import org.apache.spark.sql.types._
import org.apache.spark.{SparkConf, SparkContext, rdd}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.mllib.clustering.{KMeans, StreamingKMeans}
import org.apache.spark.mllib.linalg.Vectors
//Reading CSV data
val trainingData = ssc.textFileStream("directory path")
.map(x=>x.toDouble)
.map(x=>Vectors.dense(x))
// applying Streaming kmeans model
val model = new StreamingKMeans()
.setK(numClusters)
.setDecayFactor(1.0)
.setRandomCenters(numDimensions, 0.0)
model.trainOn(trainingData)
I am getting the following error:
18/07/24 11:20:04 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 1)
java.lang.NumberFormatException: For input string: "40.7473,-73.9857"
    at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043)
    at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110)
    at java.lang.Double.parseDouble(Double.java:538)
    at scala.collection.immutable.StringLike$class.toDouble(StringLike.scala:285)
    at scala.collection.immutable.StringOps.toDouble(StringOps.scala:29)
    at ubu$$anonfun.apply(uberclass.scala:305)
    at ubu$$anonfun.apply(uberclass.scala:305)
    at scala.collection.Iterator$$anon.next(Iterator.scala:410)
    at scala.collection.Iterator$$anon.next(Iterator.scala:410)
    at scala.collection.Iterator$$anon.next(Iterator.scala:410)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Exception in thread "streaming-job-executor-0" java.lang.Error: java.lang.InterruptedException
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Can someone help?
There is a dimension problem. The NumberFormatException is thrown because `toDouble` is called on an entire line such as "40.7473,-73.9857" instead of on each comma-separated field. The line must be split first, and the dimension of the resulting vector must equal the `numDimensions` passed to the streaming K-means model via `setRandomCenters`.
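A minimal sketch of the fix, assuming the stream setup from the question (`parseLine` is a hypothetical helper, shown outside Spark so it runs standalone; the Spark wiring is commented because it needs a live StreamingContext):

```scala
// Each CSV line holds two comma-separated doubles, e.g. "40.729,-73.9422".
// Calling toDouble on the whole line throws NumberFormatException, so
// split on the comma first and parse each field separately:
def parseLine(line: String): Array[Double] =
  line.split(",").map(_.toDouble)

// Demo: parsing one line from the question's CSV yields a 2-element array.
val parsed = parseLine("40.729,-73.9422")
assert(parsed.sameElements(Array(40.729, -73.9422)))

// In the streaming job the map chain would then become:
//   val trainingData = ssc.textFileStream("directory path")
//     .map(line => Vectors.dense(line.split(",").map(_.toDouble)))
// and setRandomCenters(numDimensions, 0.0) must be called with
// numDimensions = 2 so the random centers match the input vectors.
```

With this change each stream element is a 2-dimensional `Vector`, matching the `numDimensions` the model expects.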