Spark MLLIB TFIDF Text Clustering Python

I am new to Spark and am trying to use the Spark Python API to cluster news articles. The articles have been scraped and stored in a local folder /input/, which contains about 100 small text files.

As a first step, I set up my SparkContext:

from pyspark import SparkConf, SparkContext

sconf = SparkConf().setMaster("local").setAppName("My App")
sc = SparkContext(conf=sconf)

Next I create a HashingTF and load my data with sc.wholeTextFiles(). Here directory is the path to the folder containing the txt files.

from pyspark.mllib.feature import HashingTF, IDF

htf = HashingTF()
txtdata = sc.wholeTextFiles(directory)

Now I want to split each text file and compute the TF-IDF for each file. The first problem is that split does not work on txtdata. I am using the following:

split_data = txtdata.map(lambda x: x.split(" "))

I get the following error:

split_data=sc.wholeTextFiles(directory).map(lambda x: x.split(" "))
AttributeError: 'tuple' object has no attribute 'split'

    at org.apache.spark.api.python.PythonRDD$$anon.read(PythonRDD.scala:137)
    at org.apache.spark.api.python.PythonRDD$$anon.<init>(PythonRDD.scala:174)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:96)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:1203)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:1202)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed.apply(DAGScheduler.scala:696)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed.apply(DAGScheduler.scala:696)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive.applyOrElse(DAGScheduler.scala:1420)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Finally I plan to run:

temp = htf.transform(split_data)
temp.cache()
idf = IDF().fit(temp)
tfidf = idf.transform(temp)

The function wholeTextFiles returns an RDD of (filename, content) pairs, so each element of txtdata is a tuple rather than a string, which is why .split(" ") fails. You first need to select the content before splitting, e.g. split_data = txtdata.map(lambda kv: kv[1].split(" ")). (In Python 2 you could write this as lambda (k, v): v.split(" "), but tuple-unpacking lambdas were removed in Python 3.)
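
For reference, here is a minimal end-to-end sketch of the pipeline under discussion, written against the Spark 1.x pyspark.mllib API. The /input/ path comes from the question; the KMeans step and the choice of k=5 are illustrative assumptions for the clustering goal stated in the title, not part of the fix itself.

from pyspark import SparkConf, SparkContext
from pyspark.mllib.feature import HashingTF, IDF
from pyspark.mllib.clustering import KMeans

sconf = SparkConf().setMaster("local").setAppName("My App")
sc = SparkContext(conf=sconf)

# wholeTextFiles yields (filename, content) pairs; keep only the content.
txtdata = sc.wholeTextFiles("/input/")
split_data = txtdata.map(lambda kv: kv[1].split(" "))

# Hash each token list into a term-frequency vector, then apply IDF weighting.
htf = HashingTF()
tf = htf.transform(split_data)
tf.cache()
idf = IDF().fit(tf)
tfidf = idf.transform(tf)

# Cluster the TF-IDF vectors; k=5 is an arbitrary choice for illustration.
model = KMeans.train(tfidf, k=5, maxIterations=20)
print(model.clusterCenters[0])

Note that real text would also benefit from lowercasing, punctuation stripping, and stop-word removal before hashing, since a plain split(" ") treats "Word" and "word," as different tokens.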