How to print elements of a particular RDD partition in Spark?
How can I print the elements of a specific partition, say the 5th one, on its own?
val distData = sc.parallelize(1 to 50, 10)
You can achieve it by using a counter together with the foreachPartition() API.
Here is a Java program that prints the contents of each partition:
JavaSparkContext context = new JavaSparkContext(conf);
JavaRDD<Integer> myArray = context.parallelize(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9));
JavaRDD<Integer> partitionedArray = myArray.repartition(2);

System.out.println("partitioned array size is " + partitionedArray.count());

partitionedArray.foreachPartition(new VoidFunction<Iterator<Integer>>() {
    public void call(Iterator<Integer> arg0) throws Exception {
        while (arg0.hasNext()) {
            System.out.println(arg0.next());
        }
    }
});
Using Spark/Scala:
val data = 1 to 50
val distData = sc.parallelize(data, 10)
distData.mapPartitionsWithIndex((index: Int, it: Iterator[Int]) =>
  it.toList.map(x => if (index == 5) { println(x) }).iterator).collect
produces:
26
27
28
29
30
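Why index 5 yields 26 through 30 comes down to how parallelize slices the input collection into partitions. A plain-Python sketch of even range slicing (the helper name split_range is made up for illustration; it mirrors how Spark slices a parallelized collection evenly, assuming no custom partitioner):

```python
def split_range(data, num_parts):
    # Partition i covers data[i*n // num_parts : (i+1)*n // num_parts],
    # which is how a parallelized collection is sliced evenly.
    n = len(data)
    return [data[(i * n) // num_parts:((i + 1) * n) // num_parts]
            for i in range(num_parts)]

parts = split_range(list(range(1, 51)), 10)
print(parts[5])  # -> [26, 27, 28, 29, 30]
```

Note that partition indices are zero-based, so index == 5 selects the sixth slice of 1 to 50, i.e. elements 26 to 30.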
Assuming you are doing this only for test purposes, use glom(). See the Spark documentation: https://spark.apache.org/docs/1.6.0/api/python/pyspark.html#pyspark.RDD.glom
>>> rdd = sc.parallelize([1, 2, 3, 4], 2)
>>> rdd.glom().collect()
[[1, 2], [3, 4]]
>>> rdd.glom().collect()[1]
[3, 4]
Edit: an example in Scala:
scala> val distData = sc.parallelize(1 to 50, 10)
scala> distData.glom().collect()(4)
res2: Array[Int] = Array(21, 22, 23, 24, 25)