How to work on small portion of big Data File in spark?

I have loaded a big data file in Spark, but I want to work on a small portion of it to run my analysis. Is there any way to do that? I tried repartitioning, but it caused a lot of shuffling. Is there a good way to process only a small chunk of a big file loaded in Spark?

In short

You can use the sample() or randomSplit() transformations on an RDD.

sample()

/**
  * Return a sampled subset of this RDD.
  *
  * @param withReplacement can elements be sampled multiple times
  * @param fraction expected size of the sample as a fraction of this RDD's size
  *  without replacement: probability that each element is chosen; fraction must be [0, 1]
  *  with replacement: expected number of times each element is chosen; fraction must be 
  *  greater than or equal to 0
  * @param seed seed for the random number generator
  *
  * @note This is NOT guaranteed to provide exactly the fraction of the count
  * of the given [[RDD]].
  */

def sample(
    withReplacement: Boolean,
    fraction: Double,
    seed: Long = Utils.random.nextLong): RDD[T]

Example:

val sampleWithoutReplacement = rdd.sample(false, 0.2, 2)

randomSplit()

/**
  * Randomly splits this RDD with the provided weights.
  *
  * @param weights weights for splits, will be normalized if they don't sum to 1
  * @param seed random seed
  *
  * @return split RDDs in an array
  */

def randomSplit(
   weights: Array[Double],
   seed: Long = Utils.random.nextLong): Array[RDD[T]]

Example:

val rddParts = rdd.randomSplit(Array(0.8, 0.2)) // splits the RDD in an 80:20 ratio

You can use either of the following RDD APIs:

  1. yourRDD.filter(<some condition>)
  2. yourRDD.sample(<with replacement>, <fraction of data>, <random seed>)

For example: yourRDD.sample(false, 0.3, System.currentTimeMillis())

If you want a random portion of the data, I suggest the second approach; if you need the portion that satisfies some condition, use the first. A short sketch of both follows.