"You've exceeded your rate limit allowance" error in PySpark

I can't compute the sum of an RDD. I'm new to this area, so any help is appreciated.

I'm using Python 2.7 and PySpark 2.1. I'm currently using a SQL query to get a DataFrame and then converting it to an RDD with .rdd. I get the same error even when I use df.select().rdd.

This code gives me the same error:

def meanTemperature(df,spark):
    tempDF = spark.sql("SELECT TEMPERATURE FROM washing").rdd
    return tempDF.sum()

The error I get:

     Py4JJavaErrorTraceback (most recent call last)
<ipython-input-40-3c99bf995d59> in <module>()
----> 1 meanTemperature(cloudantdata,spark)

<ipython-input-39-cb1480c78493> in meanTemperature(df, spark)
      4 
      5 
----> 6     return tempDF.sum()

/usr/local/src/spark21master/spark/python/pyspark/rdd.py in sum(self)
   1029         6.0
   1030         """
-> 1031         return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
   1032 
   1033     def count(self):

/usr/local/src/spark21master/spark/python/pyspark/rdd.py in fold(self, zeroValue, op)
    903         # zeroValue provided to each partition is unique from the one provided
    904         # to the final reduce call
--> 905         vals = self.mapPartitions(func).collect()
    906         return reduce(op, vals, zeroValue)
    907 

/usr/local/src/spark21master/spark/python/pyspark/rdd.py in collect(self)
    806         """
    807         with SCCallSiteSync(self.context) as css:
--> 808             port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    809         return list(_load_from_socket(port, self._jrdd_deserializer))
    810 

/usr/local/src/spark21master/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1131         answer = self.gateway_client.send_command(command)
   1132         return_value = get_return_value(
-> 1133             answer, self.gateway_client, self.target_id, self.name)
   1134 
   1135         for temp_arg in temp_args:

/usr/local/src/spark21master/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/usr/local/src/spark21master/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    317                 raise Py4JJavaError(
    318                     "An error occurred while calling {0}{1}{2}.\n".
--> 319                     format(target_id, ".", name), value)
    320             else:
    321                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 18.0 failed 10 times, most recent failure: Lost task 2.9 in stage 18.0 (TID 848, yp-spark-lon02-env5-0105.bluemix.net, executor cd1ff543-2b85-4961-8632-26de9890cbca): com.cloudant.client.org.lightcouch.TooManyRequestsException: 429 Too Many Requests at https://49b92f6e-fb6d-4003-aa11-80280f96591d-bluemix.cloudant.com/washing/_all_docs?include_docs=true&limit=19&skip=38. Error: too_many_requests. Reason: You`ve exceeded your rate limit allowance. Please try again later..
    at com.cloudant.client.org.lightcouch.CouchDbClient.execute(CouchDbClient.java:575)
    at com.cloudant.client.api.CloudantClient.executeRequest(CloudantClient.java:388)
    at org.apache.bahir.cloudant.CloudantConfig.executeRequest(CloudantConfig.scala:73)
    at org.apache.bahir.cloudant.common.JsonStoreDataAccess.getQueryResult(JsonStoreDataAccess.scala:114)
    at org.apache.bahir.cloudant.common.JsonStoreDataAccess.getIterator(JsonStoreDataAccess.scala:62)
    at org.apache.bahir.cloudant.common.JsonStoreRDD.compute(JsonStoreRDD.scala:223)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:326)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:290)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:326)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:290)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:326)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:290)
    at 

Please help me resolve this error.

Despite what the error message says, this turned out to have nothing to do with exceeding a rate limit. The problem was that I wasn't filtering out null values, which I had to do in a lambda function. I also had to use x.temperature to access the RDD's column.
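A minimal sketch of that fix, using plain Python objects to stand in for Spark Rows (the column name temperature comes from the answer above; the sample values are made up for illustration). On a real RDD the same two lambdas would be passed to .filter() and .map() before calling .sum():

```python
# Sketch of the null-filtering fix. Plain Python stands in for Spark here;
# the sample rows below are hypothetical data, not from the "washing" database.
from collections import namedtuple

Row = namedtuple("Row", ["temperature"])

# Stand-in for rows produced by spark.sql("SELECT temperature FROM washing").rdd
rows = [Row(20.0), Row(None), Row(22.5), Row(None), Row(21.5)]

# Drop null temperatures first, then extract the column value from each Row.
# The equivalent PySpark chain would be:
#   rdd.filter(lambda x: x.temperature is not None) \
#      .map(lambda x: x.temperature) \
#      .sum()
valid = [x.temperature for x in rows if x.temperature is not None]
total = sum(valid)
print(total)  # 64.0
```

Summing the raw rows fails because sum() cannot add Row objects (or None values); filtering the nulls and projecting the field first reduces the data to plain floats.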