Getting OutOfMemoryError - GC overhead limit exceeded in pyspark

In the course of the project, I am getting the following error after calling a function in my spark sql query.

I have written a user defined function that takes two strings and concatenates them; after concatenation it takes the rightmost substring of length 5, depending on the total string length (an alternative to right(string, integer) in SQL Server).

from pyspark.sql.types import StringType


def concatstring(xstring, ystring):
    # Concatenate the two inputs, then keep the rightmost 5 characters
    # when the result is 6 or 7 characters long; otherwise return '99999'
    # (a substitute for SQL Server's right(string, integer)).
    newvalstring = xstring + ystring
    print(newvalstring)
    if len(newvalstring) == 6:
        return newvalstring[1:6]
    if len(newvalstring) == 7:
        return newvalstring[2:7]
    return '99999'


spark.udf.register('rightconcat', concatstring, StringType())
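For a quick standalone check of the registered UDF, something like the following can be run (the literal values here are illustrative, not taken from the original query):

# Sanity check with illustrative literals:
# '00000' + '1' -> '000001' (length 6) -> rightmost five characters '00001'.
spark.sql("select rightconcat('00000', '1') as result").show()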

It works fine when used on its own. Now, when I pass it as a column in my spark sql query, this exception occurs.

The query written is

spark.sql("select d.BldgID,d.LeaseID,d.SuiteID,coalesce(BLDG.BLDGNAME,('select EmptyDefault from EmptyDefault')) as LeaseBldgName,coalesce(l.OCCPNAME,('select EmptyDefault from EmptyDefault'))as LeaseOccupantName, coalesce(l.DBA, ('select EmptyDefault from EmptyDefault')) as LeaseDBA, coalesce(l.CONTNAME, ('select EmptyDefault from EmptyDefault')) as LeaseContact,coalesce(l.PHONENO1, '')as LeasePhone1,coalesce(l.PHONENO2, '')as LeasePhone2,coalesce(l.NAME, '') as LeaseName,coalesce(l.ADDRESS, '') as LeaseAddress1,coalesce(l.ADDRESS2,'') as LeaseAddress2,coalesce(l.CITY, '')as LeaseCity, coalesce(l.STATE, ('select EmptyDefault from EmptyDefault'))as LeaseState,coalesce(l.ZIPCODE, '')as LeaseZip, coalesce(l.ATTENT, '') as LeaseAttention,coalesce(l.TTYPID, ('select EmptyDefault from EmptyDefault'))as LeaseTenantType,coalesce(TTYP.TTYPNAME, ('select EmptyDefault from EmptyDefault'))as LeaseTenantTypeName,l.OCCPSTAT as LeaseCurrentOccupancyStatus,l.EXECDATE as LeaseExecDate, l.RENTSTRT as LeaseRentStartDate,l.OCCUPNCY as LeaseOccupancyDate,l.BEGINDATE as LeaseBeginDate,l.EXPIR as LeaseExpiryDate,l.VACATE as LeaseVacateDate,coalesce(l.STORECAT, (select EmptyDefault from EmptyDefault)) as LeaseStoreCategory ,rightconcat('00000',cast(coalesce(SCAT.SORTSEQ,99999) as string)) as LeaseStoreCategorySortID from Dim_CMLease_primer d join LEAS l on l.BLDGID=d.BldgID and l.LEASID=d.LeaseID left outer join SUIT on SUIT.BLDGID=l.BLDGID and SUIT.SUITID=l.SUITID left outer join BLDG on BLDG.BLDGID= l.BLDGID left outer join SCAT on SCAT.STORCAT=l.STORECAT left outer join TTYP on TTYP.TTYPID = l.TTYPID").show()

I have uploaded the query and the state after the query here. How can I solve this problem? Please guide me.

The simplest thing to try would be increasing the spark executor memory: spark.executor.memory=6g
Make sure you are using all the available memory. You can check that in the UI.
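One way to apply that setting when the session is created in code is sketched below; the app name and the 6g value are illustrative and should be adjusted to what the cluster actually has:

from pyspark.sql import SparkSession

# Build the session with a larger executor heap (example value).
spark = SparkSession.builder \
    .appName("lease-report") \
    .config("spark.executor.memory", "6g") \
    .getOrCreate()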

Update 1

With --conf spark.executor.extraJavaOptions="Option" you can pass -Xmx1024m as an option.

What are your current spark.driver.memory and spark.executor.memory settings?
Increasing them should fix the problem.
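You can inspect what is currently in effect from the running session, for example:

# Read back the currently effective settings; supplying a default avoids
# an error for keys that were never set.
conf = spark.sparkContext.getConf()
print(conf.get("spark.driver.memory", "not set"))
print(conf.get("spark.executor.memory", "not set"))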

Bear in mind that, according to the spark documentation:

Note that it is illegal to set Spark properties or heap size settings with this option. Spark properties should be set using a SparkConf object or the spark-defaults.conf file used with the spark-submit script. Heap size settings can be set with spark.executor.memory.
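So the heap sizes belong in spark-defaults.conf (or on the SparkConf / spark-submit options) rather than in extraJavaOptions; a spark-defaults.conf sketch with illustrative values:

spark.driver.memory      4g
spark.executor.memory    6g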

Update 2

As the GC overhead error is a garbage collection problem, I would also recommend reading this great answer.