How to read a text file separated by multiple characters in PySpark?

I have a file in .bcp format and am trying to read it. Records are separated by "|;;|", and a single record may span several lines of the file.

rdd = sc.textFile("test.bcp") splits the file on newlines, but I need it split on "|;;|". How can I do this without changing the Hadoop configuration?

Sample test.bcp:

A1|;|B1|;|C1|;|
D1|;;|A2|;|B2|;|
C2|;|D2|;;|

This should be transformed into: [["A1", "B1", "C1", "D1"], ["A2", "B2", "C2", "D2"]]
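In plain Python, ignoring Spark for a moment, this is roughly the parsing I want (the raw string below mirrors the sample file; stray newlines inside records are trimmed and the empty chunk after the trailing "|;;|" is dropped):

raw = 'A1|;|B1|;|C1|;|\nD1|;;|A2|;|B2|;|\nC2|;|D2|;;|'

# split into records, dropping the empty chunk after the final '|;;|'
records = [r for r in raw.split('|;;|') if r.strip()]

# split each record into columns, trimming the newlines that ended up
# inside records that span several file lines
print([[c.strip() for c in r.split('|;|')] for r in records])
# [['A1', 'B1', 'C1', 'D1'], ['A2', 'B2', 'C2', 'D2']]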

For a custom delimiter consisting of multiple characters, change the Hadoop configuration:

from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# let Hadoop split the file into records by our custom delimiter
conf = sc._jsc.hadoopConfiguration()
conf.set("textinputformat.record.delimiter", '|;;|')

# create RDD of .bcp file
rows = sc.textFile('/PATH/TO/FILE/test.bcp')  # one record per '|;;|' chunk
rows = rows.map(lambda row: [c.strip() for c in row.split('|;|')])  # split records into columns, trimming the newlines left inside multi-line records
rows = rows.filter(lambda row: row != [''])  # drop the empty record produced by the trailing '|;;|'

# reset hadoop delimiter so later reads split on newlines again
conf.set("textinputformat.record.delimiter", "\n")
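Note that hadoopConfiguration is shared by the whole SparkContext, which is what the question wanted to avoid mutating. As a sketch of an alternative, the delimiter can instead be passed per-read through the conf argument of sc.newAPIHadoopFile (the class names below are the standard Hadoop new-API TextInputFormat and key/value types; the path is the same placeholder as above):

from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# pass the record delimiter for this read only; the shared
# Hadoop configuration is left untouched
kv = sc.newAPIHadoopFile(
    '/PATH/TO/FILE/test.bcp',
    'org.apache.hadoop.mapreduce.lib.input.TextInputFormat',
    'org.apache.hadoop.io.LongWritable',
    'org.apache.hadoop.io.Text',
    conf={'textinputformat.record.delimiter': '|;;|'},
)

# newAPIHadoopFile yields (byte offset, text) pairs; keep the text,
# then split and trim columns exactly as above
rows = kv.map(lambda pair: pair[1])
rows = rows.map(lambda row: [c.strip() for c in row.split('|;|')])
rows = rows.filter(lambda row: row != [''])

print(rows.collect())
# expected: [['A1', 'B1', 'C1', 'D1'], ['A2', 'B2', 'C2', 'D2']]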