How to create a custom tokenizer in PySpark ML

from pyspark.ml.feature import Tokenizer

# Assumes an active SparkSession named `spark`
sentenceDataFrame = spark.createDataFrame([
        (0, "Hi I heard about Spark"),
        (1, "I wish Java could use case classes"),
        (2, "Logistic,regression,models,are,neat")
    ], ["id", "sentence"])
tokenizer = Tokenizer(inputCol="sentence", outputCol="words")
tokenized = tokenizer.transform(sentenceDataFrame)

If I run the command

tokenized.head()

I would like to get a result like this:

Row(id=0, sentence='Hi I heard about Spark',
    words=['H', 'i', ' ', 'h', 'e', 'a', ...])

However, the current result is:

Row(id=0, sentence='Hi I heard about Spark',
    words=['hi', 'i', 'heard', 'about', 'spark'])

Is there any way to achieve this with Tokenizer or RegexTokenizer in PySpark?

A similar question is here:

Looking at the pyspark.ml documentation, Tokenizer only splits on whitespace, whereas RegexTokenizer, as the name suggests, uses a regular expression to find either the split points or the tokens to extract (this is configurable via the gaps parameter).
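
To make the difference concrete, here is a minimal sketch reusing the sentenceDataFrame defined above; the patterns are only illustrative choices, not part of the original question:

from pyspark.ml.feature import RegexTokenizer

# gaps=True (the default): the pattern describes the separators between tokens
split_on_whitespace = RegexTokenizer(
    inputCol="sentence", outputCol="words", pattern="\\s+", gaps=True)

# gaps=False: the pattern describes the tokens themselves
extract_words = RegexTokenizer(
    inputCol="sentence", outputCol="words", pattern="\\w+", gaps=False)

On the first sentence both variants yield the same word-level tokens; on the comma-separated sentence only the gaps=False variant splits it apart, because there is no whitespace to act as a separator.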

If you pass an empty pattern and keep gaps=True (which is the default), you should get the desired result:

from pyspark.ml.feature import RegexTokenizer

# An empty pattern with gaps=True splits the sentence into single characters
tokenizer = RegexTokenizer(inputCol="sentence", outputCol="words", pattern="")
tokenized = tokenizer.transform(sentenceDataFrame)
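
To verify, a quick sketch of inspecting the result. One detail the answer above does not mention: RegexTokenizer lowercases the input by default, so pass toLowercase=False if the original case should be preserved:

tokenized.head()
# Row(id=0, sentence='Hi I heard about Spark',
#     words=['h', 'i', ' ', 'i', ' ', 'h', 'e', ...])

# Keep the original case by disabling the default lowercasing
tokenizer = RegexTokenizer(inputCol="sentence", outputCol="words",
                           pattern="", toLowercase=False)
tokenized = tokenizer.transform(sentenceDataFrame)
tokenized.head()
# Row(id=0, sentence='Hi I heard about Spark',
#     words=['H', 'i', ' ', 'I', ' ', 'h', 'e', ...])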