Tokenize a sentence where each word contains only letters using RegexTokenizer Scala
I am using Spark with Scala and trying to tokenize a sentence in which every word should contain only letters. Here is my code:
import org.apache.spark.ml.feature.{RegexTokenizer, Tokenizer}
import org.apache.spark.sql.{DataFrame, SparkSession}

def tokenization(extractedText: String): DataFrame = {
  val existingSparkSession = SparkSession.builder().getOrCreate()
  // Wrap the extracted text in a one-row DataFrame with (id, sentence) columns
  val textDataFrame = existingSparkSession.createDataFrame(Seq(
    (0, extractedText))).toDF("id", "sentence")
  val tokenizer = new Tokenizer().setInputCol("sentence").setOutputCol("words") // currently unused
  val regexTokenizer = new RegexTokenizer()
    .setInputCol("sentence")
    .setOutputCol("words")
    .setPattern("\\W") // split on any non-word character
  val regexTokenized = regexTokenizer.transform(textDataFrame)
  regexTokenized.select("sentence", "words").show(false)
  regexTokenized
}
If I pass in the sentence "I am going to school5", after tokenization it should contain only [i, am, going, to] and should drop school5. But with my current pattern it does not ignore words that contain digits. How can I remove words with numbers in them?
You can get the tokenization you want with the settings below. Essentially, you extract only the letters-only words by using an appropriate regex pattern.
val regexTokenizer = new RegexTokenizer()
  .setInputCol("sentence")
  .setOutputCol("words")
  .setGaps(false) // match tokens instead of splitting on delimiters
  .setPattern("\\b[a-zA-Z]+\\b") // letters-only words, bounded on both sides
val regexTokenized = regexTokenizer.transform(textDataFrame)
regexTokenized.show(false)
+---+---------------------+------------------+
|id |sentence |words |
+---+---------------------+------------------+
|0 |I am going to school5|[i, am, going, to]|
+---+---------------------+------------------+
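Two details are worth noting. The \\b word boundaries are what make school5 disappear entirely: with a bare [a-zA-Z]+ pattern the tokenizer would still emit school, since the letter run matches on its own, but there is no word boundary between l and 5 (digits are word characters), so \\b[a-zA-Z]+\\b cannot match inside school5. The output is also lowercased (I becomes i) because RegexTokenizer's toLowercase parameter defaults to true.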
Why did I set gaps to false? Look at the documentation:
A regex based tokenizer that extracts tokens either by using the provided regex pattern (in Java dialect) to split the text (default) or repeatedly matching the regex (if gaps is false). Optional parameters also allow filtering tokens using a minimal length. It returns an array of strings that can be empty.
You want to repeatedly match the regex, not split the text on it.
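To make the two modes concrete, here is a minimal side-by-side sketch, assuming the same textDataFrame built in the question (the val names splitter and matcher are just illustrative):

// gaps = true (the default): the pattern describes the SEPARATORS.
// "\\W" splits on each non-word character, so "school5" survives intact.
val splitter = new RegexTokenizer()
  .setInputCol("sentence")
  .setOutputCol("words")
  .setGaps(true)
  .setPattern("\\W")
splitter.transform(textDataFrame).show(false) // words = [i, am, going, to, school5]

// gaps = false: the pattern describes the TOKENS themselves.
// The regex is matched repeatedly, and only letters-only words qualify.
val matcher = new RegexTokenizer()
  .setInputCol("sentence")
  .setOutputCol("words")
  .setGaps(false)
  .setPattern("\\b[a-zA-Z]+\\b")
matcher.transform(textDataFrame).show(false) // words = [i, am, going, to]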