How to segment text into sub-sentences based on enumerators?

I am using nltk's PunktSentenceTokenizer() to split text into sentences in Python. However, many of the long sentences enumerate items, and in those cases I need to extract the sub-sentences.
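
For reference, this is roughly how I am segmenting the text at the moment (a minimal sketch, using the example sentence below):

from nltk.tokenize import PunktSentenceTokenizer

text = 'The api allows the user to achieve following goals: (a) aXXXXXX ,(b)bXXXX, (c) cXXXXX.'
# Punkt returns the whole enumeration as a single sentence:
print(PunktSentenceTokenizer().tokenize(text))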

Example:

The api allows the user to achieve following goals: (a) aXXXXXX ,(b)bXXXX, (c) cXXXXX. 

The desired output is:

"The api allows the user to achieve following goals aXXXXX. ""The api allows the user to achieve following goals bXXXXX.""The api allows the user to achieve following goals cXXXXX. "

How can I achieve this?

To get the sub-sequences, you can use a RegExp Tokenizer.

An example of how to use it to split the sentence:

from nltk.tokenize.regexp import regexp_tokenize

str1 = 'The api allows the user to achieve following goals: (a) aXXXXXX ,(b)bXXXX, (c) cXXXXX.'

# Split on the enumerators "(a)", "(b)", ...; gaps=True returns the text
# between the matches instead of the matches themselves.
parts = regexp_tokenize(str1, r'\(\w\)\s*', gaps=True)

# The first gap is the common stem that precedes the enumeration.
start_of_sentence = parts.pop(0)

for part in parts:
    print(" ".join((start_of_sentence, part)))
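
Note that the gaps are returned verbatim, so the colon after "goals" and the stray commas survive in the printed output:

The api allows the user to achieve following goals:  aXXXXXX ,
The api allows the user to achieve following goals:  bXXXX,
The api allows the user to achieve following goals:  cXXXXX.

If you want the output exactly as in your question, strip those off first, e.g. with start_of_sentence.rstrip(': ') and part.strip(' ,').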

I'll skip the obvious question (namely: "What have you tried so far?"). As you may have found out already, PunktSentenceTokenizer won't really help you here, since it leaves your input sentence in one piece. The best solution depends heavily on how predictable your input is. The following will work on your example, but as you can see, it relies on there being a colon and some commas. If they aren't there, it won't help you.

import re
from nltk.tokenize import PunktSentenceTokenizer

s = 'The api allows the user to achieve following goals: (a) aXXXXXX ,(b)bXXXX, (c) cXXXXX.'
#sents = PunktSentenceTokenizer().tokenize(s)  # leaves s in one piece, so no help here

# Split off the common stem before the colon, then split the rest on commas.
p = s.split(':')
for l in p[1:]:
    i = l.split(',')
    for j in i:
        # Drop the "(a)", "(b)", ... enumerators and surrounding whitespace.
        j = re.sub(r'\([a-z]\)', '', j).strip()
        print("%s: %s" % (p[0], j))
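
For the example sentence this prints:

The api allows the user to achieve following goals: aXXXXXX
The api allows the user to achieve following goals: bXXXX
The api allows the user to achieve following goals: cXXXXX.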