nltk sentence tokenizer, consider new lines as sentence boundary
I'm using NLTK's PunktSentenceTokenizer to tokenize a text into a set of sentences. However, the tokenizer doesn't seem to treat a new paragraph or new lines as a new sentence.
>>> from nltk.tokenize.punkt import PunktSentenceTokenizer
>>> tokenizer = PunktSentenceTokenizer()
>>> tokenizer.tokenize('Sentence 1 \n Sentence 2. Sentence 3.')
['Sentence 1 \n Sentence 2.', 'Sentence 3.']
>>> tokenizer.span_tokenize('Sentence 1 \n Sentence 2. Sentence 3.')
[(0, 24), (25, 36)]
I would like it to treat new lines as sentence boundaries as well. Is there any way to do this (I need to save the offsets too)?
Well, I had the same problem, and what I did was split the text on '\n'. Something like this:
# in my case, when it had '\n', I called it a new paragraph,
# like a collection of sentences
paragraphs = [p for p in text.split('\n') if p]
# and here, sent_tokenize each one of the paragraphs
for paragraph in paragraphs:
    sentences = tokenizer.tokenize(paragraph)
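Since the question also asks for offsets, here is a minimal sketch (not from the original answer) of how the same split-on-'\n' idea can keep character offsets into the full text: `str.find` locates each paragraph in the original string, and the paragraph-local spans from `span_tokenize` are shifted by that paragraph's start offset.

```python
# Sketch: sentence spans relative to the ORIGINAL text, while still
# treating '\n' as a hard sentence boundary.
from nltk.tokenize.punkt import PunktSentenceTokenizer

text = 'Sentence 1 \n Sentence 2. Sentence 3.'
tokenizer = PunktSentenceTokenizer()

spans = []
search_from = 0
for paragraph in text.split('\n'):
    if not paragraph.strip():
        continue
    # locate this paragraph in the original text so spans stay global
    start = text.find(paragraph, search_from)
    search_from = start + len(paragraph)
    # shift each paragraph-local span by the paragraph's start offset
    for s, e in tokenizer.span_tokenize(paragraph):
        spans.append((start + s, start + e))

print(spans)
```

With this, slicing `text[s:e]` for each span recovers the sentence as it appears in the original string, newlines included in the bookkeeping.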
This is a simplified version of what I use in production, but the general idea is exactly the same. Also, the comments and docstrings are in Portuguese, because this was done for 'educational purposes' aimed at a Brazilian audience.
The complete version:
def paragraphs(self):
    if self._paragraphs is not None:
        for p in self._paragraphs:
            yield p
    else:
        raw_paras = self.raw_text.split(self.paragraph_delimiter)
        gen = (Paragraph(self, p) for p in raw_paras if p)
        self._paragraphs = []
        for p in gen:
            self._paragraphs.append(p)
            yield p