Word n-gram list of sentences in Python

I want to generate char n-grams of size 2 to 4. This is what I have so far:

from nltk import ngrams
sentence = ['i have an apple', 'i like apples so much']

for i in range(len(sentence)):
    for n in range(2, 4):
        n_grams = ngrams(sentence[i].split(), n)
        for grams in n_grams:
            print(grams)

This gives me:

('i', 'have')
('have', 'an')
('an', 'apple')
('i', 'have', 'an')
('have', 'an', 'apple')
('i', 'like')
('like', 'apples')
('apples', 'so')
('so', 'much')
('i', 'like', 'apples')
('like', 'apples', 'so')
('apples', 'so', 'much')

How can I do this in the best way? I have a very large amount of input data, and my solution uses a for loop inside a for loop, so the complexity is rather high and the algorithm takes a long time to finish.

Assuming you mean word n-grams rather than char n-grams: I'm not sure whether there's any chance of duplicate sentences, but you could try applying set to the input sentences, and possibly a list comprehension:

%%timeit
from nltk import ngrams
sentence = ['i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much', 'so much']
n_grams = []
for i in range(len(sentence)):
    for n in range(2, 4):
        for item in ngrams(sentence[i].split(), n):
            n_grams.append(item)

Result:

1000 loops, best of 3: 228 µs per loop

Using just a list comprehension gives some improvement:

%%timeit
from nltk import ngrams
sentence = ['i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much', 'so much']
n_grams = [item for sent in sentence for n in range(2, 4) for item in ngrams(sent.split(), n)]

Result:

1000 loops, best of 3: 214 µs per loop

Another approach is to use set plus a list comprehension:

%%timeit
from nltk import ngrams
sentences = ['i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much','i have an apple', 'i like apples so much', 'i like apples so much', 'i like apples so much',
           'i like apples so much', 'i like apples so much', 'i like apples so much', 'so much']
# use of set
sentence = set(sentences)
n_grams = [item for sent in sentence for n in range(2, 4) for item in ngrams(sent.split(), n)]

Result:

10000 loops, best of 3: 23.5 µs per loop

So, if there are many duplicate sentences, this may help.
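One caveat with the set approach: it also changes the result, since n-grams from repeated sentences are only produced once. A minimal sketch of the difference, using a zip-based stand-in for nltk's ngrams so it runs without nltk (the helper name word_ngrams is hypothetical, not an nltk API):

```python
def word_ngrams(tokens, n):
    # Zip over n shifted slices of the token list; each tuple of
    # parallel elements is one n-gram. Stand-in for nltk.ngrams.
    return list(zip(*(tokens[i:] for i in range(n))))

sentences = ['i have an apple', 'i like apples so much', 'i like apples so much']

# All n-grams, duplicates included.
full = [g for s in sentences for n in range(2, 4)
        for g in word_ngrams(s.split(), n)]

# n-grams over unique sentences only: faster, but repeat counts are lost.
deduped = [g for s in set(sentences) for n in range(2, 4)
           for g in word_ngrams(s.split(), n)]

print(len(full), len(deduped))  # full retains the repeated sentence's n-grams
```

If you later need frequency counts (e.g. with collections.Counter), deduplicating first would skew them, so only use set when duplicates really are irrelevant.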

>>> from nltk import everygrams
>>> from collections import Counter

>>> sents = ['i have an apple', 'i like apples so much']

# For character ngrams, use the string directly as 
# the input to `ngrams` or `everygrams`

# If you like to keep the keys as tuple of characters.
>>> Counter(everygrams(sents[0], 1, 4))
Counter({('a',): 3, (' ',): 3, ('e',): 2, ('p',): 2, (' ', 'a'): 2, ('n',): 1, ('v', 'e'): 1, (' ', 'a', 'n'): 1, ('v', 'e', ' '): 1, (' ', 'h', 'a'): 1, ('l', 'e'): 1, ('n', ' '): 1, ('p', 'p', 'l', 'e'): 1, ('e', ' ', 'a'): 1, ('a', 'v', 'e'): 1, ('p', 'l'): 1, ('a', 'v', 'e', ' '): 1, ('a', 'v'): 1, (' ', 'a', 'p'): 1, (' ', 'a', 'p', 'p'): 1, ('h', 'a'): 1, ('i', ' ', 'h', 'a'): 1, ('i',): 1, ('i', ' ', 'h'): 1, ('v', 'e', ' ', 'a'): 1, ('p', 'p', 'l'): 1, ('e', ' '): 1, ('p', 'p'): 1, (' ', 'a', 'n', ' '): 1, ('n', ' ', 'a', 'p'): 1, (' ', 'h', 'a', 'v'): 1, ('a', 'p', 'p', 'l'): 1, ('a', 'n', ' '): 1, (' ', 'h'): 1, ('n', ' ', 'a'): 1, ('a', 'n', ' ', 'a'): 1, ('a', 'p', 'p'): 1, ('h', 'a', 'v'): 1, ('a', 'n'): 1, ('v',): 1, ('h', 'a', 'v', 'e'): 1, ('h',): 1, ('a', 'p'): 1, ('i', ' '): 1, ('p', 'l', 'e'): 1, ('l',): 1, ('e', ' ', 'a', 'n'): 1})

# If you like the keys to be just the string.
>>> Counter(map(''.join,everygrams(sents[0], 1, 4)))
Counter({' ': 3, 'a': 3, ' a': 2, 'e': 2, 'p': 2, 'ppl': 1, 've': 1, ' h': 1, 'i ha': 1, 'an': 1, 'ap': 1, 'have': 1, 'av': 1, 'ave': 1, 'pp': 1, 'le': 1, 'n ap': 1, ' app': 1, ' an': 1, ' ap': 1, 'appl': 1, 'i h': 1, 'app': 1, 'pl': 1, 'an ': 1, 'pple': 1, 'e ': 1, 'e a': 1, 'ple': 1, 'e an': 1, 'i ': 1, 'ha': 1, 'n a': 1, 've a': 1, ' an ': 1, 'i': 1, 'h': 1, 'ave ': 1, 'l': 1, 'n': 1, 'an a': 1, ' hav': 1, 'n ': 1, 've ': 1, 'v': 1, ' ha': 1, 'hav': 1})


# If you want word ngrams:

>>> Counter(map(' '.join,everygrams(sents[0].split(), 1, 4)))
Counter({'have an': 1, 'apple': 1, 'i': 1, 'i have an': 1, 'i have an apple': 1, 'an': 1, 'have': 1, 'have an apple': 1, 'i have': 1, 'an apple': 1})

# Or using word_tokenize
>>> from nltk import word_tokenize
>>> Counter(map(' '.join,everygrams(word_tokenize(sents[0]), 1, 4)))
Counter({'have an': 1, 'apple': 1, 'i': 1, 'i have an': 1, 'i have an apple': 1, 'an': 1, 'have': 1, 'have an apple': 1, 'i have': 1, 'an apple': 1})

If speed is an issue, see Fast n-gram calculation
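The trick in that link boils down to building n-grams with zip over shifted slices of the token list, which skips nltk's overhead entirely. A sketch (the function name fast_ngrams is just illustrative):

```python
def fast_ngrams(tokens, n):
    # zip over n shifted views of the token list; the k-th n-gram is
    # (tokens[k], tokens[k+1], ..., tokens[k+n-1]).
    return list(zip(*(tokens[i:] for i in range(n))))

print(fast_ngrams('i have an apple'.split(), 2))
# [('i', 'have'), ('have', 'an'), ('an', 'apple')]
```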

O(MN) complexity is natural here when you have M sentences and N orders of n-grams to iterate through. Even everygrams iterates through the n-gram orders one at a time.
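You can't beat that asymptotically, but you can shave constant factors by sweeping each sentence once and emitting every prefix of each max-order window, instead of re-scanning the sentence per order. A hedged sketch (all_orders is a made-up name, not an nltk function):

```python
def all_orders(tokens, min_n=2, max_n=3):
    # Single sweep over start positions; at each position emit every
    # prefix length in [min_n, max_n] that still fits in the list.
    for i in range(len(tokens)):
        for n in range(min_n, min(max_n, len(tokens) - i) + 1):
            yield tuple(tokens[i:i + n])

print(list(all_orders('i have an apple'.split())))
```

Being a generator, it also avoids materializing intermediate lists, which matters more than speed once the input gets large.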

I'm sure there are more efficient ways to compute n-grams, but I suspect that at large scale you will run into memory problems rather than speed problems. In that case, I can suggest https://github.com/kpu/kenlm