How to quickly get the collection of words in a corpus (with nltk)?
I want to quickly build a word lookup table for a corpus with nltk. Here is what I am doing:
- read the raw text: file=open("corpus","r").read().decode('utf-8')
- get all tokens with a=nltk.word_tokenize(file);
- get the unique tokens with set(a) and convert them back to a list.
Is this the right way to do this?
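In code, the approach described above would look roughly like this (a minimal sketch, assuming Python 2, since the snippet uses .decode('utf-8'), and a plain-text file named "corpus"):

import nltk

raw = open("corpus", "r").read().decode('utf-8')  # read the raw text
tokens = nltk.word_tokenize(raw)                  # all tokens
uniq = list(set(tokens))                          # unique tokens, back to a list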
Try:
import time
from collections import Counter
from nltk import FreqDist
from nltk.corpus import brown
from nltk import word_tokenize
def time_uniq(maxchar):
    # Take the first `maxchar` characters of the Brown corpus.
    words = brown.raw()[:maxchar]

    # Time to tokenize.
    start = time.time()
    words = word_tokenize(words)
    print(time.time() - start)

    # Using collections.Counter.
    start = time.time()
    x = Counter(words)
    uniq_words = x.keys()
    print(time.time() - start)

    # Using nltk.FreqDist.
    start = time.time()
    x = FreqDist(words)
    uniq_words = x.keys()
    print(time.time() - start)

    # If you don't need frequency info, use set().
    start = time.time()
    uniq_words = set(words)
    print(time.time() - start)
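The driver code is not shown, but the three blocks of timings below suggest time_uniq was called with increasingly large slices of the corpus, for example (hypothetical sizes, not confirmed by the original):

time_uniq(10000)     # hypothetical argument
time_uniq(100000)    # hypothetical argument
time_uniq(1000000)   # hypothetical argument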
[out]:
~$ python test.py
0.0413908958435
0.000495910644531
0.000432968139648
9.3936920166e-05
0.10734796524
0.00458407402039
0.00439405441284
0.00084400177002
1.12890005112
0.0492491722107
0.0490930080414
0.0100378990173
To load your own corpus file (assuming the file is small enough to fit in RAM):
from collections import Counter
from nltk import FreqDist, word_tokenize

with open('myfile.txt', 'r') as fin:
    tokens = word_tokenize(fin.read())  # read and tokenize once

# Using Counter.
x = Counter(tokens)
uniq = x.keys()

# Using FreqDist.
x = FreqDist(tokens)
uniq = x.keys()

# Using set.
uniq = set(tokens)
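If what you actually need is a lookup table rather than just the collection of unique words, one option (a sketch, not part of the original answer) is to map each unique token to an integer id:

# Hypothetical follow-up: build a word -> id lookup table from the unique tokens.
lookup = {word: idx for idx, word in enumerate(sorted(uniq))}
print(lookup.get(u'the'))   # id of 'the', or None if it never occurred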
If the file is too large, you may want to process it one line at a time:
from collections import Counter
from nltk import word_tokenize

# Using Counter.
x = Counter()
with open('myfile.txt', 'r') as fin:
    for line in fin:                  # iterate over the file line by line
        x.update(word_tokenize(line))
uniq = x.keys()

# Using set.
x = set()
with open('myfile.txt', 'r') as fin:
    for line in fin:
        x.update(word_tokenize(line))
uniq = x
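Since the question decodes the file with .decode('utf-8'), note that on Python 3, or with io.open on Python 2, you would pass the encoding explicitly instead (a sketch under that assumption, using the same placeholder file name):

import io
from collections import Counter
from nltk import word_tokenize

x = Counter()
with io.open('myfile.txt', 'r', encoding='utf-8') as fin:  # explicit UTF-8 decoding
    for line in fin:
        x.update(word_tokenize(line))
uniq = x.keys()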