How to use word_tokenize in a data frame
I recently started using the nltk module for text analysis. I am stuck at one point. I want to use word_tokenize on a dataframe, so as to obtain all the words used in a particular row of the dataframe.
data example:
text
1. This is a very good site. I will recommend it to others.
2. Can you please give me a call at 9983938428. have issues with the listings.
3. good work! keep it up
4. not a very helpful site in finding home decor.
expected output:
1. 'This','is','a','very','good','site','.','I','will','recommend','it','to','others','.'
2. 'Can','you','please','give','me','a','call','at','9983938428','.','have','issues','with','the','listings'
3. 'good','work','!','keep','it','up'
4. 'not','a','very','helpful','site','in','finding','home','decor'
Basically, I want to separate all the words and find the length of each text in the dataframe.
I know word_tokenize can be used on a string, but how do I apply it to the entire dataframe?
Please help!
Thanks in advance...
You can use the apply method of the DataFrame API:
import pandas as pd
import nltk
df = pd.DataFrame({'sentences': ['This is a very good site. I will recommend it to others.', 'Can you please give me a call at 9983938428. have issues with the listings.', 'good work! keep it up']})
df['tokenized_sents'] = df.apply(lambda row: nltk.word_tokenize(row['sentences']), axis=1)
Output:
>>> df
sentences \
0 This is a very good site. I will recommend it ...
1 Can you please give me a call at 9983938428. h...
2 good work! keep it up
tokenized_sents
0 [This, is, a, very, good, site, ., I, will, re...
1 [Can, you, please, give, me, a, call, at, 9983...
2 [good, work, !, keep, it, up]
To find the length of each text, use apply with a lambda function again:
df['sents_length'] = df.apply(lambda row: len(row['tokenized_sents']), axis=1)
>>> df
sentences \
0 This is a very good site. I will recommend it ...
1 Can you please give me a call at 9983938428. h...
2 good work! keep it up
tokenized_sents sents_length
0 [This, is, a, very, good, site, ., I, will, re... 14
1 [Can, you, please, give, me, a, call, at, 9983... 15
2 [good, work, !, keep, it, up] 6
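As a side note, the same two steps can also be written against the column itself with Series.apply; a minimal equivalent sketch using the column names from above:

# Tokenize the column directly, then count the tokens in each row
df['tokenized_sents'] = df['sentences'].apply(nltk.word_tokenize)
df['sents_length'] = df['tokenized_sents'].apply(len)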
pandas.Series.apply is faster than pandas.DataFrame.apply
import time

import pandas as pd
import nltk

df = pd.read_csv("/path/to/file.csv")

start = time.time()
df["unigrams"] = df["verbatim"].apply(nltk.word_tokenize)
print("series.apply", time.time() - start)

start = time.time()
df["unigrams2"] = df.apply(lambda row: nltk.word_tokenize(row["verbatim"]), axis=1)
print("dataframe.apply", time.time() - start)
On a sample csv file of 125 MB:
series.apply 144.428858995
dataframe.apply 201.884778976
Edit: You might think that the DataFrame df is larger in size after series.apply(nltk.word_tokenize), which could affect the runtime of the next operation, dataframe.apply(nltk.word_tokenize). Pandas optimizes under the hood for such a scenario; I got a similar runtime of about 200s by performing dataframe.apply(nltk.word_tokenize) on its own.
You may need to add str() to convert pandas' object type to a string.
Keep in mind that a faster way to count words is often just to count the spaces.
Interestingly, the tokenizer counts periods. You may want to remove those first, and perhaps also remove numbers. The cleanup line below makes the counts come out equal, at least in this case.
import nltk
import pandas as pd
sentences = pd.Series([
'This is a very good site. I will recommend it to others.',
'Can you please give me a call at 9983938428. have issues with the listings.',
'good work! keep it up',
'not a very helpful site in finding home decor. '
])
# remove anything but characters and spaces
sentences = sentences.str.replace('[^A-Za-z ]', '', regex=True).str.replace(' +', ' ', regex=True).str.strip()
splitwords = [ nltk.word_tokenize( str(sentence) ) for sentence in sentences ]
print(splitwords)
# output: [['This', 'is', 'a', 'very', 'good', 'site', 'I', 'will', 'recommend', 'it', 'to', 'others'], ['Can', 'you', 'please', 'give', 'me', 'a', 'call', 'at', 'have', 'issues', 'with', 'the', 'listings'], ['good', 'work', 'keep', 'it', 'up'], ['not', 'a', 'very', 'helpful', 'site', 'in', 'finding', 'home', 'decor']]
wordcounts = [ len(words) for words in splitwords ]
print(wordcounts)
# output: [12, 13, 5, 9]
wordcounts2 = [ sentence.count(' ') + 1 for sentence in sentences ]
print(wordcounts2)
# output: [12, 13, 5, 9]
If you're not using Pandas, you probably don't need the str().
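For example, a minimal sketch of the same idea without pandas, where the inputs are already plain Python strings and no str() cast is needed (sample sentences reused from the question):

import nltk

# Plain Python strings, so there is no pandas object dtype to cast with str()
sentences = [
    'This is a very good site. I will recommend it to others.',
    'good work! keep it up',
]

splitwords = [nltk.word_tokenize(sentence) for sentence in sentences]
wordcounts = [len(words) for words in splitwords]
print(wordcounts)  # [14, 6] with punctuation counted as tokens, matching the counts above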
Let me give you an example. Suppose you have a dataframe named twitter_df in which you have stored sentiment and text. So, first I extract the text data into a list as follows
tweetText = twitter_df['text']
Then tokenize it
from nltk.tokenize import word_tokenize
tweetText = tweetText.apply(word_tokenize)
tweetText.head()
I think this will help you.
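If you also need the length of each text, as in the original question, a minimal follow-up sketch (assuming tweetText now holds lists of tokens after the apply above; the tweetLengths name is just for illustration):

# Each entry is now a list of tokens, so len() gives the word count per tweet
tweetLengths = tweetText.apply(len)
tweetLengths.head()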
Use pandarallel to make it faster
Using spaCy
import spacy
from pandarallel import pandarallel
pandarallel.initialize(progress_bar=True)
nlp = spacy.load("en_core_web_sm")
df['new_col'] = df['text'].parallel_apply(lambda x: nlp(x))
Using NLTK
import nltk
from pandarallel import pandarallel
pandarallel.initialize(progress_bar=True)
df['new_col'] = df['text'].parallel_apply(lambda x: nltk.word_tokenize(x))
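One difference worth noting: with the spaCy variant, new_col holds Doc objects, while with the NLTK variant it holds plain lists of strings. A minimal sketch of getting token counts and token text afterwards (the token_count and tokens column names are just illustrative):

# len() works either way: on a list of strings or on a spaCy Doc
df['token_count'] = df['new_col'].apply(len)

# If new_col holds spaCy Docs, pull out the raw token strings
df['tokens'] = df['new_col'].apply(lambda doc: [t.text for t in doc])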