Optimizing function computation in a pandas column?
Suppose I have the following pandas dataframe:
id |opinion
1 |Hi how are you?
...
n-1|Hello!
I would like to create a new POS-tagged pandas column, like this:
id| opinion |POS-tagged_opinions
1 |Hi how are you?|hi\tUH\thi
how\tWRB\thow
are\tVBP\tbe
you\tPP\tyou
?\tSENT\t?
.....
n-1| Hello |Hello\tUH\tHello
!\tSENT\t!
Following the documentation and tutorials, I tried several approaches. In particular:
df.apply(postag_cell, axis=1)
and
df['content'].map(postag_cell)
So I created this POS-tagging cell function:
import pandas as pd

df = pd.read_csv('/Users/user/Desktop/data2.csv', sep='|')
print df.head()

def postag_cell(pandas_cell):
    import pprint    # For proper print of sequences.
    import treetaggerwrapper
    tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')
    #2) tag your text.
    y = [i.decode('UTF-8') if isinstance(i, basestring) else i for i in [pandas_cell]]
    tags = tagger.tag_text(y)
    #3) use the tags list... (list of string output from TreeTagger).
    return tags

#df.apply(postag_cell(), axis=1)
#df['content'].map(postag_cell())

df['POS-tagged_opinions'] = (df['content'].apply(postag_cell))
print df.head()
The function above returns the following:
user:~/PycharmProjects/misc_tests$ time python tagging\ with\ pandas.py
id| opinion |POS-tagged_opinions
1 |Hi how are you?|[hi\tUH\thi
how\tWRB\thow
are\tVBP\tbe
you\tPP\tyou
?\tSENT\t?]
.....
n-1| Hello |Hello\tUH\tHello
!\tSENT\t!
--- 9.53674316406e-07 seconds ---
real 18m22.038s
user 16m33.236s
sys 1m39.066s
The problem is that tagging a large number of opinions takes a lot of time:
How can I perform the POS-tagging more efficiently and in a more pythonic way with pandas and treetagger? I believe the issue comes from my limited pandas knowledge, since I only quickly put together the treetagger tagging inside the pandas dataframe.
Some obvious modifications can be made to get a reasonable run time (for example, moving the import and the instantiation of the TreeTagger class out of the postag_cell function). The code can then be parallelized. However, most of the work is done by treetagger itself. Since I know nothing about that software, I cannot say whether it can be optimized further.
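To make that first change concrete before the full script, here is a condensed before/after sketch (my own illustration of the point above; the two function names are made up):

import treetaggerwrapper

# Before (the question's pattern): the TreeTagger instance is built inside the
# function, i.e. once per row -- this is where the avoidable time goes.
def postag_cell_slow(cell):
    tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')
    return tagger.tag_text(cell)

# After: one TreeTagger instance, built once at module level and reused by every call.
tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')

def postag_cell_fast(cell):
    return tagger.tag_text(cell)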
Minimal working code:
import pandas as pd
import treetaggerwrapper

input_file = 'new_corpus.csv'
output_file = 'output.csv'

def postag_string(s):
    '''Returns tagged text from string s'''
    if isinstance(s, basestring):
        s = s.decode('UTF-8')
    return tagger.tag_text(s)

# Reading in the file
all_lines = []
with open(input_file) as f:
    for line in f:
        all_lines.append(line.strip().split('|', 1))

df = pd.DataFrame(all_lines[1:], columns = all_lines[0])

tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')

df['POS-tagged_content'] = df['content'].apply(postag_string)

# Format fix:
def fix_format(x):
    '''x - a list or an array'''
    # With encoding:
    out = list(tuple(i.encode().split('\t')) for i in x)
    # or without:
    # out = list(tuple(i.split('\t')) for i in x)
    return out

df['POS-tagged_content'] = df['POS-tagged_content'].apply(fix_format)

df.to_csv(output_file, sep = '|')
I did not use pd.read_csv(filename, sep = '|'), because your input file is "misformatted" - it contains unescaped | characters in some of the text opinions.
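To illustrate why the manual read uses split('|', 1), here is a made-up example line (not from your actual corpus) with an unescaped | inside the opinion text:

line = 'cv07.txt|I liked the plot | the acting was weak\n'

print(line.strip().split('|', 1))
# ['cv07.txt', 'I liked the plot | the acting was weak']   -- the opinion stays in one field

print(line.strip().split('|'))
# ['cv07.txt', 'I liked the plot ', ' the acting was weak'] -- one field too many, which is
# roughly what a naive sep='|' parse would stumble over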
(Update:) After the format fix, the output file looks like this:
$ cat output_example.csv
|id|content|POS-tagged_content
0|cv01.txt|How are you?|[('How', 'WRB', 'How'), ('are', 'VBP', 'be'), ('you', 'PP', 'you'), ('?', 'SENT', '?')]
1|cv02.txt|Hello!|[('Hello', 'UH', 'Hello'), ('!', 'SENT', '!')]
2|cv03.txt|"She said ""OK""."|"[('She', 'PP', 'she'), ('said', 'VVD', 'say'), ('""', '``', '""'), ('OK', 'UH', 'OK'), ('""', ""''"", '""'), ('.', 'SENT', '.')]"
If the format does not suit your needs, we can sort that out.
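For example, one way to get the tagged tuples back from that file later (a sketch based only on the example layout above) is to parse the column with ast.literal_eval:

import ast
import pandas as pd

out = pd.read_csv('output_example.csv', sep='|', index_col=0)
# The tagged column was written as the repr() of a list of tuples, so parse it back:
out['POS-tagged_content'] = out['POS-tagged_content'].apply(ast.literal_eval)
print(out['POS-tagged_content'].iloc[0])
# [('How', 'WRB', 'How'), ('are', 'VBP', 'be'), ('you', 'PP', 'you'), ('?', 'SENT', '?')]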
Parallel code
It may give some speedup, but don't expect miracles. The overhead of the multiprocessing setup may even exceed the gains. You can experiment with the number of processes nproc (here, set by default to the number of CPUs; setting it higher than that is inefficient).
Treetaggerwrapper has its own multiprocessing class. I suspect it does roughly the same thing as the code below, so I did not try it.
import pandas as pd
import numpy as np
import treetaggerwrapper
import multiprocessing as mp

input_file = 'new_corpus.csv'
output_file = 'output2.csv'

def postag_string_mp(s):
    '''
    Returns tagged text for string s.
    "pool_tagger" is a global name, defined in each subprocess.
    '''
    if isinstance(s, basestring):
        s = s.decode('UTF-8')
    return pool_tagger.tag_text(s)

''' Reading in the file '''
all_lines = []
with open(input_file) as f:
    for line in f:
        all_lines.append(line.strip().split('|', 1))

df = pd.DataFrame(all_lines[1:], columns = all_lines[0])

''' Multiprocessing '''

# Number of processes can be adjusted for better performance:
nproc = mp.cpu_count()

# Function to be run at the start of every subprocess.
# Each subprocess will have its own TreeTagger called pool_tagger.
def init():
    global pool_tagger
    pool_tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')

# The actual job done in subprocesses:
def run(df):
    return df.apply(postag_string_mp)

# Splitting the input
lst_split = np.array_split(df['content'], nproc)

pool = mp.Pool(processes = nproc, initializer = init)
lst_out = pool.map(run, lst_split)
pool.close()
pool.join()

# Concatenating the output from subprocesses
df['POS-tagged_content'] = pd.concat(lst_out)

# Format fix:
def fix_format(x):
    '''x - a list or an array'''
    # With encoding:
    out = list(tuple(i.encode().split('\t')) for i in x)
    # and without:
    # out = list(tuple(i.split('\t')) for i in x)
    return out

df['POS-tagged_content'] = df['POS-tagged_content'].apply(fix_format)

df.to_csv(output_file, sep = '|')
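If you want to experiment with nproc as mentioned above, a rough timing loop can be wrapped around the pool setup (a sketch reusing init, run and df from the script above; time.time() is crude but good enough at this scale):

import time

for nproc in (1, 2, mp.cpu_count()):
    lst_split = np.array_split(df['content'], nproc)
    t0 = time.time()
    pool = mp.Pool(processes = nproc, initializer = init)
    lst_out = pool.map(run, lst_split)
    pool.close()
    pool.join()
    print('nproc = %d: %.1f s' % (nproc, time.time() - t0))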
Update
In Python 3, all strings are unicode by default, which saves some hassle and some time on decoding/encoding. (In the code below I also use plain numpy arrays instead of dataframes in the subprocesses, but the effect of this change is negligible.)
# Python3 code:
import pandas as pd
import numpy as np
import treetaggerwrapper
import multiprocessing as mp

input_file = 'new_corpus.csv'
output_file = 'output3.csv'

''' Reading in the file '''
all_lines = []
with open(input_file) as f:
    for line in f:
        all_lines.append(line.strip().split('|', 1))

df = pd.DataFrame(all_lines[1:], columns = all_lines[0])

''' Multiprocessing '''

# Number of processes can be adjusted for better performance:
nproc = mp.cpu_count()

# Function to be run at the start of every subprocess.
# Each subprocess will have its own TreeTagger called pool_tagger.
def init():
    global pool_tagger
    pool_tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')

# The actual job done in subprocesses:
def run(arr):
    out = np.empty_like(arr)
    for i in range(len(arr)):
        out[i] = pool_tagger.tag_text(arr[i])
    return out

# Splitting the input
lst_split = np.array_split(df.values[:,1], nproc)

with mp.Pool(processes = nproc, initializer = init) as p:
    lst_out = p.map(run, lst_split)

# Concatenating the output from subprocesses
df['POS-tagged_content'] = np.concatenate(lst_out)

# Format fix:
def fix_format(x):
    '''x - a list or an array'''
    out = list(tuple(i.split('\t')) for i in x)
    return out

df['POS-tagged_content'] = df['POS-tagged_content'].apply(fix_format)

df.to_csv(output_file, sep = '|')
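One portability note that goes beyond the original scripts: both multiprocessing versions create the Pool at module level, which works when workers are started with fork (Linux), but on platforms that use spawn (Windows, and macOS on Python 3.8+) the pool setup must be guarded so the module can be re-imported safely in the children. A minimal sketch of that layout, assuming the same definitions as above:

# Keep the function definitions (init, run, fix_format) at module level,
# but guard the pool creation and everything after it:
if __name__ == '__main__':
    lst_split = np.array_split(df.values[:,1], nproc)
    with mp.Pool(processes = nproc, initializer = init) as p:
        lst_out = p.map(run, lst_split)
    df['POS-tagged_content'] = np.concatenate(lst_out)
    df['POS-tagged_content'] = df['POS-tagged_content'].apply(fix_format)
    df.to_csv(output_file, sep = '|')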
After a single run (so not statistically significant), I got these timings on your file:
$ time python2.7 treetagger_minimal.py
real 0m59.783s
user 0m50.697s
sys 0m16.657s
$ time python2.7 treetagger_mp.py
real 0m48.798s
user 1m15.503s
sys 0m22.300s
$ time python3 treetagger_mp3.py
real 0m39.746s
user 1m25.340s
sys 0m21.157s
If the only use of the pandas dataframe pd is to save everything back to a file, then the next step would be to remove pandas from the code entirely. But again, the gain would be negligible compared to the time treetagger itself spends.
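For reference, a minimal Python 3 sketch of that last step (my addition; it assumes all_lines and the tagged results from the Python 3 script above are still in memory, with tagged as a stand-in name for np.concatenate(lst_out), and it skips the pandas index column):

import csv

# 'tagged' stands for the concatenated tag lists, e.g. np.concatenate(lst_out);
# all_lines is the raw file content as read in the script above.
with open('output_nopandas.csv', 'w', encoding='utf-8', newline='') as f:
    writer = csv.writer(f, delimiter='|')   # quotes fields that happen to contain '|'
    writer.writerow(all_lines[0] + ['POS-tagged_content'])
    for row, tags in zip(all_lines[1:], tagged):
        fixed = [tuple(t.split('\t')) for t in tags]
        writer.writerow(row + [repr(fixed)])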