Python, multiprocessing: How to optimize the code? Make the code faster?
I am using Python. I have 100 zip files, and each zip file contains more than 100 XML files. I parse the XML files to create CSV files.
from xml.etree.ElementTree import fromstring
import csv
import zipfile
from multiprocessing import Process

def parse_xml_for_csv1(data, writer1):
    root = fromstring(data)
    for node in root.iter('name'):
        writer1.writerow(node.get('value'))

def create_csv1():
    with open('output1.csv', 'w') as f1:
        writer1 = csv.writer(f1)
        for i in range(1, 100):
            z = zipfile.ZipFile('xml' + str(i) + '.zip')
            # z.namelist() contains more than 100 xml files
            for finfo in z.namelist():
                data = z.read(finfo)
                parse_xml_for_csv1(data, writer1)

def create_csv2():
    with open('output2.csv', 'w') as f2:
        writer2 = csv.writer(f2)
        for i in range(1, 100):
            ...

if __name__ == "__main__":
    p1 = Process(target=create_csv1)
    p2 = Process(target=create_csv2)
    p1.start()
    p2.start()
    p1.join()
    p2.join()
Please tell me: how can I optimize my code and make it faster?
You only need to define one method that takes parameters, and split the processing of the 100 .zip files across a given number of threads or processes. The more processes you add, the more CPUs you use; with more than 2 processes it may well be faster (although at some point disk I/O can become the bottleneck).
With the code below I can switch to 4 or 10 processes without copy/pasting code, and each process handles a different set of zip files.
Your current code processes the same 100 files twice in parallel: that is slower than not using multiprocessing at all!
# reuses parse_xml_for_csv1 and the imports from the question above
def create_csv(start_index, step):
    with open('output{0}.csv'.format(start_index//step), 'w') as f1:
        writer1 = csv.writer(f1)
        for i in range(start_index, start_index+step):
            z = zipfile.ZipFile('xml' + str(i) + '.zip')
            # z.namelist() contains more than 100 xml files
            for finfo in z.namelist():
                data = z.read(finfo)
                parse_xml_for_csv1(data, writer1)

if __name__ == "__main__":
    nb_files = 100
    nb_processes = 2  # raise to 4 or 8 depending on your machine
    step = nb_files//nb_processes
    lp = []
    for start_index in range(1, nb_files, step):
        p = Process(target=create_csv, args=[start_index, step])
        p.start()
        lp.append(p)
    for p in lp:
        p.join()
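For reference, here is a minimal sketch of the same splitting idea using multiprocessing.Pool instead of managing Process objects by hand. It assumes the xml<i>.zip naming and the <name value="..."> elements from the question; the chunking scheme and the output_<chunk>.csv file names are illustrative choices of mine, not part of the answer above. Note that csv.writer.writerow expects a sequence of fields, so the value is wrapped in a one-element list here.

# Sketch: distribute chunks of zip-file indices across a worker pool.
# Assumes xml<i>.zip naming and <name value="..."> elements as in the question;
# output_<chunk>.csv naming is illustrative only.
import csv
import zipfile
from multiprocessing import Pool
from xml.etree.ElementTree import fromstring

def parse_xml_for_csv(data, writer):
    root = fromstring(data)
    for node in root.iter('name'):
        # writerow takes a sequence of fields, so wrap the single value in a list
        writer.writerow([node.get('value')])

def process_chunk(args):
    chunk_id, file_indices = args
    # each worker writes its own CSV to avoid sharing a file between processes
    with open('output_{0}.csv'.format(chunk_id), 'w', newline='') as f:
        writer = csv.writer(f)
        for i in file_indices:
            with zipfile.ZipFile('xml' + str(i) + '.zip') as z:
                for finfo in z.namelist():
                    parse_xml_for_csv(z.read(finfo), writer)

if __name__ == "__main__":
    nb_files = 100
    nb_processes = 4
    indices = list(range(1, nb_files + 1))
    step = (nb_files + nb_processes - 1) // nb_processes  # ceiling division
    chunks = [(c, indices[c * step:(c + 1) * step]) for c in range(nb_processes)]
    with Pool(nb_processes) as pool:
        pool.map(process_chunk, chunks)

As with the Process version, adding workers only helps until disk I/O saturates, so it is worth measuring with 2, 4 and 8 processes before settling on a number.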