How do I speed up this file creation process?
I'm trying to create a large flat file with fixed-width columns and multiple record tiers, but processing seems slow, most likely because I'm iterating over every row.
For context, this is for transmitting insurance policy information.
The hierarchy looks like this:
-Policy row
--Property on policy
---Coverage on property
--Property on policy
---Coverage on property
--Owner on policy
--Owner on policy
--Owner on policy
Currently I'm loading the four record types into separate dataframes, then for-looping over each type, pulling child records by the parent record's ID, and writing them to the file. I'm hoping for some kind of hierarchical dataframe merge that doesn't force a full scan every time I need a record (see the sketch after the code below).
import re
import pandas as pd
import math

def MakeNumeric(instring):
    # Strip everything except digits
    output = re.sub('[^0-9]', '', str(instring))
    return str(output)

def Pad(instring, padchar, length, align):
    if instring is None:  # Takes care of NULL values
        instring = ''
    instring = str(instring).upper()
    instring = instring.replace(',', '').replace('\n', '').replace('\r', '')
    instring = instring[:length]
    if align == 'L':
        output = instring + (padchar * (length - len(instring)))
    elif align == 'R':
        output = (padchar * (length - len(instring))) + instring
    else:
        output = instring
    return output
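# Note: for the 'L'/'R' branches above, the built-ins str.ljust / str.rjust
# produce the same padding, and pandas' Series.str.pad applies it to a whole
# column at once instead of one value per call:
#     Pad('AB', ' ', 4, 'L') == 'AB'.ljust(4, ' ')
#     Pad('42', '0', 5, 'R') == '42'.rjust(5, '0')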
def FileCreation():
    POLR = pd.read_parquet(r'POLR.parquet')
    PRP1 = pd.read_parquet(r'PRP1.parquet')
    PROP = pd.read_parquet(r'PROP.parquet')
    SUBJ = pd.read_parquet(r'SUBJ.parquet')
    rownum = 1
    totalrownum = 1
    POLRCt = 0
    size = 900000
    # Split the policy dataframe into chunks of `size` rows, one output file per chunk
    POLR = [POLR.loc[i:i + size - 1, :] for i in range(0, len(POLR), size)]
    FileCt = 0
    print('Predicted File Count: ' + str(len(POLR)))
    for df in POLR:
        FileCt += 1
        filename = r'OutputFile.' + Pad(FileCt, '0', 2, 'R')
        with open(filename, 'a+') as outfile:
            for i, row in df.iterrows():
                row[0] = Pad(rownum, '0', 9, 'R')
                row[1] = Pad(row[1], ' ', 4, 'L')
                row[2] = Pad(row[2], '0', 5, 'R')
                # I do this for all 50 columns
                outfile.write((','.join(row[:51])).replace(',', '') + '\n')
                rownum += 1
                totalrownum += 1
                for i2, row2 in PROP[PROP.ID == row[51]].iterrows():
                    row2[0] = Pad(rownum, '0', 9, 'R')
                    row2[1] = Pad(row2[1], ' ', 4, 'L')
                    row2[2] = Pad(row2[2], '0', 5, 'R')
                    # I do this for all 105 columns
                    outfile.write((','.join(row2[:106])).replace(',', '') + '\n')
                    rownum += 1
                    totalrownum += 1
                    for i3, row3 in PRP1[(PRP1['id'] == row2['ID']) & (PRP1['VNum'] == row2['vnum'])].iterrows():
                        row3[0] = Pad(rownum, '0', 9, 'R')
                        row3[1] = Pad(row3[1], ' ', 4, 'L')
                        row3[2] = Pad(row3[2], '0', 5, 'R')
                        # I do this for all 72 columns
                        outfile.write((','.join(row3[:73])).replace(',', '') + '\n')
                        rownum += 1
                        totalrownum += 1
                for i2, row2 in SUBJ[SUBJ['id'] == row['id']].iterrows():
                    row2[0] = Pad(rownum, '0', 9, 'R')
                    row2[1] = Pad(row2[1], ' ', 4, 'L')
                    row2[2] = Pad(row2[2], '0', 5, 'R')
                    # I do this for all 24 columns
                    outfile.write((','.join(row2[:25])).replace(',', '') + '\n')
                    rownum += 1
                    totalrownum += 1
                POLRCt += 1
                print('File {} of {} '.format(str(FileCt), str(len(POLR))) + str((POLRCt - 1) / len(df.index) * 100) + '% Finished\r')
                rownum += 1
        rownum = 1
        POLRCt = 1
Mainly, I'm looking for a script that doesn't take multiple days to create a 27M-record file.
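For reference, the repeated boolean-mask filters above (e.g. PROP[PROP.ID == row[51]]) rescan the entire child dataframe for every single policy. Grouping each child table by its parent key once up front turns each lookup into a dict access. A minimal sketch of that idea, assuming POLR carries the policy key in a column named id (the original indexes it positionally as row[51] for PROP and as row['id'] for SUBJ, so this assumes they are the same key), with the fixed-width formatting collapsed into a hypothetical format_record helper:

import pandas as pd

def make_lookup(df, keys):
    # Group each child table once; every later lookup is a dict access,
    # not a boolean-mask scan of the whole dataframe.
    return {key: group for key, group in df.groupby(keys)}

def format_record(rec):
    # Hypothetical stand-in for the per-column Pad(...) formatting above.
    return ''.join(str(v) for v in rec)

def write_hierarchy(POLR, PROP, PRP1, SUBJ, outfile):
    prop_by_policy = make_lookup(PROP, 'ID')           # properties per policy
    prp1_by_prop = make_lookup(PRP1, ['id', 'VNum'])   # coverages per property
    subj_by_policy = make_lookup(SUBJ, 'id')           # owners per policy
    empty = pd.DataFrame()
    lines = []
    for pol in POLR.itertuples(index=False):           # itertuples is far cheaper than iterrows
        lines.append(format_record(pol))
        for prop in prop_by_policy.get(pol.id, empty).itertuples(index=False):
            lines.append(format_record(prop))
            for cov in prp1_by_prop.get((prop.ID, prop.vnum), empty).itertuples(index=False):
                lines.append(format_record(cov))
        for owner in subj_by_policy.get(pol.id, empty).itertuples(index=False):
            lines.append(format_record(owner))
    outfile.write('\n'.join(lines) + '\n')             # one buffered write per file

Combined with itertuples (much cheaper than iterrows) and a single buffered write per file, this removes the two biggest per-row costs in the original loop.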
I ended up populating temp tables for each record level and creating keys, then inserting them into a permanent staging table and assigning a clustered index to the keys. I then queried the results in pages, using OFFSET and FETCH NEXT %d ROWS ONLY to keep the memory footprint down, and used the multiprocessing library to split the workload across each thread on the CPU. Ultimately, that combination cut the runtime to roughly 20% of what it was when this question was originally posted.
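A minimal sketch of that pagination-plus-multiprocessing pattern, assuming SQL Server with pyodbc, a staging table named Staging whose clustered index is on a RowKey column, and pages of 100k rows; all of these names and numbers are illustrative, not from the original post:

import pyodbc
from multiprocessing import Pool

# Connection string, table, and column names are illustrative placeholders.
CONN_STR = 'DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes'
PAGE_SIZE = 100000
TOTAL_ROWS = 27000000

def write_page(page_num):
    # Each worker opens its own connection and pulls one page; the ORDER BY
    # required by OFFSET/FETCH is cheap thanks to the clustered index on RowKey.
    conn = pyodbc.connect(CONN_STR)
    sql = ('SELECT * FROM Staging ORDER BY RowKey '
           'OFFSET %d ROWS FETCH NEXT %d ROWS ONLY' % (page_num * PAGE_SIZE, PAGE_SIZE))
    rows = conn.cursor().execute(sql).fetchall()
    with open('OutputFile.part%04d' % page_num, 'w') as f:
        for row in rows:
            # Columns are assumed to already be padded to fixed width upstream.
            f.write(''.join(str(col) for col in row) + '\n')
    conn.close()
    return len(rows)

if __name__ == '__main__':
    pages = range(TOTAL_ROWS // PAGE_SIZE)  # 270 pages of 100k rows
    with Pool() as pool:                    # defaults to one worker per CPU
        written = pool.map(write_page, pages)
    print('Wrote %d rows' % sum(written))

Because each worker opens its own connection and writes its own part file, there is no shared state to lock, and the part files can be concatenated in page order afterwards.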