Set of lines to replace in file - python

I'm new to python. I'm trying to use a file containing new data (newprops) to replace the old data in a second file. Both files are over 3MB.

The file containing the new data looks like this:

PROD    850 30003   0.096043  
PROD    851 30003   0.096043  
PROD    853 30003   0.096043  
PROD    852 30003   0.096043  
....

The original file containing the old data looks like this:

CROD    850     123456 123457 123458 123459  
PROD    850     30003   0.08  
CROD    851     123456 123457 123458 123459  
PROD    851     30003   0.07  
CROD    852     123456 123457 123458 123459  
PROD    852     30003   0.095  
CROD    853     123456 123457 123458 123459  
PROD    853     30003   0.095  
....

The output should be:

CROD    850     123456 123457 123458 123459  
PROD    850     30003   0.096043  
CROD    851     123456 123457 123458 123459  
PROD    851     30003   0.096043  
CROD    852     123456 123457 123458 123459  
PROD    852     30003   0.096043  
CROD    853     123456 123457 123458 123459  
PROD    853     30003   0.096043  

Here is what I have so far:

import fileinput

def prop_update(newprops,bdffile):

    fnewprops=open(newprops,'r')
    fbdf=open(bdffile,'r+')
    newpropsline=fnewprops.readline()
    fbdfline=fbdf.readline()


    while len(newpropsline)>0:
        fbdf.seek(0)
        propname=newpropsline.split()[1]
        propID=newpropsline.split()[2]
        while len(fbdfline)>0:
            if propID and propname in fbdfline:
                fbdf.write(newpropsline) #i'm stuck here... I want to delete the old line and use updated value
            else:
                fbdfline=fbdf.readline()

        newpropsline=fnewprops.readline()

    fnewprops.close()

Please help!

You can take every other line from the original file, zip them with the new lines, then reopen the original and write out the updated pairs, assuming the new file has exactly half as many lines as the original:

from itertools import izip

with open("new.txt") as f,open("orig.txt") as f2:
    lines = f2.readlines()
    zipped = izip(lines[::2],f) # just use zip for python3
    with open("orig.txt","w") as out:
        for pair in zipped:
            out.writelines(pair)
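The `lines[::2]` slice is what selects every other line; a tiny illustration of the slicing, using abbreviated made-up lines:

```python
lines = ["CROD 850 ...", "PROD 850 ...", "CROD 851 ...", "PROD 851 ..."]

# a step of 2 starting at index 0 keeps only the even-indexed (CROD) lines
every_other = lines[::2]
print(every_other)  # ['CROD 850 ...', 'CROD 851 ...']
```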

If you want the lines paired up based on the second column, you also need to strip the newlines and add them back manually so the final lines stay separated:

from itertools import izip,islice

with open("new.txt") as f, open("orig.txt") as f2:
    orig = sorted((x.strip() for x in islice(f2, 0, None, 2)), key=lambda x: int(x.split(None, 2)[1]))
    new = sorted((x.strip() for x in f), key=lambda x:int(x.split(None,2)[1]))
    zipped = izip(orig, new)
    with open("orig.txt","w") as out:
        for pair in zipped:
            out.write("{}\n{}\n".format(*pair))

Output:

CROD    850     123456 123457 123458 123459
PROD    850 30003   0.096043
CROD    851     123456 123457 123458 123459
PROD    851 30003   0.096043
CROD    852     123456 123457 123458 123459
PROD    852 30003   0.096043
CROD    853     123456 123457 123458 123459
PROD    853 30003   0.096043

If the lengths are not the same, you can use itertools.izip_longest with a fillvalue so you don't lose any data.
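A minimal sketch of that padding behaviour, using Python 3's spelling zip_longest and truncated sample lines from the question:

```python
from itertools import zip_longest  # named izip_longest on Python 2

# every other line of the original file
orig_even = [
    "CROD    850     123456 123457 123458 123459",
    "CROD    851     123456 123457 123458 123459",
    "CROD    852     123456 123457 123458 123459",
]
# suppose the new file came up one line short
new = [
    "PROD    850 30003   0.096043",
    "PROD    851 30003   0.096043",
]

# fillvalue pads the shorter iterable instead of dropping the extra
# original line, which is what plain zip/izip would do
pairs = list(zip_longest(orig_even, new, fillvalue=""))
print(pairs[-1])  # ('CROD    852     123456 123457 123458 123459', '')
```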

If the old file is already in order you can skip the sorted call on f2 and just use f2.readlines()[::2], but if it is not, this will make sure all the lines are paired by the second column regardless of the original order.
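The sort key used here pulls out the second whitespace-separated field and compares it as an integer; in isolation:

```python
lines = [
    "PROD    853 30003   0.096043",
    "PROD    850 30003   0.096043",
    "PROD    852 30003   0.096043",
]

# split(None, 2) splits on runs of whitespace at most twice,
# so element [1] is the ID column
ordered = sorted(lines, key=lambda x: int(x.split(None, 2)[1]))
print(ordered[0])  # 'PROD    850 30003   0.096043'
```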

You can use a dictionary to index the new data, then write the original file line by line into a new file, updating the values from the index as you go. It looks like the first three fields form the key ("PROD 850 30003"), and they can be pulled out with a regular expression, e.g. (PROD\s+\d+\s+\d+).

import re
_split_new = re.compile(r"(PROD\s+\d+\s+\d+)(.*)")

# create an index for the PROD items to be updated
# (whitespace inside the key is normalized, because the spacing of the
# "PROD <id> <id>" columns differs between the two files)

# this might be a bit more understandable...
#with open('updates.txt') as updates:
#    new_data = {}
#    for line in updates:
#        match = _split_new.match(line)
#        if match:
#            key, value = match.groups()
#            new_data[" ".join(key.split())] = value

# ... but this is fancier (and likely faster)
with open('updates.txt') as updates:
    new_data = dict((" ".join(match.group(1).split()), match.group(2))
        for match in (_split_new.search(line) for line in updates)
        if match)

# then process the updates
with open('origstuff.txt') as orig, open('newstuff.txt', 'w') as newstuff:
    # for each line in the original...
    for line in orig:
        match = _split_new.match(line)
        # ... see if it's a PROD line
        if match:
            key, value = match.groups()
            # ... and rewrite it with the value from the indexing dict,
            # normalizing the key's whitespace for the lookup
            # (defaulting to the current value)
            newstuff.write("%s%s\n" % (key, new_data.get(" ".join(key.split()), value)))
        else:
            # ... or just write out the original line
            newstuff.write(line)
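As a quick standalone check of what the regular expression captures, with the sample line copied from the question's new-data file:

```python
import re

_split_new = re.compile(r"(PROD\s+\d+\s+\d+)(.*)")

line = "PROD    850 30003   0.096043"
key, value = _split_new.match(line).groups()
# key  -> 'PROD    850 30003'
# value -> '   0.096043'
```

Note that the captured key keeps whatever spacing the file uses, so if the two files space the columns differently the raw keys will not compare equal between them.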