Python txt matrix from multiple files

How can I convert linear frequency distributions from multiple TXT files into a single matrix? Every file has exactly the same structure: all words/terms/phrases appear in the same order and are contained in every file. What is unique to each file is the filename, the publication date, and the corresponding frequencies of the words/terms/phrases, given by the number after ":". See the following:

What my input files look like:

FilenameA Date:31.12.20XX
('financial' 'statement'):15
('corporate-taxes'):3
('assets'):8
('available-for-sale' 'property'):2
('auditors'):23

I have multiple files with exactly the same order of words/phrases; only the frequencies (the numbers after ":") differ.

Now I want to create a single file containing a matrix that keeps all words as the top row of column headers and appends each file's characteristics (filename, date, and frequencies) as row-by-row entries:

Desired Output:

Filename  Date  ('financial' 'statement') ('corporate-taxes') ... ('auditors')
A         2008             15                      3                  23
B         2010              9                      6                  11
C         2013              1                      8                   4
...

Any help is much appreciated. It would be great to have a loop that reads all files from a directory and outputs the matrix above.

The following code should help:

import os

# Compute matrix
titles = ['Filename', 'Date']
matrix = [titles]
for directory, _, files in os.walk('files'):  # replace 'files' with your directory
    for filename in sorted(files):  # sort for a deterministic row order
        with open(os.path.join(directory, filename)) as f:
            # First line looks like "FilenameA Date:31.12.20XX"
            name, date = f.readline().strip().split()
            # Keep what follows "Filename" (8 chars) and the year after the last '.'
            row = [name[8:], date.split('.')[-1]]
            for line in f:
                if not line.strip():
                    continue  # skip blank lines
                # Frequency is whatever follows the last ':'
                header, value = line.strip().rsplit(':', 1)
                if len(matrix) == 1:  # first file defines the column titles
                    titles.append(header)
                row.append(value)
        matrix.append(row)

# Work out column widths
column_widths = [0] * len(titles)
for row in matrix:
    for column, data in enumerate(row):
        column_widths[column] = max(column_widths[column], len(data))
formats = ['{:%s%ss}' % ('^' if c > 1 else '<', w)
           for c, w in enumerate(column_widths)]

# Print matrix
for row in matrix:
    for column, data in enumerate(row):
        print(formats[column].format(data), end=' ')
    print()

Sample output:

Filename Date ('financial' 'statement') ('corporate-taxes') ('assets') ('available-for-sale' 'property') ('auditors')
A        2012            15                      3              8                      2                      23     
B        2010             9                      6              8                      2                      11     
C        2010             1                      8              8                      2                      4
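The code above only prints the matrix, while the question asks for a single output file. Assuming the same list-of-rows shape that the loop builds, the rows can be written out with the standard `csv` module; a minimal sketch (`matrix.csv` is an arbitrary output name, and the values here are made up):

```python
import csv

# A matrix in the same list-of-rows shape the loop above builds:
# a titles row followed by one row per input file.
matrix = [
    ['Filename', 'Date', "('financial' 'statement')", "('corporate-taxes')"],
    ['A', '2008', '15', '3'],
    ['B', '2010', '9', '6'],
]

# Write all rows to one CSV file.
with open('matrix.csv', 'w', newline='') as out:
    csv.writer(out).writerows(matrix)
```

A CSV opens directly in Excel or loads back with `csv.reader`, which may be more useful than a fixed-width text dump.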
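If pandas is available, it can handle both the tabulation and the file writing in one go. A sketch using in-memory stand-ins for the input files (the filenames and frequencies below are invented for illustration):

```python
import io
import pandas as pd

# Simulated contents of two input files in the format from the question.
files = {
    'A': "FilenameA Date:31.12.2008\n('assets'):8\n('auditors'):23\n",
    'B': "FilenameB Date:31.12.2010\n('assets'):5\n('auditors'):11\n",
}

rows = []
for key, text in files.items():
    f = io.StringIO(text)  # with real files: open(path)
    name, date = f.readline().strip().split()
    row = {'Filename': name[8:], 'Date': date.split('.')[-1]}
    for line in f:
        header, value = line.strip().rsplit(':', 1)
        row[header] = int(value)
    rows.append(row)

df = pd.DataFrame(rows)
print(df.to_string(index=False))
df.to_csv('matrix.csv', index=False)  # one file holding the whole matrix
```

Each phrase becomes a DataFrame column, so files with missing or extra phrases would simply produce NaN cells instead of misaligned rows.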