Univocity - How to return one bean per row using iterator style?

Introduction

I am building a process to merge several large, sorted CSV files. I am currently looking at using Univocity to do this. The way I set up the merge is by using beans that implement the Comparable interface.

Given

A simplified file looks like this:

id,data
1,aa
2,bb
3,cc

The bean looks like this (getters and setters omitted):

public class Address implements Comparable<Address> {

    @Parsed
    private int id;
    @Parsed
    private String data;        

    @Override
    public int compareTo(Address o) {
        return Integer.compare(this.getId(), o.getId());
    }
}

The comparator looks like this:

public class AddressComparator implements Comparator<Address>{

    @Override
    public int compare(Address a, Address b) {
        if (a == null)
            throw new IllegalArgumentException("argument object a cannot be null");
        if (b == null)
            throw new IllegalArgumentException("argument object b cannot be null");
        return Integer.compare(a.getId(), b.getId());
    }
}

Because I don't want to read all the data into memory, I want to read only the top record of each file and apply some comparison logic. Here is my simplified example:

public class App {
    
    private static final String INPUT_1 = "src/test/input/address1.csv";
    private static final String INPUT_2 = "src/test/input/address2.csv";
    private static final String INPUT_3 = "src/test/input/address3.csv";
    
    public static void main(String[] args) throws FileNotFoundException {       
        BeanListProcessor<Address> rowProcessor = new BeanListProcessor<Address>(Address.class);
        CsvParserSettings parserSettings = new CsvParserSettings();
        parserSettings.setRowProcessor(rowProcessor);       
        parserSettings.setHeaderExtractionEnabled(true);
        CsvParser parser = new CsvParser(parserSettings);       
        
        List<FileReader> readers = new ArrayList<>();
        readers.add(new FileReader(new File(INPUT_1)));
        readers.add(new FileReader(new File(INPUT_2)));
        readers.add(new FileReader(new File(INPUT_3)));
        
        // This parses all rows, but I am only interested in getting 1 row as a bean.
        for (FileReader fileReader : readers) {
            parser.parse(fileReader);
            List<Address> beans = rowProcessor.getBeans();
            for (Address address : beans) {
                System.out.println(address.toString());
            }           
        }
        
        // want to have a map with the reader and the first bean object
        // Map<FileReader, Address> topRecordofReader = new HashMap<>();
        Map<FileReader, String[]> topRecordofReader = new HashMap<>();
        for (FileReader reader : readers) {
            parser.beginParsing(reader);
            String[] row;
            while ((row = parser.parseNext()) != null) {
               System.out.println(row[0]); 
               System.out.println(row[1]); 
               topRecordofReader.put(reader, row);
               // all done, only want to get first row
               break;        
            }
        }       
    }   
}

Question

In the example above, how can I parse in such a way that it iterates over each row and returns one bean per row, instead of parsing the entire file?

I am looking for something like this (this invalid code is only meant to show the kind of solution I am after):

for (FileReader fileReader : readers) {
    parser.beginParsing(fileReader);
    Address bean = null;
    while ((bean = parser.parseNextRecord()) != null) {
        topRecordofReader.put(fileReader, bean);
    }
}

Answer

There are two ways to read iteratively instead of loading everything into memory. The first is to use a BeanProcessor instead of a BeanListProcessor:

settings.setRowProcessor(new BeanProcessor<Address>(Address.class) {
    @Override
    public void beanProcessed(Address address, ParsingContext context) {
        // your code to process each parsed object here!
    }
});
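Note that the callback is only invoked while the parser is running. A minimal sketch of driving it (not part of the original answer; it reuses the settings above and one of the file paths from the question):

    CsvParser parser = new CsvParser(settings);
    // beanProcessed() is called once per row; no List<Address> is ever accumulated
    parser.parse(new FileReader("src/test/input/address1.csv"));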

To read beans iteratively without a callback (and to perform a few other common tasks), we created the CsvRoutines class (which extends from AbstractRoutines - more examples here):

    File input = new File("/path/to/your.csv");

    CsvParserSettings parserSettings = new CsvParserSettings();
    //...configure the parser

    // You can also use TSV and Fixed-width routines
    CsvRoutines routines = new CsvRoutines(parserSettings); 
    for (Address address : routines.iterate(Address.class, input, "UTF-8")) {
        //process your bean
    }
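
For the merge described in the question, the same routine can hand out one explicit iterator per input, so only the top record of each file has to be materialized. A hedged sketch (my own variable names, assuming the iterate(Class, Reader) overload and the readers list from the question's App class):

    CsvParserSettings routineSettings = new CsvParserSettings();
    routineSettings.setHeaderExtractionEnabled(true);
    CsvRoutines routines = new CsvRoutines(routineSettings);

    Map<FileReader, Address> topRecordOfReader = new HashMap<>();
    Map<FileReader, Iterator<Address>> iterators = new HashMap<>();
    for (FileReader reader : readers) {
        // iterate() parses lazily; iterator() exposes the beans one at a time
        Iterator<Address> it = routines.iterate(Address.class, reader).iterator();
        iterators.put(reader, it);
        if (it.hasNext()) {
            topRecordOfReader.put(reader, it.next()); // only the first row is read here
        }
    }
    // later, advance whichever file produced the smallest record via iterators.get(reader).next()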

Hope this helps!