Importing blocks of data from a txt file into an sqlite3 database

I want to create a sqlite3 database using Python. I'm working with the ArnetMiner dataset, where the data for each entity comes in "blocks". A block is described as follows:

    #* --- paperTitle
    #@ --- Authors
    #year ---- Year
    #conf --- publication venue
    #citation --- citation number (both -1 and 0 means none)
    #index ---- index id of this paper
    #arnetid ---- pid in arnetminer database
    #% ---- the id of references of this paper (there are multiple lines, with each indicating a reference)
    #! --- Abstract

An example looks like this:

    #*Spatial Data Structures.
    #@Hanan Samet,Wei Lee Chang,Jose Fernandez
    #year1995
    #confModern Database Systems
    #citation2743
    #index25
    #arnetid27
    #%165
    #%356
    #%786754
    #%3243
    #!An overview is presented of the use of spatial data structures in spatial databases. The focus is on hierarchical data structures, including a number of variants of quadtrees, which sort the data with respect to the space occupied by it. Such techniques are known as spatial indexing methods. Hierarchical data structures are based on the principle of recursive decomposition.

Here is my question:

How can I import this into the sqlite3 tables I have created?

Usually the datasets I work with are simply tab-separated, so after creating the table I would just run:

    .separator "\t"
    .import Data.txt table_name

I created the tables as follows:

    CREATE TABLE publications (
        PaperTitle varchar(150),
        Year int,
        Conference varchar(150),
        Citations int,
        ID int primary key,
        arnetId int,
        Abstract text
    );

    CREATE TABLE authors (
        ID int primary key,
        Name varchar(100)
    );

    CREATE TABLE authors_publications (
        PaperID int,
        AuthorID int
    );

    CREATE TABLE publications_citations (
        PaperID int,
        CitationID int
    );

Basically, I guess I'm asking whether there is a quick way to import the dataset into the tables I have created, or whether I have to write a Python script that inserts each block one at a time?

The best approach is to parse the data yourself, rewrite it as CSV files, and then import those directly into your database tables.