Why is a value required for a nullable field with SQLAlchemy?

I am trying to use Alembic's bulk_insert in a migration file to get some data into my database for testing purposes. This is my table creation code (the foreign-key target tables below are written with placeholder names):

import sqlalchemy as sa
from alembic import op

def upgrade():
    test_table = op.create_table(
        "test",
        sa.Column("id", sa.String, nullable=False),
        sa.Column("item_id", sa.String, nullable=True),
        sa.Column("object_id", sa.String, nullable=True),
        sa.Column("article_id", sa.String, nullable=True),
        sa.Column("active", sa.Boolean, nullable=False, default=True),
        sa.Column("name", sa.String, nullable=False),
        sa.Column("created_at", sa.DateTime(timezone=True), nullable=False),
        sa.Column("updated_at", sa.DateTime(timezone=True), nullable=False),
        sa.Column("deleted_at", sa.DateTime(timezone=True), nullable=True),
        sa.PrimaryKeyConstraint("id"),
        sa.ForeignKeyConstraint("item_id"),
        sa.ForeignKeyConstraint("object_id"),
        sa.ForeignKeyConstraint("article_id"),
    )

A row can have an item_id, an object_id, or an article_id, or all of them at the same time. When one of these ids is present, it is a foreign key pointing to another table.

This is how I am trying to insert some test data into the table. It works fine for the first row, but the next row causes an error such as: A value is required for bind parameter 'item_id', in parameter group 1, when my first object has an item_id but none of the other FKs and the second object has no item_id. If the first row has an article_id and the second one does not, the error is A value is required for bind parameter 'article_id'....

from datetime import datetime, timezone

op.bulk_insert(
    test_table,
    [
        {
            "id": "test1",
            "item_id": "item001",
            "active": True,
            "name": "item 1",
            "created_at": datetime.now(tz=timezone.utc),
            "updated_at": datetime.now(tz=timezone.utc),
        },
        {
            "id": "test2",
            "article_id": "art001",
            "active": True,
            "name": "article 1",
            "created_at": datetime.now(tz=timezone.utc),
            "updated_at": datetime.now(tz=timezone.utc),
        },
    ],
)

I don't understand why this happens. Why can't I insert two rows that provide different FK columns? What I want is for every FK field that is not provided to end up as Null.

I don't think the problem is that the values are missing as such; it is that the dictionaries have different keys. Try setting the parameters that sometimes appear and sometimes don't to None. I believe the statement is compiled once and then re-used over and over, and it is presumably built from the first set of values in the list, so later parameter groups fail when the dictionaries differ. I think this is explained here: executing-multiple-statements

When executing multiple sets of parameters, each dictionary must have the same set of keys; i.e. you can't have fewer keys in some dictionaries than others. This is because the Insert statement is compiled against the first dictionary in the list, and it's assumed that all subsequent argument dictionaries are compatible with that statement.
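You can see the same behavior outside Alembic with a minimal SQLAlchemy Core sketch (the in-memory engine and the demo table are assumptions for illustration, not your schema; the exact exception class may vary by SQLAlchemy version, hence the broad catch):

import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
metadata = sa.MetaData()

demo = sa.Table(
    "demo",
    metadata,
    sa.Column("id", sa.String, primary_key=True),
    sa.Column("item_id", sa.String, nullable=True),
    sa.Column("article_id", sa.String, nullable=True),
)
metadata.create_all(engine)

with engine.begin() as conn:
    try:
        # Fails: the INSERT is compiled against the first dict (id, item_id),
        # so the second dict, which lacks "item_id", cannot be bound and
        # raises "A value is required for bind parameter 'item_id', ...".
        conn.execute(
            demo.insert(),
            [
                {"id": "test1", "item_id": "item001"},
                {"id": "test2", "article_id": "art001"},
            ],
        )
    except sa.exc.SQLAlchemyError as exc:
        print(exc)

    # Works: every dict carries the same keys; absent FKs are explicit None.
    conn.execute(
        demo.insert(),
        [
            {"id": "test3", "item_id": "item001", "article_id": None},
            {"id": "test4", "item_id": None, "article_id": "art001"},
        ],
    )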

Example:

op.bulk_insert(
    test_table,
    [
        {
            "id": "test1",
            "article_id": None,
            "item_id": "item001",
            "active": True,
            "name": "item 1",
            "created_at": datetime.now(tz=timezone.utc),
            "updated_at": datetime.now(tz=timezone.utc),
        },
        {
            "id": "test2",
            "item_id": None,
            "article_id": "art001",
            "active": True,
            "name": "article 1",
            "created_at": datetime.now(tz=timezone.utc),
            "updated_at": datetime.now(tz=timezone.utc),
        },
    ],
)
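If there are many rows, writing out the None values by hand gets tedious. A small helper (my own sketch, not part of the Alembic API) can fill in the missing keys before calling bulk_insert:

# Hypothetical helper: give every row dict the same set of keys so the
# compiled INSERT accepts all parameter groups; missing keys become None.
def normalize_rows(rows):
    all_keys = {key for row in rows for key in row}
    return [{key: row.get(key) for key in all_keys} for row in rows]

rows = [
    {"id": "test1", "item_id": "item001", "name": "item 1"},
    {"id": "test2", "article_id": "art001", "name": "article 1"},
]
print(normalize_rows(rows))
# In the migration this would be used as:
# op.bulk_insert(test_table, normalize_rows(rows))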