Bulk upsert with SQLAlchemy
I'm using SQLAlchemy 1.1.0b to bulk upsert a large amount of data into PostgreSQL, and I'm running into duplicate key errors.
from sqlalchemy import *
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.automap import automap_base
import pg
engine = create_engine("postgresql+pygresql://" + uname + ":" + passw + "@" + url)
# reflectively load the database.
metadata = MetaData()
metadata.reflect(bind=engine)
session = sessionmaker(autocommit=True, autoflush=True)
session.configure(bind=engine)
session = session()
base = automap_base(metadata=metadata)
base.prepare(engine, reflect=True)
table_name = "arbitrary_table_name" # this will always be arbitrary
mapped_table = getattr(base.classes, table_name)
# col and col2 exist in the table.
chunks = [[{"col":"val"},{"col2":"val2"}],[{"col":"val"},{"col2":"val3"}]]
for chunk in chunks:
    session.bulk_insert_mappings(mapped_table, chunk)
    session.commit()
When I run it, I get this:
sqlalchemy.exc.IntegrityError: (pg.IntegrityError) ERROR: duplicate key value violates unique constraint <constraint>
I also can't seem to properly instantiate mapped_table as a Table() object.
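For reference, the reflected MetaData already holds plain Table objects keyed by table name, so one way to get a Table() (as opposed to the automap class) is a dictionary lookup. A minimal sketch, reusing the metadata and table_name variables from the snippet above:

# metadata.tables maps table names (schema-qualified where applicable) to Table objects.
table = metadata.tables[table_name]
print(type(table))  # sqlalchemy.sql.schema.Table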
I'm working with time-series data, so the data I grab in bulk has some overlap across time ranges. I want to do a bulk upsert to keep the data consistent.
What is the best way to perform a bulk upsert on a large dataset? I know PostgreSQL now supports upserts, but I'm not sure how to do this in SQLAlchemy.
From:
After I found this command, I was able to perform upserts, but it is worth mentioning that this operation is slow for a bulk "upsert".
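The command referred to above isn't shown here; for context, the PostgreSQL-specific upsert construct that SQLAlchemy 1.1 exposes is insert(...).on_conflict_do_update() from the postgresql dialect. A minimal sketch, assuming the reflected table from the question, incoming rows as a list of dicts named rows, and a primary key column named "id" (the column name and rows variable are assumptions, not from the original post):

from sqlalchemy.dialects.postgresql import insert

table = metadata.tables[table_name]      # reflected Table object from the question's metadata
stmt = insert(table).values(rows)        # rows: list of dicts with the columns to upsert
stmt = stmt.on_conflict_do_update(
    index_elements=[table.c.id],         # "id" is an assumed primary key column
    set_={"col": stmt.excluded.col,      # on conflict, overwrite with the incoming values
          "col2": stmt.excluded.col2},
)
session.execute(stmt)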
The alternative is to get a list of the primary keys you would like to upsert, and query the database for any matching ids:
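The code for that alternative doesn't appear here; a rough sketch of the idea, again assuming a primary key column named "id" and incoming rows as a list of dicts (both assumptions), is to split the rows into updates and inserts based on which keys already exist:

# Sketch only: "id" is an assumed primary key column, rows is a list of row dicts.
incoming_ids = [row["id"] for row in rows]
existing_ids = {
    pk for (pk,) in session.query(mapped_table.id)
                           .filter(mapped_table.id.in_(incoming_ids))
}
to_update = [row for row in rows if row["id"] in existing_ids]
to_insert = [row for row in rows if row["id"] not in existing_ids]
session.bulk_update_mappings(mapped_table, to_update)
session.bulk_insert_mappings(mapped_table, to_insert)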