
python – Bulk upsert with SQLAlchemy

See the English answers:
> SQLAlchemy – performing a bulk upsert (if exists, update, else insert) in postgresql
> How to UPSERT (MERGE, INSERT … ON DUPLICATE UPDATE) in PostgreSQL?
I'm using SQLAlchemy 1.1.0b to bulk-insert a large amount of data into PostgreSQL, and I'm running into duplicate key errors.

from sqlalchemy import create_engine, MetaData
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.automap import automap_base

import pg  # PyGreSQL, the driver behind the postgresql+pygresql dialect

# uname, passw, and url are defined elsewhere.
engine = create_engine("postgresql+pygresql://" + uname + ":" + passw + "@" + url)

# Reflectively load the database schema.
metadata = MetaData()
metadata.reflect(bind=engine)

# autocommit=True conflicts with the explicit commit() below, so use a
# plain transactional session and commit each chunk ourselves.
Session = sessionmaker(bind=engine)
session = Session()

# Map classes onto the already-reflected tables (keyword is "metadata").
base = automap_base(metadata=metadata)
base.prepare()

table_name = "arbitrary_table_name" # this will always be arbitrary
mapped_table = getattr(base.classes, table_name)  # "classes", not "classses"
# col and col2 exist in the table.
chunks = [[{"col": "val"}, {"col2": "val2"}], [{"col": "val"}, {"col2": "val3"}]]

for chunk in chunks:
    session.bulk_insert_mappings(mapped_table, chunk)
    session.commit()

When I run this, I get:

sqlalchemy.exc.IntegrityError: (pg.IntegrityError) ERROR:  duplicate key value violates unique constraint <constraint>

I also can't seem to instantiate mapped_table correctly as a Table() object.

I'm working with time-series data, so I grab data in bulk with some overlap across time ranges. I want to do a bulk upsert to keep the data consistent.

What is the best way to do a bulk upsert with a large dataset? I now know that PostgreSQL supports upserts, but I'm not sure how to do this in SQLAlchemy.
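For reference, SQLAlchemy 1.1 (the version used above) exposes PostgreSQL's INSERT … ON CONFLICT through its postgresql dialect. A minimal sketch, assuming a single primary-key column named "id" and that every row mapping in a chunk carries the same set of columns (both assumptions, not part of the original code):

from sqlalchemy.dialects.postgresql import insert

table = metadata.tables[table_name]  # the reflected Table object
for chunk in chunks:
    stmt = insert(table).values(chunk)
    # On a primary-key conflict, overwrite every non-key column with
    # the incoming ("excluded") value. "id" is an assumed key name.
    stmt = stmt.on_conflict_do_update(
        index_elements=["id"],
        set_={c.name: stmt.excluded[c.name]
              for c in table.columns if c.name != "id"},
    )
    engine.execute(stmt)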

Solution:

https://stackoverflow.com/a/26018934/465974

After I found this command, I was able to perform upserts, but it is
worth mentioning that this operation is slow for a bulk “upsert”.
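Assuming the command in the linked answer is Session.merge() (SQLAlchemy's one-row-at-a-time upsert, which is indeed slow in bulk because it may SELECT each primary key before deciding between INSERT and UPDATE), a minimal sketch against the setup above:

for chunk in chunks:
    for row in chunk:
        # merge() looks up the row's primary key and then either
        # updates the existing row or inserts a new one.
        session.merge(mapped_table(**row))
    session.commit()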

The alternative is to get a list of the primary keys you would like to
upsert, and query the database for any matching ids:
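The answer's code did not survive in this copy; as a sketch of the approach it describes, assuming a single primary-key column named "id" that every row mapping carries:

pk = mapped_table.id  # assumed primary-key attribute

for chunk in chunks:
    ids = [row["id"] for row in chunk]
    existing = {r[0] for r in session.query(pk).filter(pk.in_(ids))}
    # Rows whose key already exists become bulk UPDATEs; the rest
    # become bulk INSERTs.
    updates = [row for row in chunk if row["id"] in existing]
    inserts = [row for row in chunk if row["id"] not in existing]
    if updates:
        session.bulk_update_mappings(mapped_table, updates)
    if inserts:
        session.bulk_insert_mappings(mapped_table, inserts)
    session.commit()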

