PostgreSQL multiple upsert without duplicates raises an error
I'm using PostgreSQL 9.5 on Ubuntu 16.04.
I have an empty table:
CREATE TABLE IF NOT EXISTS candles_1m(
timestamp REAL PRIMARY KEY,
open REAL,
close REAL,
high REAL,
low REAL,
volume REAL
);
Then I try to do a multi-row upsert (with no duplicate 'timestamp' values, which is the primary key):
INSERT INTO candles_1m (
timestamp, open, close, high, low, volume
) VALUES
(1507804800, 5160, 5158.7, 5160, 5158.7, 5.40608574),
(1507804740, 5157.5, 5160, 5160, 5156.1, 39.03357813),
(1507804680, 5156.5, 5157.4, 5157.4, 5156, 33.54458319),
(1507804620, 5151.3, 5156.5, 5157.5, 5151.2, 19.75590599)
ON CONFLICT (timestamp)
DO UPDATE SET
open = EXCLUDED.open,
close = EXCLUDED.close,
high = EXCLUDED.high,
low = EXCLUDED.low,
volume = EXCLUDED.volume;
I get this error:
ERROR: ON CONFLICT DO UPDATE command cannot affect row a second time
HINT: Ensure that no rows proposed for insertion within the same command have duplicate constrained values.
I don't understand why: I don't have any duplicates there! But my next step will be to build a request that adds (or updates) each row one by one, regardless of existing duplicates.
From the PostgreSQL documentation: real is 4 bytes, variable-precision, inexact, with 6 decimal digits of precision.
You have two pairs of values that become equal as real:
select
1507804800::real = 1507804740::real as r1r2,
1507804680::real = 1507804620::real as r3r4;
r1r2 | r3r4
------+------
t | t
(1 row)
Use a type with more precision.
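For comparison, the same check with a higher-precision type keeps the values distinct (a quick illustration; double precision is just one option among others):
-- Same comparison, but with double precision: the timestamps stay distinct.
SELECT
    1507804800::double precision = 1507804740::double precision AS r1r2,
    1507804680::double precision = 1507804620::double precision AS r3r4;

 r1r2 | r3r4
------+------
 f    | f
(1 row)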
As others have pointed out, this is because two of the values you're entering get truncated to the same value when converted to REAL.
Why?
Because floating-point numbers do not have uniform precision across their range: close to zero they can represent very small fractions accurately, but far from zero they can only represent very large values inaccurately. Your values are above the range in which every integer can be represented exactly, so they are effectively rounded to the nearest representable value as you insert them.
Note that this is not just a problem of duplicates: you are actually losing data every time you insert into the table.
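You can see the collapse directly: the four timestamp values from the question map to only two distinct REAL values (a quick check; the VALUES list is copied from the INSERT above):
-- The four inserted timestamps collapse to two distinct single-precision values.
SELECT count(DISTINCT ts::real) AS distinct_reals
FROM (VALUES (1507804800), (1507804740), (1507804680), (1507804620)) AS v(ts);

 distinct_reals
----------------
              2
(1 row)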
How to fix it?
By choosing a more suitable data type for your column. If your timestamps never have decimal components, a BigInt might be appropriate; otherwise, read up on the precision limits of the different widths of floating-point numbers. Or possibly you should be casting them to an appropriate date/time type instead, perhaps using to_timestamp.
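For example, here is a minimal sketch of one possible fix, keeping the column names from the question (the choice of BIGINT for the key and DOUBLE PRECISION for the other columns is an assumption, not the only option):
-- Sketch: store the epoch seconds as BIGINT so the primary key is exact
-- (assumes the timestamps are always whole seconds).
CREATE TABLE IF NOT EXISTS candles_1m (
    timestamp BIGINT PRIMARY KEY,
    open      DOUBLE PRECISION,
    close     DOUBLE PRECISION,
    high      DOUBLE PRECISION,
    low       DOUBLE PRECISION,
    volume    DOUBLE PRECISION
);

-- Alternative sketch: use a proper timestamp column and convert on insert,
-- e.g. a TIMESTAMPTZ primary key populated with to_timestamp(1507804800).
With either of these, the original ON CONFLICT (timestamp) DO UPDATE statement should behave as expected, since the keys are no longer rounded to the same value.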