How can DataFrames be merged such that the values of one that correspond to *dates* get applied to all *times* of all dates of the other?

I have two DataFrames. One has a set of values that correspond to specific times and dates (df_1). The other has a set of values that correspond to specific dates (df_2). I want to merge these DataFrames such that a date's df_2 value gets applied to all of df_1's times on that date.

So, here is df_1:

|DatetimeIndex          |value_1|
|-----------------------|-------|
|2015-07-18 13:53:33.280|10     |
|2015-07-18 15:43:30.111|11     |
|2015-07-19 13:54:03.330|12     |
|2015-07-20 13:52:13.350|13     |
|2015-07-20 16:10:01.901|14     |
|2015-07-20 16:50:55.020|15     |
|2015-07-21 13:56:03.126|16     |
|2015-07-22 13:53:51.747|17     |
|2015-07-22 19:45:14.647|18     |
|2015-07-23 13:53:29.346|19     |
|2015-07-23 20:00:30.100|20     |

And here is df_2:

|DatetimeIndex|value_2|
|-------------|-------|
|2015-07-18   |100    |
|2015-07-19   |200    |
|2015-07-20   |300    |
|2015-07-21   |400    |
|2015-07-22   |500    |
|2015-07-23   |600    |

I want to merge them like this:

|DatetimeIndex          |value_1|value_2|
|-----------------------|-------|-------|
|2015-07-18 00:00:00.000|NaN    |100    |
|2015-07-18 13:53:33.280|10.0   |100    |
|2015-07-18 15:43:30.111|11.0   |100    |
|2015-07-19 00:00:00.000|NaN    |200    |
|2015-07-19 13:54:03.330|12.0   |200    |
|2015-07-20 00:00:00.000|NaN    |300    |
|2015-07-20 13:52:13.350|13.0   |300    |
|2015-07-20 16:10:01.901|14.0   |300    |
|2015-07-20 16:50:55.020|15.0   |300    |
|2015-07-21 00:00:00.000|NaN    |400    |
|2015-07-21 13:56:03.126|16.0   |400    |
|2015-07-22 00:00:00.000|NaN    |500    |
|2015-07-22 13:53:51.747|17.0   |500    |
|2015-07-22 19:45:14.647|18.0   |500    |
|2015-07-23 00:00:00.000|NaN    |600    |
|2015-07-23 13:53:29.346|19.0   |600    |
|2015-07-23 20:00:30.100|20.0   |600    |

In other words, value_2 is carried through to every row of its date.

What is this kind of merge called, and how can it be done?

The code to construct the DataFrames is as follows:

import pandas as pd

df_1 = pd.DataFrame(
    [
        [pd.Timestamp("2015-07-18 13:53:33.280"), 10],
        [pd.Timestamp("2015-07-18 15:43:30.111"), 11],
        [pd.Timestamp("2015-07-19 13:54:03.330"), 12],
        [pd.Timestamp("2015-07-20 13:52:13.350"), 13],
        [pd.Timestamp("2015-07-20 16:10:01.901"), 14],
        [pd.Timestamp("2015-07-20 16:50:55.020"), 15],
        [pd.Timestamp("2015-07-21 13:56:03.126"), 16],
        [pd.Timestamp("2015-07-22 13:53:51.747"), 17],
        [pd.Timestamp("2015-07-22 19:45:14.647"), 18],
        [pd.Timestamp("2015-07-23 13:53:29.346"), 19],
        [pd.Timestamp("2015-07-23 20:00:30.100"), 20]
    ],
    columns = [
        "datetime",
        "value_1"
    ]
)
df_1.index = df_1["datetime"]
del df_1["datetime"]
df_1.index = pd.to_datetime(df_1.index.values)

df_2 = pd.DataFrame(
    [
        [pd.Timestamp("2015-07-18 00:00:00"), 100],
        [pd.Timestamp("2015-07-19 00:00:00"), 200],
        [pd.Timestamp("2015-07-20 00:00:00"), 300],
        [pd.Timestamp("2015-07-21 00:00:00"), 400],
        [pd.Timestamp("2015-07-22 00:00:00"), 500],
        [pd.Timestamp("2015-07-23 00:00:00"), 600]
    ],
    columns = [
        "datetime",
        "value_2"
    ]
)
df_2 = df_2.set_index("datetime")

Solution

Construct a new index that is the union of the two indices, then use a combination of reindex and map:
idx = df_1.index.union(df_2.index)

df_1.reindex(idx).assign(value_2=idx.floor('D').map(df_2.value_2.get))

                         value_1  value_2
2015-07-18 00:00:00.000      NaN      100
2015-07-18 13:53:33.280     10.0      100
2015-07-18 15:43:30.111     11.0      100
2015-07-19 00:00:00.000      NaN      200
2015-07-19 13:54:03.330     12.0      200
2015-07-20 00:00:00.000      NaN      300
2015-07-20 13:52:13.350     13.0      300
2015-07-20 16:10:01.901     14.0      300
2015-07-20 16:50:55.020     15.0      300
2015-07-21 00:00:00.000      NaN      400
2015-07-21 13:56:03.126     16.0      400
2015-07-22 00:00:00.000      NaN      500
2015-07-22 13:53:51.747     17.0      500
2015-07-22 19:45:14.647     18.0      500
2015-07-23 00:00:00.000      NaN      600
2015-07-23 13:53:29.346     19.0      600
2015-07-23 20:00:30.100     20.0      600

Explanation

  • Taking the union of the two indices should be self-explanatory. As a bonus, the union comes back already sorted. Very convenient!
  • When we reindex df_1 with this new, combined index, some of the index values will not have existed in df_1's index. Without specifying any other arguments, the column values at those previously missing index positions become np.nan, which is exactly what we want.
  • I use assign to add the column.
    • I think it is cleaner
    • It does not overwrite the DataFrame I am working with
    • It pipelines nicely
  • idx.floor('D') gives me the day of each timestamp while keeping it a pd.DatetimeIndex, which lets me map over it afterwards (see the step-by-step sketch after this list).
  • pd.Index.map expects a callable
  • I pass df_2.value_2.get, which feels a lot like dict.get (and I like that)
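Purely as illustration, here is the same one-liner unpacked into intermediate steps; the names days, looked_up, and result are mine, not part of the original answer:

# Midnight of each timestamp, still a pd.DatetimeIndex
days = idx.floor('D')

# Look each day up in df_2.value_2, much like dict.get; a missing day would give None
looked_up = days.map(df_2.value_2.get)

# Reindex df_1 to the union index and attach the looked-up column
result = df_1.reindex(idx).assign(value_2=looked_up)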

Response to comments
Suppose df_2 has several columns. We can then use join instead:

df_1.join(df_2.loc[idx.date].set_index(idx), how='outer')

                         value_1  value_2
2015-07-18 00:00:00.000      NaN      100
2015-07-18 13:53:33.280     10.0      100
2015-07-18 15:43:30.111     11.0      100
2015-07-19 00:00:00.000      NaN      200
2015-07-19 13:54:03.330     12.0      200
2015-07-20 00:00:00.000      NaN      300
2015-07-20 13:52:13.350     13.0      300
2015-07-20 16:10:01.901     14.0      300
2015-07-20 16:50:55.020     15.0      300
2015-07-21 00:00:00.000      NaN      400
2015-07-21 13:56:03.126     16.0      400
2015-07-22 00:00:00.000      NaN      500
2015-07-22 13:53:51.747     17.0      500
2015-07-22 19:45:14.647     18.0      500
2015-07-23 00:00:00.000      NaN      600
2015-07-23 13:53:29.346     19.0      600
2015-07-23 20:00:30.100     20.0      600

This seems like a nicer answer because it is shorter, but it is slower for the single-column case. Use it for the multi-column case regardless (see the sketch after the timings).

%timeit df_1.reindex(idx).assign(value_2=idx.floor('D').map(df_2.value_2.get))
%timeit df_1.join(df_2.loc[idx.date].set_index(idx), how='outer')

1.56 ms ± 69 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
2.38 ms ± 591 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
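To make the multi-column claim concrete, a minimal sketch: value_3 below is a hypothetical extra column added only for illustration, and the same join carries both columns through unchanged:

# Hypothetical extra column, only to illustrate the multi-column case
df_2_wide = df_2.assign(value_3=df_2["value_2"] * 10)

# Same join as above; both value_2 and value_3 end up on every row of the result
df_1.join(df_2_wide.loc[idx.date].set_index(idx), how='outer')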