How to flatten a pandas dataframe with some columns as json?
I have a dataframe df that loads data from a database. Most of the columns are json strings, while some are even lists of jsons. For example:
id name columnA columnB
1 John {"dist": "600", "time": "0:12.10"} [{"pos": "1st", "value": "500"},{"pos": "2nd", "value": "300"},{"pos": "3rd", "value": "200"}, {"pos": "total", "value": "1000"}]
2 Mike {"dist": "600"} [{"pos": "1st", "value": "500"},{"pos": "2nd", "value": "300"},{"pos": "total", "value": "800"}]
...
As you can see, not all rows have the same number of elements in the json strings of a column. What I need to do is keep the normal columns like id and name as they are, and flatten the json columns like so:
id name columnA.dist columnA.time columnB.pos.1st columnB.pos.2nd columnB.pos.3rd columnB.pos.total
1 John 600 0:12.10 500 300 200 1000
2 Mike 600 NaN 500 300 NaN 800
I have tried using json_normalize like so:
from pandas.io.json import json_normalize
json_normalize(df)
but there seems to be some problem with a KeyError. What is the right way to do this?
Create a custom function to flatten columnB, then use pd.concat:
def flatten(js):
    # one entry of columnB (a list of dicts) -> a Series indexed by 'pos'
    return pd.DataFrame(js).set_index('pos').squeeze()

pd.concat([df.drop(['columnA', 'columnB'], axis=1),
           df.columnA.apply(pd.Series),
           df.columnB.apply(flatten)], axis=1)
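Note that this assumes columnA and columnB already hold parsed Python objects (dicts / lists of dicts). If, as in the question, they are raw JSON strings, a minimal pre-processing sketch (assuming the strings are valid JSON, as in the example data) would be:

import json

# parse the JSON strings into Python objects before flattening
df['columnA'] = df['columnA'].apply(json.loads)
df['columnB'] = df['columnB'].apply(json.loads)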
Here is a solution that again uses json_normalize(), by first getting the data into the correct format that json_normalize understands, with the help of custom functions.
import ast
from pandas.io.json import json_normalize
def only_dict(d):
    '''
    Convert a json string representation of a dictionary to a python dict
    '''
    return ast.literal_eval(d)

def list_of_dicts(ld):
    '''
    Convert a json string representation of a list of dicts into a single
    dict mapping each "pos" to its "value"
    '''
    # map pos -> value so json_normalize yields columnB.pos.1st, columnB.pos.2nd, ...
    return {d['pos']: d['value'] for d in ast.literal_eval(ld)}
A = json_normalize(df['columnA'].apply(only_dict).tolist()).add_prefix('columnA.')
B = json_normalize(df['columnB'].apply(list_of_dicts).tolist()).add_prefix('columnB.pos.')
Finally, join the DFs on the common index to get:
df[['id', 'name']].join([A, B])
EDIT: As per @MartijnPieters' comment, the recommended way of decoding the json strings would be to use json.loads(), which is much faster compared to ast.literal_eval(), if you know that the data source is JSON.
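For instance, a minimal sketch of the same two converters using json.loads (assuming the columns really do hold valid JSON, i.e. double-quoted keys and values, as in the example data):

import json

def only_dict(d):
    # parse a JSON object string into a python dict
    return json.loads(d)

def list_of_dicts(ld):
    # parse a JSON array string and map each "pos" to its "value"
    return {d['pos']: d['value'] for d in json.loads(ld)}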
The fastest seems to be:
import pandas as pd
import json
json_struct = json.loads(df.to_json(orient="records"))
df_flat = pd.io.json.json_normalize(json_struct) #use pd.io.json
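Since pandas 1.0, json_normalize is exposed at the top level and pd.io.json.json_normalize is deprecated, so on recent versions the same idea reads:

import json
import pandas as pd

json_struct = json.loads(df.to_json(orient="records"))
df_flat = pd.json_normalize(json_struct)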
TL;DR: copy-paste the following function and use it like this: flatten_nested_json_df(df)
This is the most general function I could come up with:
def flatten_nested_json_df(df):
    df = df.reset_index()
    print(f"original shape: {df.shape}")
    print(f"original columns: {df.columns}")

    # search for columns to explode/flatten
    s = (df.applymap(type) == list).all()
    list_columns = s[s].index.tolist()

    s = (df.applymap(type) == dict).all()
    dict_columns = s[s].index.tolist()

    print(f"lists: {list_columns}, dicts: {dict_columns}")
    while len(list_columns) > 0 or len(dict_columns) > 0:
        new_columns = []

        for col in dict_columns:
            print(f"flattening: {col}")
            # explode dictionaries horizontally, adding new columns
            horiz_exploded = pd.json_normalize(df[col]).add_prefix(f'{col}.')
            horiz_exploded.index = df.index
            df = pd.concat([df, horiz_exploded], axis=1).drop(columns=[col])
            new_columns.extend(horiz_exploded.columns)  # inplace

        for col in list_columns:
            print(f"exploding: {col}")
            # explode lists vertically, adding new rows
            df = df.drop(columns=[col]).join(df[col].explode().to_frame())
            new_columns.append(col)

        # check if there are still dict or list fields to flatten
        s = (df[new_columns].applymap(type) == list).all()
        list_columns = s[s].index.tolist()

        s = (df[new_columns].applymap(type) == dict).all()
        dict_columns = s[s].index.tolist()

        print(f"lists: {list_columns}, dicts: {dict_columns}")

    print(f"final shape: {df.shape}")
    print(f"final columns: {df.columns}")
    return df
It takes a dataframe that may contain nested lists and/or dicts in its columns, and recursively explodes/flattens those columns. It uses pandas' pd.json_normalize to explode the dictionaries (creating new columns), and pandas' explode to explode the lists (creating new rows).
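As a minimal sketch of those two building blocks on a toy frame (the frame small and its columns are made up purely for illustration):

import pandas as pd

# toy frame: one dict column and one list column (example data only)
small = pd.DataFrame({"a": [{"x": 1, "y": 2}], "b": [[10, 20]]})

pd.json_normalize(small["a"].tolist())  # dict exploded horizontally -> columns x, y
small["b"].explode()                    # list exploded vertically -> one row per element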
It's simple to use:
# Test
df = pd.DataFrame(
    columns=['id','name','columnA','columnB'],
    data=[
        [1,'John',{"dist": "600", "time": "0:12.10"},[{"pos": "1st", "value": "500"},{"pos": "2nd", "value": "300"},{"pos": "3rd", "value": "200"}, {"pos": "total", "value": "1000"}]],
        [2,'Mike',{"dist": "600"},[{"pos": "1st", "value": "500"},{"pos": "2nd", "value": "300"},{"pos": "total", "value": "800"}]]
    ])
flatten_nested_json_df(df)
It's not the most efficient thing on earth, and it has the side effect of resetting the dataframe's index, but it gets the job done. Feel free to tweak it.