Make JQ output a table
My question is: how can I make jq output a table, substituting 0 for missing values?
The input to jq is the following Elasticsearch JSON response:
{"aggregations": {
"overall": {
"buckets": [
{
"key": "2018-01-18T00:00:00.000Z-2018-01-25T19:33:16.010Z",
"from_as_string": "2018-01-18T00:00:00.000Z",
"to": 1516908796010,
"to_as_string": "2018-01-25T19:33:16.010Z",
"doc_count": 155569,
"agg_per_name": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "ASSET-DD583",
"doc_count": 3016,
"totalMaxUptime_perDays": {
"buckets": [
{
"key_as_string": "2018-01-22T00:00:00.000Z",
"key": 1516579200000,
"doc_count": 161,
"totalMaxUptime": {
"value": 77598
}
},
{
"key_as_string": "2018-01-23T00:00:00.000Z",
"key": 1516665600000,
"doc_count": 251,
"totalMaxUptime": {
"value": 80789
}
},
{
"key_as_string": "2018-01-24T00:00:00.000Z",
"key": 1516752000000,
"doc_count": 192,
"totalMaxUptime": {
"value": 56885
}
},
{
"key_as_string": "2018-01-25T00:00:00.000Z",
"key": 1516838400000,
"doc_count": 2088,
"totalMaxUptime": {
"value": 7392705
}
}
]
}
},
{
"key": "ASSET-DD568",
"doc_count": 2990,
"totalMaxUptime_perDays": {
"buckets": [
{
"key_as_string": "2018-01-18T00:00:00.000Z",
"key": 1516233600000,
"doc_count": 106,
"totalMaxUptime": {
"value": 31241
}
},
{
"key_as_string": "2018-01-19T00:00:00.000Z",
"key": 1516320000000,
"doc_count": 241,
"totalMaxUptime": {
"value": 2952565
}
},
{
"key_as_string": "2018-01-20T00:00:00.000Z",
"key": 1516406400000,
"doc_count": 326,
"totalMaxUptime": {
"value": 2698235
}
},
{
"key_as_string": "2018-01-21T00:00:00.000Z",
"key": 1516492800000,
"doc_count": 214,
"totalMaxUptime": {
"value": 85436
}
},
{
"key_as_string": "2018-01-22T00:00:00.000Z",
"key": 1516579200000,
"doc_count": 279,
"totalMaxUptime": {
"value": 83201
}
},
{
"key_as_string": "2018-01-23T00:00:00.000Z",
"key": 1516665600000,
"doc_count": 50,
"totalMaxUptime": {
"value": 96467
}
},
{
"key_as_string": "2018-01-24T00:00:00.000Z",
"key": 1516752000000,
"doc_count": 5,
"totalMaxUptime": {
"value": 903
}
},
{
"key_as_string": "2018-01-25T00:00:00.000Z",
"key": 1516838400000,
"doc_count": 1769,
"totalMaxUptime": {
"value": 12337946
}
}
]
}
},
{
"key": "ASSET-42631",
"doc_count": 2899,
"totalMaxUptime_perDays": {
"buckets": [
{
"key_as_string": "2018-01-18T00:00:00.000Z",
"key": 1516233600000,
"doc_count": 132,
"totalMaxUptime": {
"value": 39054
}
},
{
"key_as_string": "2018-01-19T00:00:00.000Z",
"key": 1516320000000,
"doc_count": 172,
"totalMaxUptime": {
"value": 47634
}
},
{
"key_as_string": "2018-01-20T00:00:00.000Z",
"key": 1516406400000,
"doc_count": 214,
"totalMaxUptime": {
"value": 68264
}
},
{
"key_as_string": "2018-01-21T00:00:00.000Z",
"key": 1516492800000,
"doc_count": 220,
"totalMaxUptime": {
"value": 66243
}
},
{
"key_as_string": "2018-01-25T00:00:00.000Z",
"key": 1516838400000,
"doc_count": 128,
"totalMaxUptime": {
"value": 47660
}
}
]
}
}
]
}
}
]
}
}
}
This JSON has some inherent properties:
- agg_per_name.buckets will contain a variable number of buckets
- totalMaxUptime_perDays.buckets represents the last 7 days counted from the current date, grouped per day. totalMaxUptime_perDays.buckets will have between 1 and 8 buckets per asset, each corresponding to a specific day.
For the given sample, the desired jq output is a table with the dates from key_as_string on the horizontal axis (in this case from 2018-01-18 to 2018-01-25) and all asset keys (i.e. ASSET-DD583, ASSET-DD568, etc.) on the vertical axis. The table is filled with totalMaxUptime.value for each corresponding date; if a date is not present in the results, the value "0" should be used instead:
XXXXXXXXXXX, 2018-01-18, 2018-01-19, 2018-01-20, 2018-01-21, 2018-01-22, 2018-01-23, 2018-01-24, 2018-01-25
ASSET-DD583, 0, 0, 0, 0, 77598, 80789, 56885, 7392705
ASSET-DD568, 31241, 2952565, 2698235, 85436, 83201, 96467, 903, 12337946
ASSET-42631, 39054, 47634, 68264, 66243, 0, 0, 0, 47660
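To make the zero-fill rule concrete, here is a minimal sketch in Python (not jq; the data is hand-copied from the first asset above) of the pivot being asked for:

```python
# Sketch (not jq): pivot per-asset {date: value} maps into table rows,
# filling dates missing from an asset's buckets with 0.
dates = ["2018-01-18", "2018-01-19", "2018-01-20", "2018-01-21",
         "2018-01-22", "2018-01-23", "2018-01-24", "2018-01-25"]

# What totalMaxUptime_perDays.buckets holds for ASSET-DD583 in the sample
per_asset = {
    "ASSET-DD583": {"2018-01-22": 77598, "2018-01-23": 80789,
                    "2018-01-24": 56885, "2018-01-25": 7392705},
}

rows = [["XXXXXXXXXXX"] + dates]
for asset, values in per_asset.items():
    # .get(d, 0) is the zero-fill for dates absent from the buckets
    rows.append([asset] + [values.get(d, 0) for d in dates])

for row in rows:
    print(", ".join(map(str, row)))
```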
Edit 1:
Here is what I have so far:
jq '.aggregations.overall.buckets[0].agg_per_name.buckets[] | .key + ", " + (.totalMaxUptime_perDays.buckets[] | .key_as_string + ", " + (.totalMaxUptime.value | tostring))' input.json | sed 's/"//g' | sed 's/T00:00:00.000Z//g' > uptime.csv
which produces output like this:
ASSET-DD583, 2018-01-22, 77598
ASSET-DD583, 2018-01-23, 80789
ASSET-DD583, 2018-01-24, 56885
ASSET-DD583, 2018-01-25, 7392705
...............
A partial solution to your question: if you use @csv, you can put the values of an array on a single line.
For example, suppose you have
{
"a": [1,2,3],
"b": [
{
"x": 10
},
{
"x": 20
},
{
"x": 30
}
]
}
To get 1,2,3 use jq -r '.a | @csv'
To get 10,20,30 use jq -r '[.b[].x] | @csv'
Hope this helps!
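For comparison, the same two extractions can be sketched in Python with the standard csv module (this is only an illustration of the flattening, not part of the jq answer):

```python
import csv
import io
import json

# The toy document from the answer above
doc = json.loads('{"a": [1,2,3], "b": [{"x":10},{"x":20},{"x":30}]}')

buf = io.StringIO()
w = csv.writer(buf)
w.writerow(doc["a"])                     # like  jq -r '.a | @csv'
w.writerow([o["x"] for o in doc["b"]])   # like  jq -r '[.b[].x] | @csv'
print(buf.getvalue())
```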
In the following I use @tsv so that the output is easier to read as a table, but you may want to use @csv instead.
The tricky part here is getting the 0s into the right places. Creating a JSON "dictionary" (i.e. a JSON object) makes this easy. Here, normalize exploits the fact that jq preserves the order in which keys are added to an object.
def dates:
["2018-01-18", "2018-01-19", "2018-01-20", "2018-01-21", "2018-01-22", "2018-01-23", "2018-01-24", "2018-01-25"];
def normalize:
. as $in
| reduce dates[] as $k ({}; .[$k] = ($in[$k] // 0));
(["Asset"] + dates),
(.aggregations.overall.buckets[].agg_per_name.buckets[]
| .key as $asset
| .totalMaxUptime_perDays.buckets
| map( { (.key_as_string | sub("T.*";"") ): .totalMaxUptime.value } )
| add
| normalize
| [$asset] + [.[]]
)
| @tsv
You may want to modify the above so that dates is computed from the data.
Output:
Asset 2018-01-18 2018-01-19 2018-01-20 2018-01-21 2018-01-22 2018-01-23 2018-01-24 2018-01-25
ASSET-DD583 0 0 0 0 77598 80789 56885 7392705
ASSET-DD568 31241 2952565 2698235 85436 83201 96467 903 12337946
ASSET-42631 39054 47634 68264 66243 0 0 0 47660
Edit: parentheses have been added around $in[$k] // 0.
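The suggestion to compute dates from the data can be sketched outside jq as well; this hypothetical Python snippet shows one way to build a contiguous day axis from the observed key_as_string day prefixes, so that days absent from every asset still get a column:

```python
from datetime import date, timedelta

# Distinct day keys seen across all totalMaxUptime_perDays buckets
# (i.e. key_as_string with the "T..." suffix stripped); sample subset here.
seen = {"2018-01-22", "2018-01-18", "2018-01-25", "2018-01-20"}

lo = min(date.fromisoformat(d) for d in seen)
hi = max(date.fromisoformat(d) for d in seen)

# Contiguous axis from the earliest to the latest observed day
dates = [(lo + timedelta(days=i)).isoformat()
         for i in range((hi - lo).days + 1)]
print(dates)
```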
Try the following:
cat input.json | jq '.aggregations.overall.buckets[0].agg_per_name.buckets[] |
  .key + ", " + (.totalMaxUptime_perDays.buckets[] |
  .key_as_string + ", " + (.totalMaxUptime.value | tostring))' | column -t -s,
There is a tool that does exactly this:
https://github.com/uzimaru0000/tv
$ curl -s https://jsonplaceholder.typicode.com/users | tv
+--+------------------------+----------------+-------------------------+-------+---------------------+-------------+-------+
|id|name |username |email |address|phone |website |company|
+--+------------------------+----------------+-------------------------+-------+---------------------+-------------+-------+
|1 |Leanne Graham |Bret |Sincere@april.biz |... |1-770-736-8031 x56442|hildegard.org|... |
|2 |Ervin Howell |Antonette |Shanna@melissa.tv |... |010-692-6593 x09125 |anastasia.net|... |
|3 |Clementine Bauch |Samantha |Nathan@yesenia.net |... |1-463-123-4447 |ramiro.info |... |
|4 |Patricia Lebsack |Karianne |Julianne.OConner@kory.org|... |493-170-9623 x156 |kale.biz |... |
|5 |Chelsey Dietrich |Kamren |Lucio_Hettinger@annie.ca |... |(254)954-1289 |demarco.info |... |
|6 |Mrs. Dennis Schulist |Leopoldo_Corkery|Karley_Dach@jasper.info |... |1-477-935-8478 x6430 |ola.org |... |
|7 |Kurtis Weissnat |Elwyn.Skiles |Telly.Hoeger@billy.biz |... |210.067.6132 |elvis.io |... |
|8 |Nicholas Runolfsdottir V|Maxime_Nienow |Sherwood@rosamond.me |... |586.493.6943 x140 |jacynthe.com |... |
|9 |Glenna Reichert |Delphine |Chaim_McDermott@dana.io |... |(775)976-6794 x41206 |conrad.com |... |
|10|Clementina DuBuque |Moriah.Stanton |Rey.Padberg@karina.biz |... |024-648-3804 |ambrose.net |... |
+--+------------------------+----------------+-------------------------+-------+---------------------+-------------+-------+