How to mix record with map in Avro?
I'm processing server logs in JSON format, and I want to store them on AWS S3 in Parquet format (Parquet requires an Avro schema). All logs share a common set of fields, but each log may also carry many optional fields that are not in the common set.
For example, these three logs:
{ "ip": "172.18.80.109", "timestamp": "2015-09-17T23:00:18.313Z", "message":"blahblahblah"}
{ "ip": "172.18.80.112", "timestamp": "2015-09-17T23:00:08.297Z", "message":"blahblahblah", "microseconds": 223}
{ "ip": "172.18.80.113", "timestamp": "2015-09-17T23:00:08.299Z", "message":"blahblahblah", "thread":"http-apr-8080-exec-1147"}
All three logs share three fields: ip, timestamp, and message. Some logs carry extra fields, such as microseconds and thread.
If I use the following schema, I lose all of the extra fields (note that Avro primitive type names are lowercase, so the timestamp field must be "string", not "String"):
{
  "namespace": "example.avro",
  "type": "record",
  "name": "Log",
  "fields": [
    {"name": "ip", "type": "string"},
    {"name": "timestamp", "type": "string"},
    {"name": "message", "type": "string"}
  ]
}
And the following schema works fine (union members must be quoted strings, e.g. ["null", "long"]):
{
  "namespace": "example.avro",
  "type": "record",
  "name": "Log",
  "fields": [
    {"name": "ip", "type": "string"},
    {"name": "timestamp", "type": "string"},
    {"name": "message", "type": "string"},
    {"name": "microseconds", "type": ["null", "long"]},
    {"name": "thread", "type": ["null", "string"]}
  ]
}
The only problem is that I don't know the names of all the optional fields unless I scan every log, and new fields will keep appearing in the future anyway.
So I came up with the idea of combining a record with a map:
{
  "namespace": "example.avro",
  "type": "record",
  "name": "Log",
  "fields": [
    {"name": "ip", "type": "string"},
    {"name": "timestamp", "type": "string"},
    {"name": "message", "type": "string"},
    {"type": "map", "values": "string"} // error
  ]
}
Unfortunately, this doesn't compile:
java -jar avro-tools-1.7.7.jar compile schema example.avro .
It throws this error:
Exception in thread "main" org.apache.avro.SchemaParseException: No field name: {"type":"map","values":"long"}
at org.apache.avro.Schema.getRequiredText(Schema.java:1305)
at org.apache.avro.Schema.parse(Schema.java:1192)
at org.apache.avro.Schema$Parser.parse(Schema.java:965)
at org.apache.avro.Schema$Parser.parse(Schema.java:932)
at org.apache.avro.tool.SpecificCompilerTool.run(SpecificCompilerTool.java:73)
at org.apache.avro.tool.Main.run(Main.java:84)
at org.apache.avro.tool.Main.main(Main.java:73)
Is there a way to store JSON logs in Avro that is flexible enough to handle unknown optional fields?
Basically this is a schema evolution problem; Spark handles it via schema merging. I'm looking for a solution for Hadoop.
The map type is a "complex" type in Avro terminology, and every entry in a record's "fields" array must have a "name". Wrapping the map in a named field works:
{
  "namespace": "example.avro",
  "type": "record",
  "name": "Log",
  "fields": [
    {"name": "ip", "type": "string"},
    {"name": "timestamp", "type": "string"},
    {"name": "message", "type": "string"},
    {"name": "additional", "type": {"type": "map", "values": "string"}}
  ]
}