Drop duplicate attribute in JSON parsing in Spark Scala

I am parsing a JSON file in which one attribute appears twice, so I want to drop one of the two occurrences to avoid an ambiguous-reference error. Below is a sample of the JSON. For example, address1 and Address1 hold the same value; the only difference is that one starts with an uppercase letter. I want to drop one of them while parsing the JSON in Spark Scala.

{
    "ID": 1,
    "case": "12",
    "addresses": {
        "": [{
            "address1": "abc",
            "address2": "bkc",
            "Address1": "abc",
            "Address2": "bk"
        }, {
            "address1": "ede",
            "address2": "ak",
            "Address1": "ede",
            "Address2": "ak"
        }]
    },
    "FirstName": "abc",
    "LastName": "cvv"
}

Could someone guide me on how to drop one of these attributes when parsing the JSON in Spark Scala? I need this to happen automatically: today the problem shows up in the address fields, but other attributes may hit the same issue in the future. So rather than hardcoding column names, I am looking for a solution that covers every case where a similar collision occurs.

val jsonString = """
{
    "ID": 1,
    "case": "12",
    "addresses": [{
    "address1": "abc",
    "address2": "bkc",
    "Address1": "abc",
    "Address2": "bk"
    }, {
    "address1": "ede",
    "address2": "ak",
    "Address1": "ede",
    "Address2": "ak"
    }],
    "FirstName": "abc",
    "LastName": "cvv"
}
"""
import spark.implicits._   // needed for Seq(jsonString).toDS

val jsonDF = spark.read.json(Seq(jsonString).toDS)
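
Printing the inferred schema makes the collision visible: both casings are kept as separate fields of the address struct (assuming the read itself succeeds on your Spark version; if it already complains about duplicate columns at read time, set spark.sql.caseSensitive to true before the read, as done below).

jsonDF.printSchema()   // shows Address1/address1 and Address2/address2 as distinct struct fields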


import org.apache.spark.sql.functions._

// Column resolution is case-insensitive by default, so address1 and Address1 would be
// ambiguous; enable case-sensitive resolution before referencing or dropping them.
spark.conf.set("spark.sql.caseSensitive", "true")

jsonDF.withColumn("Addresses", explode(col("addresses")))               // one row per address struct
  .selectExpr("Addresses.*", "ID", "`case`", "FirstName", "LastName")   // flatten the struct fields; backticks keep the SQL keyword usable as a column name
  .drop("address1", "address2")                                         // with caseSensitive=true this removes only the lowercase pair
  .show()
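
To avoid hardcoding address1 and address2, one option is to derive the columns to drop from the flattened column names themselves. This is a minimal sketch, not part of the original answer: the helper name dropCaseDuplicates and the keep-the-first-occurrence rule are my own choices, and it assumes spark.sql.caseSensitive is already set to true (otherwise the colliding names never become separate columns in the first place).

import org.apache.spark.sql.DataFrame

// Drop every column whose lowercased name collides with an earlier column,
// keeping the first occurrence of each group.
def dropCaseDuplicates(df: DataFrame): DataFrame = {
  val toDrop = df.columns
    .groupBy(_.toLowerCase)   // group names that differ only by case
    .values
    .flatMap(_.tail)          // keep the first name in each group, drop the rest
    .toSeq
  df.drop(toDrop: _*)
}

val flattened = jsonDF
  .withColumn("Addresses", explode(col("addresses")))
  .selectExpr("Addresses.*", "ID", "`case`", "FirstName", "LastName")

dropCaseDuplicates(flattened).show()

Which name survives depends on the column order Spark produces, but since the duplicated attributes carry the same value, either one is fine.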