Circe Couldn't convert raw json to case class Error: could not find Lazy implicit value of type io.circe.generic.decoding.DerivedDecoder

I have defined several case classes for the JSON representation, but I'm not sure I've defined them correctly, since there are many nested case classes. Entities like spec and meta are of type JSONObject, as is the Custom object itself.

Here are all the classes I've defined:

  case class CustomObject(apiVersion: String, kind: String, metadata: Metadata, spec: Spec, labels: Object, version: String)

  case class Metadata(creationTimestamp: String, generation: Int, uid: String, resourceVersion: String, name: String, namespace: String, selfLink: String)

  case class Spec(mode: String, image: String, imagePullPolicy: String, mainApplicationFile: String, mainClass: String, deps: Deps, driver: Driver, executor: Executor, subresources: Subresources)

  case class Driver(cores: Double, coreLimit: String, memory: String, serviceAccount: String, labels: Labels)

  case class Executor(cores: Double, instances: Double, memory: String, labels: Labels)

  case class Labels(version: String)

  case class Subresources(status: Status)

  case class Status()

  case class Deps()

Here is the JSON structure of the custom K8s object I need to convert:

{
    "apiVersion": "sparkoperator.k8s.io/v1alpha1",
    "kind": "SparkApplication",
    "metadata": {
        "creationTimestamp": "2019-01-11T15:58:45Z",
        "generation": 1,
        "name": "spark-example",
        "namespace": "default",
        "resourceVersion": "268972",
        "selfLink": "/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/spark-example",
        "uid": "uid"
    },
    "spec": {
        "deps": {},
        "driver": {
            "coreLimit": "1000m",
            "cores": 0.1,
            "labels": {
                "version": "2.4.0"
            },
            "memory": "1024m",
            "serviceAccount": "default"
        },
        "executor": {
            "cores": 1,
            "instances": 1,
            "labels": {
                "version": "2.4.0"
            },
            "memory": "1024m"
        },
        "image": "gcr.io/ynli-k8s/spark:v2.4.0,
        "imagePullPolicy": "Always",
        "mainApplicationFile": "http://localhost:8089/spark_k8s_airflow.jar",
        "mainClass": "org.apache.spark.examples.SparkExample",
        "mode": "cluster",
        "subresources": {
            "status": {}
        },
        "type": "Scala"
    }
}

Update: I want to convert the JSON to case classes with Circe, but with these classes I get this error:

Error: could not find Lazy implicit value of type io.circe.generic.decoding.DerivedDecoder[dataModel.CustomObject]
    implicit val customObjectDecoder: Decoder[CustomObject] = deriveDecoder[CustomObject]

I have defined implicit decoders for all of the case classes:

    implicit val customObjectLabelsDecoder: Decoder[Labels] = deriveDecoder[Labels]
    implicit val customObjectSubresourcesDecoder: Decoder[Subresources] = deriveDecoder[Subresources]
    implicit val customObjectDepsDecoder: Decoder[Deps] = deriveDecoder[Deps]
    implicit val customObjectStatusDecoder: Decoder[Status] = deriveDecoder[Status]
    implicit val customObjectExecutorDecoder: Decoder[Executor] = deriveDecoder[Executor]
    implicit val customObjectDriverDecoder: Decoder[Driver] = deriveDecoder[Driver]
    implicit val customObjectSpecDecoder: Decoder[Spec] = deriveDecoder[Spec]
    implicit val customObjectMetadataDecoder: Decoder[Metadata] = deriveDecoder[Metadata]
    implicit val customObjectDecoder: Decoder[CustomObject] = deriveDecoder[CustomObject]

The reason you can't derive a decoder for CustomObject is the labels: Object member.

In circe all decoding is driven by static types, and circe doesn't provide encoders or decoders for types like Object or Any, which have no useful static information.
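If you do need to keep a labels field on the top-level object, one option (a sketch of mine, not part of the original answer; the class name CustomObjectWithLabels is hypothetical) is to give it a concrete type circe knows how to decode, such as io.circe.Json for arbitrary JSON or Map[String, String] for K8s-style label maps:

```scala
import io.circe.Decoder
import io.circe.generic.semiauto.deriveDecoder

// Instead of labels: Object, use a type circe can decode:
// Map[String, String] matches Kubernetes label maps, and Option
// makes the field tolerant of documents that omit it entirely.
case class CustomObjectWithLabels(
  apiVersion: String,
  kind: String,
  metadata: Metadata,
  spec: Spec,
  labels: Option[Map[String, String]]
)

implicit val customObjectWithLabelsDecoder: Decoder[CustomObjectWithLabels] =
  deriveDecoder
```

circe provides Decoder instances for Option and for Map with string keys out of the box, so derivation goes through without any extra definitions.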

If you change your case class to look like this:

case class CustomObject(apiVersion: String, kind: String, metadata: Metadata, spec: Spec)

...leave the rest of your code as it is, and add this import:

import io.circe.Decoder, io.circe.generic.semiauto.deriveDecoder

...and define your JSON document as doc (after adding a quotation mark to the "image": "gcr.io/ynli-k8s/spark:v2.4.0, line to make it valid JSON), the following should work just fine:

scala> io.circe.jawn.decode[CustomObject](doc)
res0: Either[io.circe.Error,CustomObject] = Right(CustomObject(sparkoperator.k8s.io/v1alpha1,SparkApplication,Metadata(2019-01-11T15:58:45Z,1,uid,268972,spark-example,default,/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/spark-example),Spec(cluster,gcr.io/ynli-k8s/spark:v2.4.0,Always,http://localhost:8089/spark_k8s_airflow.jar,org.apache.spark.examples.SparkExample,Deps(),Driver(0.1,1000m,1024m,default,Labels(2.4.0)),Executor(1.0,1.0,1024m,Labels(2.4.0)),Subresources(Status()))))

Despite what one of the other answers says, circe can definitely derive encoders and decoders for case classes with no members; that's definitely not the problem here.
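That claim is easy to check. The following minimal sketch (assuming circe-core, circe-generic, and circe-parser on the classpath) derives a decoder for a member-less case class and decodes an empty JSON object:

```scala
import io.circe.Decoder
import io.circe.generic.semiauto.deriveDecoder

case class Empty()

// Semi-automatic derivation works for case classes with zero fields.
implicit val emptyDecoder: Decoder[Empty] = deriveDecoder

// Decoding an empty object succeeds: Right(Empty())
val result = io.circe.jawn.decode[Empty]("{}")
```

An empty case class corresponds to an empty HList in circe-generic's derivation, which decodes any JSON object, so this succeeds.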

As a side note, I wish it were possible to have better error messages than this:

Error: could not find Lazy implicit value of type io.circe.generic.decoding.DerivedDecoder[dataModel.CustomObject]

But given the way circe-generic has to use Shapeless's Lazy right now, this is the best we can get. You can try circe-derivation as a mostly drop-in alternative to circe-generic's semi-automatic derivation that has better error messages (and some other advantages), or you can use a compiler plugin like splain, which is specifically designed to give better error messages even in the presence of things like shapeless.Lazy.

As one final note, you can clean up your semi-automatic definitions a bit by letting the type parameter on deriveDecoder be inferred:

implicit val customObjectLabelsDecoder: Decoder[Labels] = deriveDecoder

This is entirely a matter of taste, but I find it a little less noisy to read.