Spark: save and load a machine learning model on S3
I want to save and load a machine learning model on S3.
Here is what I did:
import com.amazonaws.auth.profile.ProfileCredentialsProvider
import org.apache.spark.ml.tuning.TrainValidationSplitModel

// Pick up AWS credentials from the local profile and wire them into Hadoop's S3 filesystem
val credentials = new ProfileCredentialsProvider()
val hadoopConf = sc.hadoopConfiguration
hadoopConf.set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
hadoopConf.set("fs.s3.awsAccessKeyId", credentials.getCredentials.getAWSAccessKeyId)
hadoopConf.set("fs.s3.awsSecretAccessKey", credentials.getCredentials.getAWSSecretKey)

// Load the previously saved model from S3
TrainValidationSplitModel.load(s"s3://model_path")
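For context, the save side of the workflow (not shown above) would typically go through the model's MLWriter. A minimal sketch, where tvModel is a hypothetical name for an already-fitted model and the bucket path is purely illustrative:
import org.apache.spark.ml.tuning.TrainValidationSplitModel

// Hypothetical placeholder: a model previously fitted, e.g. via trainValidationSplit.fit(trainingData)
val tvModel: TrainValidationSplitModel = ???

// MLWritable: persists the model as a directory of metadata and data files under the given path
tvModel.write.overwrite().save("s3://some-bucket/model_path") // bucket and path are illustrative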
The loading code works fine when I run it locally.
However, when I run it on a cluster, I get the following error:
Serialization trace:
fields (org.apache.spark.sql.types.StructType)
at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:101)
at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:518)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:366)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:307)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:312)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:324)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: Class is not registered: org.apache.spark.sql.types.StructField[]
Note: To register this class use: kryo.register(org.apache.spark.sql.types.StructField[].class);
at com.esotericsoftware.kryo.Kryo.getRegistration(Kryo.java:488)
at com.esotericsoftware.kryo.util.DefaultClassResolver.writeClass(DefaultClassResolver.java:97)
at com.esotericsoftware.kryo.Kryo.writeClass(Kryo.java:517)
at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:76)
... 10 more
You might say: "Easy, you just have to register the class org.apache.spark.sql.types.StructField using kryo.register(SomeClass.class);"
However, after registering nearly fifteen classes that way (roughly along the lines of the sketch below), Kryo asked me to register a private class whose access is restricted to the spark package.
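For reference, registering classes with Kryo in Spark is normally done on the SparkConf. A minimal sketch, where the class list is illustrative apart from the array class named in the error:
import org.apache.spark.SparkConf
import org.apache.spark.sql.types.{StructField, StructType}

// Enable Kryo and register the classes the error complains about.
// Array classes such as StructField[] must be registered explicitly as classOf[Array[...]].
val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(
    classOf[StructType],
    classOf[Array[StructField]]
  ))
The same list can also be supplied as a comma-separated string through the spark.kryo.classesToRegister property.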
How can I solve this problem?
The error has nothing to do with saving and loading the model.
It is caused by spark.kryo.registrationRequired, which must be set to true somewhere in your configuration. If it is, it behaves as follows:
Whether to require registration with Kryo. If set to 'true', Kryo will throw an exception if an unregistered class is serialized. If set to false (the default), Kryo will write unregistered class names along with each object. Writing class names can cause significant performance overhead, so enabling this option can enforce strictly that a user has not omitted classes from registration.
My personal advice is to use it only for diagnostics and to disable it when actually running the application, along the lines of the sketch below.
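A minimal sketch of what that could look like, keeping Kryo as the serializer but dropping the registration requirement (the application name is illustrative):
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Keep Kryo, but don't fail on unregistered classes
// (false is also the default if the property is removed entirely).
val conf = new SparkConf()
  .setAppName("model-loading-app") // illustrative name
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrationRequired", "false")

val spark = SparkSession.builder().config(conf).getOrCreate()
The same effect can be achieved at submit time with --conf spark.kryo.registrationRequired=false, which keeps the strict setting available for diagnostic runs only.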