Write a pickle file into Minio Object Storage

Currently I save a pickle file using the following:

import pickle

with open('model/tokenizer.pickle', 'wb') as handle:
    pickle.dump(t, handle, protocol=pickle.HIGHEST_PROTOCOL)

This stores the file in my local directory; later I upload it from local to Minio with:

from minio import Minio

minioClient = Minio(endpoint=endpoint, access_key=minio_access_key, secret_key=minio_secret_key)
minioClient.fput_object(bucket_name='model', object_name='tokenizer.pickle', file_path='model/tokenizer.pickle')

How can I skip the local write and save the file directly to Minio?

You can first convert your object to bytes with bytes_file = pickle.dumps(t) and then wrap it in io.BytesIO(bytes_file):

client.put_object(
    bucket_name=bucket_name,
    object_name=object_name,
    data=io.BytesIO(bytes_file),
    length=len(bytes_file)
)

Then, to load it back:

pickle.loads(client.get_object(bucket_name=bucket_name,
                               object_name=path_file).read())
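
For completeness, here is a minimal self-contained sketch of that round trip with the minio client. The endpoint, credentials, the secure=False flag, and the sample object are placeholder assumptions for a locally hosted Minio; it also assumes the 'model' bucket already exists:

import io
import pickle

from minio import Minio

# Placeholder endpoint and credentials; secure=False assumes plain-HTTP local Minio
client = Minio('localhost:9000', access_key='minio_access_key',
               secret_key='minio_secret_key', secure=False)

t = {'hello': 'world'}  # stand-in for the tokenizer from the question
bytes_file = pickle.dumps(t)

# Upload the pickled bytes without touching the local filesystem
client.put_object(
    bucket_name='model',
    object_name='tokenizer.pickle',
    data=io.BytesIO(bytes_file),
    length=len(bytes_file)
)

# Download and unpickle; close the response to release the connection
response = client.get_object(bucket_name='model', object_name='tokenizer.pickle')
try:
    t_loaded = pickle.loads(response.read())
finally:
    response.close()
    response.release_conn()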

The top answer has the right idea, but it doesn't actually run, because the arguments in the put_object method are invalid. Also, since the OP wants to write the file to Minio (hosted internally), you have to specify the endpoint_url.

Below is some end-to-end sample code that should work. Replace endpoint_url with whatever public IP is hosting the ec2; I use localhost as a simple example.

import boto3
import io
import numpy as np
import pandas as pd
import pickle


ACCESS_KEY = 'BLARG'
SECRET_ACCESS_KEY = 'KWARG'

# Sample dataframe to pickle
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list('ABCD'))
bytes_file = pickle.dumps(df)

bucket_name = 'mlflow-minio'
object_name = 'df.pkl'

s3client = boto3.client(
    's3',
    endpoint_url='http://localhost:9000/',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_ACCESS_KEY
)

# Place the pickled bytes in the Minio bucket
s3client.put_object(
    Bucket=bucket_name,
    Key=object_name,
    Body=io.BytesIO(bytes_file)
)

# Now load the pickled file back
response = s3client.get_object(Bucket=bucket_name, Key=object_name)

body = response['Body'].read()
data = pickle.loads(body)

# Show sample records
print(data.head())
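
As a side note, boto3's put_object also accepts raw bytes for Body, so the io.BytesIO wrapper is optional here:

s3client.put_object(Bucket=bucket_name, Key=object_name, Body=bytes_file)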