AWS Sagemaker - df.to_csv error write() argument 1 must be unicode, not str

I am trying to save a file from a SageMaker instance to an S3 bucket. The line below throws an error:

df.to_csv("s3://informatri/Drug_Data_Cleaned.csv"), index = False)
error - 
TypeErrorTraceback (most recent call last)
<ipython-input-28-d33896172c11> in <module>()
      1 
----> 2 a.to_csv("s3://informatri/{}".format('Drug_Data_Cleaned.csv'), index = False)

/home/ec2-user/anaconda3/envs/amazonei_mxnet_p27/lib/python2.7/site-packages/pandas/core/generic.pyc in to_csv(self, path_or_buf, sep, na_rep, float_format, columns, header, index, index_label, mode, encoding, compression, quoting, quotechar, line_terminator, chunksize, tupleize_cols, date_format, doublequote, escapechar, decimal)
   3018                                  doublequote=doublequote,
   3019                                  escapechar=escapechar, decimal=decimal)
-> 3020         formatter.save()
   3021 
   3022         if path_or_buf is None:

/home/ec2-user/anaconda3/envs/amazonei_mxnet_p27/lib/python2.7/site-packages/pandas/io/formats/csvs.pyc in save(self)
    170                 self.writer = UnicodeWriter(f, **writer_kwargs)
    171 
--> 172             self._save()
    173 
    174         finally:

/home/ec2-user/anaconda3/envs/amazonei_mxnet_p27/lib/python2.7/site-packages/pandas/io/formats/csvs.pyc in _save(self)
    272     def _save(self):
    273 
--> 274         self._save_header()
    275 
    276         nrows = len(self.data_index)

/home/ec2-user/anaconda3/envs/amazonei_mxnet_p27/lib/python2.7/site-packages/pandas/io/formats/csvs.pyc in _save_header(self)
    240         if not has_mi_columns or has_aliases:
    241             encoded_labels += list(write_cols)
--> 242             writer.writerow(encoded_labels)
    243         else:
    244             # write out the mi

TypeError: write() argument 1 must be unicode, not str

I tried the following:

df.to_csv("s3://informatri/Drug_Data_Cleaned.csv"), index = False, encoding = 'utf-8', sep = '\t')

I still get the same error. However, if I just do:

df.to_csv("Drug_Data_Cleaned.csv"), index = False) 

it saves locally without any problem, so the DataFrame, the filename, etc. are not the issue. It must be something about writing to the S3 bucket. I have used a similar approach to save to S3 buckets many times in the past and it worked perfectly, so I am wondering why it is failing now.

I solved the problem.

The mistake was that the SageMaker ipynb notebook had been opened with a Python 2.7 based kernel (conda_python2.7 or similar; the traceback shows the amazonei_mxnet_p27 environment). Under Python 2, str and unicode are distinct types, and the file handle pandas opens for the s3:// path expects unicode, which is exactly what the write() error complains about; in Python 3 all strings are unicode, so the problem disappears. Simply re-running the script under the conda_python3 kernel fixed it and everything works :)
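
As a side note, even on the conda_python3 kernel, pandas typically relies on the s3fs package to open s3:// paths. An alternative that only needs boto3 is to serialize the DataFrame into an in-memory buffer and upload it yourself. The following is a minimal sketch under that assumption; the bucket and key names are taken from the question, the sample DataFrame is a placeholder, and the notebook's IAM role is assumed to have write access to the bucket:

import io

import boto3
import pandas as pd

# Placeholder DataFrame standing in for the cleaned drug data from the question.
df = pd.DataFrame({"drug": ["a", "b"], "score": [1, 2]})

# Write the CSV into an in-memory text buffer instead of an s3:// path.
csv_buffer = io.StringIO()
df.to_csv(csv_buffer, index=False)

# Upload the buffer's contents as an S3 object (bucket/key from the question).
s3 = boto3.client("s3")
s3.put_object(Bucket="informatri", Key="Drug_Data_Cleaned.csv", Body=csv_buffer.getvalue())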