Decompressing an S3 File in a Stream using C#
I'm trying to read a .zip file from S3 into a stream in C# and write the entries back to the original folder in S3. I've looked at countless SO questions, watched videos, etc., trying to do this, but I seem to be missing something. I'm further along than I was originally, but I'm still stuck. (I really wish Amazon would implement a decompress method, since this seems to come up a lot, but no such luck yet.) Here's the code I have so far:
private async Task<string> DecompressFile(string bucketName, string keystring)
{
    AmazonS3Client client = new AmazonS3Client();
    Stream fileStream = new MemoryStream();
    string sourceDir = keystring.Split('/')[0];
    GetObjectRequest request = new GetObjectRequest { BucketName = bucketName, Key = keystring };
    try
    {
        using (var response = await client.GetObjectAsync(request))
        using (var arch = new ZipArchive(response.ResponseStream))
        {
            foreach (ZipArchiveEntry entry in arch.Entries)
            {
                fileStream = entry.Open();
                string newFile = sourceDir + "/" + entry.FullName;
                using (Amazon.S3.Transfer.TransferUtility tranute = new Amazon.S3.Transfer.TransferUtility(client))
                {
                    var upld = new Amazon.S3.Transfer.TransferUtilityUploadRequest();
                    upld.InputStream = fileStream;
                    upld.Key = newFile;
                    upld.BucketName = bucketName;
                    await tranute.UploadAsync(upld);
                }
            }
        }
        return $"Decompression complete for {keystring}...";
    }
    catch (Exception e)
    {
        ctxt.Logger.LogInformation($"Error decompressing file {keystring} from bucket {bucketName}. Please check the file and try again.");
        ctxt.Logger.LogInformation(e.Message);
        ctxt.Logger.LogInformation(e.StackTrace);
        throw;
    }
}
The error I now keep hitting happens during the write at await tranute.UploadAsync(upld). The error I get is:
$exception {"This operation is not supported."} System.NotSupportedException
Here are the exception details:
System.NotSupportedException
HResult=0x80131515
Message=This operation is not supported.
Source=System.IO.Compression
StackTrace:
at System.IO.Compression.DeflateStream.get_Length()
at Amazon.S3.Transfer.TransferUtilityUploadRequest.get_ContentLength()
at Amazon.S3.Transfer.TransferUtility.IsMultipartUpload(TransferUtilityUploadRequest request)
at Amazon.S3.Transfer.TransferUtility.GetUploadCommand(TransferUtilityUploadRequest request, SemaphoreSlim asyncThrottler)
at Amazon.S3.Transfer.TransferUtility.UploadAsync(TransferUtilityUploadRequest request, CancellationToken cancellationToken)
at File_Ingestion.Function.<DecompressFile>d__13.MoveNext() in File-Ingestion\Function.cs:line 136
This exception was originally thrown at this call stack:
[External Code]
File_Ingestion.Function.DecompressFile(string, string) in Function.cs
Any help would be greatly appreciated. Thanks!
I think the problem is that AWS needs to know the length of the file before it can upload it, but the stream returned by ZipArchiveEntry.Open doesn't know its length in advance.
Notice how the exception is thrown when TransferUtilityUploadRequest.ContentLength tries to call DeflateStream.Length (which always throws), where the DeflateStream is ultimately what ZipArchiveEntry.Open returns.
(It's a little odd that DeflateStream doesn't report its own decompressed length. It does of course know what that length should be, but that value is only an indication which could be wrong, so it probably wants to avoid reporting a possibly-incorrect value.)
I think what you need to do is buffer the extracted file in memory before handing it to AWS. That way we can find out the uncompressed length of the stream, which will be correctly reported by MemoryStream.Length:
using var fileStream = entry.Open();

// Copy the fileStream into an in-memory MemoryStream
using var ms = new MemoryStream();
fileStream.CopyTo(ms);
ms.Position = 0;

string newFile = sourceDir + "/" + entry.FullName;
using (Amazon.S3.Transfer.TransferUtility tranute = new Amazon.S3.Transfer.TransferUtility(client))
{
    var upld = new Amazon.S3.Transfer.TransferUtilityUploadRequest();
    upld.InputStream = ms;
    upld.Key = newFile;
    upld.BucketName = bucketName;
    await tranute.UploadAsync(upld);
}
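One caveat with the MemoryStream approach: it holds the entire uncompressed entry in memory, which can be a problem for large files. A possible alternative, sketched below under that assumption, is to buffer each entry to a temporary file instead; a FileStream is seekable, and TransferUtility also accepts a file path directly, which sidesteps the non-seekable-stream problem entirely. The method name and structure here are illustrative, not part of the original code:

```csharp
// Sketch: buffer a zip entry to a temp file before uploading, for large entries.
// Assumes the same client/bucketName/sourceDir variables as the code above;
// UploadEntryViaTempFileAsync is a hypothetical helper name.
private async Task UploadEntryViaTempFileAsync(
    Amazon.S3.AmazonS3Client client, string bucketName, string sourceDir, ZipArchiveEntry entry)
{
    string tempPath = Path.GetTempFileName();
    try
    {
        // Decompress to disk; the resulting FileStream is seekable.
        using (var entryStream = entry.Open())
        using (var tempFile = File.Create(tempPath))
        {
            await entryStream.CopyToAsync(tempFile);
        }

        var transfer = new Amazon.S3.Transfer.TransferUtility(client);
        // The (filePath, bucketName, key) overload avoids stream length issues.
        await transfer.UploadAsync(tempPath, bucketName, sourceDir + "/" + entry.FullName);
    }
    finally
    {
        File.Delete(tempPath);
    }
}
```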