Azure Table Storage throws exception: Unable to read data from the transport connection
I'm running a long Azure Table Storage query that takes 6-7 hours; after about 5-6 hours, Azure Table Storage throws the exception "Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host."
"Exception : Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host., Stack Trace : at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync[T](RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext)
at Microsoft.WindowsAzure.Storage.Table.TableQuery`1.<>c__DisplayClass7.<ExecuteInternal>b__6(IContinuationToken continuationToken)
at Microsoft.WindowsAzure.Storage.Core.Util.CommonUtility.<LazyEnumerable>d__0`1.MoveNext()
at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)"
I have no clue what is causing the issue; could anybody help me figure out the reason for this error? I have already applied the following ServicePointManager settings:
ServicePointManager.DefaultConnectionLimit = 48;   // raise the per-host connection limit
ServicePointManager.Expect100Continue = false;     // skip the 100-Continue handshake on each request
ServicePointManager.UseNagleAlgorithm = false;     // disable Nagle's algorithm for small payloads
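For context, a minimal sketch (not from the original post) of how these settings are usually wired up: they only take effect for ServicePoints created after they are set, so they belong at startup, before the table client issues its first request. The connection string variable and table name below are placeholders, not values from this question.

// Assumes Microsoft.WindowsAzure.Storage; storageConnectionString and "MyTable" are placeholders.
// ... the three ServicePointManager settings above go here, before any storage request ...
CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
CloudTableClient tableClient = account.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("MyTable");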
I'm running this on an A7 instance (8 CPU cores, 56 GB RAM), and it still fails even on that high-end configuration.
I also added retry logic to the Table Storage query execution to make sure the query runs to completion, but no luck:
var DefaultRequestOptions = new TableRequestOptions
{
    RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(3), 3),
    //PayloadFormat = TablePayloadFormat.JsonNoMetadata
};

AzureTableQuery.Execute(DefaultRequestOptions).ToList();
I also checked the Network In metric: it shows 100 GB. Is there any limit on network bandwidth? Any help would be appreciated.
Thanks in advance.
For a query that takes this long, it's better to process the results one segment at a time rather than trying to download everything at once. That way, if the query fails at some point, you don't have to re-download everything. For example:
TableContinuationToken token = null;
try
{
    do
    {
        TableQuerySegment<ITableEntity> segment = AzureTableQuery.ExecuteSegmented(token);

        // Do something with segment.Results, which is this batch of results from the query.
        List<ITableEntity> results = segment.Results;

        // Save the continuation token for the next iteration.
        token = segment.ContinuationToken;
    } while (token != null);
}
catch (Exception e)
{
    // Handle exception, retry, etc.
}
That way, even if the query fails partway through, you still have the partial results, and you still have the continuation token, so you can resume the query from where it left off instead of starting over from the beginning.
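A minimal sketch of that idea, building on the loop above: each segment request gets its own small retry loop, so a transient transport failure only repeats the current segment while the saved continuation token preserves the position. The retry budget, the backoff, and the ProcessResults handler are illustrative assumptions, not from the original answer.

// Assumes Microsoft.WindowsAzure.Storage, Microsoft.WindowsAzure.Storage.Table and System.Threading.
TableContinuationToken token = null;
do
{
    TableQuerySegment<ITableEntity> segment = null;
    int attempt = 0;
    const int maxAttempts = 5; // assumed retry budget

    while (true)
    {
        try
        {
            // Only this single round trip is retried; the token keeps our position in the query.
            segment = AzureTableQuery.ExecuteSegmented(token, DefaultRequestOptions);
            break;
        }
        catch (StorageException)
        {
            attempt++;
            if (attempt >= maxAttempts)
            {
                throw; // give up on this segment after the assumed budget is exhausted
            }
            // Simple linear backoff before retrying the same segment (assumption).
            Thread.Sleep(TimeSpan.FromSeconds(3 * attempt));
        }
    }

    ProcessResults(segment.Results); // hypothetical per-batch handler
    token = segment.ContinuationToken;
} while (token != null);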
Note that table scans in general are not efficient; if your scenario is latency-sensitive, you may want to redesign the table so it can be queried more efficiently. Also, I'm not sure how you're getting 100 GB/s on the network, but it's definitely not all coming from this query; Azure Storage won't push data anywhere near that fast for a single query.
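As a hedged illustration of what such a redesign can buy (the table reference, partition key value, and row key values below are made-up examples, not from this post): a query scoped to a single partition becomes a range scan the service can answer efficiently, instead of a scan across the whole table.

// Assumes Microsoft.WindowsAzure.Storage.Table; "ordersTable" is an assumed CloudTable reference.
string filter = TableQuery.CombineFilters(
    TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "2015-06"),
    TableOperators.And,
    TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThanOrEqual, "order-001000"));

TableQuery<DynamicTableEntity> scopedQuery = new TableQuery<DynamicTableEntity>().Where(filter);

TableContinuationToken token = null;
do
{
    // Each segment stays within one partition, so the service never walks the whole table.
    TableQuerySegment<DynamicTableEntity> segment = ordersTable.ExecuteQuerySegmented(scopedQuery, token);

    foreach (DynamicTableEntity entity in segment.Results)
    {
        // Process each entity in this batch...
    }

    token = segment.ContinuationToken;
} while (token != null);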