Azure Table Query Limits with ExecuteQuerySegmentedAsync vs ExecuteQuery
What are the limits when calling ExecuteQuery()? For example, is there a limit on the number of entities or on the download size?
In other words, at what point does the method below hit a limit?
private static void ExecuteSimpleQuery(CloudTable table, string partitionKey, string startRowKey, string endRowKey)
{
    try
    {
        // Create the range query using the fluent API
        TableQuery<CustomerEntity> rangeQuery = new TableQuery<CustomerEntity>().Where(
            TableQuery.CombineFilters(
                TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, partitionKey),
                TableOperators.And,
                TableQuery.CombineFilters(
                    TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThanOrEqual, startRowKey),
                    TableOperators.And,
                    TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.LessThanOrEqual, endRowKey))));

        foreach (CustomerEntity entity in table.ExecuteQuery(rangeQuery))
        {
            Console.WriteLine("Customer: {0},{1}\t{2}\t{3}", entity.PartitionKey, entity.RowKey, entity.Email, entity.PhoneNumber);
        }
    }
    catch (StorageException e)
    {
        Console.WriteLine(e.Message);
        Console.ReadLine();
        throw;
    }
}
The method below uses ExecuteQuerySegmentedAsync with a TakeCount of 50, but how should that 50 be chosen? I assume the answer follows from my question above.
private static async Task PartitionRangeQueryAsync(CloudTable table, string partitionKey, string startRowKey, string endRowKey)
{
    try
    {
        // Create the range query using the fluent API
        TableQuery<CustomerEntity> rangeQuery = new TableQuery<CustomerEntity>().Where(
            TableQuery.CombineFilters(
                TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, partitionKey),
                TableOperators.And,
                TableQuery.CombineFilters(
                    TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThanOrEqual, startRowKey),
                    TableOperators.And,
                    TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.LessThanOrEqual, endRowKey))));

        // Request 50 results at a time from the server.
        TableContinuationToken token = null;
        rangeQuery.TakeCount = 50;
        int segmentNumber = 0;
        do
        {
            // Execute the query, passing in the continuation token.
            // The first time this method is called, the continuation token is null. If there are more results, the call
            // populates the continuation token for use in the next call.
            TableQuerySegment<CustomerEntity> segment = await table.ExecuteQuerySegmentedAsync(rangeQuery, token);

            // Indicate which segment is being displayed
            if (segment.Results.Count > 0)
            {
                segmentNumber++;
                Console.WriteLine();
                Console.WriteLine("Segment {0}", segmentNumber);
            }

            // Save the continuation token for the next call to ExecuteQuerySegmentedAsync
            token = segment.ContinuationToken;

            // Write out the properties for each entity returned.
            foreach (CustomerEntity entity in segment)
            {
                Console.WriteLine("\t Customer: {0},{1}\t{2}\t{3}", entity.PartitionKey, entity.RowKey, entity.Email, entity.PhoneNumber);
            }

            Console.WriteLine();
        }
        while (token != null);
    }
    catch (StorageException e)
    {
        Console.WriteLine(e.Message);
        Console.ReadLine();
        throw;
    }
}
The examples come from the following link:
https://github.com/Azure-Samples/storage-table-dotnet-getting-started
For ExecuteQuerySegmentedAsync, the limit is 1000. This comes from a REST API limitation: a single request to the Table service can return at most 1000 entities (reference: https://docs.microsoft.com/en-us/rest/api/storageservices/query-timeout-and-pagination).

The ExecuteQuery method will try to return all entities matching the query. Internally, it fetches at most 1000 entities per iteration, and if the response from the Table service contains a continuation token, it fetches the next set of entities.
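To make that concrete, here is a minimal sketch of what ExecuteQuery effectively does, expressed in terms of ExecuteQuerySegmentedAsync. The FetchAllAsync helper name is hypothetical, and the method assumes the same CloudTable, TableQuery and CustomerEntity types (and usings) as the sample above.

private static async Task<List<CustomerEntity>> FetchAllAsync(CloudTable table, TableQuery<CustomerEntity> query)
{
    var results = new List<CustomerEntity>();
    TableContinuationToken token = null;
    do
    {
        // Each call returns at most 1000 entities (or TakeCount, if it is smaller).
        TableQuerySegment<CustomerEntity> segment = await table.ExecuteQuerySegmentedAsync(query, token);
        results.AddRange(segment.Results);

        // A non-null continuation token means the service has more matching entities.
        token = segment.ContinuationToken;
    }
    while (token != null);
    return results;
}

The key difference is that ExecuteQuery drives this loop for you, while the segmented call hands control back to your code after every page.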
Update

If ExecuteQuery performs pagination automatically, it seems easier to use than ExecuteQuerySegmentedAsync. Why must I use ExecuteQuerySegmentedAsync? And what about the download size? Is it 1000 entities regardless of their size?
With ExecuteQuery, you cannot break out of the loop, which becomes a problem when the table contains a large number of entities. With ExecuteQuerySegmentedAsync you have that flexibility. For example, suppose you want to download all entities from a very large table and save them locally. With ExecuteQuerySegmentedAsync, you can save the entities in separate files as each segment arrives.

Regarding your comment about 1000 entities regardless of their size: yes, that is correct. Keep in mind that each entity can be at most 1 MB in size.
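As a sketch of that flexibility (a hypothetical ExportInSegmentsAsync helper, not part of the linked sample; it assumes System.IO and System.Linq plus the same CustomerEntity type as above), the method below writes each segment to its own file and stops early once an arbitrary local cap is reached, which the self-paginating ExecuteQuery does not let you do between pages:

private static async Task ExportInSegmentsAsync(CloudTable table, TableQuery<CustomerEntity> query)
{
    TableContinuationToken token = null;
    int segmentNumber = 0;
    int totalEntities = 0;
    do
    {
        TableQuerySegment<CustomerEntity> segment = await table.ExecuteQuerySegmentedAsync(query, token);
        token = segment.ContinuationToken;
        segmentNumber++;

        // Persist this segment to its own file before fetching the next one.
        var lines = segment.Results.Select(e => string.Join(",", e.PartitionKey, e.RowKey, e.Email, e.PhoneNumber));
        File.WriteAllLines($"segment-{segmentNumber}.csv", lines);

        totalEntities += segment.Results.Count;

        // With the segmented API you can simply stop between pages;
        // ExecuteQuery pages through everything on your behalf.
        if (totalEntities >= 10000)  // arbitrary local cap for illustration
        {
            break;
        }
    }
    while (token != null);
}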