AFNetworking concurrent HTTP requests
I need to download roughly 200 files from an API. Each of them is only about 15KB.
My current code looks like this:
NSOperationQueue *opQueue = [[NSOperationQueue alloc] init];
[opQueue setMaxConcurrentOperationCount:8];

NSString *baseUrl = @"https://someapi.com/somejson.json?page=";

for (int n = 0; n <= numberOfPages; n++) {
    @autoreleasepool {
        NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:[NSString stringWithFormat:@"%@%d", baseUrl, n]]
                                                  cachePolicy:NSURLRequestReloadIgnoringCacheData
                                              timeoutInterval:20];

        AFHTTPRequestOperation *operation = [[AFHTTPRequestOperation alloc] initWithRequest:request];
        operation.responseSerializer = [AFJSONResponseSerializer serializer];

        [operation setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject) {
            NSLog(@"Done");
        } failure:^(AFHTTPRequestOperation *operation, NSError *error) {
            NSLog(@"Error");
        }];

        [opQueue addOperation:operation];
    }
}
This works fine, but I am curious whether there is a faster way to do it. Right now it takes about 20 seconds to download all the files. Since they are all so small, I assume most of the time is spent waiting for the server to respond. Is there a better way of queueing the requests so that all connections are opened at the "same" time, or is this simply a hard limit?
I have already tried setting
[opQueue setMaxConcurrentOperationCount:8];
to various different values, as well as not setting it at all, but that does not seem to noticeably improve performance.
Here is the Charles log for one of the connections. The others are nearly identical, but I can post more if needed.
URL https://<hidden>&page=0
Status Complete
Response Code 200 OK
Protocol HTTP/1.1
Method GET
Kept Alive No
Content-Type application/json; charset=utf-8
Client Address /127.0.0.1
Remote Address <hidden>
Timing
Request Start Time 24.02.15 09:49:58
Request End Time 24.02.15 09:49:58
Response Start Time 24.02.15 09:49:59
Response End Time 24.02.15 09:49:59
Duration 1.18 sec
DNS 1 ms
Connect 42 ms
SSL Handshake 127 ms
Request 0 ms
Response 1 ms
Latency 974 ms
Speed 8,36 KB/s
Response Speed 9.626,95 KB/s
Size
Request Header 283 bytes
Response Header 552 bytes
Request -
Response 9,09 KB (9306 bytes)
Total 9,90 KB (10141 bytes)
Request Compression -
Response Compression 91,8% (gzip)
It is probably better not to set [opQueue setMaxConcurrentOperationCount:8]; at all. With that line you only ever have 8 operations starting at the same time; without it, the queue can run more of them concurrently.
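As a minimal sketch of that suggestion: leaving the property at its default, NSOperationQueueDefaultMaxConcurrentOperationCount, lets the queue choose the limit dynamically based on system conditions.

// Sketch: instead of hard-coding 8, leave the limit at the system default.
NSOperationQueue *opQueue = [[NSOperationQueue alloc] init];
// Setting this explicitly is equivalent to not calling setMaxConcurrentOperationCount: at all;
// the queue then decides how many operations to run at once.
opQueue.maxConcurrentOperationCount = NSOperationQueueDefaultMaxConcurrentOperationCount;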
It looks like latency, the time spent waiting for the response to each request, is the main contributor to the overall duration. One possible cause could be a proxied network. Also note that it is not only the latency: connect time and the SSL handshake also add to the Duration of 1.18 sec.

AFHTTPRequestOperation uses plain NSURLConnection under the hood. I doubt there is much you can do on the client to improve the server's response time. On the server side, however, there is quite a bit that can be done:
If possible, pack the files into a single archive so they can be fetched with one request.
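As a client-side illustration only: if the API offered such a bundled endpoint, the roughly 200 requests would collapse into one. The URL, file name, and archive format below are purely hypothetical; nothing in the question suggests this endpoint exists.

// Hypothetical: assumes the server exposed one endpoint that returns all pages as a single archive.
NSString *archivePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"pages.zip"];
NSURLRequest *bundleRequest = [NSURLRequest requestWithURL:[NSURL URLWithString:@"https://someapi.com/all-pages.zip"]
                                                cachePolicy:NSURLRequestReloadIgnoringCacheData
                                            timeoutInterval:60];

AFHTTPRequestOperation *bundleOperation = [[AFHTTPRequestOperation alloc] initWithRequest:bundleRequest];
// Stream the archive straight to disk instead of parsing it as JSON.
bundleOperation.outputStream = [NSOutputStream outputStreamToFileAtPath:archivePath append:NO];

[bundleOperation setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject) {
    NSLog(@"Archive downloaded to %@", archivePath);
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
    NSLog(@"Error: %@", error);
}];

[opQueue addOperation:bundleOperation];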
The response headers you posted show that the server has disabled support for persistent HTTP connections:
Kept Alive No
Enabling keep-alive is probably the single best thing you can do. Disabling persistent connections is even worse for HTTPS than for plain HTTP:
Defeating HTTP/1.1’s default Keep-Alive behavior is a bad practice for regular HTTP connections but it’s far worse for HTTPS connections because the initial setup costs of a HTTPS connection are far higher than a regular HTTP connection. Not only does the browser pay the performance penalty of setting up a new TCP/IP connection, including the handshake and initial congestion window sizing, but the request’s progress is also penalized by the time required to complete the HTTPS handshake used to secure the connection.
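As a rough client-side check (assuming the server sends a Connection header at all), you could log the header on one of the responses to confirm that connections really are being closed after every request:

// Sketch: a "Connection: close" header means the server tears down the
// connection after each request, i.e. keep-alive is disabled.
[operation setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject) {
    NSLog(@"Connection: %@", operation.response.allHeaderFields[@"Connection"]);
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
    NSLog(@"Error: %@", error);
}];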
Finally, there is HTTP pipelining. Make sure to check whether your server supports it, then enable it on your request:
NSMutableURLRequest *request = ...
request.HTTPShouldUsePipelining = YES;
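Applied to the loop from the question, that would look roughly like this (a sketch only; the rest of the loop stays the same, and the flag only helps if the server actually supports pipelining):

// Sketch: the same request as in the question, but mutable and with pipelining enabled.
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:[NSString stringWithFormat:@"%@%d", baseUrl, n]]
                                                        cachePolicy:NSURLRequestReloadIgnoringCacheData
                                                    timeoutInterval:20];
request.HTTPShouldUsePipelining = YES; // has no effect unless the server supports pipelining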