How to keep an Android grpc client with server streaming connection alive?
I have a grpc-js server and a Kotlin Android client that makes a server-streaming call. This is the GRPCService class.
class GRPCService {
    private val mChannel = ManagedChannelBuilder
        .forAddress(GRPC_HOST_ADDRESS, GRPC_HOST_PORT)
        .usePlaintext()
        .keepAliveTime(10, TimeUnit.SECONDS)
        .keepAliveWithoutCalls(true)
        .build()

    val asyncStub: ResponderServiceGrpc.ResponderServiceStub =
        ResponderServiceGrpc.newStub(mChannel)
}
The call is made from a foreground service.
override fun onCreate() {
    super.onCreate()
    ...
    startForeground(MyNotificationBuilder.SERVICE_NOTIFICATION_ID, notificationBuilder.getServiceNotification())
}

override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
    val userId = sharedPreferencesManager.getInt(SharedPreferencesManager.USER_ID)

    val taskRequest = Responder.TaskRequest.newBuilder()
        .setUserId(userId)
        .build()

    grpcService.asyncStub.getTasks(taskRequest, object :
        StreamObserver<Responder.TaskResponse> {

        override fun onCompleted() {
            Log.d("grpc Tasks", "Completed")
        }

        override fun onError(t: Throwable?) {
            Log.d("grpc error cause", t?.cause.toString())
            t?.cause?.printStackTrace()
            Log.d("grpc error", "AFTER CAUSE")
            t!!.printStackTrace()
        }

        override fun onNext(value: Responder.TaskResponse?) {
            if (value != null) {
                when (value.command) {
                    ...
                }
            }
        }
    })

    return super.onStartCommand(intent, flags, startId)
}
The connection opens and stays open for about a minute with no communication, then fails with the following error.
D/grpc error cause: null
D/grpc error: AFTER CAUSE
io.grpc.StatusRuntimeException: INTERNAL: Internal error
io.grpc.Status.asRuntimeException(Status.java:533)
io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:460)
io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:426)
io.grpc.internal.ClientCallImpl.access0(ClientCallImpl.java:66)
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:689)
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access0(ClientCallImpl.java:577)
io.grpc.internal.ClientCallImpl$ClientStreamListenerImplStreamClosed.runInternal(ClientCallImpl.java:751)
io.grpc.internal.ClientCallImpl$ClientStreamListenerImplStreamClosed.runInContext(ClientCallImpl.java:740)
io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
The grpc-js server is created with the following options.
var server = new grpc.Server({
    "grpc.http2.min_ping_interval_without_data_ms": 10000,
    "grpc.keepalive_permit_without_calls": true,
    "grpc.http2.min_time_between_pings_ms": 10000,
    "grpc.keepalive_time_ms": 10000,
    "grpc.http2.max_pings_without_data": 0,
    'grpc.http2.min_ping_interval_without_data_ms': 5000
});
I also never receive a "too many pings" error.
I have noticed that if there is periodic communication over this connection (for example, the server pings the client with a small amount of data every 30 seconds or so), then I get no error and the connection stays open for as long as the pings continue (tested for 2 days).
How do I keep the connection open without periodically pinging the client?
Have you tried the ManagedChannelBuilder.keepAliveTime setting (https://github.com/grpc/grpc-java/blob/master/api/src/main/java/io/grpc/ManagedChannelBuilder.java#L357)? I assume it would keep working in the middle of a server-streaming call.
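For reference, a minimal sketch of that setting with illustrative values (not recommendations); keepAliveTimeout is the companion knob that bounds how long the client waits for the ping ack before declaring the connection dead:

import io.grpc.ManagedChannelBuilder
import java.util.concurrent.TimeUnit

// Sketch only: GRPC_HOST_ADDRESS and GRPC_HOST_PORT are the same constants used above.
val channel = ManagedChannelBuilder
    .forAddress(GRPC_HOST_ADDRESS, GRPC_HOST_PORT)
    .usePlaintext()
    // Send an HTTP/2 keepalive ping after this much transport inactivity.
    .keepAliveTime(30, TimeUnit.SECONDS)
    // Drop the connection if the ping ack does not arrive within this window.
    .keepAliveTimeout(10, TimeUnit.SECONDS)
    .build()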
The managed channel has a property called keepAliveWithoutCalls, as seen here, and its default value is false. If it is not set to true, no keepalive happens while there is no active call. You need to set it like this:
private val mChannel = ManagedChannelBuilder
    .forAddress(GRPC_HOST_ADDRESS, GRPC_HOST_PORT)
    .usePlaintext()
    .keepAliveTime(30, TimeUnit.SECONDS)
    .keepAliveWithoutCalls(true)
    .build()
You may also have to configure some additional settings on the server to keep the connection open without any data passing over it. You may get an error message on the server saying "too many pings". This happens because gRPC requires some additional settings. I am not sure how to achieve this with a JS server, but it shouldn't be too hard. The settings are listed below, with a grpc-java sketch of the equivalents after the list for comparison:
GRPC_ARG_HTTP2_MIN_RECV_PING_INTERVAL_WITHOUT_DATA_MS
Minimum allowed time between a server receiving successive ping frames without sending any data/header/window_update frame.
And this:
GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS
Minimum time between sending successive ping frames without receiving any data/header/window_update frame, Int valued, milliseconds.
And this:
GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS
Is it permissible to send keepalive pings without any outstanding streams.
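For comparison, this is roughly how the equivalent server-side permissions look in grpc-java (a sketch under assumptions: ResponderServiceImpl is a hypothetical placeholder for your service implementation, and the values are illustrative):

import io.grpc.netty.NettyServerBuilder
import java.util.concurrent.TimeUnit

// Sketch of the grpc-java equivalents of the settings above.
val server = NettyServerBuilder
    .forPort(GRPC_HOST_PORT)
    // Most aggressive client keepalive interval the server will permit
    // (corresponds to GRPC_ARG_HTTP2_MIN_RECV_PING_INTERVAL_WITHOUT_DATA_MS).
    .permitKeepAliveTime(10, TimeUnit.SECONDS)
    // Allow keepalive pings even when no RPC is in flight
    // (corresponds to GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS).
    .permitKeepAliveWithoutCalls(true)
    .addService(ResponderServiceImpl()) // hypothetical service implementation
    .build()
    .start()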
There is a Keepalive User Guide for gRPC which I suggest you read through to understand how gRPC is meant to keep connections open. This is the core standard that all server and client implementations should follow, but I have noticed this is not always the case. You can have a look at a previous but similar question I asked a while back here.