About gRPC Capacity/Adjustment
I'm running a microservice that uses gRPC, and on the client side I receive many onError() callbacks from the server. t.printStackTrace() shows:
io.grpc.StatusRuntimeException: UNKNOWN
at io.grpc.Status.asRuntimeException(Status.java:526)
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:385)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
at io.grpc.internal.CensusTracingModule$TracingClientInterceptor.onClose(CensusTracingModule.java:339)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:443)
at io.grpc.internal.ClientCallImpl.access0(ClientCallImpl.java:63)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:525)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access0(ClientCallImpl.java:446)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImplStreamClosed.runInContext(ClientCallImpl.java:557)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:107)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
If I shut down all connections except the test client, the exceptions disappear.
So I'd like to know whether there is any server-side limit on the maximum number of instances of:
io.grpc.ManagedChannel
io.grpc.stub.StreamObserver
If so, how can I adjust/enlarge it?
Any help would be appreciated.
The UNKNOWN status generally means the server failed in some way. You may want to check the server logs.
There is no practical limit to the number of connections and RPCs a server can have. With too many connections it is possible to run out of file descriptors. You could hit an RPC limit, which would cause calls to queue up waiting to be sent. And, as with anything, you can run into limits on memory usage and the like.
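To illustrate the server-side knobs being discussed, here is a minimal sketch using grpc-java's NettyServerBuilder. The port, pool size, and concurrency cap are placeholder values (not from this thread), and the service registration is left as a comment since the actual service class is not shown here:

```java
import java.util.concurrent.Executors;

import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;

public class CapacityDemo {
    public static void main(String[] args) throws Exception {
        Server server = NettyServerBuilder.forPort(50051)
                // Application callbacks (onNext/onError/onCompleted) run on this
                // executor; an undersized pool makes handlers queue under load.
                .executor(Executors.newFixedThreadPool(32))
                // Caps concurrent RPCs per HTTP/2 connection; excess calls
                // queue on the client side until a slot frees up.
                .maxConcurrentCallsPerConnection(100)
                // .addService(new MyServiceImpl())  // hypothetical service impl
                .build()
                .start();
        server.awaitTermination();
    }
}
```

Note these settings shape queuing behavior rather than impose a hard cap on ManagedChannel or StreamObserver instances; as the answer says, the practical limits are file descriptors and memory.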
@Eric is correct.
Here I'll just show the problem I ran into, in the hope that it helps others who hit the same issue.
On the server side I found many exceptions like:
Apr 19, 2018 5:06:07 PM io.grpc.internal.SerializingExecutor run
SEVERE: Exception while executing runnable io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListenerMessagesAvailable@752265fc
redis.clients.jedis.exceptions.JedisException: Could not return the resource to the pool
at redis.clients.jedis.JedisPool.returnResource(JedisPool.java:256)
at redis.clients.jedis.JedisPool.returnResource(JedisPool.java:16)
at redis.clients.jedis.Jedis.close(Jedis.java:3409)
...
at io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:251)
at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:251)
at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListenerMessagesAvailable.runInContext(ServerImpl.java:592)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:107)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out
at redis.clients.util.RedisInputStream.ensureFill(RedisInputStream.java:202)
at redis.clients.util.RedisInputStream.readByte(RedisInputStream.java:40)
at redis.clients.jedis.Protocol.process(Protocol.java:151)
at redis.clients.jedis.Protocol.read(Protocol.java:215)
at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:340)
at redis.clients.jedis.Connection.getAll(Connection.java:310)
at redis.clients.jedis.Connection.getAll(Connection.java:302)
at redis.clients.jedis.Pipeline.sync(Pipeline.java:99)
at redis.clients.jedis.Pipeline.clear(Pipeline.java:85)
at redis.clients.jedis.BinaryJedis.resetState(BinaryJedis.java:1781)
at redis.clients.jedis.JedisPool.returnResource(JedisPool.java:252)
... 13 more
Caused by: java.net.SocketTimeoutException: Read timed out
Since redis appears in the trace, I looked at the related code and found that the JedisPoolConfig probably didn't allow enough pooled connections, and that the default timeout was probably too short.
So I enlarged both, and the problem went away.
In other words, the server didn't have enough resources to handle client requests within the expected time. That made the gRPC server-side handlers fail, which in turn triggered the client's onError() callbacks.
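The fix above can be sketched as follows. This is illustrative only: the pool sizes, host, port, and timeout are placeholder values, not the ones from my deployment, and running it requires a reachable Redis server:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class JedisPoolTuning {
    public static void main(String[] args) {
        JedisPoolConfig config = new JedisPoolConfig();
        // Allow more simultaneous connections so concurrent gRPC handlers
        // don't starve while waiting for a pooled Jedis instance.
        config.setMaxTotal(128);        // default is 8
        config.setMaxIdle(32);
        // Fail fast instead of blocking indefinitely when the pool is exhausted.
        config.setMaxWaitMillis(2000);

        // The last argument is the socket timeout in milliseconds; raising it
        // avoids the "Read timed out" seen in the stack trace under load.
        try (JedisPool pool = new JedisPool(config, "localhost", 6379, 10_000)) {
            try (Jedis jedis = pool.getResource()) {
                jedis.ping();
            }
        }
    }
}
```

Returning each Jedis instance promptly (here via try-with-resources) matters as much as the pool size; a leaked connection permanently shrinks the effective pool.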
Thanks @Eric.