How to properly use TLS 1.3 cipher suites in HiveMQ? (Getting a SSL exception: closing inbound before receiving peer's close_notify)

I want to communicate securely with HiveMQ using TLS 1.3. I have configured the HiveMQ Community Edition server's config.xml file to specify the TLS 1.3 cipher suites and pointed it at a keystore containing a key pair with a 256-bit elliptic-curve key (EC, NOT DSA) on the curve secp256r1 (one of the few curves supported by TLS 1.3). A 256-bit key pair is appropriate for the TLS 1.3 cipher suite I want to use: TLS_AES_128_GCM_SHA256. I also generated a 384-bit elliptic-curve key for TLS_AES_256_GCM_SHA384, but I am focusing only on TLS_AES_128_GCM_SHA256, since if I get the AES 128 suite working the AES 256 suite will work as well. I have generated certificates for both key pairs and placed them in the cacerts file in the JAVA_HOME folder. I still get a javax.net.ssl.SSLHandshakeException:

javax.net.ssl.SSLException: closing inbound before receiving peer's close_notify

I have tried the TLS 1.2 cipher suite TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (with the appropriate certificates) and it works without any problems, so the issue appears to be specific to TLS 1.3. My project runs on Java 12.0.1. I noticed that although the HiveMQ server recognizes TLSv1.3, it enables the TLSv1.2 protocol and does not say that it enabled any TLSv1.3 cipher suites. Do I need to manually enable the TLSv1.3 cipher suites in HiveMQ somehow, since they do not appear to be turned on even when the protocol is specified? I have left a copy of the server console output below, along with the Java code and the exception.
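
To rule out a problem with the key material itself, here is a minimal sketch (the keystore path, password, and alias are hypothetical placeholders) that loads the keystore and prints the algorithm and field size of the certificate's public key; for a secp256r1 key it should report EC and 256 bits:

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.security.cert.X509Certificate;
    import java.security.interfaces.ECPublicKey;

    public class KeystoreCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder path, password and alias -- replace with the values from your config.xml
            KeyStore keyStore = KeyStore.getInstance("JKS");
            try (FileInputStream in = new FileInputStream("/path/to/keystore.jks")) {
                keyStore.load(in, "keystore-password".toCharArray());
            }
            X509Certificate cert = (X509Certificate) keyStore.getCertificate("hivemq");
            ECPublicKey key = (ECPublicKey) cert.getPublicKey();
            // secp256r1 keys live in a 256-bit prime field
            System.out.println("Key algorithm: " + key.getAlgorithm());
            System.out.println("Field size:    " + key.getParams().getCurve().getField().getFieldSize() + " bits");
        }
    }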


Update: I specified in the sslConfig that the client should use TLS 1.3 via the .protocols() method. I tried manually adding the cipher suite TLS_AES_128_GCM_SHA256 to the config.xml file, but this time I got an SSL exception error. The updated output and exception are below. I suspect HiveMQ is filtering out the cipher suite I am trying to use. As a test I created an SSLEngine and called .getEnabledCipherSuites() and .getSupportedCipherSuites(), which showed that the TLS 1.3 cipher suites above are supported by my JVM as well as by the TLSv1.3 protocol itself.
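
For reference, that SSLEngine check can be done with a few lines of plain JSSE code (the class name below is just for illustration):

    import java.util.Arrays;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLEngine;

    public class CipherSuiteCheck {
        public static void main(String[] args) throws Exception {
            // Ask the default provider for a TLSv1.3-capable context
            SSLContext context = SSLContext.getInstance("TLSv1.3");
            context.init(null, null, null);
            SSLEngine engine = context.createSSLEngine();

            System.out.println("Enabled protocols:       " + Arrays.toString(engine.getEnabledProtocols()));
            System.out.println("Enabled cipher suites:   " + Arrays.toString(engine.getEnabledCipherSuites()));
            System.out.println("Supported cipher suites: " + Arrays.toString(engine.getSupportedCipherSuites()));
        }
    }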

HiveMQ server console output (from the run.sh file, with DEBUG enabled in logback.xml):

2019-07-06 12:06:42,394 INFO  - Starting HiveMQ Community Edition Server
2019-07-06 12:06:42,398 INFO  - HiveMQ version: 2019.1
2019-07-06 12:06:42,398 INFO  - HiveMQ home directory: /Users/chigozieasikaburu/git/IoT-HiveMqtt-Community-Edition/build/zip/hivemq-ce-2019.1
2019-07-06 12:06:42,508 INFO  - Log Configuration was overridden by /Users/someuser/git/IoT-HiveMqtt-Community-Edition/build/zip/hivemq-ce-2019.1/conf/logback.xml
2019-07-06 12:06:42,619 DEBUG - Reading configuration file /Users/someuser/git/IoT-HiveMqtt-Community-Edition/build/zip/hivemq-ce-2019.1/conf/config.xml
2019-07-06 12:06:42,838 DEBUG - Adding TCP Listener with TLS of type TlsTcpListener on bind address 0.0.0.0 and port 8883.
2019-07-06 12:06:42,839 DEBUG - Setting retained messages enabled to true
2019-07-06 12:06:42,839 DEBUG - Setting wildcard subscriptions enabled to true
2019-07-06 12:06:42,839 DEBUG - Setting subscription identifier enabled to true
2019-07-06 12:06:42,839 DEBUG - Setting shared subscriptions enabled to true
2019-07-06 12:06:42,839 DEBUG - Setting maximum qos to EXACTLY_ONCE 
2019-07-06 12:06:42,840 DEBUG - Setting topic alias enabled to true
2019-07-06 12:06:42,840 DEBUG - Setting topic alias maximum per client to 5
2019-07-06 12:06:42,840 DEBUG - Setting the number of max queued messages  per client to 1000 entries
2019-07-06 12:06:42,841 DEBUG - Setting queued messages strategy for each client to DISCARD
2019-07-06 12:06:42,841 DEBUG - Setting the expiry interval for client sessions to 4294967295 seconds
2019-07-06 12:06:42,841 DEBUG - Setting the expiry interval for publish messages to 4294967296 seconds
2019-07-06 12:06:42,841 DEBUG - Setting the server receive maximum to 10
2019-07-06 12:06:42,841 DEBUG - Setting keep alive maximum to 65535 seconds
2019-07-06 12:06:42,841 DEBUG - Setting keep alive allow zero to true
2019-07-06 12:06:42,842 DEBUG - Setting the maximum packet size for mqtt messages 268435460 bytes
2019-07-06 12:06:42,842 DEBUG - Setting global maximum allowed connections to -1
2019-07-06 12:06:42,842 DEBUG - Setting the maximum client id length to 65535
2019-07-06 12:06:42,842 DEBUG - Setting the timeout for disconnecting idle tcp connections before a connect message was received to 10000 milliseconds
2019-07-06 12:06:42,842 DEBUG - Throttling the global incoming traffic limit 0 bytes/second
2019-07-06 12:06:42,842 DEBUG - Setting the maximum topic length to 65535
2019-07-06 12:06:42,843 DEBUG - Setting allow server assigned client identifier to true
2019-07-06 12:06:42,843 DEBUG - Setting validate UTF-8 to true
2019-07-06 12:06:42,843 DEBUG - Setting payload format validation to false
2019-07-06 12:06:42,843 DEBUG - Setting allow-problem-information to true
2019-07-06 12:06:42,843 DEBUG - Setting anonymous usage statistics enabled to false 
2019-07-06 12:06:42,845 INFO  - This HiveMQ ID is JAzWT
2019-07-06 12:06:43,237 DEBUG - Using disk-based Publish Payload Persistence
2019-07-06 12:06:43,259 DEBUG - 1024.00 MB allocated for qos 0 inflight messages
2019-07-06 12:06:45,268 DEBUG - Initializing payload reference count and queue sizes for client_queue persistence.
2019-07-06 12:06:45,690 DEBUG - Diagnostic mode is disabled
2019-07-06 12:06:46,276 DEBUG - Throttling incoming traffic to 0 B/s
2019-07-06 12:06:46,277 DEBUG - Throttling outgoing traffic to 0 B/s
2019-07-06 12:06:46,321 DEBUG - Set extension executor thread pool size to 4
2019-07-06 12:06:46,321 DEBUG - Set extension executor thread pool keep-alive to 30 seconds
2019-07-06 12:06:46,336 DEBUG - Building initial topic tree
2019-07-06 12:06:46,395 DEBUG - Started JMX Metrics Reporting.
2019-07-06 12:06:46,491 INFO  - Starting HiveMQ extension system.
2019-07-06 12:06:46,536 DEBUG - Starting extension with id "hivemq-file-rbac-extension" at /Users/someuser/git/IoT-HiveMqtt-Community-Edition/build/zip/hivemq-ce-2019.1/extensions/hivemq-file-rbac-extension
2019-07-06 12:06:46,558 INFO  - Starting File RBAC extension.
2019-07-06 12:06:46,795 INFO  - Extension "File Role Based Access Control Extension" version 4.0.0 started successfully.
2019-07-06 12:06:46,818 INFO  - Enabled protocols for TCP Listener with TLS at address 0.0.0.0 and port 8883: [TLSv1.3]
2019-07-06 12:06:46,819 INFO  - Enabled cipher suites for TCP Listener with TLS at address 0.0.0.0 and port 8883: []
2019-07-06 12:06:46,823 WARN  - Unknown cipher suites for TCP Listener with TLS at address 0.0.0.0 and port 8883: [TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384]
2019-07-06 12:06:46,827 INFO  - Starting TLS TCP listener on address 0.0.0.0 and port 8883
2019-07-06 12:06:46,881 INFO  - Started TCP Listener with TLS on address 0.0.0.0 and on port 8883
2019-07-06 12:06:46,882 INFO  - Started HiveMQ in 4500ms
2019-07-06 12:10:32,396 DEBUG - SSL Handshake failed for client with IP UNKNOWN: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
2019-07-06 12:10:38,967 DEBUG - SSL Handshake failed for client with IP UNKNOWN: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
2019-07-06 12:23:29,721 DEBUG - SSL Handshake failed for client with IP UNKNOWN: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
2019-07-06 12:23:35,990 DEBUG - SSL Handshake failed for client with IP UNKNOWN: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
2019-07-06 12:24:17,436 DEBUG - SSL Handshake failed for client with IP UNKNOWN: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
2019-07-06 12:24:29,160 DEBUG - SSL Handshake failed for client with IP UNKNOWN: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)

Java code:

Mqtt5BlockingClient subscriber = Mqtt5Client.builder()
        .identifier(UUID.randomUUID().toString()) // the unique identifier of the MQTT client; a random UUID is generated here
        .serverHost("localhost")  // the host name or IP address of the MQTT server; kept as localhost for testing (localhost is the default if not specified)
        .serverPort(8883)  // specifies the port of the server
        .addConnectedListener(context -> ClientConnectionRetreiver.printConnected("Subscriber1"))        // prints a message when the client connects
        .addDisconnectedListener(context -> ClientConnectionRetreiver.printDisconnected("Subscriber1"))  // prints a message when the client disconnects
        .sslConfig()
            .cipherSuites(Arrays.asList("TLS_AES_128_GCM_SHA256"))
            .applySslConfig()
        .buildBlocking();  // builds the blocking client

subscriber.connectWith() // connects the client
        .simpleAuth()
            .username("user1")
            .password("somepassword".getBytes())
            .applySimpleAuth()
        .send();

Exception (using the SSL debug option: -Djavax.net.debug=ssl):

SubThread1 is running.
javax.net.ssl|DEBUG|0F|nioEventLoopGroup-2-1|2019-07-05 15:29:47.379 EDT|SSLCipher.java:463|jdk.tls.keyLimits:  entry = AES/GCM/NoPadding KeyUpdate 2^37. AES/GCM/NOPADDING:KEYUPDATE = 137438953472
javax.net.ssl|ALL|0F|nioEventLoopGroup-2-1|2019-07-05 15:29:47.761 EDT|SSLEngineImpl.java:752|Closing outbound of SSLEngine
javax.net.ssl|ALL|0F|nioEventLoopGroup-2-1|2019-07-05 15:29:47.762 EDT|SSLEngineImpl.java:724|Closing inbound of SSLEngine
javax.net.ssl|ERROR|0F|nioEventLoopGroup-2-1|2019-07-05 15:29:47.765 EDT|TransportContext.java:312|Fatal (INTERNAL_ERROR): closing inbound before receiving peer's close_notify (
"throwable" : {
  javax.net.ssl.SSLException: closing inbound before receiving peer's close_notify
    at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:133)
    at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:307)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:263)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:254)
    at java.base/sun.security.ssl.SSLEngineImpl.closeInbound(SSLEngineImpl.java:733)
    at io.netty.handler.ssl.SslHandler.setHandshakeFailure(SslHandler.java:1565)
    at io.netty.handler.ssl.SslHandler.channelInactive(SslHandler.java:1049)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1429)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
    at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:947)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.run(AbstractChannel.java:826)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:474)
    at io.netty.util.concurrent.SingleThreadEventExecutor.run(SingleThreadEventExecutor.java:909)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:835)}

    )
    Subscriber1 disconnected.
    Exception in thread "SubThread1" com.hivemq.client.mqtt.exceptions.ConnectionClosedException: Server closed connection without DISCONNECT.
    at com.hivemq.client.internal.mqtt.MqttBlockingClient.connect(MqttBlockingClient.java:91)
    at com.hivemq.client.internal.mqtt.message.connect.MqttConnectBuilder$Send.send(MqttConnectBuilder.java:196)
    at com.main.SubThread.run(SubThread.java:90)
    at java.base/java.lang.Thread.run(Thread.java:835)

It looks like you have to set the protocol to "TLSv1.3" on both the server and the client.

Client:

    ...
    .sslConfig()
        .cipherSuites(Arrays.asList("TLS_AES_128_GCM_SHA256"))
        .protocols(Arrays.asList("TLSv1.3"))
        .applySslConfig()
    ...
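
Putting it together with the code from the question, a complete client setup along these lines might look as follows (a sketch that simply adds the .protocols() call to the builder shown above; the identifier, host, and credentials are the same placeholders used in the question):

    Mqtt5BlockingClient subscriber = Mqtt5Client.builder()
            .identifier(UUID.randomUUID().toString())
            .serverHost("localhost")
            .serverPort(8883)
            .sslConfig()
                .cipherSuites(Arrays.asList("TLS_AES_128_GCM_SHA256"))
                .protocols(Arrays.asList("TLSv1.3"))   // explicitly enable TLS 1.3 on the client side
                .applySslConfig()
            .buildBlocking();

    subscriber.connectWith()
            .simpleAuth()
                .username("user1")
                .password("somepassword".getBytes())
                .applySimpleAuth()
            .send();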

HiveMQ:

    <tls-tcp-listener>
        <tls>
            ...
            <protocols>
                <protocol>TLSv1.3</protocol>
            </protocols>
            <cipher-suites>
                <cipher-suite>TLS_AES_128_GCM_SHA256</cipher-suite>
            </cipher-suites>
            ...
        </tls>
    </tls-tcp-listener>
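
For completeness, the listener section with the keystore included might look something like this (a sketch based on the standard HiveMQ CE config layout; the path and passwords are placeholders):

    <tls-tcp-listener>
        <port>8883</port>
        <bind-address>0.0.0.0</bind-address>
        <tls>
            <keystore>
                <!-- placeholder path and passwords; use the values for your own keystore -->
                <path>/path/to/keystore.jks</path>
                <password>keystore-password</password>
                <private-key-password>private-key-password</private-key-password>
            </keystore>
            <protocols>
                <protocol>TLSv1.3</protocol>
            </protocols>
            <cipher-suites>
                <cipher-suite>TLS_AES_128_GCM_SHA256</cipher-suite>
            </cipher-suites>
        </tls>
    </tls-tcp-listener>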

This problem was due to a bug in the HiveMQ client, #27 in HiveMQ Client Edition 1.1.0, caused by incorrect SSL context handling for TLS 1.3. This issue was fixed with #70.