Nginx Proxy + NodeJS WebSocket + >17KB messages. No traffic. Who is the culprit?

Unable to increase the buffer size to avoid dropping frames

Unable to properly handle WS fragmentation


Summary

My goal:

Something very simple: have the websocket tunnel carry at least 2/3 MB of data per tunnel. I need to send directory structures, so the payload can get quite large.

The problem:

Sending a WebSocket message larger than 17KB from A to B causes a "loss of communication" or packet drop/loss: the connection/tunnel can no longer get new messages from A to B through that same tunnel, while the opposite direction, B to A, keeps working.

I have to restart the tunnel to restore its functionality.

Managing a tunnel restart whenever that threshold is reached could also be an option, but clearly I need to be able to send more than the threshold in a single message.
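
As a workaround while hunting for the culprit, the payload could be split into several smaller WebSocket messages that the receiver reassembles, so that no single message comes close to the problematic size. Below is a minimal sketch against the gorilla/websocket client shown further down; the package name, the chunking convention and the default chunk size are assumptions for illustration, not part of the original setup.

    package tunnel

    import "github.com/gorilla/websocket"

    // sendChunked splits a large payload into several independent WebSocket
    // messages of at most chunkSize bytes each, so that no single message
    // approaches the size at which the tunnel stalls. The server side has to
    // concatenate the pieces again; that convention is illustrative only.
    func sendChunked(conn *websocket.Conn, payload []byte, chunkSize int) error {
        if chunkSize <= 0 {
            chunkSize = 8 * 1024 // arbitrary default, well below the observed 17KB limit
        }
        for len(payload) > 0 {
            n := chunkSize
            if n > len(payload) {
                n = len(payload)
            }
            if err := conn.WriteMessage(websocket.TextMessage, payload[:n]); err != nil {
                return err
            }
            payload = payload[n:]
        }
        return nil
    }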

The "signal path":

GoLang app(Client) ---> :443 NGINX Proxy(Debian) ---> :8050 NodeJS WS Server

Tests:

Analysis:

Code and configuration:

Relevant part of the GoLang app:

    websocket.DefaultDialer = &websocket.Dialer{
        Proxy:            http.ProxyFromEnvironment,
        HandshakeTimeout: 45 * time.Second,
        WriteBufferSize:  1000, // also tried with 2000, 5000, 10000, 11000
    }

    c, _, err := websocket.DefaultDialer.Dial(u.String(), nil)
    if err != nil {
        log.Fatal("dial:", err)
    }
    wsConn = c

    // build a bufferChunk-byte test message made of "0"s
    bufferChunk := 1000
    bufferSample := ""
    for j := 1; j <= bufferChunk; j++ {
        bufferSample = bufferSample + "0"
    }

    // send one chunk per second and log the running total
    i := 1
    for {
        sendingBytes := i * bufferChunk
        fmt.Println(strconv.Itoa(sendingBytes) + " bytes sent")
        wsConn.WriteMessage(websocket.TextMessage, []byte(bufferSample)) // error ignored; see the checked variant below

        i++
        time.Sleep(1000 * time.Millisecond)
    }
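
Note that the test loop above discards the error returned by WriteMessage, so a failure on the client side would pass unnoticed. The sketch below (an assumed variant, not the original code) repeats the flood test but checks every write and sets a write deadline via gorilla/websocket's SetWriteDeadline; the function name and the 10-second deadline are arbitrary choices.

    package tunnel

    import (
        "log"
        "time"

        "github.com/gorilla/websocket"
    )

    // floodWithChecks repeats the same test as the loop above, but fails loudly
    // if a write ever returns an error or exceeds its deadline, which helps tell
    // a client-side failure apart from data silently lost in the proxy or server.
    func floodWithChecks(conn *websocket.Conn, sample []byte) {
        for i := 1; ; i++ {
            if err := conn.SetWriteDeadline(time.Now().Add(10 * time.Second)); err != nil {
                log.Fatalf("set write deadline: %v", err)
            }
            if err := conn.WriteMessage(websocket.TextMessage, sample); err != nil {
                log.Fatalf("write #%d (~%d bytes sent so far): %v", i, i*len(sample), err)
            }
            log.Printf("%d bytes sent", i*len(sample))
            time.Sleep(time.Second)
        }
    }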

NGINX configuration file:

upstream backend {
    server 127.0.0.1:8050;
}
server {
    server_name my.domain.com;

    large_client_header_buffers 8 32k;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_buffers 8 2m;
        proxy_buffer_size 10m;
        proxy_busy_buffers_size 10m;
        proxy_pass http://backend;
        proxy_redirect off;
        #proxy_buffering off; # same behaviour with buffering on or off

        # enables WS support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade"; # lowercase "upgrade" behaves the same
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/my.domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/my.domain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = my.domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    server_name my.domain.com;
    listen 80;
    return 404; # managed by Certbot
}

NodeJS code:


//index.js

const config = require("./config.js");
const fastify = require('fastify')();
const WsController = require("./controller");

fastify.register(require('fastify-websocket'), {
  /* these options are the same ones accepted by the native Node.js WS server */
  options: {
    maxPayload: 10 * 1024 * 1024,
    maxReceivedFrameSize: 131072,
    maxReceivedMessageSize: 10 * 1024 * 1024,
    autoAcceptConnections: false
  }
});

fastify.ready(err => {
  if (err) throw err
  console.log("Server started")

  fastify.websocketServer
    .on("connection", WsController)
})

fastify.listen(8050) // the port the NGINX upstream points at


//controller.js
module.exports = (ws, req) => {
    ws.on("message", (msg) => {
        console.log("msg received"); // logged until the tunnel "fills up" to ~17KB, then nothing more arrives
    })
}

SOLVED

After updating fastify and fastify-websocket the problem disappeared. What a shame!

I got to this solution by spinning up a new cloud instance and installing everything from scratch.

npm update.

Thanks everyone for the support.