Terminating QWebSocketServer with connected sockets

I am debugging a multi-threaded console application written in C++/Qt 5.12.1, running on Linux Mint 18.3 x64.

The application has a SIGINT handler, a QWebSocketServer, and a table of QWebSocket instances. It handles termination by calling close() on the QWebSocketServer and abort()/deleteLater() on the items in the QWebSocket table.

If a websocket client is connected to this console application, termination fails because of some still-running thread (I suppose it is an internal QWebSocket thread). If no client is connected, termination succeeds.

How can I fix this so that the application exits normally?

To shut down the socket server gracefully, we can try the following:

The most important part is to let the main thread's event loop keep running and wait for QWebSocketServer::closed(), so that the connected slot calls QCoreApplication::quit().

It can even be as simple as:

connect(webSocketServer, &QWebSocketServer::closed,
        QCoreApplication::instance(), &QCoreApplication::quit);

if we do not need a more elaborate reaction.

Having connected that signal first, proceed with pauseAccepting() to prevent further connections, then call QWebSocketServer::close().
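
A minimal sketch of that sequence, assuming a hypothetical helper beginShutdown() that runs on the main thread (for example, invoked from the SIGINT handler via a queued call):

#include <QCoreApplication>
#include <QObject>
#include <QWebSocketServer>

// Hypothetical helper; must run on the thread that owns the server.
void beginShutdown(QWebSocketServer *server)
{
    // React when the server has actually closed, then quit the event loop.
    QObject::connect(server, &QWebSocketServer::closed,
                     QCoreApplication::instance(), &QCoreApplication::quit);

    server->pauseAccepting(); // no new connections during teardown
    server->close();          // emits closed() when done
}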

If the above is sufficient, the steps below may not be needed. Try the above first, and deal with the existing and pending connections only if the problem persists. In my experience the behavior varies between platforms, and some server environments have their own peculiar websocket implementations (in your case it may just be Qt).

As long as we keep the QWebSocket instances in some container, we can try calling QWebSocket::abort() on all of them to release them immediately, as in the sketch below. This step appears to be what the question author already does.
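
A sketch of that loop; the container type QVector<QWebSocket *> and the helper name abortClients() are assumptions:

#include <QVector>
#include <QWebSocket>

// Abort every connected client held in the (assumed) container.
void abortClients(QVector<QWebSocket *> &clients)
{
    for (QWebSocket *socket : clients) {
        socket->abort();       // drop the connection immediately, no close handshake
        socket->deleteLater(); // deferred deletion once the event loop runs
    }
    clients.clear();
}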

Also try iterating the pending connections with QWebSocketServer::nextPendingConnection() and calling abort() on them; if that works as well, call deleteLater() on them too.
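
A sketch of draining the pending connections (abortPending() is a hypothetical name):

#include <QWebSocket>
#include <QWebSocketServer>

// Drain connections that were accepted but never retrieved.
void abortPending(QWebSocketServer *server)
{
    while (server->hasPendingConnections()) {
        QWebSocket *socket = server->nextPendingConnection();
        socket->abort();
        socket->deleteLater();
    }
}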

You do not need to do anything. What does "graceful exit" even mean? Once your application is asked to terminate, you should terminate it immediately, using exit(0) or a similar mechanism. That is what a "graceful exit" should be.
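
A sketch of this approach, assuming the handler does nothing but terminate (onSigint() is a hypothetical name); std::_Exit() is used here because it is async-signal-safe, unlike exit():

#include <csignal>
#include <cstdlib>

extern "C" void onSigint(int)
{
    // std::_Exit() is async-signal-safe; exit(), which runs atexit handlers
    // and static destructors, is not, so prefer _Exit() inside a handler.
    std::_Exit(0); // no stack unwinding, no buffer flushing
}

int main()
{
    std::signal(SIGINT, onSigint);
    // ... run the application ...
    return 0;
}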

Note: I am a convert. I used to think that graceful exits were a good thing. They are usually a waste of CPU resources and often indicate problems with an application's architecture.

The kj framework (part of capnproto) has a well-written rationale for why it should be so.

Quoting Kenton Varda:

KJ_NORETURN(virtual void exit()) = 0;

Indicates program completion. The program is considered successful unless error() was called. Typically this exits with _Exit(), meaning that the stack is not unwound, buffers are not flushed, etc. -- it is the responsibility of the caller to flush any buffers that matter. However, an alternate context implementation e.g. for unit testing purposes could choose to throw an exception instead.

At first this approach may sound crazy. Isn't it much better to shut down cleanly? What if you lose data? However, it turns out that if you look at each common class of program, _Exit() is almost always preferable. Let's break it down:

  • Commands: A typical program you might run from the command line is single-threaded and exits quickly and deterministically. Commands often use buffered I/O and need to flush those buffers before exit. However, most of the work performed by destructors is not flushing buffers, but rather freeing up memory, placing objects into freelists, and closing file descriptors. All of this is irrelevant if the process is about to exit anyway, and for a command that runs quickly, time wasted freeing heap space may make a real difference in the overall runtime of a script. Meanwhile, it is usually easy to determine exactly what resources need to be flushed before exit, and easy to tell if they are not being flushed (because the command fails to produce the expected output). Therefore, it is reasonably easy for commands to explicitly ensure all output is flushed before exiting, and it is probably a good idea for them to do so anyway, because write failures should be detected and handled. For commands, a good strategy is to allocate any objects that require clean destruction on the stack, and allow them to go out of scope before the command exits. Meanwhile, any resources which do not need to be cleaned up should be allocated as members of the command's main class, whose destructor normally will not be called.

  • Interactive apps: Programs that interact with the user (whether they be graphical apps with windows or console-based apps like emacs) generally exit only when the user asks them to. Such applications may store large data structures in memory which need to be synced to disk, such as documents or user preferences. However, relying on stack unwind or global destructors as the mechanism for ensuring such syncing occurs is probably wrong. First of all, it's 2013, and applications ought to be actively syncing changes to non-volatile storage the moment those changes are made. Applications can crash at any time and a crash should never lose data that is more than half a second old. Meanwhile, if a user actually does try to close an application while unsaved changes exist, the application UI should prompt the user to decide what to do. Such a UI mechanism is obviously too high level to be implemented via destructors, so KJ's use of _Exit() shouldn't make a difference here.

  • Servers: A good server is fault-tolerant, prepared for the possibility that at any time it could crash, the OS could decide to kill it off, or the machine it is running on could just die. So, using _Exit() should be no problem. In fact, servers generally never even call exit anyway; they are killed externally.

  • Batch jobs: A long-running batch job is something between a command and a server. It probably knows exactly what needs to be flushed before exiting, and it probably should be fault-tolerant.