Cancelling callbacks in Boost ASIO
I have been trying to switch my code from using one io_service per network connection to using a shared one, and I am seeing some very odd behaviour on the server sockets (the client ones seem to work fine).
To figure out what is going on, I started over and built up a simple example that lets me check my assumptions about everything that ought to happen. The first problem I hit is that io_service::run does not exit when there are no handlers left and, as far as I can tell, the handlers are not being removed from the work queue.
I have one thread that performs an async_accept followed by an async_read. There is a separate client thread (which has its own io_service). The client thread's io_service is never run, while the server's is run in yet another thread.
I am using a condition variable to wait in the server thread for the read to complete (which will never happen, as the client never writes). This times out just fine, and then I call socket.cancel(). I would expect this to remove the read handler and for run to exit, since the work queue is now empty.
I do see the read handler being invoked (with a cancellation error), but run never exits. The memory is not freed either when I tie the socket's lifetime to the handler's lifetime (by capturing a shared_ptr to the socket in the lambda).
The server is set up like this:
std::mutex mutex;
std::unique_lock<std::mutex> lock(mutex);
std::condition_variable signal;

boost::asio::io_service server_service;
boost::asio::ip::tcp::acceptor listener(server_service);
std::mutex read_mutex;
std::unique_lock<std::mutex> read_lock(read_mutex);
std::condition_variable read_done;
std::thread server([&]() {
    std::unique_lock<std::mutex> lock(mutex);
    listener.open(boost::asio::ip::tcp::v4());
    listener.set_option(boost::asio::socket_base::enable_connection_aborted(true));
    listener.bind(boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), 4567));
    listener.listen();
    std::shared_ptr<connection> server_cnx(new connection(server_service));
    listener.async_accept(server_cnx->socket,
        [&, server_cnx](const boost::system::error_code& error) {
            log_thread() << "Server got a connection " << error << std::endl;
            boost::asio::async_read_until(server_cnx->socket, server_cnx->buffer, '\n',
                [&, server_cnx](const boost::system::error_code& error, std::size_t bytes) {
                    log_thread() << "Got " << bytes << ", " << error << std::endl;
                    std::unique_lock<std::mutex> lock(read_mutex);
                    lock.unlock();
                    read_done.notify_one();
                });
        });
    lock.unlock();
    signal.notify_one();
    if ( read_done.wait_for(read_lock, std::chrono::seconds(1)) == std::cv_status::timeout ) {
        log_thread() << "Server read timed out -- cancelling socket jobs" << std::endl;
        server_cnx->socket.cancel();
        server_cnx->socket.close();
    } else {
        log_thread() << "Server data read" << std::endl;
    }
    log_thread() << "Exiting server thread" << std::endl;
});
signal.wait(lock);
log_thread() << "Server set up" << std::endl;
The io_service thread is set up like this:
std::thread server_io([&]() {
    log_thread() << "About to service server IO requests" << std::endl;
    try {
        server_service.run();
    } catch ( ... ) {
        log_thread() << "Exception caught" << std::endl;
    }
    log_thread() << "**** Service jobs all run" << std::endl;
    signal.notify_one();
});
The output looks like this:
10.0002 139992957945728 Server set up
10.0005 139992957945728 Client set up
10.0006 139992848398080 About to service server IO requests
10.0006 139992848398080 Server got a connection system:0
11.0003 139992934819584 Server read timed out -- cancelling socket jobs
11.0004 139992934819584 Exiting server thread
11.0004 139992848398080 Got 0, system:125
20.0006 139992957945728 IO thread timed out servicing requests -- stopping it
^^^ This should not happen because the server service should have run out of work
20.0006 139992957945728 Waiting for things to close....
22.0008 139992957945728 Wait over, exiting
(The columns are time + 10s, thread ID, log message.)
At the 11 second mark you can see the async_read_until handler being called. It is the last handler in the server's io_service, yet run does not exit.
Even after the timeout waiting for run to exit fires and the waiting thread performs an io_service::stop(), run still does not exit (there is another two-second wait there).
The full code is on github.
Sure enough, this kind of multithreading is tricky business. It turns out that in this case the read lock was being acquired in the wrong place, so the handler was being blocked by the very thread that was waiting for it to complete.
I guess the lesson here is never to deal with thread locks without some sort of timeout.
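For reference, a minimal sketch of the fix (the data_read flag is my addition, not in the original code): the lock passed to wait_for must be acquired by the thread that waits, and only just before waiting, so the completion handler is never blocked on a mutex held across the whole setup.

std::mutex read_mutex;
std::condition_variable read_done;
bool data_read = false; // hypothetical flag; also guards against spurious wakeups

std::thread server([&]() {
    // ... async_accept / async_read_until as before; the read handler
    // locks read_mutex, sets data_read = true, then notifies read_done.

    // Acquire the lock in the waiting thread itself.
    std::unique_lock<std::mutex> read_lock(read_mutex);
    if (!read_done.wait_for(read_lock, std::chrono::seconds(1),
                            [&] { return data_read; })) {
        log_thread() << "Server read timed out -- cancelling socket jobs" << std::endl;
        server_cnx->socket.cancel();
        server_cnx->socket.close();
    }
});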
The program is invoking undefined behaviour when the server thread attempts to unlock read_lock, which it does not own.
int main()
{
  ...
  std::mutex read_mutex;
  std::unique_lock<std::mutex> read_lock(read_mutex); // Acquired by main.
  std::condition_variable read_done;
  std::thread server([&]() { // Captures lock by reference.
    std::unique_lock<std::mutex> lock(mutex);
    ...
    // The next line invokes undefined behavior, as this thread did
    // not acquire read_lock.mutex().
    if (read_done.wait_for(read_lock, ...))
    //                     ^^^^^^^^^ caller does not own.
    {
      ...
    }
  });
  signal.wait(lock);
  ...
}
In particular, when calling condition_variable::wait_for(lock), the standard requires that lock.owns_lock() is true and that lock.mutex() is locked by the calling thread.
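For example, a conforming wait looks like this (a minimal sketch, not taken from the original code):

#include <cassert>
#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;

void wait_correctly()
{
  std::unique_lock<std::mutex> lock(m); // locked by the calling thread...
  assert(lock.owns_lock());             // ...so both preconditions hold.
  cv.wait_for(lock, std::chrono::seconds(1));
}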
Mixing synchronous and asynchronous flows often adds complexity. In this particular case, where synchronous calls are interwoven through each layer using lower-level constructs for event/signal notification without persisted state, I think it adds unnecessary complexity and overcomplicates the flow. Furthermore, the broad variable scopes increase complexity. If read_lock had never been captured by the lambda expressions, a compiler error would have occurred.
Consider the separation in space when trying to observe the two events:
// I will eventually be interested when the server starts
// accepting connections, so start setting up now.
std::mutex server_mutex;
std::unique_lock<std::mutex> server_lock(server_mutex);
std::condition_variable server_started;
std::thread server([&]()
{
  // I will eventually be interested when the server reads
  // data, so start setting up now.
  std::mutex read_mutex;
  std::unique_lock<std::mutex> read_lock(read_mutex);
  std::condition_variable read_done;

  listener.async_accept(...,
    [&](...)
    {
      // Got connection.
      async_read_until(...,
        [&](...)
        {
          // Someone may be interested that data has been read,
          // so use the correct mutex and condition_variable
          // pair.
          std::unique_lock<std::mutex> read_lock(read_mutex);
          read_lock.unlock();
          read_done.notify_one();
        });
    }); // async_accept

  // Someone may be interested that I am accepting connections,
  // so use the correct mutex and condition_variable pair.
  std::unique_lock<std::mutex> server_lock(server_mutex);
  server_lock.unlock();
  server_started.notify_one();

  // I am now interested in if data has been read.
  read_done.wait_for(read_lock);
}); // server thread

// I am now interested in if the server has started.
server_started.wait(server_lock);
The caller must prepare to handle an event, start an operation, then wait on the event, and the operation must know about the event in which the caller is interested. To make matters worse, one must now consider lock ordering to prevent deadlocks. Note how in the above example, the server thread acquires read_mutex and then server_mutex. Another thread cannot acquire the mutexes in a different order without introducing the chance of deadlock. In terms of complexity, this approach scales poorly with the number of events.
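For what it is worth, one conventional way to sidestep the ordering problem is to acquire both mutexes atomically with std::lock, so that no thread has to remember a global order. A sketch (my addition, not part of the original answer):

std::unique_lock<std::mutex> read_lock(read_mutex, std::defer_lock);
std::unique_lock<std::mutex> server_lock(server_mutex, std::defer_lock);
std::lock(read_lock, server_lock); // acquires both without risk of deadlock

This removes the deadlock risk, but none of the bookkeeping, which is the real complexity cost here.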
It may be worth re-examining the program's flow and control structure. If it can be rewritten to be primarily asynchronous, then callback chains, continuations, or a signals-and-slots system (Boost.Signals) may uncomplicate the solution. If one prefers to have asynchronous code read as if it were synchronous, then Boost.Asio's support for coroutines can provide a clean solution. Finally, if one needs to synchronously wait on an asynchronous operation's result or timeout, then consider using Boost.Asio's support for std::future, or using futures directly.
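For illustration, a stackful coroutine version of the read might look like the following sketch. It assumes a Boost version where boost::asio::spawn accepts an io_service and that Boost.Coroutine is linked; it is not part of the original code:

#include <iostream>
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>

void read_line(boost::asio::io_service& io_service,
               boost::asio::ip::tcp::socket& socket,
               boost::asio::streambuf& buffer)
{
  boost::asio::spawn(io_service,
      [&](boost::asio::yield_context yield)
      {
        // Reads as if synchronous, but suspends the coroutine rather than
        // blocking the thread. yield[error] fills in the error code
        // instead of throwing on failure.
        boost::system::error_code error;
        std::size_t bytes = boost::asio::async_read_until(
            socket, buffer, '\n', yield[error]);
        std::cout << "Got " << bytes << ", " << error << std::endl;
      });
}

With std::future, the operation can be awaited and timed out directly: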
// Use an asynchronous operation so that it can be cancelled on timeout.
std::future<std::size_t> on_read = boost::asio::async_read_until(
    socket, buffer, '\n', boost::asio::use_future);

// If timeout occurs, then cancel the operation.
if (on_read.wait_for(std::chrono::seconds(1)) == std::future_status::timeout)
{
  socket.cancel();
}
// Otherwise, the operation completed (with success or error).
else
{
  // If the operation failed, then on_read.get() will throw a
  // boost::system::system_error.
  auto bytes_transferred = on_read.get();
}
While I would strongly advocate re-examining the overall control structure and reducing variable scope, the following example is roughly equivalent to the one above, and may be slightly easier to maintain thanks to its use of std::future:
// I will eventually be interested when the server starts
// accepting connections, so start setting up now.
std::promise<void> server_started_promise;
auto server_started = server_started_promise.get_future();
std::thread server([&]()
{
  // I will eventually be interested when the server reads
  // data, so start setting up now.
  std::promise<void> read_done_promise;
  auto read_done = read_done_promise.get_future();

  listener.async_accept(...,
    [&](...)
    {
      // Got connection.
      async_read_until(...,
        [&](...)
        {
          // Someone may be interested that data has been read.
          read_done_promise.set_value();
        });
    }); // async_accept

  // Someone may be interested that I am accepting connections.
  server_started_promise.set_value();

  // I am now interested in if data has been read.
  read_done.wait_for(...);
}); // server thread

// I am now interested in if the server has started.
server_started.wait();
Here is a complete example based on the original code that demonstrates using std::future to control flow and to time out asynchronous operations in a synchronous manner:
#include <future>
#include <iostream>
#include <thread>

#include <boost/asio.hpp>
#include <boost/asio/use_future.hpp>
#include <boost/optional.hpp>
#include <boost/utility/in_place_factory.hpp>

int main()
{
  using boost::asio::ip::tcp;

  // Setup server thread.
  boost::asio::io_service server_io_service;
  std::promise<tcp::endpoint> server_promise;
  auto server_future = server_promise.get_future();

  // Start server thread.
  std::thread server_thread(
    [&server_io_service, &server_promise]
    {
      tcp::acceptor acceptor(server_io_service);
      acceptor.open(tcp::v4());
      acceptor.set_option(
          boost::asio::socket_base::enable_connection_aborted(true));
      acceptor.bind(tcp::endpoint(tcp::v4(), 0));
      acceptor.listen();

      // Handlers will not chain work, so control the io_service with a
      // work object.
      boost::optional<boost::asio::io_service::work> work(
          boost::in_place(std::ref(server_io_service)));

      // Accept a connection.
      tcp::socket server_socket(server_io_service);
      auto on_accept = acceptor.async_accept(server_socket,
                                             boost::asio::use_future);

      // Server has started, so notify caller.
      server_promise.set_value(acceptor.local_endpoint());

      // Wait for connection or error.
      boost::system::system_error error =
          make_error_code(boost::system::errc::success);
      try
      {
        on_accept.get();
      }
      catch (const boost::system::system_error& e)
      {
        error = e;
      }
      std::cout << "Server got a connection " << error.code() << std::endl;

      // Read from connection.
      boost::asio::streambuf buffer;
      auto on_read = boost::asio::async_read_until(
          server_socket, buffer, '\n', boost::asio::use_future);

      // The async_read operation is work, so destroy the work object,
      // allowing run() to exit.
      work = boost::none;

      // Timeout the async read operation.
      if (on_read.wait_for(std::chrono::seconds(1)) ==
          std::future_status::timeout)
      {
        std::cout << "Server read timed out -- cancelling socket jobs"
                  << std::endl;
        server_socket.close();
      }
      else
      {
        error = make_error_code(boost::system::errc::success);
        std::size_t bytes_transferred = 0;
        try
        {
          bytes_transferred = on_read.get();
        }
        catch (const boost::system::system_error& e)
        {
          error = e;
        }
        std::cout << "Got " << bytes_transferred << ", "
                  << error.code() << std::endl;
      }
      std::cout << "Exiting server thread" << std::endl;
    });

  // Wait for server to start accepting connections.
  auto server_endpoint = server_future.get();
  std::cout << "Server set up" << std::endl;

  // Client thread.
  std::promise<void> promise;
  auto future = promise.get_future();
  std::thread client_thread(
    [&server_endpoint, &promise]
    {
      boost::asio::io_service io_service;
      tcp::socket client_socket(io_service);
      boost::system::error_code error;
      client_socket.connect(server_endpoint, error);
      std::cout << "Connected " << error << std::endl;
      promise.set_value();
      // Keep client socket alive, allowing server to timeout.
      std::this_thread::sleep_for(std::chrono::seconds(2));
      std::cout << "Exiting client thread" << std::endl;
    });

  // Wait for client to connect.
  future.get();
  std::cout << "Client set up" << std::endl;

  // Reset generic promise and future.
  promise = std::promise<void>();
  future = promise.get_future();

  // Run server's io_service.
  std::thread server_io_thread(
    [&server_io_service, &promise]
    {
      std::cout << "About to service server IO requests" << std::endl;
      try
      {
        server_io_service.run();
      }
      catch (const std::exception& e)
      {
        std::cout << "Exception caught: " << e.what() << std::endl;
      }
      std::cout << "Service jobs all run" << std::endl;
      promise.set_value();
    });

  if (future.wait_for(std::chrono::seconds(3)) ==
      std::future_status::timeout)
  {
    std::cout << "IO thread timed out servicing requests -- stopping it"
              << std::endl;
    server_io_service.stop();
  }

  // Join all threads.
  server_io_thread.join();
  server_thread.join();
  client_thread.join();
}