I am working on a project that uses boost::asio, in which two processes on the same machine communicate over TCP/IP. One generates data for the other to read, but I am running into a problem where, intermittently, no data is sent over the connection. I have boiled this down to the very simple example below, based on the async tcp echo server example.
The processes (source code below) start out fine, delivering data from the sender to the receiver at a fast rate. Then, all of a sudden, no data at all is delivered for about five seconds. Then data is delivered again, until the next inexplicable pause. During those five seconds the processes use 0% CPU and no other process seems to be doing anything in particular. The pause is always the same length: five seconds.
I am trying to figure out how to get rid of these stalls and what is causing them.

In one run, CPU usage dipped three times over its course - a "run" being a single invocation of the server process and of the client process. During these dips, no data was delivered. The number of dips and their timing varies between runs - sometimes there are no dips at all, sometimes many.
I can affect the "probability" of these stalls by changing the size of the read buffer - for example, if I make the read buffer a multiple of the send chunk size, the problem almost (but not entirely) goes away.
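As a concrete sketch of that tweak: in the session class of the code below, the read buffer could be bumped from 1024 bytes to the client's 8000-byte send chunk size (this only reduces the stall frequency, it does not fix the underlying problem):

    enum { max_length = 8000 };  // was 1024; now a multiple of the 8000-byte packets the client sends
    char data_[max_length];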
I compiled the code below with Visual Studio 2005, using Boost 1.43 and Boost 1.45. I have tested it on Windows Vista 64-bit (quad core) and Windows 7 64-bit (both quad core and dual core).
The server accepts connections and then simply reads and discards the data. Whenever a read completes, a new read is issued.
The client connects to the server and then puts a bunch of packets into a send queue. After that it writes the packets one at a time; whenever a write completes, the next packet in the queue is written. A separate thread monitors the queue size and prints it to stdout once per second. During the io stalls, the queue size stays exactly the same.
I have tried using scatter io (writing multiple packets in one system call), but the result is the same. If I disable I/O completion ports in Boost by defining BOOST_ASIO_DISABLE_IOCP, the problem seems to go away, but at the price of significantly lower throughput.
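For reference, the macro has to be defined before the first include of <boost/asio.hpp> (or passed on the compiler command line, e.g. /DBOOST_ASIO_DISABLE_IOCP). A minimal sketch of that workaround:

    // Workaround only: force Boost.Asio to fall back from I/O completion ports
    // to its non-IOCP Windows backend. In my tests this hides the stalls but
    // noticeably reduces throughput.
    #define BOOST_ASIO_DISABLE_IOCP
    #include <boost/asio.hpp>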
// Example is adapted from async_tcp_echo_server.cpp which is
// Copyright (c) 2003-2010 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Start program with -s to start as the server
#ifndef _WIN32_WINNT
#define _WIN32_WINNT 0x0501
#endif

#include <cstdlib>
#include <iostream>
#include <list>
#include <memory>
#include <vector>
#include <tchar.h>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>

#define PORT "1234"

using namespace boost::asio::ip;
using namespace boost::system;

// Server-side connection: reads and discards data, issuing a new read
// each time the previous one completes.
class session {
public:
  session(boost::asio::io_service& io_service) : socket_(io_service) {}

  void do_read() {
    socket_.async_read_some(boost::asio::buffer(data_, max_length),
        boost::bind(&session::handle_read, this, _1, _2));
  }

  boost::asio::ip::tcp::socket& socket() { return socket_; }

protected:
  void handle_read(const error_code& ec, size_t bytes_transferred) {
    if (!ec) {
      do_read();
    } else {
      delete this;
    }
  }

private:
  tcp::socket socket_;
  enum { max_length = 1024 };
  char data_[max_length];
};

// Accepts connections and creates a new session for each one.
class server {
public:
  explicit server(boost::asio::io_service& io_service)
    : io_service_(io_service)
    , acceptor_(io_service, tcp::endpoint(tcp::v4(), atoi(PORT)))
  {
    session* new_session = new session(io_service_);
    acceptor_.async_accept(new_session->socket(),
        boost::bind(&server::handle_accept, this, new_session, _1));
  }

  void handle_accept(session* new_session, const error_code& ec) {
    if (!ec) {
      new_session->do_read();
      new_session = new session(io_service_);
      acceptor_.async_accept(new_session->socket(),
          boost::bind(&server::handle_accept, this, new_session, _1));
    } else {
      delete new_session;
    }
  }

private:
  boost::asio::io_service& io_service_;
  boost::asio::ip::tcp::acceptor acceptor_;
};

// Connects to the server, queues a fixed number of packets and writes them
// one at a time; a separate thread prints the queue size once per second.
class client {
public:
  explicit client(boost::asio::io_service& io_service)
    : io_service_(io_service)
    , socket_(io_service)
    , work_(new boost::asio::io_service::work(io_service))
    , queue_size(0)
  {
    io_service_.post(boost::bind(&client::do_init, this));
  }

  ~client() {
    packet_thread_.join();
  }

protected:
  void do_init() {
    // Connect to the server
    tcp::resolver resolver(io_service_);
    tcp::resolver::query query(tcp::v4(), "localhost", PORT);
    tcp::resolver::iterator iterator = resolver.resolve(query);
    socket_.connect(*iterator);

    // Start packet generation thread
    packet_thread_ = boost::thread(
        boost::bind(&client::generate_packets, this, 8000, 5000000));
  }

  typedef std::vector<unsigned char> packet_type;
  typedef boost::shared_ptr<packet_type> packet_ptr;

  void generate_packets(long packet_size, long num_packets) {
    // Add a single dummy packet multiple times, then start writing
    packet_ptr buf(new packet_type(packet_size, 0));
    write_queue_.insert(write_queue_.end(), num_packets, buf);
    queue_size = num_packets;
    do_write_nolock();

    // Wait until all packets are sent, printing the queue size once per second.
    while (long queued = InterlockedExchangeAdd(&queue_size, 0)) {
      std::cout << "Queue size: " << queued << std::endl;
      Sleep(1000);
    }

    // Exit from run(), ignoring socket shutdown
    work_.reset();
  }

  void do_write_nolock() {
    const packet_ptr& p = write_queue_.front();
    async_write(socket_, boost::asio::buffer(&(*p)[0], p->size()),
        boost::bind(&client::on_write, this, _1));
  }

  void on_write(const error_code& ec) {
    if (ec) { throw system_error(ec); }
    write_queue_.pop_front();
    if (InterlockedDecrement(&queue_size)) {
      do_write_nolock();
    }
  }

private:
  boost::asio::io_service& io_service_;
  tcp::socket socket_;
  boost::shared_ptr<boost::asio::io_service::work> work_;
  long queue_size;
  std::list<packet_ptr> write_queue_;
  boost::thread packet_thread_;
};

int _tmain(int argc, _TCHAR* argv[]) {
  try {
    boost::asio::io_service io_svc;
    bool is_server = argc > 1 && 0 == _tcsicmp(argv[1], _T("-s"));
    std::auto_ptr<server> s(is_server ? new server(io_svc) : 0);
    std::auto_ptr<client> c(is_server ? 0 : new client(io_svc));
    io_svc.run();
  } catch (std::exception& e) {
    std::cerr << "Exception: " << e.what() << "\n";
  }
  return 0;
}
So my questions are basically:

How do I get rid of these stalls?
What causes them to happen?

Update: Contrary to what I said above, there does seem to be some correlation with disk activity after all; it looks as if starting a large directory copy on the disk while the test is running increases the frequency of the io stalls. Could this mean that Windows I/O prioritization is kicking in? Since the pauses are always the same length, it sounds a bit like a timeout somewhere in the OS io code...