In the projects I work on, I often run into a network setup of the following form.
For security reasons, the server sits behind a firewall and cannot be reached directly from the client; the only way in is to telnet to a proxy server and work from there. Under this arrangement the powerful desktop tools on the client (database clients and so on) cannot be used, and all interaction has to go through the telnet command line, which is quite inconvenient.
Port mapping solves this problem and gives the client what amounts to "direct access" to the server. When the client wants to reach the server, it simply connects to a designated port on the proxy. When the proxy accepts that connection, it opens a connection of its own to the server (the client's real target) and then relays messages between the two connections. Everything the client sends to the proxy is forwarded on to the server, and everything the server sends comes back through the proxy, so the client effectively gets access to the server.
Since there may be a fair number of connections and the proxy acts purely as a message relay, the program needs efficient socket handling: no operation is allowed to block, or it would badly hold up traffic on the other connections. All connection handling and I/O in the program therefore has to be asynchronous. (A thread-per-connection approach becomes too expensive once the number of connections grows.)
This approach is simple and effective, and it requires no changes at all on the client or the server. Its shortcomings are:
- The proxy has to relay every message, so it carries a fair load. (For the few dozen client applications we have today this is more than enough capacity, and the real bottleneck is usually the Internet link anyway.)
- A listening port has to be opened on the proxy, and that port must be directly reachable from the client. Many networks cannot satisfy this: often the proxy exposes only a handful of fixed ports, leaving no spare port for the mapping program to bind to.
本來我用C#寫了一個,程序非常簡單,這里就不拿出來了。
后來由于要把這個程序放到Unix服務器上長期運行,我就用C++重寫了一下,最初我是用socket api寫的,可程序的可讀性總是不盡人意,后來就改用了asio庫(asio 0.3.8 rc3,與早期的asio庫不兼容),通過boost的asio,function,smart_ptr這幾個庫的運用,一個C++版的端口映射程序便誕生了,精簡、高效、安全、跨平臺,原來c++下的異步socket也可以如此優雅。^_^
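The structure of the code is straightforward: an async_listener accepts connections on each mapped local port, port_map_server opens the matching outgoing connection to the real server, and every established mapping is driven by two socket_pipe objects, one per direction of traffic. Everything runs on a single io_service using only asynchronous calls (async_accept, async_connect, async_read_some, async_write), so no handler ever blocks.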
The code follows:
#include <iostream>
#include <list>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;
using namespace std;
// A TCP socket whose lifetime is managed through a shared_ptr.
class socket_client
    : public boost::enable_shared_from_this<socket_client>
    , public tcp::socket
{
public:
    typedef boost::shared_ptr<socket_client> pointer;

    static pointer create(boost::asio::io_service& io_service)
    {
        return pointer(new socket_client(io_service));
    }

    socket_client(boost::asio::io_service& io_service)
        : tcp::socket(io_service)
    {
    }
};
// Pumps data in one direction: whatever arrives on read_socket_ is forwarded
// to write_socket_. Two pipes (one per direction) make a full tunnel.
// The pipe closes both sockets and destroys itself when either side fails.
class socket_pipe
{
public:
    socket_pipe(socket_client::pointer read, socket_client::pointer write)
        : read_socket_(*read), write_socket_(*write)
    {
        read_ = read;
        write_ = write;
        begin_read();
    }
private:
    void begin_read()
    {
        read_socket_.async_read_some(boost::asio::buffer(data_, max_length),
            boost::bind(&socket_pipe::end_read, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }
    void end_read(const boost::system::error_code& error, size_t bytes_transferred)
    {
        if (error)
            handle_error(error);
        else
            begin_write(bytes_transferred);
    }
    void begin_write(size_t bytes_transferred)
    {
        boost::asio::async_write(write_socket_,
            boost::asio::buffer(data_, bytes_transferred),
            boost::bind(&socket_pipe::end_write, this,
                boost::asio::placeholders::error));
    }
    void end_write(const boost::system::error_code& error)
    {
        if (error)
            handle_error(error);
        else
            begin_read();
    }
    void handle_error(const boost::system::error_code& error)
    {
        // Close both ends (the pipe for the opposite direction will then fail
        // and clean itself up too) and release this pipe's socket references.
        if (read_socket_.is_open())
            read_socket_.close();
        if (write_socket_.is_open())
            write_socket_.close();
        delete this;
    }
private:
    socket_client& read_socket_;
    socket_client& write_socket_;
    socket_client::pointer read_;
    socket_client::pointer write_;
    enum { max_length = 1024 };
    char data_[max_length];
};
// Listens on a local port and hands every accepted connection to handle_accept.
class async_listener
{
public:
    typedef boost::function<void (socket_client::pointer client)> accept_handler;
    typedef boost::shared_ptr<async_listener> pointer;

    async_listener(short port, boost::asio::io_service& io_service)
        : io_service_(io_service),
          acceptor_(io_service, tcp::endpoint(tcp::v4(), port))
    {
        begin_accept();
    }
    void begin_accept()
    {
        socket_client::pointer client = socket_client::create(io_service_);
        acceptor_.async_accept(*client,
            boost::bind(&async_listener::end_accept, this, client,
                boost::asio::placeholders::error));
    }
    void end_accept(socket_client::pointer client, const boost::system::error_code& error)
    {
        // Keep accepting further connections whatever happened to this one.
        begin_accept();
        if (error)
        {
            handle_error(error);
            return;
        }
        if (!handle_accept.empty())
            handle_accept(client);
    }
    void handle_error(const boost::system::error_code& error)
    {
    }
public:
    accept_handler handle_accept;
private:
    boost::asio::io_service& io_service_;
    tcp::acceptor acceptor_;
};
// Owns all the listeners. For every accepted client it opens a connection to
// the configured remote endpoint and wires the two sockets together with a
// pair of socket_pipe objects, one per direction.
class port_map_server
{
public:
    port_map_server(boost::asio::io_service& io_service)
        : io_service_(io_service)
    {
    }
    void add_portmap(short port, const tcp::endpoint& remote_endpoint)
    {
        async_listener::pointer listener(new async_listener(port, io_service_));
        listeners.push_back(listener);
        listener->handle_accept = boost::bind(&port_map_server::handle_accept,
            this, remote_endpoint, _1);
    }
    void handle_accept(tcp::endpoint remote_endpoint, socket_client::pointer client)
    {
        begin_connect(remote_endpoint, client);
    }
    void begin_connect(const tcp::endpoint& remote_endpoint, socket_client::pointer socket_local)
    {
        socket_client::pointer socket_remote = socket_client::create(io_service_);
        socket_remote->async_connect(remote_endpoint,
            boost::bind(&port_map_server::end_connect, this,
                boost::asio::placeholders::error, socket_local, socket_remote));
    }
    void end_connect(const boost::system::error_code& error,
        socket_client::pointer socket_local, socket_client::pointer socket_remote)
    {
        if (error)
        {
            handle_error(error);
        }
        else
        {
            // The pipes manage their own lifetime and delete themselves on error.
            new socket_pipe(socket_local, socket_remote);
            new socket_pipe(socket_remote, socket_local);
        }
    }
    void handle_error(const boost::system::error_code& error)
    {
    }
private:
    boost::asio::io_service& io_service_;
    list<async_listener::pointer> listeners;
};
int main()
{
    try
    {
        boost::asio::io_service io_service;
        // Map local port 3000 to 192.168.1.193:23 and local port 4000 to 192.168.1.175:23.
        tcp::endpoint ep(boost::asio::ip::address_v4::from_string("192.168.1.193"), 23);
        tcp::endpoint ep2(boost::asio::ip::address_v4::from_string("192.168.1.175"), 23);
        port_map_server server(io_service);
        server.add_portmap(3000, ep);
        server.add_portmap(4000, ep2);
        io_service.run();
    }
    catch (std::exception& e)
    {
        std::cerr << e.what() << std::endl;
    }
    return 0;
}
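As configured in main(), a connection to port 3000 on the machine running this program is forwarded to 192.168.1.193:23, and port 4000 to 192.168.1.175:23, so the client simply telnets to the proxy's port 3000 or 4000 instead of the server itself; change the endpoints and add_portmap calls to match your own network. Exact build requirements depend on the asio/Boost version in use; with the Boost releases that ship asio, linking against boost_system (and pthread on Unix) is typically needed.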