As I understand it, TcpListener will queue connections once you call Start(). Each time you call AcceptTcpClient (or BeginAcceptTcpClient), it will dequeue one item from the queue.
If we load test our TcpListener app by sending 1,000 connections to it at once, the queue builds far faster than we can clear it, eventually leading to client-side timeouts because the connections are still sitting in the queue waiting to be accepted. However, the server doesn't appear to be under much pressure: our app isn't consuming much CPU time, and the other monitored resources on the machine aren't breaking a sweat. It feels like we're not running efficiently enough right now.
We're calling BeginAcceptTcpClient and then immediately handing over to a ThreadPool thread to actually do the work, then calling BeginAcceptTcpClient again. The work involved doesn't seem to put any pressure on the machine; it's basically just a 3 second sleep followed by a dictionary lookup and then a 100 byte write to the TcpClient's stream.
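To give a sense of the shape of that work, the handler looks roughly like this (a simplified sketch of the non-SSL path; LookUpResponse stands in for the real dictionary lookup):

private void HandleTcpRequest(TcpClient client)
{
    using (NetworkStream stream = client.GetStream())
    {
        Thread.Sleep(3000);                          // the ~3 second wait mentioned above
        byte[] response = LookUpResponse();          // stands in for the dictionary lookup
        stream.Write(response, 0, response.Length);  // ~100 byte write back to the client
    }
    client.Close();
}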
Here's the TcpListener code we're using:
// Thread signal.
private static ManualResetEvent tcpClientConnected = new ManualResetEvent(false);

public void DoBeginAcceptTcpClient(TcpListener listener)
{
    // Set the event to nonsignaled state.
    tcpClientConnected.Reset();

    listener.BeginAcceptTcpClient(new AsyncCallback(DoAcceptTcpClientCallback), listener);

    // Wait for signal
    tcpClientConnected.WaitOne();
}

public void DoAcceptTcpClientCallback(IAsyncResult ar)
{
    // Get the listener that handles the client request, and the TcpClient
    TcpListener listener = (TcpListener)ar.AsyncState;
    TcpClient client = listener.EndAcceptTcpClient(ar);

    if (inProduction)
        ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client, serverCertificate)); // With SSL
    else
        ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client)); // Without SSL

    // Signal the calling thread to continue.
    tcpClientConnected.Set();
}

public void Start()
{
    currentHandledRequests = 0;
    tcpListener = new TcpListener(IPAddress.Any, 10000);
    try
    {
        tcpListener.Start();
        while (true)
            DoBeginAcceptTcpClient(tcpListener);
    }
    catch (SocketException)
    {
        // The TcpListener is shutting down, exit gracefully
        CheckBuffer();
        return;
    }
}
I'm assuming the answer will involve using Sockets instead of TcpListener, or at least using TcpListener.AcceptSocket, but how would we go about doing that?
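Is it just a matter of swapping in something like the following? This is a rough sketch of what we imagine the AcceptSocket version might look like (HandleSocketRequest is a made-up handler name), not something we've actually tried:

public void DoBeginAcceptSocket(TcpListener listener)
{
    tcpClientConnected.Reset();
    listener.BeginAcceptSocket(new AsyncCallback(DoAcceptSocketCallback), listener);
    tcpClientConnected.WaitOne();
}

public void DoAcceptSocketCallback(IAsyncResult ar)
{
    TcpListener listener = (TcpListener)ar.AsyncState;
    Socket socket = listener.EndAcceptSocket(ar);

    // Hand the raw Socket to a worker, then let the accept loop continue.
    ThreadPool.QueueUserWorkItem(state => HandleSocketRequest(socket)); // hypothetical handler
    tcpClientConnected.Set();
}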
One idea we had was to call AcceptTcpClient and immediately enqueue the TcpClient into one of several Queue&lt;TcpClient&gt; objects. Each queue would be polled by its own thread (one queue per thread), so we wouldn't run into monitors blocking a thread while it waits on other Dequeue operations. Each queue thread would then use ThreadPool.QueueUserWorkItem to have the work done on a ThreadPool thread before moving on to dequeue the next TcpClient in its queue. Would you recommend this approach, or is our problem that we're using TcpListener and no amount of rapid dequeueing is going to fix that?
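To make the idea concrete, here's roughly what we have in mind (just a sketch; the queue count, the QueueWorker/EnqueueClient names and the Sleep-based polling are all made up, and we'd expect to tune or replace the polling):

private readonly Queue<TcpClient>[] clientQueues = new Queue<TcpClient>[4];
private int nextQueue;

public void StartQueueWorkers()
{
    for (int i = 0; i < clientQueues.Length; i++)
    {
        clientQueues[i] = new Queue<TcpClient>();
        Queue<TcpClient> queue = clientQueues[i];
        new Thread(() => QueueWorker(queue)) { IsBackground = true }.Start();
    }
}

// Called from the accept callback instead of queuing the work item directly;
// round-robins incoming clients across the queues.
public void EnqueueClient(TcpClient client)
{
    Queue<TcpClient> queue = clientQueues[nextQueue++ % clientQueues.Length];
    lock (queue)
        queue.Enqueue(client);
}

private void QueueWorker(Queue<TcpClient> queue)
{
    while (true)
    {
        TcpClient client = null;
        lock (queue)
        {
            if (queue.Count > 0)
                client = queue.Dequeue();
        }

        if (client != null)
            ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client));
        else
            Thread.Sleep(10); // crude polling; we'd tune or replace this
    }
}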