16 millisecond quantization when sending/receiving TCP packets
Posted by MKZ on Stack Overflow
Published on 2010-05-22T07:44:23Z
Hi,
I have a C++ application running on a Windows XP 32-bit system that sends and receives short TCP/IP packets.
Measuring the arrival time accurately, I see it quantized to 16 millisecond units (meaning all arriving packets are separated from each other by 16×N milliseconds).
To avoid packet aggregation I tried to disable the Nagle algorithm by setting the TCP_NODELAY option at the IPPROTO_TCP level on the socket, but it did not help.
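For reference, this is roughly what disabling Nagle looks like. This is a POSIX-style sketch; on Winsock the `setsockopt` call is the same except the socket is a `SOCKET` and the option value is passed as a `char*`:

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>
#include <cassert>

// Disable Nagle's algorithm on a TCP socket so small writes are sent
// immediately instead of being coalesced. Returns true on success.
// On Windows, cast &flag to (const char*) and link with ws2_32.lib.
bool disable_nagle(int sock) {
    int flag = 1;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                      &flag, sizeof(flag)) == 0;
}
```

Note that TCP_NODELAY only affects the sender's batching of small writes; it does not change the granularity at which the receiving side's clock or scheduler delivers events, which is why it would not remove a 16 ms quantization caused by the timer tick.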
I suspect that the problem is related to the Windows scheduler, which also has a roughly 16 millisecond (15.6 ms) clock tick. Any idea of a solution to this problem? Thanks
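Assuming the quantization does come from the default ~15.6 ms system timer tick, one common workaround on Windows is to request a higher timer resolution with `timeBeginPeriod` from the multimedia timer API (winmm.lib). A minimal sketch, guarded so it compiles as a no-op elsewhere:

```cpp
#include <cassert>
#ifdef _WIN32
#include <windows.h>
#include <mmsystem.h>  // timeBeginPeriod/timeEndPeriod; link winmm.lib
#endif

// Sketch: raise the system timer resolution to 1 ms. This affects the
// granularity of Sleep(), waitable timers, and timestamping system-wide,
// at some cost in power/CPU. Returns true on success.
bool request_1ms_timer() {
#ifdef _WIN32
    // Every successful timeBeginPeriod(1) must be paired with a
    // matching timeEndPeriod(1) before the process exits.
    return timeBeginPeriod(1) == TIMERR_NOERROR;
#else
    return true;  // not a Windows build; nothing to do
#endif
}
```

Calling this once at startup (and `timeEndPeriod(1)` at shutdown) is a global setting, so it changes the tick rate for the whole machine while the process runs.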
© Stack Overflow or respective owner