What would cause different rates of packet loss between client and server in UDP?

Posted by febreezey on Super User, 2012-11-21


I've implemented a reliable UDP file transfer protocol, and I deliberately drop a percentage of packets when I transmit a file. Why does the transmission time increase much more sharply as the loss percentage rises in the client-to-server direction than in the server-to-client direction? Is this something that can be explained by the protocol itself?
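
For context, the post doesn't include the protocol code, but a minimal sketch of the kind of sender described (acknowledge each segment, retransmit on timeout, deliberately drop a fraction of outgoing packets) might look like this. The timeout, sequence-number framing, file name, and receiver address are illustrative assumptions, not details from the original post.

```python
import random
import socket

SEGMENT_SIZE = 500   # bytes of payload per packet, as in the experiments
TIMEOUT_S = 0.5      # retransmission timeout (assumed; not given in the post)

def send_file(path, dest, drop_rate):
    """Stop-and-wait style sender: send one segment, wait for its ACK,
    retransmit on timeout. drop_rate simulates loss on the outgoing path."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT_S)
    with open(path, "rb") as f:
        seq = 0
        while True:
            payload = f.read(SEGMENT_SIZE)
            if not payload:
                break
            packet = seq.to_bytes(4, "big") + payload
            while True:
                # Deliberately drop a percentage of outgoing packets.
                if random.random() >= drop_rate:
                    sock.sendto(packet, dest)
                try:
                    ack, _ = sock.recvfrom(4)
                    if int.from_bytes(ack, "big") == seq:
                        break          # segment acknowledged, move on
                except socket.timeout:
                    pass               # lost data or lost ACK: retransmit
            seq += 1
    sock.close()

if __name__ == "__main__":
    # Hypothetical receiver address and file name, for illustration only.
    send_file("testfile.bin", ("127.0.0.1", 9000), drop_rate=0.05)
```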

Here are my numbers from two separate experiments. In each case I kept the maximum packet size at 500 bytes, fixed the loss in the opposite direction at 5%, and transferred a 1 MB file:

Server-to-client loss percentage varied (1 MB file, 500 B segments, client-to-server loss fixed at 5%):

1%  : 17253 ms
3%  : 3388 ms
5%  : 7252 ms
10% : 6229 ms
11% : 12346 ms
13% : 11282 ms
15% : 9252 ms
20% : 11266 ms


Client-to-server loss percentage varied (1 MB file, 500 B segments, server-to-client loss fixed at 5%):

1%  : 4227 ms
3%  : 4334 ms
5%  : 3308 ms
10% : 31350 ms
11% : 36398 ms
13% : 48436 ms
15% : 65475 ms
20% : 120515 ms

You can clearly see a roughly exponential increase in transfer time in the client-to-server group once the loss rate goes above 5%.
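
As a rough back-of-envelope, assuming a stop-and-wait model in which every lost data packet stalls the sender for one full retransmission timeout (an assumed model and an assumed timeout value, not details from the post), the extra delay from data-direction loss can be estimated like this:

```python
# Back-of-envelope: extra delay if each lost data packet costs one full
# retransmission timeout. Model and timeout value are assumptions.
FILE_BYTES = 1_000_000
SEGMENT = 500
TIMEOUT_MS = 500                 # assumed retransmission timeout

segments = FILE_BYTES // SEGMENT  # ~2000 data segments for a 1 MB file

for p in (0.01, 0.05, 0.10, 0.20):
    # Each segment needs 1/(1-p) attempts on average; the failed attempts
    # each stall the sender for one timeout.
    extra_attempts = segments * p / (1 - p)
    print(f"loss {p:4.0%}: ~{extra_attempts:6.0f} retransmissions, "
          f"~{extra_attempts * TIMEOUT_MS / 1000:6.1f} s of timeout stalls")
```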
