What's the best way to unit test code that generates random output?

Posted by Flynn1179 on Stack Overflow, 2010-06-18

Specifically, I've got a method that picks n items from a list in such a way that a% of them meet one criterion, b% meet a second, and so on. A simplified example would be to pick 5 items where 50% have a given property set to 'true' and 50% to 'false'; half the time the method would return 2 true/3 false, and the other half, 3 true/2 false.
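For concreteness, here's a stripped-down sketch of the kind of method I mean (the name `pick_items` and its signature are made up purely for this question):

```python
import random

def pick_items(true_items, false_items, n=5, true_ratio=0.5):
    """Illustrative only: pick n items so that roughly true_ratio of them
    have the property 'true'. With n=5 and true_ratio=0.5 the split can't
    be exact, so a coin flip decides between 2 true/3 false and 3 true/2 false."""
    n_true = int(n * true_ratio)
    # Resolve the fractional remainder randomly.
    if random.random() < (n * true_ratio) - n_true:
        n_true += 1
    return random.sample(true_items, n_true) + random.sample(false_items, n - n_true)
```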

Statistically speaking, this means that over 100 runs (500 picked items in total) I should get about 250 true/250 false, but because of the randomness, something like 240/260 is entirely possible.
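To put numbers on that (using the simplified example above): each run contributes either 2 or 3 'true' items with equal probability, so the total over 100 runs is 200 plus a Binomial(100, 0.5) count, with mean 250 and standard deviation 5. That makes 240/260 only two standard deviations from the expectation:

```python
import math

runs = 100
p = 0.5  # probability that a single run rounds up to 3 'true' items

expected_true = runs * 2 + runs * p    # 250 expected 'true' picks in total
sigma = math.sqrt(runs * p * (1 - p))  # 5.0: std dev of the per-run coin flips

print(expected_true, sigma, (250 - 240) / sigma)  # 250 5.0 2.0
```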

What's the best way to unit test this? I'm assuming that even though 300/200 is technically possible, the test should probably fail if that happens. Is there a generally accepted tolerance for cases like this, and if so, how do you determine what it should be?
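The sort of test I have in mind would run the method many times and assert that the overall split falls inside some window around the expectation, for example five standard deviations, so that a correct implementation essentially never fails by chance. A rough sketch, reusing the made-up `pick_items` from above:

```python
import math
import unittest

class PickItemsStatisticsTest(unittest.TestCase):
    def test_true_false_split_over_many_runs(self):
        runs = 1000
        true_items = [(i, True) for i in range(10)]
        false_items = [(i, False) for i in range(10)]

        true_count = 0
        for _ in range(runs):
            # pick_items is the illustrative picker sketched earlier in the question.
            picked = pick_items(true_items, false_items, n=5, true_ratio=0.5)
            true_count += sum(1 for _, prop in picked if prop)

        expected = runs * 2.5          # 2500 'true' picks expected over 1000 runs
        sigma = 0.5 * math.sqrt(runs)  # ~15.8: std dev of the per-run coin flips
        tolerance = 5 * sigma          # wide enough that false failures are negligible

        self.assertAlmostEqual(true_count, expected, delta=tolerance)

if __name__ == "__main__":
    unittest.main()
```

But that five-sigma window is just a number I plucked out of the air, which is really what I'm asking about.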
