This question may sound dumb, but why does 0 evaluate to false and any other [integer] value to true in most programming languages?
String comparison
Since the question seems a little too simple, I will explain myself a bit more: first of all, it may seem evident to any programmer, but why wouldn't there be a programming language - there may actually be one, but none I have used - where 0 evaluates to true and all the other [integer] values to false? That remark may seem random, but I have a few examples where it may have been a good idea. First of all, let's take the example of three-way string comparison; I will use C's strcmp as an example. Any programmer trying C as their first language may be tempted to write the following code:
if (strcmp(str1, str2))
{
    // Do something...
}
Since strcmp returns 0, which evaluates to false, when the strings are equal, what the beginning programmer tried to do fails miserably, and he generally does not understand why at first. Had 0 evaluated to true instead, this function could have been used in its simplest form - the one above - when comparing for equality, and the proper checks for negative and positive values would have been done only when needed. We would have treated the return type as bool (in our minds, I mean) most of the time.
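For reference, here is a minimal sketch of the check that actually works under today's semantics, where the equality case has to be spelled out against 0 explicitly:

#include <cstring>
#include <cstdio>

int main()
{
    const char* str1 = "hello";
    const char* str2 = "hello";

    // strcmp returns 0 when the strings are equal, so equality must be
    // tested explicitly against 0 rather than relying on the truthiness
    // of the return value.
    if (std::strcmp(str1, str2) == 0)
    {
        std::puts("equal");
    }
    return 0;
}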
Moreover, let's introduce a new type, sign, that takes only the values -1, 0 and 1. That can be pretty handy. Imagine there is a spaceship operator in C++ and we want it for std::string (well, there is already the compare member function, but the spaceship operator is more fun). Its declaration would currently be the following:
sign operator<=>(const std::string& lhs, const std::string& rhs);
Had 0 evaluated to true, the spaceship operator wouldn't even need to exist, and we could have declared operator== this way:
sign operator==(const std::string& lhs, const std::string& rhs);
This operator== would have handled three-way comparison at once: it could still be used to perform the following check, while also letting us find out which string is lexicographically greater when needed:
if (str1 == str2)
{
    // Do something...
}
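For what it's worth, here is a minimal sketch of such a sign-style three-way comparison written against today's semantics, built on std::string::compare (the helper name sign_compare is made up for illustration):

#include <iostream>
#include <string>

// Hypothetical helper: collapse std::string::compare's result
// (negative / zero / positive) into the sign values -1, 0 and 1.
int sign_compare(const std::string& lhs, const std::string& rhs)
{
    const int c = lhs.compare(rhs);
    return (c > 0) - (c < 0);
}

int main()
{
    std::string str1 = "apple", str2 = "banana";

    // Under current semantics, equality is the falsy case...
    if (sign_compare(str1, str2) == 0)
        std::cout << "equal\n";
    // ...and the nonzero values tell us the ordering.
    else if (sign_compare(str1, str2) < 0)
        std::cout << "str1 comes first\n";
    else
        std::cout << "str2 comes first\n";
    return 0;
}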
Old error handling
We now have exceptions, so this part only applies to older languages where no such thing exists (C, for example). If we look at C's standard library (and the POSIX one too), we can see that maaaaany functions return 0 when successful and a nonzero integer otherwise. I have sadly seen some people do this kind of thing:
#define TRUE 0
// ...
if (some_function() == TRUE)
{
    // Here, TRUE would mean success...
    // Do something
}
If we think about how we think in programming, we often have the following reasoning pattern:
Do something
Did it work?
    Yes -> That's ok, one case to handle
    No -> Why? Many cases to handle
If we think about it again, it would have made sense to assign the only neutral value, 0, to yes (and that's how C's functions work), while all the other values are there to handle the many cases of the no. However, in all the programming languages I know (except maybe some experimental esoteric languages), that yes evaluates to false in an if condition, while all the no cases evaluate to true. There are many situations where "it works" represents one case while "it does not work" represents many probable causes. If we think about it that way, having 0 evaluate to true and the rest to false would have made much more sense.
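To illustrate with a real standard function whose return value follows the 0-on-success convention, here is a small sketch using std::remove, where the single success case is exactly the one that reads as false:

#include <cstdio>
#include <cerrno>
#include <cstring>
#include <cstdlib>

int main()
{
    // std::remove returns 0 on success and nonzero on failure,
    // so "it worked" is precisely the falsy value.
    if (std::remove("some_file.txt") != 0)
    {
        // Many possible failure causes, distinguished via errno.
        std::printf("could not remove file: %s\n", std::strerror(errno));
        return EXIT_FAILURE;
    }

    // Single success case.
    return EXIT_SUCCESS;
}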
Conclusion
My conclusion is essentially my original question: why did we design languages where 0 is false and the other values are true, taking into account my few examples above and maybe some more I did not think of?
Follow-up: It's nice to see there are many answers with many ideas and as many possible reasons for it to be like that. I love how passionate you seem to be about it. I originally asked this question out of boredom, but since you seem so passionate, I decided to go a little further and ask about the rationale behind the Boolean choice for 0 and 1 on Math.SE :)