Consider the function:
add a b = a + b
This works:
*Main> add 1 2
3
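In fact, if I ask GHCi for the type, it reports the constraint it inferred on its own (formatting may differ slightly between GHC versions):

*Main> :t add
add :: Num a => a -> a -> a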
However, if I add a type signature specifying that I want to add things of the same type:
add :: a -> a -> a
add a b = a + b
I get an error:
test.hs:3:10:
Could not deduce (Num a) from the context ()
arising from a use of `+' at test.hs:3:10-14
Possible fix:
add (Num a) to the context of the type signature for `add'
In the expression: a + b
In the definition of `add': add a b = a + b
So GHC clearly can deduce that I need the Num type constraint, since the error message suggests exactly that. And indeed, adding it:
add :: Num a => a -> a -> a
add a b = a + b
Works.
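With the Num constraint in place, the function still works at any numeric type:

*Main> add (1 :: Int) 2
3
*Main> add 1.0 3.4
4.4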
Why does GHC require me to add the type constraint? If I'm doing generic programming, why can't it just work for anything that knows how to use the + operator?
In C++ template programming, you can do this easily:
#include <string>
#include <cstdio>

using namespace std;

template <typename T>
T add(T a, T b) { return a + b; }

int main()
{
    printf("%d, %f, %s\n",
           add(1, 2),
           add(1.0, 3.4),
           add(string("foo"), string("bar")).c_str());
    return 0;
}
The compiler figures out the types of the arguments to add and generates a version of the function for each type it is used at. There seems to be a fundamental difference in Haskell's approach. Can you describe it, and discuss the trade-offs? It seems to me like this would be resolved if GHC simply filled in the type constraint for me, since it obviously decided one was needed. Still, why require the type constraint at all? Why not just compile successfully as long as the function is only used in valid contexts, where the arguments are in Num?
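For reference, this is roughly how I would mirror that C++ main in Haskell. Note that the string case has no direct counterpart: String is not an instance of Num, so I have to use ++ for concatenation instead.

add :: Num a => a -> a -> a
add a b = a + b

main :: IO ()
main = do
  print (add 1 2 :: Int)        -- instantiated at Int
  print (add 1.0 3.4 :: Double) -- instantiated at Double
  -- putStrLn (add "foo" "bar") -- rejected: no Num instance for String
  putStrLn ("foo" ++ "bar")     -- string concatenation uses ++, not +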
Thank you.