How does Haskell do pattern matching without us defining an Eq on our data types?
- by devoured elysium
I have defined a binary tree:
data Tree = Null | Node Tree Int Tree
and have implemented a function that'll yield the sum of the values of all its nodes:
sumOfValues :: Tree -> Int
sumOfValues Null = 0
sumOfValues (Node Null v Null) = v
sumOfValues (Node Null v t2) = v + (sumOfValues t2)
sumOfValues (Node t1 v Null) = v + (sumOfValues t1)
sumOfValues (Node t1 v t2) = v + (sumOfValues t1) + (sumOfValues t2)
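For reference, here's a small test tree I tried it on (the name exampleTree is just mine, for illustration):

-- a three-node tree: 2 at the root, with leaf children 1 and 3
exampleTree :: Tree
exampleTree = Node (Node Null 1 Null) 2 (Node Null 3 Null)

-- sumOfValues exampleTree evaluates to 6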
It works as expected. I had the idea of also trying to implement it using guards:
sumOfValues2 :: Tree -> Int
sumOfValues2 Null = 0
sumOfValues2 (Node t1 v t2)
  | t1 == Null && t2 == Null = v
  | t1 == Null = v + (sumOfValues2 t2)
  | t2 == Null = v + (sumOfValues2 t1)
  | otherwise = v + (sumOfValues2 t1) + (sumOfValues2 t2)
but this one doesn't compile; I believe it's because I haven't implemented Eq:
No instance for (Eq Tree)
arising from a use of `==' at zzz3.hs:13:3-12
Possible fix: add an instance declaration for (Eq Tree)
In the first argument of `(&&)', namely `t1 == Null'
In the expression: t1 == Null && t2 == Null
In a stmt of a pattern guard for
the definition of `sumOfValues2':
t1 == Null && t2 == Null
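I suppose I could silence the error by deriving Eq, as the message hints, with something like:

data Tree = Null | Node Tree Int Tree deriving (Eq)

but that's beside the point.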
The question, then, is: how is Haskell able to do pattern matching, and to know whether a passed argument matches a pattern, without resorting to Eq?