Should integer divide by zero halt execution?
- by Pyrolistical
I know that modern languages treat integer divide by zero as an error, just like the hardware does, but what if we could design a whole new language?
Ignoring existing hardware, what should a programming language do when an integer divide by zero occurs? Should it return a NaN of type integer? Should it mirror IEEE 754 floats and return +/- Infinity? Or is the existing design choice correct, and an error should be thrown?
Is there a language that handles integer divide by zero nicely?
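For context, here is a minimal sketch of a fourth option some existing languages already offer: Rust's standard library has `checked_div`, which returns an `Option` instead of a bare integer, so a zero divisor yields `None` rather than halting execution. This is not a NaN or an infinity; it pushes the decision onto the caller.

```rust
fn main() {
    let a: i32 = 10;

    // checked_div returns Some(quotient) for a nonzero divisor...
    assert_eq!(a.checked_div(2), Some(5));

    // ...and None when the divisor is zero, so the caller must
    // decide what to do instead of the program crashing.
    assert_eq!(a.checked_div(0), None);

    // By contrast, plain `/` panics on a runtime-zero divisor
    // (and a literal zero divisor is rejected at compile time).
    match a.checked_div(0) {
        Some(q) => println!("quotient = {}", q),
        None => println!("division by zero, no quotient"),
    }
}
```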
EDIT
When I said ignore existing hardware, I meant don't assume integers are represented as 32 bits; they can be represented in any way you can imagine.