Why doesn't my processor have built-in BigInt support?

Posted by ol on Stack Overflow
Published on 2010-04-12T20:00:06Z


As far as I understand it, BigInts are usually implemented in most programming languages as strings of digits, where, for example, when adding two of them, each digit is added one after another, just like we learned it at school:

   246
 + 816
 * *
 -----
  1062

Here * marks a column where there was an overflow (a carry). I learned it this way at school, and all the BigInt addition functions I've implemented work similarly to the example above.
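The schoolbook method described above can be sketched as follows (a minimal illustration, not any particular library's implementation; the function name is made up):

```python
def add_digit_strings(a: str, b: str) -> str:
    """Add two non-negative decimal strings digit by digit,
    propagating a carry, exactly like the schoolbook method."""
    result = []
    carry = 0
    # Walk both numbers from the least significant digit.
    i, j = len(a) - 1, len(b) - 1
    while i >= 0 or j >= 0 or carry:
        da = int(a[i]) if i >= 0 else 0
        db = int(b[j]) if j >= 0 else 0
        total = da + db + carry
        result.append(str(total % 10))  # digit to write down
        carry = total // 10             # the "*" in the example above
        i -= 1
        j -= 1
    return "".join(reversed(result))

print(add_digit_strings("246", "816"))  # -> 1062
```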

So we all know that our processors can only natively handle integers from 0 to 2^32 − 1 (or 2^64 − 1 on 64-bit machines).

That means that most scripting languages, in order to stay high-level and offer arithmetic on big integers, have to implement or use BigInt libraries that work on integers as strings, like above. But of course this means they'll be far slower than the processor.

So what I've asked myself is:

  • Why doesn't my processor have a built-in BigInt function?

It would work like any other BigInt library, only (a lot) faster and at a lower level: the processor fetches one digit from the cache/RAM, adds it, and writes the result back.
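As an aside, the loop described above is essentially what practical BigInt libraries already do in software, except that the "digit" is a full machine word (a limb) and the hardware's add-with-carry handles the overflow. A minimal sketch of that idea, using 32-bit limbs (names and representation are illustrative assumptions, not a real library's API):

```python
BASE = 2 ** 32  # one "digit" is a full 32-bit machine word (a limb)

def add_limbs(a, b):
    """Add two big numbers stored as little-endian lists of 32-bit limbs.
    Each iteration mirrors one hardware add-with-carry step."""
    result = []
    carry = 0
    for i in range(max(len(a), len(b))):
        la = a[i] if i < len(a) else 0
        lb = b[i] if i < len(b) else 0
        total = la + lb + carry
        result.append(total % BASE)  # low 32 bits of the sum
        carry = total // BASE        # carry into the next limb
    if carry:
        result.append(carry)
    return result

# 2^32 + (2^32 - 1): limbs are little-endian, so [0, 1] means 1 * 2^32
print(add_limbs([0, 1], [2**32 - 1]))  # -> [4294967295, 1]
```

Working word-at-a-time instead of decimal-digit-at-a-time is why software BigInts are much faster than the string picture suggests, even without dedicated BigInt hardware.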

Seems like a fine idea to me, so why isn't there something like that?

© Stack Overflow or respective owner
