CPU Architecture and floating-point math
- by Jo-Herman Haugholt
I'm trying to wrap my head around some of the details of how floating-point math is performed on the CPU, so that I can better understand which data types to use, etc.
I think I have a fairly good understanding of how integer math is performed. If I've understood correctly, and disregarding SIMD, a 32-bit CPU will generally perform integer math at 32-bit precision or higher.
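To make concrete what I mean by "32-bit precision", here's a small C sketch (the values are just ones I picked for illustration; I'm assuming a C99 compiler):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Unsigned 32-bit arithmetic is defined to wrap modulo 2^32,
       regardless of how wide the CPU's registers actually are. */
    uint32_t x = 4000000000u;
    uint32_t y = 1000000000u;

    /* 5000000000 doesn't fit in 32 bits; it wraps to
       5000000000 - 2^32 = 705032704. */
    printf("%u\n", x + y);
    return 0;
}
```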
Is it correct that floating-point math depends on the presence of an FPU? And that the x87 FPU on x86 uses 80-bit registers, so floating-point math is performed at that precision unless SIMD is used? What about ARM?
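As an illustration of what I'm asking about, here's a little C experiment I sketched (I'm assuming GCC here; if I've read the docs right, -mfpmath=387 versus -mfpmath=sse -msse2 selects x87 versus SSE code generation):

```c
#include <stdio.h>

int main(void)
{
    /* volatile stops the compiler from folding these at compile time */
    volatile double one  = 1.0;
    volatile double tiny = 0x1p-53;  /* 2^-53: half an ulp of 1.0 */

    /* With SSE2 doubles, 1.0 + 2^-53 rounds back to 1.0 both times,
       so the sum is exactly 1.0. With x87 80-bit intermediates (and
       extended precision control, as on Linux), the sum becomes
       1 + 2^-52, which survives the store to double. */
    double sum = one + tiny + tiny;

    printf("sum == 1.0 ? %s\n", sum == 1.0 ? "yes" : "no");
    return 0;
}
```

If I understand correctly, building this with -m32 -mfpmath=387 can print "no" while -mfpmath=sse -msse2 prints "yes", which is exactly the kind of precision difference I'm trying to pin down.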