Is integer addition faster or slower than floating-point addition? How
about multiplication? How do trigonometric functions compare? Is single
precision any faster than double precision? Are 8-bit integers faster
than 16-bit integers?
Does anybody know of a resource where I can get an idea of the relative
speeds of the various arithmetic operations on different data types on
"typical" current-generation CPUs?