milank.de

On the road to reality.

April 30, 2019

The harmonic sum with posits

One plus tiny plus tiny ... equals big!

What can be so hard about summing a few numbers?

Recently, Nick Higham posted a blog article where he examined what happens if we compute the harmonic sum, which analytically diverges, with various binary number formats on a computer. Indeed, they all converge, because at some point the increment is smaller than half the distance to the next representable number in the given format. The increment is therefore rounded away, every subsequent partial sum equals the previous one, and the sum has converged.
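To see this rounding in isolation, here is a one-line illustration of mine (not from Higham's post): around 2048 the distance between representable Float16 numbers is 2, so adding anything smaller than 1 is rounded away.

```julia
Float16(2048) + Float16(0.5) == Float16(2048)   # true: the 0.5 increment is rounded away
```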

An interesting question is when that happens, and it highly depends on your number format. Most people use, without thinking about it, 64bit double precision floating-point numbers (also known as Float64, fp64, double, etc.) as it's the standard. However, the future is 16bit. Maybe also 8bit, but for many applications certainly not 64bit, because it's overkill and an unnecessarily high precision that we would rather exchange for speed. That's why Google supports 16bit on their TPUs, and more of this kind of hardware is yet to come.

From the floating-point of view there are two strong candidates: Float16 (also known as fp16, or half precision floats) and Bfloat16. The former has a higher precision of almost 4 decimal places but can only represent numbers between 6*10^-8 and 65504. The latter exchanges precision for a wider dynamic range that reaches from roughly 10^-40 up to 3*10^38. But there's also another horse in the race: posits. The posit number format concentrates a lot of precision around 1 and still has a wide dynamic range of representable numbers. The current draft standard candidate is Posit(16,1), which represents numbers between 3*10^-9 and 3*10^8, yet has a precision of almost 5 decimal places for numbers around one.
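Two of these numbers are easy to verify with Julia's built-in introspection functions for Float16 (a quick check of mine; the posit figures follow from the draft standard):

```julia
floatmax(Float16)       # 65504.0, the largest representable Float16
nextfloat(Float16(0))   # 6.0e-8, the smallest positive (subnormal) Float16
```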

Coming back to the harmonic sum: in order to compare Float16 and Bfloat16, Nick Higham computed the value the harmonic sum converges to in both number formats, and concludes that Float16 actually does a much better job, as the value of 7.0859 is reached only after 513 terms. Bfloat16, in contrast, can only sum 65 terms to yield 5.0625 before it converges. How would posits perform in comparison?

To test that, I wrote the following Julia function, importing the posit arithmetic emulator SoftPosit.jl.
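The code block itself did not survive here, so the following is a minimal sketch of what harmsum may have looked like (the return convention, number of terms plus converged value, is my assumption):

```julia
using SoftPosit

# Sum the harmonic series 1/1 + 1/2 + 1/3 + ... in the number format T
# until adding the next term no longer changes the partial sum.
function harmsum(::Type{T}) where T
    s = T(0.0)                    # partial sum, kept in format T
    i = 0
    while true
        i += 1
        snew = s + T(1/i)         # round 1/i to T, then add in T-arithmetic
        snew == s && return i, s  # converged: the increment was rounded away
        s = snew
    end
end
```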

Now executing harmsum with Float16 yields what Nick Higham calculated, too. However, doing exactly the same with Posit16 (that's the Posit(16,1) format proposed as the new posit standard for 16bit numbers) shows that posits do much better than floats: the sum only converges after 1024 terms and yields 7.78, a value that neither Float16 nor Bfloat16 can reach.
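Assuming the sketch above, the two experiments read as follows, with the results as quoted in the text:

```julia
harmsum(Float16)   # converges after 513 terms at 7.0859
harmsum(Posit16)   # converges after 1024 terms at about 7.78
```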

The posit number format also allows for a wider dynamic range in exchange for precision. The Posit(16,2) format sacrifices a little bit of the precision for numbers around one, but can represent numbers in the range from 10^-17 up to 7*10^16. In contrast to the Bfloat16 format, which converges drastically earlier than Float16, the Posit(16,2) format does even better on the harmonic series (here called Posit16_2):
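SoftPosit.jl exports this format as Posit16_2, so under the same sketch the experiment is a one-liner (the text gives no number for the result, so I leave it unquoted):

```julia
harmsum(Posit16_2)   # converges even later than Posit16, as claimed above
```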

The obvious question is: who cares about an idealised mathematical series that is somewhat irrelevant for real world applications running on big supercomputers? I would claim there is at least a little bit to learn from this. In fact, regarding the harmonic sum as "adding a lot of small numbers to a big number", this is indeed something that happens often on many supercomputers. Think about a weather forecast model, for example, where the state of the atmosphere is represented by a whole bunch of numbers. These models use time steps on the order of minutes, and as you know from your own experience, the weather, in most cases, does not change much from one minute to the next. In terms of computing, this means that a rather large number (say, the temperature at a given location) gets updated with a lot of tiny increments throughout a day, which eventually add up to the change from a freezing morning to a beautifully warm and sunny afternoon. Quite similar to the harmonic sum, therefore.
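As a hypothetical toy example of mine (not from the original post), consider warming a Float16 temperature of 270 K by 0.01 K per minute: around 270 the gap between representable Float16 numbers is 0.25, so every single increment is rounded away.

```julia
# Warm a temperature t by 0.01 K per one-minute time step, all in Float16.
function warmup(t::Float16, minutes::Int)
    for _ in 1:minutes
        t += Float16(0.01)   # each increment is below half the Float16 gap at 270
    end
    return t
end

warmup(Float16(270), 600)    # returns Float16(270.0): 6 K of warming lost to rounding
```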

Posits are not yet supported by high performance computing architectures, but this should change rapidly in the future.