[Pharo-dev] summary of "float & fraction equality bug"

raffaello.giulietti at lifeware.ch
Sat Nov 11 05:58:44 EST 2017


On 2017-11-10 22:18, Nicolas Cellier wrote:
> 
> 
> 2017-11-10 20:58 GMT+01:00 Martin McClure <martin at hand2mouse.com>:
> 
>     On 11/10/2017 11:33 AM, raffaello.giulietti at lifeware.ch wrote:
> 
>         Doing only Fraction->Float conversions in mixed mode won't
>         preserve = as
>         an equivalence relation and won't enable a consistent ordering
>         with <=,
>         which probably most Smalltalkers consider important and enjoyable
>         properties.
> 
>     Good point. I agree that Float -> Fraction is the more desirable
>     mode for implicit conversion, since it can always be done without
>     changing the value.
> 
>         Nicolas gave some convincing examples on why most
>         programmers might want to rely on them.
> 
> 
>         Also, as I mentioned, most Smalltalkers might prefer keeping
>         away from
>         the complex properties of Floats. Doing automatic, implicit
>         Fraction->Float conversions behind the scenes only exacerbates the
>         probability of encountering Floats and of having to deal with their
>         weird and unfamiliar arithmetic.
> 
>     One problem is that we make it easy to create Floats in source code
>     (0.1), and we print Floats in a nice decimal format but by default
>     print Fractions in their reduced fractional form. If we didn't do
>     this, Smalltalkers might not be working with Floats in the first
>     place, and if they did not have any Floats in their computation they
>     would never run into an implicit conversion to *or* from Float.
> 
>     As it is, if we were to uniformly do Float -> Fraction conversion on
>     mixed-mode operations, we would get things like
> 
>     (0.1 * (1/1)) printString                       -->
>     '3602879701896397/36028797018963968'
> 
>     Not incredibly friendly.
> 
> 
> For those not practicing litotes: definitely a no-go.
> 
> 
>     Regards,
>     -Martin
> 
> 
> At the risk of repeating myself, a unique choice for all operations is a
> nice-to-have but not a goal per se.
> 
> I mostly agree with Florin: having 0.1 represent a decimal rather
> than a Float might be a better path, meeting more expectations.
> 
> But thinking that it will magically eradicate the problems is a myth.
> 
> Shall precision be limited, or shall we use ScaledDecimals?
> 
> With limited precision, we'll be back to having several Fractions
> converting to the same LimitedDecimal, so it won't solve anything w.r.t.
> the original problem.
> With unlimited precision, we'll have the bad property that what we print
> does not re-interpret to the same ScaledDecimal (0.1s / 0.3s), but
> that's a detail.
> The worst thing is that long chains of operations will tend to produce
> monster numerators and denominators.
> And we will all have long chains when resizing a morph with proportional
> layout in scalable graphics.
> We will then have to insert rounding operations manually to mitigate
> the problem, and somehow reinvent a more inconvenient Float...
> It's boring to always play the role of Cassandra, but why do you think
> that Scheme and Lisp did not choose that path?
> 

I don't know why Lispers went this way. They justify the conversion choice
for comparison (it keeps = an equivalence relation and <= a total ordering),
but for the arithmetic operations there does not seem to be a clearly
stated rationale.
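
To make the comparison point concrete, here is a small sketch. It assumes
the Fraction->Float rule for mixed =, i.e. exactly the choice under
discussion, not necessarily what your image answers today:

    (1/10) = 0.1
        -->  true under Fraction->Float comparison, since (1/10) asFloat = 0.1
    (3602879701896397/36028797018963968) = 0.1
        -->  true, this Fraction is the exact value of the Float 0.1
    (1/10) = (3602879701896397/36028797018963968)
        -->  false, two exact Fractions compare exactly

Two distinct Fractions each compare equal to the same Float but not to each
other, so = is no longer transitive. Converting the Float to a Fraction
instead keeps = an equivalence relation, at the price of (1/10) = 0.1
answering false.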

Are they happy with their choice? I haven't the foggiest idea.

Don't get me wrong: I understand the Fraction->Float choice for the sake
of more speed.

But this increase in speed is not functionally transparent: it is not like
an increase in the clock rate of a CPU, a better-performing division
algorithm, a faster sorting implementation, or a JIT compiler.

This increase in speed, rather, comes at the cost of a shift in
functionality and of extra cognitive load, because even + or 0.1 no longer
have their conventional semantics. It's not that Floats are wrong in
themselves; it's that their operations seem weird at first and at odds
with familiar behavior.
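
A two-line example of what I mean (plain expressions, nothing hypothetical
here):

    0.1 + 0.2 = 0.3              -->  false, each literal is a binary approximation
    (1/10) + (2/10) = (3/10)     -->  true, Fraction arithmetic is exact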



Again, I'm not addressing the experts in floating point computation
here: they know how to deal with it. Thus, I expect that the developers
of numerically intensive libraries or packages, like 2D or 3D graphics,
are fully aware of the trade-offs and are in a position to make an
explicit choice: either monster denominators in fractions or more care
with floats, but always in control.
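
To picture the first horn of that trade-off, a deliberately artificial
sketch (the loop and the starting value are mine, purely illustrative):

    | x |
    x := 1/3.
    10 timesRepeat: [ x := (x * x) + (1/3) ].
    x denominator printString size
        -->  489, the denominator already has hundreds of decimal digits

Floats keep this bounded by rounding at every step; exact Fractions make
you do that rounding yourself.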

Here, I'm targeting the John Doe Smalltalker who usually uses well-crafted,
well-performing libraries and probably performs only a few thousand mixed
Fraction/Float computations on his own, where speed trade-offs do not
matter. He is better served by implicit, automatic Float->Fraction
conversions in both comparisons and operations.

For him, less cognitive load and fewer surprises are probably an added value.
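
For illustration, this is the kind of behavior I mean, assuming uniform
Float->Fraction conversion (a sketch of the proposed rule, not of what the
image currently answers; I use asExactFraction for the exact conversion):

    0.1 asExactFraction
        -->  3602879701896397/36028797018963968
    0.1 asExactFraction asFloat = 0.1
        -->  true, the conversion is lossless, as Martin noted
    (1/10) = 0.1
        -->  false, and = stays an equivalence relation
    (1/10) < 0.1
        -->  true, the Float literal is slightly above 1/10

The visible cost is the unfriendly printing Martin showed above.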




