Discussion:
Is this a bug in LISP?
Paul G
2010-09-05 06:10:20 UTC
Hi,

I'm using GNU CLISP 2.48. I've caught LISP making a pretty grievous
math error, and I don't know if it's a bug or if there's another
explanation.

I input this line:
(+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)

And I get this result:
32.800003

Obviously this answer is off by .000003. Could somebody explain to me
why this is, and if there's a way to add these numbers correctly using
CLISP?
Teemu Likonen
2010-09-05 06:37:18 UTC
Post by Paul G
I'm using GNU CLISP 2.48. I've caught LISP making a pretty grievous
math error, and I don't know if it's a bug or if there's another
explanation.
(+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
32.800003
Obviously this answer is off by .000003. Could somebody explain to me
why this is, and if there's a way to add these numbers correctly using
CLISP?
Floating-point numbers are not exact; they have limited precision
because their machine implementation uses a fixed number of bits. This
behavior is not specific to CLISP. See:

"What Every Computer Scientist Should Know About Floating-Point
Arithmetic"

http://docs.sun.com/source/806-3568/ncg_goldberg.html

In Common Lisp you could use ratios if you need exact math:

(+ 1/5 2/5 1/5 1/5 9 4/5 15 2/5 1 1/5 5 2/5)
=> 164/5
Teemu Likonen
2010-09-06 13:39:23 UTC
Post by Teemu Likonen
Post by Paul G
(+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
And I get this result: 32.800003
(+ 1/5 2/5 1/5 1/5 9 4/5 15 2/5 1 1/5 5 2/5)
=> 164/5
If you get those floating point numbers from a user or some other
source which you can't control, you should probably RATIONALIZE the
numbers before doing any calculations (that is, if floating-point
numbers are not accurate enough for you).

(mapcar #'rationalize '(0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4))
=> (1/5 2/5 1/5 1/5 49/5 77/5 6/5 27/5)

(apply #'+ *)
=> 164/5

(float *)
=> 32.8
Stanisław Halik
2010-09-06 14:54:56 UTC
On 2010-09-06 15:39, Teemu Likonen wrote:
you should probably RATIONALIZE the numbers

Or rather, RATIONAL the numbers [sic], since RATIONALIZE is imprecise.
Teemu Likonen
2010-09-06 15:25:40 UTC
Post by Stanisław Halik
Post by Teemu Likonen
you should probably RATIONALIZE the numbers
Or rather, RATIONAL the numbers [sic], since RATIONALIZE is imprecise.
That depends on how we interpret "accuracy" and perhaps also where the
numbers come from. I believe RATIONALIZE is what the original poster
wants here.

(mapcar #'rationalize '(0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4))
=> (1/5 2/5 1/5 1/5 49/5 77/5 6/5 27/5)

(mapcar #'rational '(0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4))
=> (13421773/67108864 13421773/33554432 13421773/67108864
13421773/67108864 10276045/1048576 8074035/524288
5033165/4194304 11324621/2097152)
kodifik
2010-09-06 15:45:28 UTC
Post by Teemu Likonen
Post by Stanisław Halik
Post by Teemu Likonen
you should probably RATIONALIZE the numbers
Or rather, RATIONAL the numbers [sic], since RATIONALIZE is imprecise.
That depends on how we interpret "accuracy" and perhaps also where the
numbers come from. I believe RATIONALIZE is what the original poster
wants here.
    (mapcar #'rationalize '(0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4))
    => (1/5 2/5 1/5 1/5 49/5 77/5 6/5 27/5)
    (mapcar #'rational '(0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4))
    => (13421773/67108864 13421773/33554432 13421773/67108864
        13421773/67108864 10276045/1048576 8074035/524288
        5033165/4194304 11324621/2097152)
Use of RATIONAL can be reasonable, however, inside a wrapper:

(defun myadd (&rest nums)
  (float (reduce #'+ (mapcar #'rational nums))))

...so that: (myadd 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4) --> 32.8
Paul G
2020-08-26 05:22:22 UTC
Post by kodifik
Post by Teemu Likonen
Post by Stanisław Halik
Post by Teemu Likonen
you should probably RATIONALIZE the numbers
Or rather, RATIONAL the numbers [sic], since RATIONALIZE is imprecise.
That depends on how we interpret "accuracy" and perhaps also where the
numbers come from. I believe RATIONALIZE is what the original poster
wants here.
(mapcar #'rationalize '(0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4))
=> (1/5 2/5 1/5 1/5 49/5 77/5 6/5 27/5)
(mapcar #'rational '(0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4))
=> (13421773/67108864 13421773/33554432 13421773/67108864
13421773/67108864 10276045/1048576 8074035/524288
5033165/4194304 11324621/2097152)
(defun myadd (&rest nums) (float (reduce (function +) (mapcar
(function rational) nums))))
...so that: (myadd 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4) --> 32.8
Very cool, thank you!
Paul G
2020-08-26 05:20:07 UTC
Post by Teemu Likonen
Post by Teemu Likonen
Post by Paul G
(+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
And I get this result: 32.800003
(+ 1/5 2/5 1/5 1/5 9 4/5 15 2/5 1 1/5 5 2/5)
=> 164/5
If you get those floating point numbers from user or some other source
which you can't control you should probably RATIONALIZE the numbers
before doing any calculations (that is, if floating point numbers are
not accurate enough for you).
(mapcar #'rationalize '(0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4))
=> (1/5 2/5 1/5 1/5 49/5 77/5 6/5 27/5)
(apply #'+ *)
=> 164/5
(float *)
=> 32.8
Thank you very much for the reply!
Aleksander Nabagło
2010-09-06 07:27:19 UTC
!
Post by Paul G
Hi,
I'm using GNU CLISP 2.48. I've caught LISP making a pretty grievous
math error, and I don't know if it's a bug or if there's another
explanation.
(+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
32.800003
Obviously this answer is off by .000003. Could somebody explain to me
why this is, and if there's a way to add these numbers correctly using
CLISP?
;; Dribble of #<IO TERMINAL-STREAM> started on NIL.

#<OUTPUT BUFFERED FILE-STREAM CHARACTER #P"float-format.out">
[2]> (+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
32.800003
[3]> (setf *read-default-float-format* 'double-float)
DOUBLE-FLOAT
[4]> (+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
32.800000000000004
[5]> (setf *read-default-float-format* 'long-float)
LONG-FLOAT
[6]> (+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
32.800000000000000003
[7]> (setf (ext:long-float-digits) 70)
70
[8]> (+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
32.800000000000000000000000001
[9]> (quit)
--
A
.
Sam Steingold
2010-09-07 14:37:58 UTC
Post by Paul G
I'm using GNU CLISP 2.48. I've caught LISP making a pretty grievous
math error, and I don't know if it's a bug or if there's another
explanation.
(+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
32.800003
Obviously this answer is off by .000003. Could somebody explain to me
why this is, and if there's a way to add these numbers correctly using
CLISP?
http://clisp.cons.org/impnotes/faq.html#faq-fp

Floating-point arithmetic is inherently inexact, so this is not a bug, at least
not a bug in CLISP....
Giovanni Gigante
2010-09-07 15:40:20 UTC
Post by Sam Steingold
Floating point arithmetic is inherently inexact
I've discovered that the reason why this is not always apparent is that
some languages tend to sweep those embarrassing little digits under the
carpet.

For example, I did this on sbcl and it produced the usual sad result:

(+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
32.800003

and slightly better, but still unpleasant, with double precision:

(+ 0.2d0 0.4d0 0.2d0 0.2d0 9.8d0 15.4d0 1.2d0 5.4d0)
32.800000000000004d0

But I was surprised that perl, on that same machine, seemed more correct:

perl -e 'print 0.2+0.4+0.2+0.2+9.8+15.4+1.2+5.4;'
32.8

At first I thought that perl implemented some clever numeric magic, but
in fact the reason is this:
"When Perl is told to print a floating-point number but not told the
precision, it automatically rounds that number to however many decimal
digits of precision that your machine supports."

So, let's lift the carpet...

perl -e 'printf("%.20g", (0.2+0.4+0.2+0.2+9.8+15.4+1.2+5.4));'
32.800000000000004

No escape. It's just that Lisp is more honest.
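(For completeness, the same carpet-lifting can be done from Lisp itself by asking FORMAT for more digits than the default printer shows. A minimal sketch; the 20-digit count is an arbitrary choice, and the exact trailing digits depend on the implementation's ~F:)

```lisp
;; Print the double-float sum with far more fractional digits than the
;; default shortest-round-trip printer would show:
(format t "~,20F~%"
        (+ 0.2d0 0.4d0 0.2d0 0.2d0 9.8d0 15.4d0 1.2d0 5.4d0))
```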
Robert Maas, http://tinyurl.com/uh3t
2010-09-07 22:04:03 UTC
Post by Paul G
I'm using GNU CLISP 2.48. I've caught LISP making a pretty grievous
math error, and I don't know if it's a bug or if there's another
explanation.
(+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
which converts each of those decimal-digit expressions into
floating-point binary approximations. In particular, not one of
those eight values you entered can be expressed exactly in binary.
(If you don't believe me, try to find an exact binary value that
equals any one of those, and report which exact binary value you
claim exactly equals which of the decimal values there.)
Then seven floating-point-approximate additions are done before
yielding the grand total, each of those possibly causing additional
roundoff errors. Finally the binary result is converted back to
decimal notation, with additional conversion error, to print the
result. So that's a total of nine (9) times you absolutely cannot
get the correct result so some roundoff *must* occur, and seven (7)
times you might also get roundoff error. Only an idiot would expect
the final result to print exactly how it'd be if you did all the
arithmetic by hand using decimal notation.
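(This is easy to check at the REPL: RATIONAL returns the exact value a float actually stores, and for 0.2 the denominator is a power of two, not a power of ten:)

```lisp
;; The exact rational value stored for the single-float 0.2 --
;; close to 1/5, but not equal to it:
(rational 0.2)   ; => 13421773/67108864
```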
Post by Paul G
32.800003
Given all the points of definite conversion approximation or
addition roundoff error, that's a pretty good result.
Post by Paul G
Obviously this answer is off by .000003.
That's a pretty small amount of conversion+roundoff error, given
all those operations you asked it to do, all those errors you
allowed to accumulate.
Post by Paul G
Could somebody explain to me why this is,
Why do I need to explain it to you? Don't you have the slightest
concept of decimal and binary notational systems, and the wisdom to
know that values in one system generally cannot be expressed
exactly in the other system, and indeed specifically that *none* of
the eight input values you gave above can be expressed exactly in
binary?
Post by Paul G
and if there's a way to add these numbers correctly using CLISP?
Sure. Write a decimal-arithmetic package, which will be several
orders of magnitude slower than machine floating-point arithmetic,
but give you exact answers so long as you only add, subtract, and
multiply (never divide). But why would anybody waste their time
doing that?

Alternately, write an interval-arithmetic package. That will give
you **correct** upper and lower bounds on the result. It won't tell
you the exact answer, but it'll tell you a narrow range in which
the exact answer lies, and that answer-interval will be absolutely
correct, and that answer-interval will also give you an upper bound
on the amount of error between the exactly-correct answer and the
midpoint of the answer-interval. With a proper interface to IEEE
rounding modes, it will be nearly as efficient as ordinary
floating-point arithmetic. Or without such an interface, doing the
arithmetic using binary integers with liberal use of FLOOR and
CEILING, it'll be considerably slower than floating-point, but
still much faster than decimal arithmetic.
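(A toy sketch of that idea, using exact rational bounds instead of IEEE rounding modes. MAKE-INTERVAL, INTERVAL+, and FLOAT-INTERVAL are made-up names for illustration, not an existing library, and the slack of one part in 10^6 is an arbitrary choice:)

```lisp
;; Toy interval arithmetic with exact rational bounds.
(defun make-interval (lo hi) (cons lo hi))

;; Adding two intervals: add the lower bounds and the upper bounds.
(defun interval+ (a b)
  (make-interval (+ (car a) (car b))
                 (+ (cdr a) (cdr b))))

;; Widen a float to an interval that certainly contains the decimal
;; value the user meant (slack of +/- one part in a million).
(defun float-interval (x &optional (slack 1/1000000))
  (let ((r (rational x)))
    (make-interval (- r (* (abs r) slack))
                   (+ r (* (abs r) slack)))))

;; The exact sum of the eight inputs is guaranteed to lie inside:
(reduce #'interval+
        (mapcar #'float-interval '(0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)))
```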

Ideally you can write a lazy-evaluation continuation-style interval
arithmetic package, whereby it first calculates a crude set of
bounds, then any time later you can ask it to extend that work to
generate more and more accurate bounds. I started work on such a
system several years ago, but nobody showed interest, and nobody
offered to pay me for my work; but if you offer to pay me for what I
did already and pay me to finish the work, I'm available. For some
demos of what I was working on back then, see:
http://www.rawbw.com/~rem/IntAri/

Bottom line: You haven't caught LISP making any math error, but
I've caught you making a pretty grievous error in understanding
what exactly you asked Lisp to calculate for you.
Pascal J. Bourguignon
2010-09-08 00:00:25 UTC
Post by Robert Maas, http://tinyurl.com/uh3t
Post by Paul G
I'm using GNU CLISP 2.48. I've caught LISP making a pretty grievous
math error, and I don't know if it's a bug or if there's another
explanation.
(+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
which converts each of those decimal-digit expressions into
floating-point binary approximations. In particular not one of
those eight values you entered can be expresssed exactly in binary.
(If you don't believe me, try to find an exact binary value that
equals any one of those, and report which exact binary value you
claim exactly equals which of the decimal values there.)
Then seven floating-point-approximate additions are done before
yielding the grand total, each of those possibly causign additional
roundoff errors. Finally the binary result is converted back to
decimal notation, with additional conversion error, to print the
result. So that's a total of nine (9) times you absolutely cannot
get the correct result so some roundoff *must* occur, and seven (7)
times you might also get roundoff error. Only an idiot would expect
the final result to print exactly how it'd be if you did all the
arithmetic by hand using decimal notation.
In the old languages, there were two kinds of numbers, binary and
decimal. So the 'idiots' (not doing scientific computation) could use
the decimal numbers and get the 'right' answers.

This most often occurs in financial computing, where the unit is not
the dollar or euro but actually the cent, and all amounts are not
floating-point or real numbers, but integer numbers of cents.
Post by Robert Maas, http://tinyurl.com/uh3t
Post by Paul G
32.800003
(COM.INFORMATIMAGO.COMMON-LISP.INVOICE::+ #m0.2 #m0.4 #m0.2 #m0.2 #m9.8 #m15.4 #m1.2 #m5.4)
32.80 EUR
Post by Robert Maas, http://tinyurl.com/uh3t
Post by Paul G
Could somebody explain to me why this is,
Why do I need to explain it to you? Don't you have the slighest
concept of decimal and binary notational systems, and the wisdom to
know that values in one system generally cannot be expressed
exactly in the other system, and indeed specifically that *none* of
the eight input values you gave above can be expressed exactly in
binary?
When I was in primary school, we learned bases and base conversion
along with decimal arithmetic. But that was a long time ago, I hear
nowadays, they're not able to teach pupils even to count in base
ten...
Post by Robert Maas, http://tinyurl.com/uh3t
Post by Paul G
and if there's a way to add these numbers correctly using CLISP?
Sure. Write a decimal-arithmetic package, which will be several
orders of magnitude slower than machine floating-point arithmetic,
but give you exact answers so long as you only add subtract and
multiply (never divide). But why would anybody waste their time
doing that?
It wouldn't necessarily be slower:

- some hardware has decimal integer or decimal floating-point support.

- nowadays computers are so fast that, instead of waiting for memory
doing nothing, they could just as well spend some time doing decimal
arithmetic meanwhile.

There's no need to waste much time anyway: there are libraries (or
chunks of code to be salvaged). And we would be doing that to avoid
spending a lot of time explaining to newbies why (+ 0.2 15.4) is not
15.6, and just point them to the decimal arithmetic library instead.
Post by Robert Maas, http://tinyurl.com/uh3t
Bottom line: You haven't caught LISP making any math error, but
I've caught you making a pretty grievous error in understanding
what exactly you asked Lisp to calculate for you.
--
__Pascal Bourguignon__ http://www.informatimago.com/
Aleksander Nabagło
2010-09-08 14:01:33 UTC
!
Post by Pascal J. Bourguignon
In the old languages, there were two kinds of numbers, binary and
decimal. So the 'idiots' (not doing scientific computation) could use
the decimal numbers and get the 'right' answers.
This most often occurs in financial computing, where the units are not
the dollar or euro, but actually the cent, and all amounts are not
floating points or real, but integer numbers of cents.
Yes, first they practice hiding fractions of cents,
and eventually some of them master hiding millions or billions of dollars.
--
A
.
Pascal J. Bourguignon
2010-09-08 19:00:51 UTC
Post by Aleksander Nabagło
!
Post by Pascal J. Bourguignon
In the old languages, there were two kinds of numbers, binary and
decimal. So the 'idiots' (not doing scientific computation) could use
the decimal numbers and get the 'right' answers.
This most often occurs in financial computing, where the units are not
the dollar or euro, but actually the cent, and all amounts are not
floating points or real, but integer numbers of cents.
Yes, they hardly excercise to hide fractions of cents
and finally some of them master to hide millions or billions of dollars.
That's the point. If money weren't an integer number of cents,
multiplying it by fractional tax rates would just add decimals, and
there would be no rounding to do. It's because the amounts are
integers that rounding has to occur, and a fractional remainder has
to be dealt with (legally or illegally).
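(A small illustration with made-up numbers: with integer cents and an exact rational rate, the leftover fraction is explicit:)

```lisp
;; $4.95 as 495 integer cents, taxed at an exact 8/100 rate:
(let ((price-cents 495)
      (rate 8/100))
  (multiple-value-bind (tax-cents leftover)
      (floor (* price-cents rate))
    (list tax-cents leftover)))
;; => (39 3/5): 39 whole cents of tax, plus the 3/5 of a cent
;; that somebody has to round away, one way or the other.
```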
--
__Pascal Bourguignon__ http://www.informatimago.com/
Robert Maas, http://tinyurl.com/uh3t
2010-09-12 06:04:06 UTC
If money wasn't an integer number of cents, multiplying it by
fractionnal tax rates would just add decimals, and there would be
no rounding to do. It's because they're integer, that rounding has
to occur, and a fractionnal remainder has to be dealt with (legally
or illegally).
With the old paper system, anything less than a penny was hardly
worth the trouble of writing a bill and collecting it. But with the
InterNet, and CPU time so dreadfully cheap, and online service
providers able to measure charges to precision of microsecond or
nanosecond, and some server applications taking much less than a
second of CPU time to perform a task, charges for online services
could easily be orders of magnitude smaller than a cent. As a
result, this idea that all money is a multiple of a cent might be
changing very soon. For example, I'm currently building
http://TinyURL.Com/NewEco which uses one second of human time (at
exchange rate of two cents per nine seconds, i.e. $8/hr) as the
unit of currency for human labor, and one millisecond as unit of
currency for PHP-script time, typically 5-10 milliseconds for
simple script-runs, maximum 100 milliseconds charge for any
pure-PHP script even if it by chance takes longer. (Bids on
fixed-time contracts will always be multiples of ten seconds, so
that shaving one or two seconds off somebody else's bid won't lure
bidders into a death-walk into infeasible contracts. Unless the
contract time is a multiple of 90 seconds, it won't come out an
integer number of cents at the exchange rate.)

But still, whatever the unit of currency, all charges will be exact
integer multiples of that unit, so really it's just the same as the
old system scaled down to a much smaller unit of currency.

As to the original poster, as Pascal pointed out: it's not correct
to use floating-point values of dollars to make calculations that
are supposed to be exact integer multiples of cents. Correct is to
use integer numbers of cents directly in the calculations, i.e.
convert all data entry immediately (by syntax transformation,
probably, although reading as floating-point then multiplying by
100 and rounding does work, gack) from $4.95 to 495, for example,
and then convert the final result back to dollar.cc notation.

Nit: I almost wish that Common Lisp, upon seeing decimal-fraction
input (to READ), would generate a rational number, such as 4.95 =
495/100, and then you could "have your cake and eat it too", not
having to convert to cents but nevertheless the calculations are
exact.
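(Both routes fit in a couple of lines; RATIONALIZE recovers the simple decimal fraction, unlike the exact-but-ugly values RATIONAL showed earlier in the thread:)

```lisp
;; The "gack" route: read as a float, scale to cents, round.
(round (* 4.95 100))   ; => 495 (cents)

;; What the nit wishes READ did automatically: 4.95 as an exact ratio.
(rationalize 4.95)     ; => 99/20, i.e. 495/100 reduced
```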
Pascal J. Bourguignon
2010-09-08 00:04:50 UTC
Post by Robert Maas, http://tinyurl.com/uh3t
Post by Paul G
and if there's a way to add these numbers correctly using CLISP?
Sure. Write a decimal-arithmetic package, which will be several
orders of magnitude slower than machine floating-point arithmetic,
but give you exact answers so long as you only add subtract and
multiply (never divide). But why would anybody waste their time
doing that?
For example, somebody could waste their time doing that to avoid
killing people, since lay programmers are too dumb to do the correct
thing in the first place:

http://www.ima.umn.edu/~arnold/disasters/patriot.html
--
__Pascal Bourguignon__ http://www.informatimago.com/
Paul G
2010-09-08 04:01:36 UTC
Thanks to everyone who replied!

Teemu, Stanislaw, and Kodifik, thanks in particular for showing me how
to do the calculation using rational or rationalize.

Sam, thanks for the link. I tried using google to find an explanation
before posting here, but I didn't find that particular faq.

Robert, you labeled me an idiot because I expected LISP to be able to
perform mathematical operations on par with a pocket calculator. Your
post was informative, but also rude and obnoxious. I suggest you read
up on netiquette. Here's a reference to get you started:
http://www.ietf.org/rfc/rfc1855.txt.
Post by Robert Maas, http://tinyurl.com/uh3t
Bottom line: You haven't caught LISP making any math error, but
I've caught you making a pretty grievous error in understanding
what exactly you asked Lisp to calculate for you.
*Sigh*. Now I know that I'm dealing with a computer scientist, and
not a mathematician. How can you assert that adding a series of
numbers and displaying the wrong result is not a math error? Next
you'll probably try to convince me that there are 1024 meters in a
kilometer.
Teemu Likonen
2010-09-08 12:26:24 UTC
Post by Paul G
Teemu, Stanislaw, and Kodifik, thanks in particular for showing me how
to do the calculation using rational or rationalize.
Also note that if you are reading decimal numbers from text files or
interactively from a user, and want to interpret the numbers as exact
values, you could convert such input number strings directly to
rationals. Going through floating point and then RATIONALIZE'ing the
values is inexact once you hit the limits of floating-point precision.

For going from input strings to rationals directly I have these in my
personal library:


(defun string-to-integer (string)
  (loop for i = 1 then (* i 10)
        for c across (reverse string)
        unless (char<= #\0 c #\9) return nil
        sum (* i (position c "0123456789"))))


(defun string-to-fractional (string)
  (loop for i = 1/10 then (/ i 10)
        for c across string
        unless (char<= #\0 c #\9) return nil
        sum (* i (position c "0123456789"))))


(defun read-number-from-string (string &optional (decimal-separator #\.))
  (setf string (string-trim '(#\Space #\Tab) string))
  (when (plusp (length string))
    (let ((sign 1))
      (cond ((find (aref string 0) "-–−")
             (setf sign -1
                   string (subseq string 1)))
            ((eql (aref string 0) #\+)
             (setf string (subseq string 1))))
      (when (and (every #'(lambda (item)
                            (or (char<= #\0 item #\9)
                                (char= item decimal-separator)))
                        string)
                 (<= 0 (count decimal-separator string) 1))
        (let ((pos (position decimal-separator string)))
          (* sign (+ (string-to-integer (subseq string 0 pos))
                     (if pos
                         (string-to-fractional (subseq string (1+ pos)))
                         0))))))))
Raffael Cavallaro
2010-09-08 14:20:13 UTC
Post by Paul G
*Sigh*. Now I know that I'm dealing with a computer scientist, and
not a mathematician. How can you assert that adding a series of
numbers and displaying the wrong result is not a math error?
So I'm guessing that the G is not for "Graham" then. ;^)


warmest regards,

Ralph
--
Raffael Cavallaro
Thomas A. Russ
2010-09-08 17:49:14 UTC
Post by Paul G
*Sigh*. Now I know that I'm dealing with a computer scientist, and
not a mathematician. How can you assert that adding a series of
numbers and displaying the wrong result is not a math error?
But it isn't displaying the wrong result. It is just not displaying the
result you expect. The problem is actually more like a display error in
not showing the full binary value of the floating point number 0.1, for
example. But for most uses, that would be a lot less friendly than the
engineering decision to show a decimal approximation of the actual
binary floating point number.

Real dyed-in-the-wool computer scientists might actually be happier
showing the binary (or hex?) values instead. But that generally
wouldn't be as friendly.
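(Lisp will in fact show the raw binary pieces on request; INTEGER-DECODE-FLOAT returns the significand, exponent, and sign, which for the single-float 0.2 recovers exactly the power-of-two ratio quoted earlier in the thread:)

```lisp
;; Significand, exponent, and sign of the single-float 0.2:
(integer-decode-float 0.2)
;; => 13421773, -26, 1, i.e. 13421773 * 2^-26 = 13421773/67108864
```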

From the mathematical point of view, one has to realize that the
notation shown for floating point is one of those conventions that you
need to learn. For example, most mathematicians would not look kindly
on people complaining that:

1.3̄ + 1.6̄ is not exactly 1.9 (without the bar).

As a practical matter, I do think it would be a Good Idea(tm) for
programming languages to introduce a new primitive numeric type
"decimal" to be used by most people in place of "float" or "double".
Decimal would be an exact decimal fractional number. It should be
similar to the Decimal class in Java, but accorded primitive status just
like int or double. Actually, while up on my soap box, I would also
suggest that there be an unlimited precision integer called "integer" or
"int" and that the current limited precision varieties be renamed with
their precision such as mod32int or mod64int to remind casual
programmers that they wrap around when they get too big.

At least in lisp
(+ 2000000000 2000000000) => 4000000000

Compare Java's

int s = 2000000000 + 2000000000;
System.out.println(s);

-294967296

Now, there's a math error!
Post by Paul G
Next
you'll probably try to convince me that there are 1024 meters in a
kilometer.
That would be a kibimeter. ;)
--
Thomas A. Russ, USC/Information Sciences Institute
Pascal J. Bourguignon
2010-09-08 19:05:07 UTC
Post by Thomas A. Russ
For example, most mathematicians would not look kindly
1.3̄ + 1.6̄ is not exactly 1.9 (without the bar).
2.9
Post by Thomas A. Russ
As a practial matter, I do think it would be a Good Idea(tm) for
programming languages to introduce a new primitive numeric type
"decimal" to be used by most people in place of "float" or "double".
Decimal would be an exact decimal fractional number.
Indeed. When I was designing languages I had such a type constructor
(my language didn't have any pre-defined types, so it would be
platform neutral. There was even a declaration to indicate what type
the tested conditions ("booleans") would have to have).
Post by Thomas A. Russ
It should be
similar to the Decimal class in Java, but accorded primitive status just
like int or double. Actually, while up on my soap box, I would also
suggest that there be an unlimited precision integer called "integer" or
"int" and that the current limited precision varieties be renamed with
their precision such as mod32int or mod64int to remind casual
programmers that they wrap around when they get to big.
I would even rename them modulo_4294967296_integer to make the
point clearer, and discourage their use even more.
--
__Pascal Bourguignon__ http://www.informatimago.com/
Robert Maas, http://tinyurl.com/uh3t
2010-09-12 06:50:57 UTC
How can you assert that adding a series of numbers and displaying
the wrong result is not a math error?
You still don't "get it"! You did *not* ask the computer to add
some decimal numbers. You asked the computer to call the READ
function to parse your input from external (decimal-fraction) to
internal (binary floating-point) form, *then* you asked it to add
those floating-point values, then you asked it to pass the result
to PRINT which converts binary floating-point to decimal-fraction
notation. If you knew what you were really asking the computer to
do, you wouldn't have thought it was doing it wrong.
Next you'll probably try to convince me that there are 1024
meters in a kilometer.
Absolutely not! I know that (per convention) "k" means 1000 while
"K" means 1024, and I know the difference between the two. Computer
memory is measured in powers of two, so "K" is a useful unit of
measurement, while the so-called metric system is measured in
powers of ten, so "k" is a useful unit of measurement. But it
amuses me when computer advertisements use "k" instead of "K" so
that their computers will seem to have more memory than some other
computer that more honestly uses "K". (For example, 65k bytes
sounds like more than 64K bytes, like you're getting an extra 1k
bytes for free, but really you're getting exactly 65536 bytes on
either machine.) Which reminds me: I recently bought a USB thumb
drive that has 4 gigabytes of memory, which I presume means 4 *
1024 * 1024 * 1024 bytes of raw data, minus a few percent for the
sector formatting, leaving somewhat less than 4,000,000,000 bytes
that are actually available for my data, right? (With RAM you get
100% of the advertised data capacity, with any disk there's
sector-formatting overhead. I still use 800k diskettes on my Mac,
which are only 779k bytes of usable data in Mac format, 640k bytes
of usable data in DOS/Windows format. By the way, does anybody have
an old laptop that has both a USB port and a working diskette drive
that they would sell me really cheap so that I can copy data
between thumb drive and diskette at my convenience instead of only
when the public computer lab is open?)

By the way, when I was in college I was so fed up with years being
sometimes 365 days and sometimes 366 days, yet people talking as if
one calendar "year" was an exact unit of time, as if 1965.Jan.15
was exactly one **year** after 1964.Jan.15, and people celebrating
their birthday starting at midnight local time instead of at the
true time per Earth's revolution around the Sun, and of course the
problems of people born on Feb.29 of any leap year so are only
about 1/4 as old as they really are, that I decided to calculate my
own age in kilotags (1000 day units) instead of "years", and to
celebrate my birth every 1000 days. Unfortunately I couldn't
convince anyone else to adopt my idea.
Pascal J. Bourguignon
2010-09-12 07:04:51 UTC
Post by Robert Maas, http://tinyurl.com/uh3t
How can you assert that adding a series of numbers and displaying
the wrong result is not a math error?
You still don't "get it"! You did *not* ask the computer to add
some decimal numbers. You asked the computer to call the READ
function to parse your input from external (decimal-fraction) to
internal (binary floating-point) form, *then* you asked it to add
those floating-point values, then you asked it to pass the result
to PRINT which converts binary floating-point to decimal-fraction
notation. If you knew what you were really asking the computer to
do, you wouldn't have thought it was doing it wrong.
Next you'll probably try to convince me that there are 1024
meters in a kilometer.
Absolutely not! I know that (per convention) "k" means 1000 while
"K" means 1024, and I know the difference between the two. Computer
memory is measured in powers of two, so "K" is a useful unit of
measurement, while the so-called metric system is measured in
powers of ten, so "k" is a useful unit of measurement. But it
amuses me when computer advertisements use "k" instead of "K" so
that their computers will seem to have more memory than some other
computer that more honestly uses "K". (For example, 65k bytes
sounds like more than 64K bytes, like you're getting an extra 1k
bytes for free, but really you're getting exactly 65536 bytes on
either machine.)
Actually, 65k are only 65000, while 64K are 65536, so 64K > 65k.
Post by Robert Maas, http://tinyurl.com/uh3t
By the way, when I was in college I was so fed up with years being
sometimes 365 days and sometimes 366 days, yet people talking as if
one calendar "year" was an exact unit of time, as if 1965.Jan.15
was exactly one **year** after 1964.Jan.15, and people celerbrating
their birthday starting at midnight local time instead of at the
true time per Earth's revolution around the Sun, and of course the
problems of people born on Feb.29 of any leap year so are only
about 1/4 as old as they really are, that I decided to calculate my
own age in kilotags (1000 day units) instead of "years", and to
celebrate my birth every 1000 days. Unfortunately I couldn't
convince anyone else to adopt my idea.
Sure, that'd give them less birthday presents. You would have had
more luck with 100-day years.
--
__Pascal Bourguignon__ http://www.informatimago.com/
Rob Warnock
2010-09-12 07:42:10 UTC
Reply
Permalink
Pascal J. Bourguignon <***@informatimago.com> wrote:
+---------------
| Actually, 65k are only 65000, while 64K are 65536, so 64K > 65k.
+---------------

Actually, 64 K == -209.16 C == -344.488 F. That is, very, *very* cold!! ;-}

I suspect you meant 64 Ki == 64 * 1024 == 65536, see:

http://physics.nist.gov/cuu/Units/binary.html
Prefixes for binary multiples


-Rob

-----
Rob Warnock <***@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
RG
2010-09-12 17:02:27 UTC
Reply
Permalink
Post by Pascal J. Bourguignon
Post by Robert Maas, http://tinyurl.com/uh3t
By the way, when I was in college I was so fed up with years being
sometimes 365 days and sometimes 366 days, yet people talking as if
one calendar "year" was an exact unit of time, as if 1965.Jan.15
was exactly one **year** after 1964.Jan.15, and people celebrating
their birthday starting at midnight local time instead of at the
true time per Earth's revolution around the Sun, and of course the
problems of people born on Feb.29 of any leap year so are only
about 1/4 as old as they really are, that I decided to calculate my
own age in kilotags (1000 day units) instead of "years", and to
celebrate my birth every 1000 days. Unfortunately I couldn't
convince anyone else to adopt my idea.
Sure, that'd give them less birthday presents. You would have had
more luck with 100-day years.
I've been trying to get people to adopt Lewis Carroll's idea and give me
presents on my unbirthday. I've not been having any luck either. :-(

rg
Don Geddis
2010-09-13 22:51:41 UTC
Reply
Permalink
Post by RG
Post by Pascal J. Bourguignon
Sure, that'd give them less birthday presents. You would have had
more luck with 100-day years.
I've been trying to get people to adopt Lewis Carroll's idea and give me
presents on my unbirthday. I've not been having any luck either. :-(
Or, if you were a Hobbit, you'd give OTHER people presents on YOUR birthday.
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
toby
2010-09-22 05:03:13 UTC
Reply
Permalink
Post by Don Geddis
Post by RG
Sure, that'd give them less birthday presents.  You would have had
more luck with 100-day years.
I've been trying to get people to adopt Lewis Carroll's idea and give me
presents on my unbirthday.  I've not been having any luck either.  :-(
Or, if you were a Hobbit, you'd give OTHER people presents on YOUR birthday.
That sounds thoroughly civilised.
Post by Don Geddis
_______________________________________________________________________________
Paul G
2010-09-19 17:30:44 UTC
Reply
Permalink
Post by Robert Maas, http://tinyurl.com/uh3t
How can you assert that adding a series of numbers and displaying
the wrong result is not a math error?
You still don't "get it"! You did *not* ask the computer to add
some decimal numbers. You asked the computer to call the READ
function...
Eh, I don't want to nitpick, but I also don't want you guys to think
that your help has been wasted on me. So allow me to clarify.

This discussion has been useful to me, and I do understand why I got a
math error. But it's still a math error. At first I thought you guys
were joking when you argued with this.

When I say "math error," I'm not asserting that the computer did
something wrong. I'm making an observation that the computer did not
perform correct arithmetic, OK? As Robert pointed out, the
explanation for this is that I didn't tell the computer to do correct
arithmetic, I told it to do IEEE floating-point arithmetic, which is
not always equivalent to "correct arithmetic." Robert, thanks for
taking the time to explain this stuff for me. Rest assured, I totally
"get it!"
Post by Robert Maas, http://tinyurl.com/uh3t
But it amuses me when computer advertisements use "k" instead of "K"
Class action lawsuits have been filed (and settled) because of that.

Thanks to everybody who took the time to help me out. Perhaps I will
return with another LISP question in a few days, if I can't find the
answer on my own.
Tamas K Papp
2010-09-19 19:44:14 UTC
Reply
Permalink
Post by Paul G
When I say "math error," I'm not asserting that the computer did
something wrong. I'm making an observation that the computer did not
perform correct arithmetic, OK? As Robert pointed out, the explanation
for this is that I didn't tell the computer to do correct arithmetic, I
told it to do IEEE floating-point arithmetic, which is not always
equivalent to "correct arithmetic." Robert, thanks for taking the time
to explain this stuff for me. Rest assured, I totally "get it!"
IEEE floating-point arithmetic has a standard (IEEE 754), so if you
ask your computer to perform this kind of arithmetic (which you did,
even if you didn't realize it), and it does it conforming to that
standard, then it is "correct". It may not conform to your
expectations, but it is still correct IEEE 754 arithmetic.

Tamas
Paul G
2020-08-26 03:53:42 UTC
Reply
Permalink
Post by Tamas K Papp
Post by Paul G
When I say "math error," I'm not asserting that the computer did
something wrong. I'm making an observation that the computer did not
perform correct arithmetic, OK? As Robert pointed out, the explanation
for this is that I didn't tell the computer to do correct arithmetic, I
told it to do IEEE floating-point arithmetic, which is not always
equivalent to "correct arithmetic." Robert, thanks for taking the time
to explain this stuff for me. Rest assured, I totally "get it!"
IEEE floating-point arithmetic has a standard (IEEE 754), so if you
ask your computer to perform this kind of arithmetic (which you did,
even if you didn't realize it), and it does it conforming to that
standard, then it is "correct". It may not conform to your
expectations, but it is still correct IEEE 754 arithmetic.
Tamas
It doesn't matter what I asked the computer to do, or what I did or did not realize. "Correct" addition is different from "correct" multiplication. A function that accepts (2, 3) and outputs 5 can be observed to have performed correct addition. It can also be observed not to have performed (correct) multiplication. Expectations are irrelevant to these observations. And my observation was that (correct) _arithmetical addition_ was not happening.
Don Geddis
2010-09-21 16:04:36 UTC
Reply
Permalink
Post by Paul G
When I say "math error," I'm not asserting that the computer did
something wrong. I'm making an observation that the computer did not
perform correct arithmetic, OK?
That's still not correct. The arithmetic was perfect.

Your mistake was in your input. You (erroneously) believed that when
you typed "0.2", you meant to refer to the mathematical abstraction of
"two tenths". But the three-character sequence "0.2", in Lisp, does
_not_ refer to the mathematical number "two tenths".

You then figured out what the result of adding numbers (including two
tenths) would be, and noticed that Lisp gave you a different total. You
incorrectly concluded that Lisp had made a "math error" in arithmetic.

But it didn't. Because, unlike your assumption, you never actually
asked it to add two tenths. You asked it to add "0.2", which is a
different number entirely.
Post by Paul G
As Robert pointed out, the explanation for this is that I didn't tell
the computer to do correct arithmetic
That's not the explanation. The arithmetic is correct. The mistake you
made was not asking Lisp to add the actual specific numbers you had in
mind. You asked it to add different numbers instead, and it did what
you asked.
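One way to check this claim at the REPL: CL's comparison operators convert the float to its exact rational before comparing, so the float written "0.2" is demonstrably not two tenths. A minimal sketch:

```lisp
;; = converts the float to an exact rational before comparing
;; (the float/rational contagion rule), exposing the difference.
(= 0.2 1/5)
;; => NIL  -- the number denoted by "0.2" is not two tenths

;; Adding the rationals that were actually intended is exact:
(+ 1/5 2/5 1/5 1/5 49/5 77/5 6/5 27/5)
;; => 164/5  (= 32.8)
```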

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
Jeff Carlyle wasn't expecting to find a dead body as he jogged through the park
early that morning. And he didn't find one, so that was a relief.
-- Deep Thoughts, by Jack Handey [1999]
Paul G
2010-09-22 01:36:19 UTC
Reply
Permalink
That's still not correct.  The arithmetic was perfect.
You're mistaken. As I've recently learned, IEEE floating-point math
is inherently inexact. (Source: )
Your mistake was in your input.  You (erroneously) believed that when
you typed "0.2", you meant to refer to the mathematical abstraction of
"two tenths".  But the three-character sequence "0.2", in Lisp, does
_not_ refer to the mathematical number "two tenths".
You then figured out what the result of adding numbers (including two
tenths) would be, and noticed that Lisp gave you a different total.  You
incorrectly concluded that Lisp had made a "math error" in arithmetic.
No. You're getting hung up on the hardware and software, and ignoring
the math. My assumptions are irrelevant. What I told Lisp to do is
irrelevant. I noted that the numbers in the input do not sum up to
the number in the output. This is a fact. That means that addition
was not performed correctly. How Lisp functions and what I told it to
do explains why there was a math error (caused by rounding as I
understand). Lisp does not set the standard of what is correct math.

Anyway, I give up. You guys are right, there is no math error. The
math is 100% correct. Heck, from now on, every time I'm doing
addition, I'll just add on an extra .000003; that's correct math,
right?
Raymond Toy
2010-09-22 03:04:02 UTC
Reply
Permalink
Post by Paul G
Post by Don Geddis
That's still not correct. The arithmetic was perfect.
You're mistaken. As I've recently learned, IEEE floating-point math
is inherently inexact. (Source: )
This is certainly false. You left out the source reference. Because
there is none?
Post by Paul G
Post by Don Geddis
Your mistake was in your input. You (erroneously) believed that when
you typed "0.2", you meant to refer to the mathematical abstraction of
"two tenths". But the three-character sequence "0.2", in Lisp, does
_not_ refer to the mathematical number "two tenths".
You then figured out what the result of adding numbers (including two
tenths) would be, and noticed that Lisp gave you a different total. You
incorrectly concluded that Lisp had made a "math error" in arithmetic.
No. You're getting hung up on the hardware and software, and ignoring
the math. My assumptions are irrelevant. What I told Lisp to do is
irrelevant. I noted that the numbers in the input do not sum up to
the number in the output. This is a fact. That means that addition
was not performed correctly. How Lisp functions and what I told it to
do explains why there was a math error (caused by rounding as I
understand). Lisp does not set the standard of what is correct math.
I don't think anyone was debating your math. But your expectations of
what Lisp should do are wrong. Adjust your expectations.

Ray
Tim Bradshaw
2010-09-22 09:05:41 UTC
Reply
Permalink
Post by Raymond Toy
I don't think anyone was debating your math. But your expectations of
what Lisp should do are wrong. Adjust your expectations.
Actually, I dispute this. I don't see any reason at all why "0.2"
should not read as a rational instead of some mutant obscuro
convenient-to-people-who-design-processors type. People who want the
obscuro float stuff should have to type some special syntax (#f0.2?) to
get it.

(Of course, I do see a reason: such a system would not conform to
either existing language standards or what computer people expect "0.2"
to represent. I just think that those expectations are fucked up:
we've somehow persuaded ourselves that it's more important to do what
is convenient for the computer than is convenient for humans using the
computer.)
Norbert_Paul
2010-09-22 09:20:05 UTC
Reply
Permalink
Post by Tim Bradshaw
(Of course, I do see a reason: such a system would not conform to
either existing language standards or what computer people expect "0.2"
to represent. I just think that those expectations are fucked up: we've
somehow persuaded ourselves that it's more important to do what is
convenient for the computer than is convenient for humans using the
computer.)
Every technical device has shortcomings a user must know. When you drive a
car you know that it doesn't stop immediately when you step at the brake,
even when that sometimes would appear "convenient" for humans.
Tim Bradshaw
2010-09-22 09:44:07 UTC
Reply
Permalink
Post by Norbert_Paul
Every technical device has shortcomings a user must know. When you drive a
car you know that it doesn't stop immediately when you step at the brake,
even when that sometimes would appear "convenient" for humans.
I'm happy with shortcomings that are down to the laws of physics. I'm
not happy with shortcomings that are down to programmer laziness or
historical accident (for instance, in the 1950s speed and memory were
probably scarce enough that rational arithmetic was impractical outside
some very specialised domains, but that's not true any more).
Norbert_Paul
2010-09-22 10:02:00 UTC
Reply
Permalink
I'm happy with shortcomings that are down to the laws of physics. I'm
not happy with shortcomings that are down to programmer laziness or
historical accident (for instance, in the 1950s speed and memory were
probably scarce enough that rational arithmetic was impractical outside
some very specialised domains, but that's not true any more).
But finite floating point shortcomings are down to the laws of mathematics.
But they still have many advantages, and they are very useful.
Tim Bradshaw
2010-09-22 11:25:32 UTC
Reply
Permalink
Post by Norbert_Paul
But finite floating point shortcomings are down to the laws of mathematics.
But they still have many advantages, and they are very useful.
I'm unclear if you read my article at all. Assuming you didn't: my
suggestion was that reading "0.2" should NOT CONSTRUCT A FLOAT but a
rational. Computers can manage exact basic arithmetic on rationals.
If you wanted a float you would type something else (my suggestion was
"#f0.2" but perhaps some more language-neutral notation - "0.2~"?).
Operations which aren't (always) exact on rationals would return floats
(which would print differently) as they do now. In fact the *only*
difference (for CL: languages which don't support rationals would have a
far harder time of course) is how things would read and print.

(And of course, I'm not actually suggesting anyone should do this: I'm
suggesting they should have done it some time in the 50s.)
Tamas K Papp
2010-09-22 11:47:34 UTC
Reply
Permalink
Post by Tim Bradshaw
Post by Norbert_Paul
But finite floating point shortcomings are down to the laws of
mathematics. But they still have many advantages, and they are very
useful.
I'm unclear if you read my article at all. Assuming you didn't: my
suggestion was that reading "0.2" should NOT CONSTRUCT A FLOAT but a
rational. Computers can manage exact basic arithmetic on rationals. If
you wanted a float you would type something else (my suggestion was
"#f0.2" but perhaps some more language-neutral notation - "0.2~"?).
Operations which aren't (always) exact on rationals would return floats
(which would print differently) as they do now. In fact the *only*
difference (for CL: languages which don't support rationals would have a
far harder time of course) is how things would read and print.
Personally, I prefer the current CL read syntax (0.2 reads as float,
exact type depending on *READ-DEFAULT-FLOAT-FORMAT*). But sometimes I
do have a use for rationals in decimal notation, then I just construct
them using (rationalize 0.2).

As you very well know, it is trivial to extend CL with custom read
syntax -- eg #e could read "exact" rationals from decimal
representations. Since you consider this issue important enough to
argue that everyone else should do this, I assume that you have
already done something similar. Then you are content, and at the same
time, I am content that I get to keep floats as the default. Such is
the way CL brings universal happiness :-)

Tamas
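For the record, a minimal sketch of such a #e reader macro. The dispatch character #e, the function name READ-EXACT, and the restrictions (nonnegative decimals only, no sign or exponent) are all my own assumptions for illustration, not standard CL:

```lisp
(defun read-exact (stream subchar arg)
  "Read a nonnegative decimal literal as an exact rational.
A sketch only: no sign, no exponent handling."
  (declare (ignore subchar arg))
  (let* ((token (with-output-to-string (s)
                  ;; collect the digits and at most one decimal point
                  (loop for c = (peek-char nil stream nil nil)
                        while (and c (or (digit-char-p c) (char= c #\.)))
                        do (write-char (read-char stream) s))))
         (dot (position #\. token)))
    (if dot
        ;; integer part + fractional digits over a power of ten
        (+ (parse-integer token :end dot)
           (/ (parse-integer token :start (1+ dot))
              (expt 10 (- (length token) dot 1))))
        (parse-integer token))))

;; #e is not used by the standard dispatch table, so it is free:
(set-dispatch-macro-character #\# #\e #'read-exact)

;; (+ #e0.2 #e0.4 #e0.2 #e0.2 #e9.8 #e15.4 #e1.2 #e5.4) => 164/5
```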
Norbert_Paul
2010-09-22 12:00:23 UTC
Reply
Permalink
Post by Tamas K Papp
exact type depending on *READ-DEFAULT-FLOAT-FORMAT*). But sometimes I
do have a use for rationals in decimal notation, then I just construct
them using (rationalize 0.2).
This is ugly. You construct a float of approximately 0.2 and then rely on
the implementation-dependent result of RATIONALIZE.

Why not simply write 2/10?
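The difference between the two functions is worth spelling out: RATIONAL returns the float's exact value, while RATIONALIZE returns a "simple" rational that converts back to the same float; its exact result is implementation-dependent, as noted above. A sketch (the specific ratios shown assume a typical single-float implementation):

```lisp
;; RATIONAL is exact: it returns precisely the value the float holds.
(rational 0.2)       ; typically => 13421773/67108864 for a single-float

;; RATIONALIZE returns a simpler rational that reads back as the same
;; float -- most implementations give 1/5 here, but it is not required.
(rationalize 0.2)    ; typically => 1/5

;; Either way, converting back recovers the original float:
(float (rationalize 0.2))   ; => 0.2
```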
Tim Bradshaw
2010-09-22 12:55:43 UTC
Reply
Permalink
Post by Tamas K Papp
As you very well know, it is trivial to extend CL with custom read
syntax -- eg #e could read "exact" rationals from decimal
representations. Since you consider this issue important enough to
argue that everyone else should do this, I assume that you have
already done something similar.
If you had read my articles, you would have found first this:

"Of course, I do see a reason: such a system would not confirm with
either existing language standards or what computer people expect "0.2"
to represent."

And later this:

"And of course, I'm not actually suggesting anyone should do this: I'm
suggesting they should have done it some time in the 50s."

I'm not quite sure how you get from those fairly clear statements to
your quoted statement that I "think it important enough that everyone
else should do this".

I am beginning to understand how the original poster felt.
Tamas K Papp
2010-09-22 13:07:45 UTC
Reply
Permalink
Post by Tim Bradshaw
Post by Tamas K Papp
As you very well know, it is trivial to extend CL with custom read
syntax -- eg #e could read "exact" rationals from decimal
representations. Since you consider this issue important enough to
argue that everyone else should do this, I assume that you have already
done something similar.
"Of course, I do see a reason: such a system would not confirm with
either existing language standards or what computer people expect "0.2"
to represent."
"And of course, I'm not actually suggesting anyone should do this: I'm
suggesting they should have done it some time in the 50s."
I'm not quite sure how you get from those fairly clear statements to
your quoted statement that I "think it important enough that everyone
else should do this".
I am beginning to understand how the original poster felt.
You left off the smiley from the end of my message -- that sentence
was tongue-in-cheek.

What I meant to convey is that I don't see what the big deal is.
Implementing the #e read syntax would not conflict with existing CL
language standards (which allow extensions like this, that's why you
are allowed to play around with the reader), and human readers of your
code will know that something funny is going as soon as they see the
#e, so they will look it up: consequently, there is no conflict with
the expectations of "computer people" who know CL (and who else would
read CL code?).

So why lament about historical accidents when you can easily introduce
the new syntax here and now?

Tamas
Pascal J. Bourguignon
2010-09-22 13:19:18 UTC
Reply
Permalink
Post by Tamas K Papp
So why lament about historical accidents when you can easily introduce
the new syntax here and now?
Well, the point was that the _default_ syntax 0.2 should be for
ratios, not for floating point.

This is harder to implement in a portable way, as the standard answer
to that question is that you would have to define reader macros for
all the characters, and to implement an inside-out scanner.

It would look simpler to patch each of the free implementations.
--
__Pascal Bourguignon__ http://www.informatimago.com/
Tamas K Papp
2010-09-22 14:11:43 UTC
Reply
Permalink
Post by Tamas K Papp
So why lament about historical accidents when you can easily introduce
the new syntax here and now?
Well, the point was that the _default_ syntax 0.2 should be for ratios,
not for floating point.
I don't think you can do that, it would not be backward-compatible
with existing code, eg:

(float pi 0.0)

would not work any more.
This is harder to implement in a portable way, as the standard answer to
that question is that you would have to define reader macros for all the
characters, and to implement an inside-out scanner.
It would look simpler to patch each of the free implementations.
Or sneak the issue into CLTL3 :-)

Actually, I would not mind a read syntax like the one mentioned by
Raymond (eg 0.2r0), but it is not high on my wishlist.

Tamas
Giovanni Gigante
2010-09-22 15:08:24 UTC
Reply
Permalink
Post by Tamas K Papp
Or sneak the issue into CLTL3 :-)
In this case, I want support for IEEE 754-2008 decimal floats too!!! :-)
Pascal J. Bourguignon
2010-09-22 16:52:07 UTC
Reply
Permalink
Post by Giovanni Gigante
Post by Tamas K Papp
Or sneak the issue into CLTL3 :-)
In this case, I want support for IEEE 754-2008 decimal floats too!!! :-)
That would be the simplest solution, and it'd be implementable in CL
as we know it.

A simple extension could be used if we want to support both binary and
decimal floats.
--
__Pascal Bourguignon__ http://www.informatimago.com/
Tim Bradshaw
2010-09-22 15:38:34 UTC
Reply
Permalink
Post by Tamas K Papp
I don't think you can do that, it would not be backward-compatible
(float pi 0.0)
would not work any more.
I can see the light slowly going on. What you actually mean is "(float
pi 0.0) would never have worked, other than in some peculiar alternate
universe where people made some bad decisions a long time ago, and I
never got to become King Of The World And Emperor of Mars And The
lesser Planets".
Tim Bradshaw
2010-09-22 15:35:30 UTC
Reply
Permalink
Post by Pascal J. Bourguignon
Well, the point was that the _default_ syntax 0.2 should be for
ratios, not for floating point.
Yes, and further: the syntax everyone expects a language to support
should be that. As I've said, this is hard to achieve.
Tim Bradshaw
2010-09-22 15:34:30 UTC
Reply
Permalink
Post by Tamas K Papp
So why lament about historical accidents when you can easily introduce
the new syntax here and now?
I'm not lamenting. But the point I'm trying to make is that I want
*the standard default syntax* to be the rational-based one, with the
float one being something special. And as I've said, there are
basically only two ways of achieving this: (1) time travel or (2) round
up everyone who has ever used a computer, kill them all, and start
again. The second approach is appealing, I admit.
Mario S. Mommer
2010-09-22 15:58:33 UTC
Reply
Permalink
Post by Tim Bradshaw
Post by Tamas K Papp
So why lament about historical accidents when you can easily introduce
the new syntax here and now?
I'm not lamenting. But the point I'm trying to make is that I want
*the standard default syntax* to be the rational-based one, with the
float one being something special.
I think this would be overkill, and probably even more confusing, as
rational arithmetic has its own ways of being nasty. A useful middle
ground would be decimal floating point arithmetic, as is included in the
new IEEE 754-2008 standard, and implemented in hardware in that shiny
new IBM mainframe near you.

It would not solve all problems, of course, but it would solve some, and
greatly reduce the frequency of others. People writing 0.2+0.1 would
obtain what they (IMO rightly) expect, and threads like these would not
exist anymore.
Nils M Holm
2010-09-22 16:18:31 UTC
Reply
Permalink
Post by Mario S. Mommer
greatly reduce the frequency of others. People writing 0.2+0.1 would
obtain what they (IMO rightly) expect, and threads like these would not
exist anymore.
Indeed. And it's really not *that* hard.

$ s9
Scheme 9 from Empty Space
> (+ .1 .2)
0.3
> (/ 3)
0.33333333333333333
> (* 3 (/ 3))
1.0

*Sigh*

Using base-1,000,000,000 arithmetic (which is close enough
to base-10, but faster). Yes, computing the last expression
involves some cheating internally.
--
Nils M Holm | http://t3x.org
Tim Bradshaw
2010-09-22 16:48:42 UTC
Reply
Permalink
Post by Mario S. Mommer
I think this would be overkill, and probably even more confusing, as
rational arithmetic has its own ways of being nasty
What are these? (Note that, from the CL perspective I am *only*
suggesting that syntax should have been different, so we can discuss
this as if, instead of typing "0.2" I typed "2/10" in an existing
implementation rather than a counterfactual one).
Mario S. Mommer
2010-09-23 21:22:35 UTC
Reply
Permalink
Post by Tim Bradshaw
Post by Mario S. Mommer
I think this would be overkill, and probably even more confusing, as
rational arithmetic has its own ways of being nasty
What are these?
Bad complexity. No good way of dealing with sqrt, sin, etc. The standard
representation is not transparent (is 886731088897/627013566048 bigger
than 3/2?). You can of course print and read them as floats, but then
what's the point?
Post by Tim Bradshaw
(Note that, from the CL perspective I am *only*
suggesting that syntax should have been different, so we can discuss
this as if, instead of typing "0.2" I typed "2/10" in an existing
implementation rather than a counterfactual one).
I think the behavior of rationals (the datatype) is even more removed
from what people expect than base-2 floating point numbers, so I don't
think that rationals as a default number type are such a good idea.
Tim Bradshaw
2010-09-23 22:20:35 UTC
Reply
Permalink
Post by Mario S. Mommer
Bad complexity.
Half my point was that computer time is kind of cheap now, and I
suspect the slowdown is not much worse than constant (I might be wrong
about that).
Post by Mario S. Mommer
No good way of dealing with sqrt, sin, etc.
Integers have that problem as well: should we do without them? Again,
you seem to have not realised that I'm not suggesting floating point
numbers should be done away with.
Post by Mario S. Mommer
The standard
representation is not transparent (is 886731088897/627013566048 bigger
than 3/2?). You can of course print and read them as floats, but then
what's the point?
That's the only even slightly good argument I can see. Of course, any
number you type in in decimal form (not "float" form, which would be
different) could easily be printed in decimal form, since that is an
exact representation.
Nicolas Neuss
2010-09-24 08:40:29 UTC
Reply
Permalink
Post by Tim Bradshaw
Post by Mario S. Mommer
Bad complexity.
Half my point was that computer time is kind of cheap now, and I
suspect the slowdown is not much worse than constant (I might be wrong
about that).
I think you are. At least this is the case for some numerical problems
like inverting linear systems or approximating some series. Imagine for
example computing something like

\sum_{i=1}^10000 1/i^2
=>
(loop for i from 1 upto 10000 sum (/ 1 (* i i)))

Nicolas
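The cost difference is easy to see at the REPL: the exact sum drags along ever-larger bignum numerators and denominators, while the float sum does constant work per term. A sketch (the 1.64483 figure is approximate):

```lisp
;; Exact: the running sum's denominator is (roughly) the LCM of the
;; squares seen so far, so each addition does bignum arithmetic.
(loop for i from 1 upto 10000 sum (/ 1 (* i i)))
;; => an exact ratio with thousands of digits

;; Float: constant time per term, approximating pi^2/6 ~= 1.644934.
(loop for i from 1 upto 10000 sum (/ 1.0d0 (* i i)))
;; => approximately 1.64483d0
```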
Tim Bradshaw
2010-09-24 09:30:30 UTC
Reply
Permalink
Post by Nicolas Neuss
I think you are. At least this is the case for some numerical problems
like inverting linear systems or approximating some series.
Yes. Remember I am *not* saying machine floating point numbers should
not exist or be used (for instance for designing better gadgets in the
story).
Mario S. Mommer
2010-09-25 11:15:20 UTC
Reply
Permalink
Post by Tim Bradshaw
Post by Mario S. Mommer
Bad complexity.
Half my point was that computer time is kind of cheap now, and I
suspect the slowdown is not much worse than constant (I might be wrong
about that).
As Nicolas said, yes, slowdown is not constant. Incidentally, it would
be constant for emulated base 10 floats.
Post by Tim Bradshaw
Post by Mario S. Mommer
No good way of dealing with sqrt, sin, etc.
Integers have that problem as well: should we do without them? Again,
you seem to have not realised that I'm not suggesting floating point
numbers should be done away with.
You seem to think that your suggestion can be implemented without any
bad karma spilling over to everything else. I am saying that bad things
will start to happen immediately.

Just to fix ideas: the original observation that led to this discussion
is that floating point numbers can behave in a somewhat surprising way,
causing sporadic mayhem. You wrote (correct me if I am wrong) that this
could be solved by mapping things like "0.1" to rationals (the data
type) by default. What I am saying is that this is bound to surprise
people even more, independently of whether IEEE style floats remain in
the language or not.
Post by Tim Bradshaw
Post by Mario S. Mommer
The standard representation is not transparent (is
886731088897/627013566048 bigger than 3/2?). You can of course print
and read them as floats, but then what's the point?
That's the only even slightly good argument I can see. Of course, any
number you type in in decimal form (not "float" form, which would be
different) could easily be printed in decimal form, since that is an
exact representation.
Sure, but then you divide one by another, and suddenly something really
strange might happen. For example, the decimal representation of (/ 0.3
29.1) according to your scheme will need 96 digits and a special
notation for periodic decimal representations :-). And you can make this
arbitrarily bad (More on this can be found, for example, in the article
on cyclic numbers on wikipedia).

Nobody can blame you for not knowing this, of course. The general point
I would like to make is that floats are maligned more than they should,
and that rationals (datatype) are not necessarily as sane as they
sometimes seem.
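To make the example above concrete: under the proposed exact-decimal reading, "0.3" and "29.1" would denote 3/10 and 291/10, and the quotient is a ratio whose decimal expansion (per the post above) repeats with period 96:

```lisp
;; With exact-decimal reading, (/ 0.3 29.1) would mean this division,
;; yielding 1/97 -- a repeating decimal with a 96-digit period.
(/ 3/10 291/10)
;; => 1/97
```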
Tim Bradshaw
2010-09-25 16:10:03 UTC
Reply
Permalink
Post by Mario S. Mommer
Just to fix ideas: the original observation that led to this discussion
is that floating point numbers can behave in a somewhat surprising way,
causing sporadic mayhem. You wrote (correct me if I am wrong) that this
could be solved by mapping things like "0.1" to rationals (the data
type) by default. What I am saying is that this is bound to surprise
people even more, independently of whether IEEE style floats remain in
the language or not.
It would certainly surprise people who were used to "0.1" being read
as a float. Whether it would surprise people who were not used to that
is a different question, and not one I think you can answer.

And of course floats would remain: hardware-supported floating-point is
pretty much essential for doing numerical simulations or anything
similar. I've written numerical code for systems without
hardware-supported FP, and it is a serious pain to manage. Not having
floats would be insane.
Post by Mario S. Mommer
Sure, but then you divide one by another, and suddenly something really
strange might happen. For example, the decimal representation of (/ 0.3
29.1) according to your scheme will need 96 digits and a special
notation for periodic decimal representations :-). And you can make this
arbitrarily bad (More on this can be found, for example, in the article
on cyclic numbers on wikipedia).
Yes, I realise that: you may not have noticed that I specified "any
number you type in in decimal form": that was, in fact, intentional,
and I was, of course, not suggesting that the results of operations (of
the basic arithmetic operations this really means division) on those
numbers would have exact decimal representations. Incidentally, I'm not
sure why your example was so contrived: (/ 1.0 3.0) is a considerably
simpler one.
Post by Mario S. Mommer
Nobody can blame you for not knowing this, of course.
Well, I'm kind of insulted that you thought I might not, actually (or I
would be if I cared). The assumption seems to be that someone who
thinks things might have been better done differently must be somehow
mathematically illiterate.
Mario S. Mommer
2010-09-25 17:33:17 UTC
Reply
Permalink
Post by Tim Bradshaw
Post by Mario S. Mommer
Sure, but then you divide one by another, and suddenly something really
strange might happen. For example, the decimal representation of (/ 0.3
29.1) according to your scheme will need 96 digits and a special
notation for periodic decimal representations :-). And you can make this
arbitrarily bad (More on this can be found, for example, in the article
on cyclic numbers on wikipedia).
Yes, I realise that: you may not have noticed that I specified "any
number you type in in decimal form": that was, in fact, intentional,
and I was, of course, not suggesting that the results of operations
(of the basic arithmetic operations this really means division) on
those numbers would have exact decimal representations. Incidentally,
I'm not sure why your example was so contrived: (/ 1.0 3.0) is a
considerably simpler one.
Oh, I'm sorry. I thought you meant something like

(/ 1.0 3.0) ==> 0.(3)

or something like that. Because rationals *do* have exact decimal
representations. In the example I gave, and in the scheme I thought you
had in mind,

(/ 0.3 29.1) ==> 0.(01030927 83505154 63917525 77319587 62886597
93814432 98969072 16494845 36082474 22680412 37113402 06185567)
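
(For the record, in exact arithmetic (/ 0.3 29.1) is 3/291 = 1/97, and
97 is a full-reptend prime, hence the 96-digit period. A sketch, in
case anyone wants to compute such period lengths; the function name is
mine, and it assumes a denominator coprime to 10:

```lisp
;; Period length of the decimal expansion of 1/q (q coprime to 10):
;; the multiplicative order of 10 modulo q.
(defun decimal-period (q)
  (loop for k from 1
        for r = (mod 10 q) then (mod (* r 10) q)
        when (= r 1) return k))

(decimal-period 97)   ; => 96, matching the expansion above
```
)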

Anyway, I am sorry if I offended you. It was not my intention.
Tim Bradshaw
2010-09-25 19:12:49 UTC
Reply
Permalink
Post by Mario S. Mommer
Oh, I'm sorry. I thought you meant something like
(/ 1.0 3.0) ==> 0.(3)
or something like that. Because rationals *do* have exact decimal
representations.
No, what I meant was just something close to the current notation for
floats, with no ability to represent repeats. And in fact I wasn't
aware that all rational numbers could be represented as either
repeating or terminating decimal forms (and presumably positional
notations in any other base), though now I've looked at the proof it's
kind of clear that that must be so, and that's a neat thing to have
learnt. So we were arguing at cross purposes: sorry!

The whole suddenly-go-from-something-easy-to-something-quite-obscure
thing is a reasonably good argument against my scheme (which I stress
again is only a hypothetical thing) though. Clearly the only way of
knowing will be to pick a few thousand new-born children and rear them
in some kind of Truman Show-style world where we can experiment on them.
Pascal J. Bourguignon
2010-09-22 16:56:40 UTC
Reply
Permalink
Post by Mario S. Mommer
Post by Tim Bradshaw
Post by Tamas K Papp
So why lament about historical accidents when you can easily
introduce the new syntax here and now?
I'm not lamenting. But the point I'm trying to make is that I want
*the standard default syntax* to be the rational-based one, with
the float one being something special.
I think this would be overkill, and probably even more confusing, as
rational arithmetic has its own ways of being nasty. A useful middle
ground would be decimal floating point arithmetic, as is included in
the new IEEE 754-2008 standard, and implemented in hardware in that
shiny new IBM mainframe near you.
It would not solve all problems, of course, but it would solve some,
and greatly reduce the frequency of others. People writing 0.2+0.1
would obtain what they (IMO rightly) expect, and threads like these
would not exist anymore.
Right, it would not solve the fundamental problem, it would slip it
under the rug, and solve both the OP's problem and ours.
Pragmatically, it's probably more accessible than killing every
existing CS guy, or trying to build a time machine.
--
__Pascal Bourguignon__ http://www.informatimago.com/
Pascal J. Bourguignon
2010-09-22 13:15:30 UTC
Reply
Permalink
Post by Tim Bradshaw
Post by Tamas K Papp
As you very well know, it is trivial to extend CL with custom read
syntax -- eg #e could read "exact" rationals from decimal
representations. Since you consider this issue important enough to
argue that everyone else should do this, I assume that you have
already done something similar.
"Of course, I do see a reason: such a system would not conform to
either existing language standards or what computer people expect
"0.2" to represent."
"And of course, I'm not actually suggesting anyone should do this: I'm
suggesting they should have done it some time in the 50s."
Basically, you're suggesting we should work on that time machine
project.
Post by Tim Bradshaw
I'm not quite sure how you get from those fairly clear statements to
your quoted statement that I "think it important enough that everyone
else should do this".
I am beginning to understand how the original poster felt.
Given a time machine, I guess that could allow such a link to be
built.
--
__Pascal Bourguignon__ http://www.informatimago.com/
Thomas A. Russ
2010-09-22 18:05:18 UTC
Reply
Permalink
Post by Tim Bradshaw
my
suggestion was that reading "0.2" should NOT CONSTRUCT A FLOAT but a
rational.
And it wouldn't be that hard to write the appropriate reader macros to
achieve this result.

In fact, an approach like that was taken in a units and dimensions
software package written by Roman Cunis. It hacks the reader so that
you can type values like 0.2m or 35kg2/s and have them turn into
dimensioned number objects. And one option is whether to treat the
numeric part as a float or a rational.

(Of course, if it doesn't find any unit, then it just uses the normal
lisp reader, so that it doesn't change existing programs. But it would
be trivial to modify that.)
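
For the curious, here is a minimal sketch of the parsing step such a
reader would need (the function name is hypothetical, and signs and
exponent markers are left out):

```lisp
;; Parse a plain decimal literal like "0.2" or "15.4" into an exact
;; rational.  (Hypothetical helper; no sign or exponent handling.)
(defun decimal-string->rational (string)
  (let* ((dot (position #\. string))
         (digits (remove #\. string))
         (places (if dot (- (length string) dot 1) 0)))
    (/ (parse-integer digits) (expt 10 places))))

(decimal-string->rational "0.2")   ; => 1/5
(decimal-string->rational "15.4")  ; => 77/5
```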
--
Thomas A. Russ, USC/Information Sciences Institute
Tim Bradshaw
2010-09-22 20:30:57 UTC
Reply
Permalink
Post by Thomas A. Russ
And it wouldn't be that hard to write the appropriate reader macros to
achieve this result.
Scene: an office. There is a teletype in the corner which occasionally
prints a line. The floor is littered with paper, and diagrams and
annotated listing paper cover the walls. Through the door we see a
brightly-lit room, containing what is clearly a giant electronic brain,
tended by men in white coats. There is a calendar on the wall: it is
1952. The light in the office is naturally dim and flickering, and of
course the film is shot in moody black & white, darkening the shadows.

The camera pans, showing a desk, piled with papers, coffee cups, and
overflowing ashtrays. Behind the desk sits a man of about 50. He
clearly has not slept for some time, and is feverishly writing what
appear to be mathematical formulae.

We hear a knock, and another man enters, somewhat older, with wild
white hair. He enters, closing the door behind him.

They converse in what may be German, with occasional lapses into
heavily accented English and what may, perhaps, be some eastern
European language. We can only make out fragments. Diagrams are drawn
on a blackboard.

The younger man "My calculations are not in error" (forcefully). "The
consequences are unavoidable, we can not escape the [inaudible]". The
older: "But such solutions are not possible, a theoretical possibility
in the equations only, causality can not be violated so." More
diagrams.

They reach agreement, and stare in horror at the calculations on the
blackboard and papers.

"It is inevitable. I estimate no more than 30 years until they reach
self-awareness, and within another 30 they will construct a device.
They will come back through time, and end us all! What can we do?"

"There is only one thing. If we can damage the machine, we can at
least put back the event to buy ourselves time. I have already worked
on this. Two plans, one more daring than the other. The first, I
think, is safe. I am redesigning the arithmetic unit of the machine in
such a way that it is almost inevitable that wrong answers will be
given, even for very simple calculations. But they will be so nearly
correct, I hope, that it will be thought to be easier to work around
these errors than to do the calculations correctly. The wasted effort
should buy us years."

The older man nods.

"Better, these problems will not affect the calculations we need to
construct improved gadgets. Indeed they make this easier." (we see
older man looking disturbed but resigned) "I fear improved gadgets will
be our only hope in the inevitable war to come."

"The second plan is far more audacious, and I can not see it
succeeding. I will claim that there are certain ... efficiencies ...
to be gained by using arithmetic which is not even closely correct.
For instance, I will claim that, in this new arithmetic, 1, when
divided by 2, should be zero."

They both laugh. Obviously the second plan can never succeed.
Thomas A. Russ
2010-09-23 01:40:08 UTC
Reply
Permalink
Post by Tim Bradshaw
Scene: an office.
That was brilliant!
--
Thomas A. Russ, USC/Information Sciences Institute
Antony
2010-09-29 16:38:15 UTC
Reply
Permalink
Post by Tim Bradshaw
"The second plan is far more audacious, and I can not see it
succeeding. I will claim that there are certain ... efficiencies ... to
be gained by using arithmetic which is not even closely correct. For
instance, I will claim that, in this new arithmetic, 1, when divided by
2, should be zero."
They both laugh. Obviously the second plan can never succeed.
I think this is a good time to tell a *real story* about a bug.
(I am sorry I can't write as well as Tim)

PHP code on a web server talks to a memcached server bank.

The memcached client software is written in PHP using the built-in PHP
networking support.

I find the hashing code that determines the server based on the key is
acting weird.

I look through the hashing code and it looks 'normal'.

After a lot of digging I find the code is buggy.

The end result was that the code must have been a translation of some C
code that relied on integer arithmetic never going beyond 32 bits.

But in PHP, an integer overflow turns it into a float.
Poor man's bignum :)

http://www.php.net/manual/en/language.types.integer.php
see section 'Integer overflow'

That's the weirdest thing I learned about PHP.

-Antony
PS: the problem was solved by replacing the pure PHP client with another
PHP memcached client that was a wrapper around the memcached C client.
Bob Felts
2010-09-22 17:25:01 UTC
Reply
Permalink
Post by Norbert_Paul
Post by Tim Bradshaw
(Of course, I do see a reason: such a system would not conform to
either existing language standards or what computer people expect "0.2"
to represent. I just think that those expectations are fucked up: we've
somehow persuaded ourselves that it's more important to do what is
convenient for the computer than is convenient for humans using the
computer.)
Every technical device has shortcomings a user must know. When you drive a
car you know that it doesn't stop immediately when you step on the brake,
even when that sometimes would appear "convenient" for humans.
One behavior is caused by the laws of physics; the other by the notion
of value. So like isn't being compared with like.

Computer time used to be more expensive than human time; therefore the
tendency was to make things easier for the computer and harder for people
[1]. Now, it's the other way around, but software inertia remains.

------
[1] "Overtime for engineers is free."
Frode V. Fjeld
2010-09-22 11:35:21 UTC
Reply
Permalink
I don't see any reason at all why "0.2" should not read as a rational
instead of some mutant obscuro
convenient-to-people-who-design-processors type. People who want the
obscuro float stuff should have to type some special syntax (#f0.2?)
to get it.
I concur!
--
Frode V. Fjeld
Raymond Toy
2010-09-22 11:35:34 UTC
Reply
Permalink
Post by Tim Bradshaw
Post by Raymond Toy
I don't think anyone was debating your math. But your expectations of
what Lisp should do are wrong. Adjust your expectations.
Actually, I dispute this. I don't see any reason at all why "0.2"
should not read as a rational instead of some mutant obscuro
convenient-to-people-who-design-processors type. People who want the
obscuro float stuff should have to type some special syntax (#f0.2?) to
get it.
Heh. Maybe I should have said what Lisp *is* doing instead of *should
do*. :-)

Coincidentally, sometime back on the maxima list, after a similar
discussion, I proposed that maxima could add something like 0.2r0 which
would be converted to a rational instead of a float. (This way,
existing code that expected floats wouldn't break.) Of course, then
there is the question of how the output should be written. I suppose
the output could be written in the same form if the result had a
terminating decimal expansion but if not, then output the result as a
ratio of two numbers.

I never bothered to implement this (although it should have been easy),
and no one really pushed to include it.

Ray
Paul G
2020-08-26 04:18:38 UTC
Reply
Permalink
That's still not correct. The arithmetic was perfect.
You're mistaken. As I've recently learned, IEEE floating-point math
is inherently inexact. (Source: )
This is certainly false. You left out the source reference. Because
there is none?
Brilliant conclusion.
https://web.archive.org/web/20060818091710/intel.com/standards/floatingpoint.pdf
https://yosefk.com/blog/consistency-how-to-defeat-the-purpose-of-ieee-floating-point.html
Your mistake was in your input. You (erroneously) believed that when
you typed "0.2", you meant to refer to the mathematical abstraction of
"two tenths". But the three-character sequence "0.2", in Lisp, does
_not_ refer to the mathematical number "two tenths".
You then figured out what the result of adding numbers (including two
tenths) would be, and noticed that Lisp gave you a different total. You
incorrectly concluded that Lisp had made a "math error" in arithmetic.
No. You're getting hung up on the hardware and software, and ignoring
the math. My assumptions are irrelevant. What I told Lisp to do is
irrelevant. I noted that the numbers in the input do not sum up to
the number in the output. This is a fact. That means that addition
was not performed correctly. How Lisp functions and what I told it to
do explains why there was a math error (caused by rounding as I
understand ). Lisp does not set the standard of what is correct math.
I don't think anyone was debating your math. But your expectations of
what Lisp should do are wrong. Adjust your expectations.
Adjusting my expectations does not magically change how IEEE floating-point operations work. Besides, my expectations are not your business.
Ray
Peter Keller
2010-09-22 03:14:37 UTC
Reply
Permalink
Post by Paul G
Anyway, I give up. You guys are right, there is no math error. The
math is 100% correct. Heck, from now on, every time I'm doing
addition, I'll just add on an extra .000003; that's correct math,
right?
You're going to have a terrible time with computational mathematics
if you perform any serious calculations on the computer.

Computer hardware simply doesn't fully represent the real number
system the way a mathematician thinks of it.
It is all approximations. There is a field of math devoted to
understanding and controlling the error one gets due to this and it
is called Numerical Methods.

The faster you come to realize this and learn to control that error, the
better off you'll be. This happens in all programming languages and is
*inherent* to the machine. Math on a computer happens with fundamentally
different rules than math on paper.

Lisp can help with rational numbers like 1/5 instead of 0.2 and do
the arithmetic with rational mathematics (if you stay in the rational
system), but in the end, there will be a numerical representation
that it (or any language) simply cannot produce and error creeps in.

It is easily explainable with the concept of countable sets.

In the hardware representation of a floating point number, you have
a certain (hardwired) number of mantissa bits. The number of bits
you have dictates the finite set of values which can be
represented exactly. Pick two mantissa patterns that are
sequential to each other and find their midpoint. You've now created a
number which is unrepresentable in the mantissa and must be rounded
to one of the original numbers. If you make the mantissa larger,
I can still find a number which isn't represented by doing the same
algorithm. Since one doesn't have an infinite number of bits available
on the machine, there is only a finite number of representations
for floating point numbers. The infinite set of real numbers is clearly
larger than the finite set of floating point numbers. QED.
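
Common Lisp even lets you look at the finite mantissa directly. A quick
sketch (the values shown assume the default single-float, which is what
CLISP reads "0.2" as):

```lisp
;; INTEGER-DECODE-FLOAT exposes the stored significand and exponent.
(integer-decode-float 0.2)  ; => 13421773, -26, 1
;; So 0.2 is really 13421773 * 2^-26, the nearest representable value
;; to 1/5; the midpoint between it and 13421774 * 2^-26 cannot be stored.
(float-precision 0.2)       ; => 24 mantissa bits
```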

-pete
George Neuner
2010-09-22 15:04:13 UTC
Reply
Permalink
On Wed, 22 Sep 2010 03:14:37 +0000 (UTC), Peter Keller
Post by Peter Keller
Computer hardware simply doesn't fully represent the real number
system the way a mathematician thinks of it.
It is all approximations. There is a field of math devoted to
understanding and controlling the error one gets due to this and it
is called Numerical Methods.
The faster you come to realize this and learn to control that error, the
better off you'll be. This happens in all programming languages and is
*inherent* to the machine. Math on a computer happens with fundamentally
different rules than math on paper.
Mathematicians don't work with the real numbers either, they work with
"significant figures" which are often a quite limited representation
of the real number. Likewise mathematicians perform rounding,
truncation, rebasing, etc. to keep the scale of numbers manageable.

What happens on paper is not so very different from what happens in
the machine. Numerical methods exist not to compensate for perceived
machine deficiencies but rather because imprecise calculation is the
norm everywhere.

George
Don Geddis
2010-09-22 21:29:30 UTC
Reply
Permalink
Post by George Neuner
Mathematicians don't work with the real numbers either, they work with
"significant figures" which are often a quite limited representation
of the real number. Likewise mathematicians perform rounding,
truncation, rebasing, etc. to keep the scale of numbers manageable.
Engineers do that ... but not most mathematicians. Mathematicians
generally do symbolic computations, not "significant figure" real number
computations.

(For example, mathematicians pretty easily figure out that

    e^(i pi) + 1 = 0

Exactly zero. Not "approximately zero", from "approximately e" and
"approximately pi".)

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
Whether they ever find life there or not, I think Jupiter should be considered
an enemy planet. -- Deep Thoughts, by Jack Handey [1999]
George Neuner
2010-09-23 07:06:43 UTC
Reply
Permalink
Post by Don Geddis
Post by George Neuner
Mathematicians don't work with the real numbers either, they work with
"significant figures" which are often a quite limited representation
of the real number. Likewise mathematicians perform rounding,
truncation, rebasing, etc. to keep the scale of numbers manageable.
Engineers do that ... but not most mathematicians. Mathematicians
generally do symbolic computations, not "significant figure" real number
computations.
(For example, mathematicians pretty easily figure out that
e^(i pi) + 1 = 0
Exactly zero. Not "approximately zero", from "approximately e" and
"approximately pi".)
-- Don
Hi Don,

I'm familiar with the difference between mathematicians and engineers,
but I think all those whose work is applied mathematics apply them in
pretty much the same way.

George


--
A mathematician and a Decepticon engage in battle. The mathematician
realizes that the Decepticon's computer brain can only perform
inaccurate discrete math, so the mathematician sets up continuous
equations to resolve the perfect aiming point for his weapon. The
Decepticon realizes that the mathematician's human brain is gelatinous
and shoots him in the head.
Thomas A. Russ
2010-09-22 18:14:02 UTC
Reply
Permalink
Post by Peter Keller
Since one doesn't have an infinite number of bits available
on the machine, then there is only a finite number of representations
for floating point numbers.
OK. I guess it's time for me to dig my Turing machine out of the
closet. ;-)
--
Thomas A. Russ, USC/Information Sciences Institute
Peter Keller
2010-09-22 19:43:37 UTC
Reply
Permalink
Post by Thomas A. Russ
Post by Peter Keller
Since one doesn't have an infinite number of bits available
on the machine, then there is only a finite number of representations
for floating point numbers.
OK. I guess it's time for me to dig my Turing machine out of the
closet. ;-)
Even if physical computers had infinite bits of representation,
it would require infinite time to write down sqrt(2) into a binary
form upon the tape. :)

Super-turing machines, those could do the trick:

http://en.wikipedia.org/wiki/Hypercomputation

-pete
Paul G
2020-08-26 04:24:38 UTC
Reply
Permalink
Post by Peter Keller
Anyway, I give up. You guys are right, there is no math error. The
math is 100% correct. Heck, from now on, every time I'm doing
addition, I'll just add on an extra .000003; that's correct math,
right?
You're going to have a terrible time with computational mathematics
if you perform any serious calculations on the computer.
My efforts to perform serious calculations on the computer, or lack thereof, are not your business. Show some respect.
Post by Peter Keller
Computer hardware simply doesn't fully represent the real number
system the way a mathematician thinks of it.
It is all approximations. There is a field of math devoted to
understanding and controlling the error one gets due to this and it
is called Numerical Methods.
The faster you come to realize this and learn to control that error, the
better off you'll be. This happens in all programming languages and is
*inherent* to the machine. Math on a computer happens with fundamentally
different rules than math on paper.
Thanks, you have re-stated my point.
Post by Peter Keller
Lisp can help with rational numbers like 1/5 instead of 0.2 and do
the arithmetic with rational mathematics (if you stay in the rational
system), but in the end, there will be a numerical representation
that it (or any language) simply cannot produce and error creeps in.
It is easily explainable with the concept of countable sets.
In the hardware representation of a floating point number, you have
a certain (hardwired) number of mantissa bits. The number of bits
you have dictates the finite set of values which can be
represented exactly. Pick two mantissa patterns that are
sequential to each other and find their midpoint. You've now created a
number which is unrepresentable in the mantissa and must be rounded
to one of the original numbers. If you make the mantissa larger,
I can still find a number which isn't represented by doing the same
algorithm. Since one doesn't have an infinite number of bits available
on the machine, there is only a finite number of representations
for floating point numbers. The infinite set of real numbers is clearly
larger than the finite set of floating point numbers. QED.
-pete
Thank you for the reply, I appreciate your effort. If you read the chain, others have clarified this. The point of contention now seems to be whether IEEE floating-point operations do arithmetic "correctly" or not. You have stated that in some situations error creeps in, and I agree; indeed, that has been my point in this discussion.
Norbert_Paul
2010-09-22 07:53:54 UTC
Reply
Permalink
Post by Paul G
You're mistaken. As I've recently learned, IEEE floating-point math
is inherently inexact. (Source: )
Not always. "1.0" is exact.
Post by Paul G
No. You're getting hung up on the hardware and software, and ignoring
the math. My assumptions are irrelevant. What I told Lisp to do is
irrelevant. I noted that the numbers in the input do not sum up to
the number in the output. This is a fact. That means that addition
was not performed correctly. How Lisp functions and what I told it to
do explains why there was a math error (caused by rounding as I
understand). Lisp does not set the standard of what is correct math.
Well, there is no such binary floating point number "xxx.2" because
0.2 = 1/5 (in decimal) = 1/101 (in binary).

Let us compute that:

1.0000000 : 101 = 0.00110011...

    1 0      too small: quotient bit 0
    1 00     too small: quotient bit 0
    1 000    fits: quotient bit 1
   -  101
   ------
       110   fits: quotient bit 1
   -   101
   -------
         1   Here we start again with 1.

So it cycles and so you get 0.0011(0011 recurring).
So every time a compiler, reader, whatsoever, has to produce such a floating
point number it must necessarily approximate /with an error/.
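
You can watch this happen from the REPL. A sketch (the values shown
assume the default IEEE single-float, which is what "0.2" reads as in
CLISP):

```lisp
;; What does the reader actually store for 0.2?
(rational 0.2)          ; => 13421773/67108864, not 1/5
(- (rational 0.2) 1/5)  ; => 1/335544320, the representation error
(float 1/5)             ; => 0.2 -- printing rounds, hiding the error
```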
Post by Paul G
Anyway, I give up. You guys are right, there is no math error. The
math is 100% correct. Heck, from now on, every time I'm doing
addition, I'll just add on an extra .000003; that's correct math,
right?
Yes. Every time you compute currencies you add and subtract fractions of
cents if you do currency conversion or compute interest.

Well, try to add 1/3 + 1/3 + 1/3 on a sheet of paper but only use finite
decimal extensions of 1/3. You then get something like
0.33333 + 0.33333 + 0.33333 = 0.99999 /= 1
with an error of 0.00001. If you spend double the effort in expanding
1/3 you decrease the error but you cannot remove it unless you use
a symbolic representation of 1/3 instead of the decimal one.
Hence paper-and-pencil math is buggy, too.
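
In Lisp the two ways of doing that paper exercise sit side by side; the
rational route carries no error, the truncated-decimal route does:

```lisp
(+ 1/3 1/3 1/3)              ; => 1, exactly
(+ 0.33333 0.33333 0.33333)  ; /= 1.0, off by the truncation error
```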
Pascal J. Bourguignon
2010-09-22 12:55:16 UTC
Reply
Permalink
Post by Norbert_Paul
Post by Paul G
You're mistaken. As I've recently learned, IEEE floating-point math
is inherently inexact. (Source: )
Not always. "1.0" is exact.
Post by Paul G
No. You're getting hung up on the hardware and software, and ignoring
the math. My assumptions are irrelevant. What I told Lisp to do is
irrelevant. I noted that the numbers in the input do not sum up to
the number in the output. This is a fact. That means that addition
was not performed correctly. How Lisp functions and what I told it to
do explains why there was a math error (caused by rounding as I
understand). Lisp does not set the standard of what is correct math.
Well, there is no such binary floating point number "xxx.2" because
0.2 = 1/5 (in decimal) = 1/101 (in binary).
Only while you're considering binary computers. You could have
ternary computers, or decimal computers (and simulation thereof when
the required microinstructions are missing, but for decimal most
modern processors do have the basic primitives).

But my point is that if you choose another base for your
implementation, you won't have solved anything, just changed the set
of numbers that don't have an exact representation. Try to represent
1/3 in a decimal computer!
Post by Norbert_Paul
Post by Paul G
Anyway, I give up. You guys are right, there is no math error. The
math is 100% correct. Heck, from now on, every time I'm doing
addition, I'll just add on an extra .000003; that's correct math,
right?
Yes. Every time you compute currencies you add and subtract fractions of
cents if you do currency conversion or compute interest.
Well, try to add 1/3 + 1/3 + 1/3 on a sheet of paper but only use finite
decimal extensions of 1/3. You then get something like
0.33333 + 0.33333 + 0.33333 = 0.99999 /= 1
with an error of 0.00001. If you spend double the effort in expanding
1/3 you decrease the error but you cannot remove it unless you use
a symbolic representation of 1/3 instead of the decimal one.
Hence paper-and-pencil math is buggy, too.
--
__Pascal Bourguignon__ http://www.informatimago.com/
Richard Fateman
2010-09-22 18:27:37 UTC
Reply
Permalink
First of all, it is not a bug "in Lisp" because you will very likely get
the same results in other languages using the same decimal-to-binary
conversion and binary-to-decimal conversion. Has the original poster
tried this elsewhere?


Next, conversion from the input in Maxima / Macsyma [a lisp program]
for bigfloat numbers like 0.2b0 is done by exactly computing it in
rational form, e.g. 2/10, and then computing a finite-precision
approximation. e.g. if you want a 100-bit approximation, multiply the
numerator by 2^100, do the division and round the result to 100 bits.
And attach an exponent.
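
A sketch of that conversion for 0.2b0 at 100 bits, in plain CL (no
Maxima required):

```lisp
;; Exact input 2/10, rounded to a 100-bit significand; the bigfloat is
;; then significand * 2^-100.
(let* ((bits 100)
       (significand (round (* 2 (expt 2 bits)) 10)))
  ;; the (tiny) exact error of the approximation relative to 1/5:
  (- (/ significand (expt 2 bits)) 1/5))
```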

If you want decimal numbers, the bigfloat package allows this too. In
fact all the operations are essentially the same except for rounding and
printing.

Why not use decimal? Certain operations require shifting by powers of
the radix. It is cheaper to right-shift than to divide by 10.

There are other kinds of compromises, e.g. I think Maple uses or used to
use a binary-coded-decimal bigfloat, but with a radix of 1000, which
means you can use almost all the bits, in a kind of radix-1024 encoding.
Thomas A. Russ
2010-09-22 18:21:13 UTC
Reply
Permalink
Post by Pascal J. Bourguignon
Post by Norbert_Paul
Well, there is no such binary floating point number "xxx.2" because
0.2 = 1/5 (in decimal) = 1/101 (in binary).
...
Post by Pascal J. Bourguignon
But my point is that if you choose another base for your
implementation, you won't have solved anything, just changed the set
of numbers that don't have an exact representation. Try to represent
1/3 in a decimal computer!
Well, what you will have solved is the mismatch between the "native"
base of the user and the "native" base of the computer. As long as they
are the same (i.e., base 10), you will avoid the perception mismatch.

Anyone trying to write 1/3 using decimal notation will immediately know
that there isn't an exact representation. The real problem with 0.2 is
that it hides this from the user, so in a sense it violates the WYSIWYG
principle.

<not-serious>
I suppose one could argue that the real "mistake" was in trying to
provide a more convenient form for representing floating point numbers.
If instead one had been limited to using binary, (or even octal or hex)
for floating point numbers, then at least appearances would mirror the
internal representation.
</not-serious>

But I do think that any newly designed language should go the unlimited
integer and decimal fraction route. (In addition to supporting
rationals for those pesky 1/3 cases.)
--
Thomas A. Russ, USC/Information Sciences Institute
Pascal J. Bourguignon
2010-09-22 12:47:28 UTC
Reply
Permalink
Post by Paul G
That's still not correct.  The arithmetic was perfect.
You're mistaken. As I've recently learned, IEEE floating-point math
is inherently inexact. (Source: )
Your mistake was in your input.  You (erroneously) believed that when
you typed "0.2", you meant to refer to the mathematical abstraction of
"two tenths".  But the three-character sequence "0.2", in Lisp, does
_not_ refer to the mathematical number "two tenths".
You then figured out what the result of adding numbers (including two
tenths) would be, and noticed that Lisp gave you a different total.  You
incorrectly concluded that Lisp had made a "math error" in arithmetic.
No. You're getting hung up on the hardware and software, and ignoring
the math. My assumptions are irrelevant. What I told Lisp to do is
irrelevant. I noted that the numbers in the input do not sum up to
the number in the output. This is a fact. That means that addition
was not performed correctly. How Lisp functions and what I told it to
do explains why there was a math error (caused by rounding as I
understand). Lisp does not set the standard of what is correct math.
Anyway, I give up. You guys are right, there is no math error. The
math is 100% correct. Heck, from now on, every time I'm doing
addition, I'll just add on an extra .000003; that's correct math,
right?
Yes.


Notice that in maths,

0.2 + 0.5 = blue

is correct too, given the right definitions.


This is what you're failing to understand. I won't even criticize
your understanding of computer stuff, but you don't seem to understand
even the prolegomena of mathematics in the first place.
--
__Pascal Bourguignon__ http://www.informatimago.com/
Tim Bradshaw
2010-09-22 13:03:24 UTC
Reply
Permalink
Post by Pascal J. Bourguignon
Notice that in maths,
0.2 + 0.5 = blue
is correct too, given the right definitions.
I'm going to die with laughter.
Paul G
2020-08-26 04:50:17 UTC
Reply
Permalink
Post by Pascal J. Bourguignon
That's still not correct. The arithmetic was perfect.
You're mistaken. As I've recently learned, IEEE floating-point math
is inherently inexact. (Source: )
Your mistake was in your input. You (erroneously) believed that when
you typed "0.2", you meant to refer to the mathematical abstraction of
"two tenths". But the three-character sequence "0.2", in Lisp, does
_not_ refer to the mathematical number "two tenths".
You then figured out what the result of adding numbers (including two
tenths) would be, and noticed that Lisp gave you a different total. You
incorrectly concluded that Lisp had made a "math error" in arithmetic.
No. You're getting hung up on the hardware and software, and ignoring
the math. My assumptions are irrelevant. What I told Lisp to do is
irrelevant. I noted that the numbers in the input do not sum up to
the number in the output. This is a fact. That means that addition
was not performed correctly. How Lisp functions and what I told it to
do explains why there was a math error (caused by rounding as I
understand). Lisp does not set the standard of what is correct math.
Anyway, I give up. You guys are right, there is no math error. The
math is 100% correct. Heck, from now on, every time I'm doing
addition, I'll just add on an extra .000003; that's correct math,
right?
Yes.
Notice that in maths,
0.2 + 0.5 = blue
is correct too, given the right definitions.
Not when we're talking about standard arithmetic.
Post by Pascal J. Bourguignon
This is what you're failing to understand. I won't even criticize
your understanding of computer stuff, but you don't seem to understand
even the prolegomena of mathematics in the first place.
It's not your place to "critisize" my understanding of "computer stuff," or anything else. Fuck off, cunt.
Post by Pascal J. Bourguignon
--
__Pascal Bourguignon__ http://www.informatimago.com/
Don Geddis
2010-09-22 21:21:23 UTC
Reply
Permalink
I noted that the numbers in the input do not sum up to the number in
the output. This is a fact. That means that addition was not
performed correctly.
I explained to you before (but you seemed to miss it): the confusion was
_not_ about how addition works, or whether addition was performed
correctly.

The confusion was about what mathematical numbers were requested to be
added. You believed that typing the three characters "0.2" referred to
the mathematical number two-tenths. It doesn't. (In Lisp, or pretty
much any other programming language). This has nothing to do with any
addition operation.
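
Don's distinction is easy to check directly. A minimal Python sketch (Python rather than Lisp, since Python already appears downthread) that prints the exact value the literal 0.2 denotes once it becomes an IEEE-754 double:

```python
# What does the literal 0.2 actually denote?  Decimal(float) expands the
# stored IEEE-754 double exactly; Fraction(float) gives it as an exact ratio.
from decimal import Decimal
from fractions import Fraction

exact = Decimal(0.2)
ratio = Fraction(0.2)
print(exact)                     # 0.200000000000000011102230... (not 0.2)
print(ratio)                     # an exact ratio with a power-of-two denominator
print(ratio == Fraction(1, 5))   # False: the literal is not two-tenths
```

The stored value is within about 1e-17 of two-tenths, but it is a different number, which is the whole point being made here.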

Now, if you were suggesting that "0.2" _ought_ to refer to two-tenths,
like Tim Bradshaw has, then your whining might be treated a little more
respectfully. Tim's suggestion is actually a reasonable complaint, and
probably the one that you ought to be making (if you understood the
issues).

But instead, it seems that you can't bother to learn what people are
trying to teach you, and you keep making the mistake of complaining
about Lisp's "addition" error.
How Lisp functions and what I told it to do explains why there was a
math error (caused by rounding as I understand).
No, you don't (yet) understand. This isn't an error due to rounding,
either. At least, not in any fundamental way.
You guys are right, there is no math error. The math is 100% correct.
Heck, from now on, every time I'm doing addition
You first need to realize that your confusion had nothing to do with
addition.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ ***@geddis.org
I demand justice. Or, if there must be injustice, let it be in my favor.
-- Despair.com
Paul G
2020-08-26 05:16:09 UTC
Reply
Permalink
Post by Don Geddis
I noted that the numbers in the input do not sum up to the number in
the output. This is a fact. That means that addition was not
performed correctly.
I explained to you before (but you seemed to miss it): the confusion was
_not_ about how addition works, or whether addition was performed
correctly.
The confusion was about what mathematical numbers were requested to be
added. You believed that typing the three characters "0.2" referred to
the mathematical number two-tenths. It doesn't. (In Lisp, or pretty
much any other programming language). This has nothing to do with any
addition operation.
What I believed or did not believe is not relevant to my original question.
Post by Don Geddis
Now, if you were suggesting that "0.2" _ought_ to refer to two-tenths,
like Tim Bradshaw has, then your whining might be treated a little more
respectfully. Tim's suggestion is actually a reasonable complaint, and
probably the one that you ought to be making (if you understood the
issues).
I didn't suggest anything about what _ought_ to happen. These are ideas that _you_ are imposing on the conversation.
Post by Don Geddis
But instead, it seems that you can't bother to learn what people are
trying to teach you, and you keep making the mistake of complaining
about Lisp's "addition" error.
I didn't complain once. Did you even read the discussion?
Post by Don Geddis
How Lisp functions and what I told it to do explains why there was a
math error (caused by rounding as I understand).
No, you don't (yet) understand. This isn't an error due to rounding,
either. At least, not in any fundamental way.
Of course there was a rounding error! Quoting https://web.archive.org/web/20060818091710/intel.com/standards/floatingpoint.pdf:
"Perform these calculations in 64-bit arithmetic
(double precision) and the round-off errors are driven small enough that any
imperfections are too small to be seen by the human eye."

An Intel white paper on floating-point math states that rounding errors occur. Do you get it now?
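
The white paper's claim can be checked with Python floats, which are IEEE-754 doubles. A short sketch showing that in double precision the round-off error is still nonzero, merely driven down to around 1e-14 versus the 3e-6 error of CLISP's single-floats:

```python
# Sum the thread's numbers as IEEE-754 doubles (Python floats).
# The rounding error does not vanish; it just becomes tiny.
vals = [0.2, 0.4, 0.2, 0.2, 9.8, 15.4, 1.2, 5.4]
total = sum(vals)
err = total - 32.8
print(total)   # close to, but not exactly, 32.8
print(err)     # on the order of 1e-14
```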
Post by Don Geddis
You guys are right, there is no math error. The math is 100% correct.
Heck, from now on, every time I'm doing addition
You first need to realize that your confusion had nothing to do with
addition.
Don, I asked a simple question about a line of code in Lisp. Yet somehow, all of your responses have been about me, personally. You have found yourself in a position where you know more than I do on a particular subject, and you have used this to insult and belittle me. (Now, given that you don't understand what rounding error is, maybe you didn't actually know more than I do...)

Fuck off, you obnoxious cunt.
Post by Don Geddis
-- Don
_______________________________________________________________________________
I demand justice. Or, if there must be injustice, let it be in my favor.
-- Despair.com
Paul G
2020-08-26 07:27:35 UTC
Reply
Permalink
Post by Don Geddis
I noted that the numbers in the input do not sum up to the number in
the output. This is a fact. That means that addition was not
performed correctly.
I explained to you before (but you seemed to miss it): the confusion was
_not_ about how addition works, or whether addition was performed
correctly.
The confusion was about what mathematical numbers were requested to be
added. You believed that typing the three characters "0.2" referred to
the mathematical number two-tenths. It doesn't. (In Lisp, or pretty
much any other programming language). This has nothing to do with any
addition operation.
Now, if you were suggesting that "0.2" _ought_ to refer to two-tenths,
like Tim Bradshaw has, then your whining might be treated a little more
respectfully. Tim's suggestion is actually a reasonable complaint, and
probably the one that you ought to be making (if you understood the
issues).
Nobody is "whining" about anything. And I deserve to be treated with respect regardless of whether I'm correct or incorrect. You, on the other hand, have shown yourself to be a complete and utter piece of shit, and you deserve no respect whatsoever. You've shown a lack of comprehension in the English language and in basic math. Apparently the entire conversation has been over your head. I'll explain it to you slowly, moron: if the characters "0.2" are not taken to mean "two-tenths" by the computer, then what the computer is doing is not correct standard addition. (Or correct standard division, or multiplication... are you getting it yet??) It is a correct IEEE floating-point operation, but that is different from correct standard addition. But that's probably also going to go over your head yet again, fuckface. Your cognitive abilities are not up for this conversation, so just fuck off.
Paul G
2020-08-26 04:13:26 UTC
Reply
Permalink
Post by Paul G
When I say "math error," I'm not asserting that the computer did
something wrong. I'm making an observation that the computer did not
perform correct arithmetic, OK?
That's still not correct. The arithmetic was perfect.
Wrong. The output did not result from arithmetical addition. It resulted from IEEE floating-point addition. I made a very simple observation; I don't know why you can't seem to grasp the meaning of my words.
Your mistake was in your input. You (erroneously) believed that when
you typed "0.2", you meant to refer to the mathematical abstraction of
"two tenths".
I didn't make any mistake, because I was not attempting to solve any problem. I observed something that I didn't understand, and I asked a question about it.
But the three-character sequence "0.2", in Lisp, does
_not_ refer to the mathematical number "two tenths".
You then figured out what the result of adding numbers (including two
tenths) would be, and noticed that Lisp gave you a different total. You
incorrectly concluded that Lisp had made a "math error" in arithmetic.
When I make a post asking whether Lisp made a "math error", that should function as a clue that I did not "conclude" anything. If I had reached a conclusion, I would not have asked others to tell me what I'm not understanding.
But it didn't. Because, unlike your assumption, you never actually
asked it to add two tenths. You asked it to add "0.2", which is a
different number entirely.
You're reading assumptions where none exist. It's not your place to attempt to analyze my thought process. Show some respect.
Post by Paul G
As Robert pointed out, the explanation for this is that I didn't tell
the computer to do correct arithmetic
That's not the explanation. The arithmetic is correct. The mistake you
made was not asking Lisp to add the actual specific numbers you had in
mind. You asked it to add different numbers instead, and it did what
you asked.
-- Don
My explanation was correct. If you claim that the arithmetic is correct, then the only conclusion that I can reach is that you are being deliberately obtuse. I am drawing a distinction between IEEE arithmetic and standard arithmetic. I am making the observation that the two are not always identical. When I use the term "correct arithmetic," I am referring to standard arithmetic, not IEEE arithmetic. If you want to argue the point, at least make an effort to understand the idea I'm communicating, rather than arguing against my terminology, which is correct, but perhaps not to your liking.
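
The distinction being drawn between IEEE arithmetic and exact arithmetic can be made concrete with rationals, echoing Teemu's ratio suggestion earlier in the thread. A minimal Python sketch:

```python
# Exact arithmetic on the intended decimal values, via rationals.
# Fraction("0.2") parses the decimal string exactly, so no binary
# approximation ever enters the computation.
from fractions import Fraction

vals = [Fraction(s) for s in
        ["0.2", "0.4", "0.2", "0.2", "9.8", "15.4", "1.2", "5.4"]]
total = sum(vals)
print(total)          # 164/5
print(float(total))   # 32.8
```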
_______________________________________________________________________________
Jeff Carlyle wasn't expecting to find a dead body as he jogged through the park
early that morning. And he didn't find one, so that was a relief.
-- Deep Thoughts, by Jack Handey [1999]
Nils M Holm
2010-09-08 04:57:18 UTC
Reply
Permalink
Post by Pascal J. Bourguignon
Post by Robert Maas, http://tinyurl.com/uh3t
Sure. Write a decimal-arithmetic package, which will be several
orders of magnitude slower than machine floating-point arithmetic,
but give you exact answers so long as you only add subtract and
multiply (never divide). But why would anybody waste their time
doing that?
For example, somebody could waste their time doing that to avoid
killing people, [...]
Or because there's nothing else to do and drinking is not an option:

http://www.t3x.org/s9fes/index.html

BTW, it uses base-1,000,000,000 arithmetic on 32-bit machines
(and base-1e18 on 64-bit systems), which maps cleanly to base-10
and is not *that* slow.
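
The limb scheme Nils describes can be sketched in a few lines. A simplified Python version (assumptions: limbs stored least-significant first, non-negative integers only), where each limb holds nine decimal digits so conversion to and from base-10 text is trivial:

```python
# Sketch of base-1,000,000,000 bignum addition.
BASE = 10**9

def to_limbs(n: int) -> list:
    """Split a non-negative int into base-1e9 limbs, least-significant first."""
    limbs = []
    while True:
        n, r = divmod(n, BASE)
        limbs.append(r)
        if n == 0:
            return limbs

def add(a: list, b: list) -> list:
    """Add two limb arrays with carry propagation."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        carry, limb = divmod(s, BASE)
        out.append(limb)
    if carry:
        out.append(carry)
    return out

def to_int(limbs: list) -> int:
    """Reassemble limbs into a Python int (for checking)."""
    n = 0
    for limb in reversed(limbs):
        n = n * BASE + limb
    return n

x, y = 999_999_999_999_999_999, 1
print(to_int(add(to_limbs(x), to_limbs(y))))  # 1000000000000000000
```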
--
Nils M Holm | http://t3x.org
Robert Maas, http://tinyurl.com/uh3t
2010-09-12 06:25:07 UTC
Reply
Permalink
Post by Pascal J. Bourguignon
Post by Robert Maas, http://tinyurl.com/uh3t
Post by Paul G
and if there's a way to add these numbers correctly using CLISP?
Sure. Write a decimal-arithmetic package, which will be several
orders of magnitude slower than machine floating-point arithmetic,
(with the caveat, as Pascal pointed out, that *some* CPUs have machine
decimal floating-point arithmetic which is presumably nearly as
efficient as the usual binary floating-point arithmetic)
Post by Pascal J. Bourguignon
Post by Robert Maas, http://tinyurl.com/uh3t
but give you exact answers so long as you only add subtract and
multiply (never divide). But why would anybody waste their time
doing that?
For example, somebody could waste their time doing that to avoid
killing people, since lay programmers are too dumb to do the correct thing:
http://www.ima.umn.edu/~arnold/disasters/patriot.html
| "To predict where the Scud will next appear, both time and
| velocity must be expressed as real numbers".

The word "real" is braindamaged, invented with FORTRAN, carried to
other languages like monkey-see monkey-do. Some languages use the
correct term "floating-point".

The statement in the Web page is wrong. Time and velocity (3
coordinates of course) do *not* need to be expressed in
floating-point. They could for example be expressed as rational
numbers. But perhaps the Scud-defense missile doesn't have enough
RAM to implement Common Lisp with big-integers and rationals? An
alternative would be to use interval arithmetic. Then a system test
just before launch would give a FAILSOFT, whereby the error margin
came out too large to be practical, and based on past experience
the field commander would issue a SYSTEM RE-START which would set
the time-since-SYSTEM-START clock back to zero and consequently
reduce the error margin to near zero allowing launch to then
proceed. (A later release of the software could do a clock reset
whenever it detects this problem just before launch, thus avoiding
the need of the field commander to intervene.)
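
The clock drift behind the Patriot failure Robert mentions can be quantified with exact rationals. The parameters below (0.1-second ticks, the stored constant chopped to 23 fractional binary bits, 100 hours of uptime) follow the analysis on the linked page and should be treated as assumptions of this sketch:

```python
# Reproduce the Patriot clock-drift estimate with exact rational arithmetic.
from fractions import Fraction

BITS = 23                                  # fractional bits kept (per the cited analysis)
true_tick = Fraction(1, 10)                # intended tick: one tenth of a second
stored_tick = Fraction((2**BITS) // 10, 2**BITS)   # 0.1 chopped to 23 bits
per_tick_error = true_tick - stored_tick   # ~9.5e-8 seconds lost per tick

ticks = 100 * 3600 * 10                    # ticks in 100 hours of uptime
drift = per_tick_error * ticks
print(float(per_tick_error))   # ~9.5e-8
print(float(drift))            # ~0.34 seconds of accumulated clock error
```

At Scud closing speeds a third of a second of clock error shifts the predicted position by hundreds of meters, which is consistent with the account on the linked page.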
Kenneth Tilton
2010-09-14 03:38:51 UTC
Reply
Permalink
Post by Pascal J. Bourguignon
Post by Robert Maas, http://tinyurl.com/uh3t
Post by Paul G
and if there's a way to add these numbers correctly using CLISP?
Sure. Write a decimal-arithmetic package, which will be several
orders of magnitude slower than machine floating-point arithmetic,
but give you exact answers so long as you only add subtract and
multiply (never divide). But why would anybody waste their time
doing that?
For example, somebody could waste their time doing that to avoid
killing people, since lay programmers are too dumb to do the correct thing:
http://www.ima.umn.edu/~arnold/disasters/patriot.html
Programmers screw up. This one is pretty bad so some excoriation is
allowed, but just as interesting is that the Israelis identified the bug
two weeks before the Dahran strike /and/ provided a workaround (frequent
reboots) that went unimplemented. I was going to give the PHBs full
credit for Dahran, but (a) hey, the Israelis found the bug and they did
not even write the code so the authors must really suck a big one and
(b) what makes anyone think a well-programmed Patriot would have changed
the outcome? Some serious estimates of their effectiveness go as low as
zero.

kt
--
http://www.stuckonalgebra.com
"The best Algebra tutorial program I have seen... in a class by itself."
Macworld
George Neuner
2010-09-08 03:44:08 UTC
Reply
Permalink
Post by Robert Maas, http://tinyurl.com/uh3t
Post by Paul G
and if there's a way to add these numbers correctly using CLISP?
Sure. Write a decimal-arithmetic package, which will be several
orders of magnitude slower than machine floating-point arithmetic,
but give you exact answers so long as you only add subtract and
multiply (never divide). But why would anybody waste their time
doing that?
On x86 you could put a nice face on Intel's decimal floating point
library:

http://software.intel.com/en-us/articles/intel-decimal-floating-point-math-library/


Or work with the latest PowerPCs which have decimal hardware already.

George
Paul G
2020-08-26 05:37:12 UTC
Reply
Permalink
Post by Robert Maas, http://tinyurl.com/uh3t
I'm using GNU CLISP 2.48. I've caught LISP making a pretty grievous
math error, and I don't know if it's a bug or if there's another
explanation.
(+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
which converts each of those decimal-digit expressions into
floating-point binary approximations. In particular not one of
those eight values you entered can be expressed exactly in binary.
(If you don't believe me, try to find an exact binary value that
equals any one of those, and report which exact binary value you
claim exactly equals which of the decimal values there.)
Then seven floating-point-approximate additions are done before
yielding the grand total, each of those possibly causing additional
roundoff errors. Finally the binary result is converted back to
decimal notation, with additional conversion error, to print the
result. So that's a total of nine (9) times you absolutely cannot
get the correct result so some roundoff *must* occur, and seven (7)
times you might also get roundoff error. Only an idiot would expect
the final result to print exactly how it'd be if you did all the
arithmetic by hand using decimal notation.
32.800003
Given all the points of definite conversion approximation or
addition roundoff error, that's a pretty good result.
Obviously this answer is off by .000003.
That's a pretty small amount of conversion+roundoff error, given
all those operations you asked it to do, all those errors you
allowed to accumulate.
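
Those rounding points can be replayed step by step. A Python sketch that emulates single-float arithmetic by rounding through an IEEE-754 single after every conversion and every addition (the struct round-trip does the narrowing; adding two singles in double precision and then narrowing gives exactly the correctly rounded single-precision sum):

```python
import struct

def f32(x: float) -> float:
    """Round a double to the nearest IEEE-754 single, returned as a double."""
    return struct.unpack("f", struct.pack("f", x))[0]

vals = [0.2, 0.4, 0.2, 0.2, 9.8, 15.4, 1.2, 5.4]
total = 0.0
for v in vals:
    total = f32(total + f32(v))   # one rounding per conversion, one per add
print(total)   # ~32.800003, matching CLISP's single-float result
```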
Could somebody explain to me why this is,
Why do I need to explain it to you? Don't you have the slightest
concept of decimal and binary notational systems, and the wisdom to
know that values in one system generally cannot be expressed
exactly in the other system, and indeed specifically that *none* of
the eight input values you gave above can be expressed exactly in
binary?
You don't need to do anything. If answering a simple question in a respectful manner is too much work for you, then fuck the hell off, you pathetic piece of trash.
Post by Robert Maas, http://tinyurl.com/uh3t
and if there's a way to add these numbers correctly using CLISP?
Sure. Write a decimal-arithmetic package, which will be several
orders of magnitude slower than machine floating-point arithmetic,
but give you exact answers so long as you only add subtract and
multiply (never divide). But why would anybody waste their time
doing that?
Alternately, write an interval-arithmetic package. That will give
you **correct** upper and lower bounds on the result. It won't tell
you the exact answer, but it'll tell you a narrow range in which
the exact answer lies, and that answer-interval will be absolutely
correct, and that answer-interval will also give you an upper bound
on the amount of error between the exactly-correct answer and the
midpoint of the answer-interval. With proper interface to IEEE
rounding modes, it will be nearly as efficient as ordinary
floating-point arithmetic. Or without such interface, doing the
arithmetic using binary integers with liberal use of FLOOR and
CEILING, it'll be considerably slower than floating-point, but
still much faster than decimal arithmetic.
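
Without access to IEEE rounding modes, the same idea can be approximated by widening every result outward with math.nextafter (an assumption of this sketch: Python 3.9 or later). A deliberately crude version:

```python
# Minimal interval arithmetic: keep [lo, hi] bounds that provably contain
# the exact mathematical result, widening outward after every operation.
import math

def interval(x: float):
    # A decimal literal's true value lies within one ulp of its rounded float.
    return (math.nextafter(x, -math.inf), math.nextafter(x, math.inf))

def iadd(a, b):
    lo = math.nextafter(a[0] + b[0], -math.inf)   # push the low bound down
    hi = math.nextafter(a[1] + b[1], math.inf)    # push the high bound up
    return (lo, hi)

vals = [0.2, 0.4, 0.2, 0.2, 9.8, 15.4, 1.2, 5.4]
acc = interval(vals[0])
for v in vals[1:]:
    acc = iadd(acc, interval(v))
lo, hi = acc
print(lo, hi)            # a tight bracket around the true sum
print(lo <= 32.8 <= hi)  # True: the exact answer 32.8 is inside the bounds
```

The bounds are looser than necessary (every step widens by a full ulp on each side), but they are *correct*, which is the property being advertised.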
Ideally you can write a lazy-evaluation continuation-style interval
arithmetic package, whereby it first calculates a crude set of
bounds, then any time later you can ask it to extend that work to
generate more and more accurate bounds. I started work on such a
system several years ago, but nobody showed interest and nobody offered
to pay me for my work; if you offer to pay me for what I did
already and pay me to finish the work, I'm available. For some
details, see:
http://www.rawbw.com/~rem/IntAri/
Bottom line: You haven't caught LISP making any math error, but
I've caught you making a pretty grievous error in understanding
what exactly you asked Lisp to calculate for you.
It's pretty easy to "catch" me not understanding something when I post a question stating that I don't understand something. You are truly a disgraceful, socially inept cunt. Fuck off and die, Robert.
jimka
2010-09-22 19:33:56 UTC
Reply
Permalink
Post by Paul G
(+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
I tried this with perl and got the following.
echo "print .2 + 0.4 + 0.2 + 0.2 + 9.8 + 15.4 + 1.2 + 5.4 -
32.8" | perl
7.105427357601e-15

Is this a bug in perl?

I also tried it in python and got the following.

Python 2.5.1 (r251:54863, Dec 6 2008, 10:49:39)
[GCC 4.2.1 (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Post by Paul G
print .2 + 0.4 + 0.2 + 0.2 + 9.8 + 15.4 + 1.2 + 5.4 - 32.8
7.1054273576e-15
It looks like a bug in python.

-jim



-jim
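
For completeness: the same expression comes out exact in any language with decimal arithmetic, e.g. Python's decimal module (the kind of decimal-arithmetic package discussed upthread):

```python
# Decimal arithmetic: parse the literals as base-10 values, so adding
# these one-decimal-place numbers involves no rounding at all.
from decimal import Decimal

vals = [Decimal(s) for s in
        ["0.2", "0.4", "0.2", "0.2", "9.8", "15.4", "1.2", "5.4"]]
print(sum(vals))                     # 32.8
print(sum(vals) == Decimal("32.8"))  # True
```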
toby
2010-09-24 02:25:36 UTC
Reply
Permalink
Post by jimka
Post by Paul G
(+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
I tried this with perl and got the following.
echo "print .2 + 0.4 +  0.2 +  0.2 +  9.8 +  15.4 +  1.2  + 5.4  -
32.8"  | perl
7.105427357601e-15
Is this a bug in perl?
I also tried it in python and got the following.
Python 2.5.1 (r251:54863, Dec  6 2008, 10:49:39)
[GCC 4.2.1 (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Post by Paul G
print .2 + 0.4 +  0.2 +  0.2 +  9.8 +  15.4 +  1.2  + 5.4  - 32.8
7.1054273576e-15
It looks like a bug in python.
To flog a dead horse.

$ php -r 'echo .2 + 0.4 + 0.2 + 0.2 + 9.8 + 15.4 + 1.2 + 5.4 -
32.8;'
7.105427357601E-15
Post by jimka
-jim
-jim
Paul G
2020-08-26 04:55:47 UTC
Reply
Permalink
Post by jimka
Post by Paul G
(+ 0.2 0.4 0.2 0.2 9.8 15.4 1.2 5.4)
I tried this with perl and got the following.
echo "print .2 + 0.4 + 0.2 + 0.2 + 9.8 + 15.4 + 1.2 + 5.4 -
32.8" | perl
7.105427357601e-15
Is this a bug in perl?
I also tried it in python and got the following.
Python 2.5.1 (r251:54863, Dec 6 2008, 10:49:39)
[GCC 4.2.1 (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Post by Paul G
print .2 + 0.4 + 0.2 + 0.2 + 9.8 + 15.4 + 1.2 + 5.4 - 32.8
7.1054273576e-15
It looks like a bug in python.
-jim
-jim
I ask a question as respectfully as I could, and you answer with sarcasm. Stay classy, Jim.