Discussion:
DEFUN list argument
B. Pym
2024-09-29 00:52:08 UTC
(defun list-of-length (n list)
  "Predicate which tests whether LIST is a list
of length N."
  (loop for i from 0 below n
        when (null list) do (return nil)
        do (pop list)
        finally (return (null list))))
Instead of using a macro whose source measures 60 kilobytes,
we can write it more briefly using simple recursion. Of course,
that's not possible in CL, since it's not a Lispy language and
doesn't offer tail-call optimization. (Lispy languages
encourage recursive solutions.)

Scheme:

(define (list-of-length? n lst)
  (cond ((zero? n) (null? lst))
        ((null? lst) #f)
        (#t (list-of-length? (- n 1) (cdr lst)))))
Kaz Kylheku
2024-09-29 02:21:46 UTC
Post by B. Pym
(defun list-of-length (n list)
  "Predicate which tests whether LIST is a list
of length N."
  (loop for i from 0 below n
        when (null list) do (return nil)
        do (pop list)
        finally (return (null list))))
Instead of using a macro whose source measures 60 kilobytes,
... we can use a macro whose source is 20 lines and which is not
included in our language implementation, and then claim that a 7-line
function that must be accompanied by that macro is shorter than a 5-line
function depending only on ANSI Common Lisp.
Post by B. Pym
we can make it shorter by simply using recursion. Of course,
that's not possible in CL since it's not a Lispy language and
doesn't offer tail-call optimization.
The Common Lisp specification doesn't require it, but implementations
offer it.
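For illustration, here is a tail-recursive rendering of the same predicate. The self-call is in tail position; ANSI CL does not mandate its elimination, but serious implementations such as SBCL and CCL commonly compile such calls into jumps under usual optimization settings. A sketch, not something the standard guarantees:

```lisp
(defun list-of-length-p (n list)
  "Test whether LIST is a list of exactly N elements.
The recursive call is in tail position, so compilers
that perform tail-call optimization turn it into a loop."
  (cond ((zerop n) (null list))
        ((null list) nil)
        (t (list-of-length-p (1- n) (cdr list)))))
```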

The ISO C specification doesn't require any optimization at all; yet C
users rely on optimizations (one of them being TCO, by the way).

In C, nobody in their right mind writes i >> 1 rather than i / 2 any
more, even though ISO C doesn't "offer" any guarantee that i / 2 will
be reduced to a shift.
Scheme offers very little; most serious work done with Scheme picks
an implementation and uses it.

Many of your own posts pick a particular Scheme like Gauche and
often use its extensions that are not in Scheme.
Post by B. Pym
(define (list-of-length? n lst)
  (cond ((zero? n) (null? lst))
        ((null? lst) #f)
        (#t (list-of-length? (- n 1) (cdr lst)))))
You would avoid this kind of function entirely (and linked lists
altogether) in code that is to be maximally performant. Linked
lists have poor cache performance. Chasing down a linked list
involves dependent loads, which the machine cannot prefetch, unlike
marching down an array that is contiguously laid out in memory.
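The contrast can be sketched in two lines of Common Lisp: summing a list versus summing a simple-vector. This is only an illustration of the access patterns being described, not a benchmark; actual timings depend on the implementation and the hardware.

```lisp
;; Each CDR is a load whose address depends on the previous
;; load, so the hardware cannot run ahead of the traversal.
(defun sum-list (list)
  (loop for x in list sum x))        ; pointer chasing

;; Vector element addresses are computable in advance from
;; the base and the index, so the prefetcher can stay ahead.
(defun sum-vector (vec)
  (declare (type simple-vector vec))
  (loop for x across vec sum x))     ; contiguous strides
```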

The battle you are trying to pick here is outdated.

Whether list processing code is recursive or iterative, and how fast it
is, is largely irrelevant. Most list processing is compile-time code
wrangling (macros and whatnot). Compile-time performance is
not highly relevant, or else nobody would be using C++.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
Axel Reichert
2024-09-29 11:51:54 UTC
Scheme offers very little; most serious work done with Scheme picks an
implementation and uses it.
Many of your own posts pick a particular Scheme like Gauche and often
use its extensions that are not in Scheme.
Linked lists have poor cache performance. Chasing down a linked list
involves dependent loads, which the machine cannot prefetch, unlike
marching down an array that is contiguously laid out in memory.
The battle you are trying to pick here is outdated.
I have read this a couple of times now. Interesting. But what is a Lisper to
do in the source code? Convert it all to vectors/arrays? Use more
imperative idioms rather than recursion? Or not care, because it is all
handled by the implementation?

Honestly curious,

Axel
Kaz Kylheku
2024-09-29 15:52:50 UTC
Post by Axel Reichert
Scheme offers very little; most serious work done with Scheme picks an
implementation and uses it.
Many of your own posts pick a particular Scheme like Gauche and often
use its extensions that are not in Scheme.
That's a harder problem than picking some concrete criteria for
deciding where a city ends, and applying them consistently when
comparing the sizes of London, Liverpool or Birmingham.
Post by Axel Reichert
Linked lists have poor cache performance. Chasing down a linked list
involves dependent loads, which the machine cannot prefetch, unlike
marching down an array that is contiguously laid out in memory.
The battle you are trying to pick here is outdated.
I have read this a couple of times now. Interesting. But what is a Lisper to
do in the source code? Convert it all to vectors/arrays? Use more
imperative idioms rather than recursion? Or not care, because it is all
handled by the implementation?
Honestly curious,
Lists are fine in most situations. But when you find yourself
doing discrete Fourier transforms on nested lists in what is supposed
to be some sort of real-time signal processing code, maybe have a word
with yourself.
Axel Reichert
2024-09-30 08:16:20 UTC
Post by Kaz Kylheku
Lists are fine in most situations. But when you find yourself
doing discrete Fourier transforms on nested lists in what is supposed
to be some sort of real-time signal processing code, maybe have a word
with yourself.
OK, thanks. Having done numerics all my professional life, I found
this a no-brainer. (-:

Best regards

Axel
Waldek Hebisch
2024-09-30 14:03:15 UTC
Post by Axel Reichert
Post by Kaz Kylheku
The battle you are trying to pick here is outdated.
I have read so a couple of times. Interesting. But what is a Lisper to
do in the source code? Convert it all to vectors/arrays? Use more
imperative idioms than recursion? Do not care, because it is all handled
by the implementation?
It depends. Array type declarations and OPTIMIZE settings will get you
to about 2-6 times slower than C on general-style code working within the
CPU cache; that is good enough for some purposes. Many programs have
poor cache behaviour, and if the behaviour is inherently bad, then cache
misses may hide the effects of the poorer code generators in available
Lisp implementations.

ROW-MAJOR-AREF may help for multidimensional arrays. But if you
want to do better, then you probably need external code. C may
be good enough, but "fast" libraries frequently use specialized
assembler. Frequently, a small number of fast external routines can
give you the needed speed.
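A minimal sketch of the style being described: element-type and OPTIMIZE declarations, plus ROW-MAJOR-AREF to walk a 2-D array as flat storage without per-axis index arithmetic. The declarations are hints; how much they buy you depends on the implementation.

```lisp
;; Sum every element of a 2-D array of double-floats,
;; treating the array as its underlying row-major storage.
(defun array-total (a)
  (declare (type (simple-array double-float (* *)) a)
           (optimize (speed 3) (safety 0)))
  (loop for i of-type fixnum below (array-total-size a)
        sum (row-major-aref a i) of-type double-float))
```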

Note that many programmers do not care much about program speed.
Certainly it makes no sense to spend a lot of effort speeding up
non-critical parts.
--
Waldek Hebisch