Discussion:
Is Lexical Binding The Norm Yet?
Kaz Kylheku
2024-01-15 18:31:30 UTC
Permalink
... I never used Maclisp or Lisp 1.5 so I don't know how their
scoping worked.
MacLisp was weird. You typically debugged your program using the
MacLisp interpreter because it made the debugging cycle faster, and the
interpreter was purely dynamically scoped. But when the MacLisp compiler
compiled your code, all the variables you hadn't declared SPECIAL became
lexical! I know that sounds crazy, but because MacLisp didn't really
support closures, it wasn't too hard to write code in a style such that
it didn't really matter whether variables were dynamic or lexical.
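
For instance, here is a minimal sketch of that style in Common Lisp
notation (a hypothetical function, not actual MacLisp code): the inner
lambda is only used "downward" while the binding of TOTAL is still
live, and no names clash, so dynamic and lexical binding give the same
answer.

  (defun sum-squares (xs)
    (let ((total 0))
      (mapc (lambda (x) (incf total (* x x))) xs)
      total))

  ;; (sum-squares '(1 2 3)) => 14 under either scoping discipline.
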
Assuming that the interpreter in the appendix of the Lisp 1.5 manual is
representative, I would say that Lisp 1.5 was lexical. But I'm not 100%
certain that that
interpreter really reflects the system's true semantics. (Does the
source code for the Lisp 1.5 system still exist anywhere?)
Now you have me wondering about the scoping rules of Church's lambda
calculus, or if the concept even applies. At first impression it seems
lexical.
It's definitely lexical, but you are right to wonder if the distinction
even applies. Our notion of "dynamic" scoping seems pretty closely tied
to the workings of the typical applicative order Lisp evaluator. I
imagine Church himself would be pretty puzzled by our notion of dynamic
scoping...
Even though LC is lexical, I suspect that examples of LC that break
under dynamic scope have to be either incorrect, or convoluted.

What I'm referring to by "incorrect" is the presence of unbound
references. Some function (lambda (x) y) occurs where y is not
bound to anything, and that is then passed and called somewhere where y
is dynamically bound. This "breaks" in the sense that it's ill-formed
under normal LC, but dynamic LC accepts it.

The other cases deal with references to the wrong parameter.

Wrong references happen in two ways:

- the same identifier is reused for two different purposes, which get
"cross-wired" by the scoping.

- the same variable is activated multiple times; the activations are
expected to be distinct instances but aren't.

The first case is easiest to show with a named function, like:

(define fun (x) (x x))

(lambda (x)
  (fun (lambda (y) x)))

Under dynamic scope, the x inside (lambda (y) x) resolves to fun's own
parameter rather than the outer one, so it gets confused. We can get
rid of the define:

(lambda (x)
  ((lambda (x) (x x)) (lambda (y) x)))

OK, but when we do this, this becomes confusing regardless of
the scoping discipline, due to the reuse of the parameter name!
All the variables in the example are singly instantiated, so
the only problem is the naming; if we make it:

(lambda (z)
  ((lambda (x) (x x)) (lambda (y) z)))

the problem goes away and it now means the same thing under
dynamic scope.

The remaining category is problems where the LC breaks because the
same variable is instantiated multiple times, and those instantiations
are not separately captured by a lambda, since there is actually only
one variable.

In Lisp, it's easy to come up with examples, such as the difference
between a loop construct which steps the dummy variable versus one
which freshly binds it.

People are running into this even in Blub languages. E.g. the Go
language recently changed its for-range looping construct to bind a
fresh variable on each iteration, under the belief that this is what
programmers expect.

Under dynamic scope, it won't make a difference. You can pretend to bind
a fresh variable, but it's really equivalent to stepping, since there is
only one variable. In Common Lisp, if our implementation has a DOLIST
loop that freshly binds, but we use a special variable as the dummy,
then that is reduced to being equivalent to a DOLIST that mutates a
single binding.
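
A minimal Common Lisp sketch of that collapse (assuming an
implementation whose DOLIST freshly binds its variable on each
iteration; the standard leaves that choice open):

  (defvar *x*)   ; proclaim *X* special, with no global value

  (let ((closures '()))
    (dolist (*x* '(1 2 3))
      (push (lambda () *x*) closures))
    (let ((*x* 99))
      (mapcar #'funcall closures)))   ; => (99 99 99)

  ;; Every closure just sees the innermost dynamic binding of *X* at
  ;; call time, so whether DOLIST rebinds or steps is unobservable.
  ;; With a lexical dummy, each closure would instead remember its
  ;; own iteration's binding.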

It's not obvious to me how to make a simple example of this in lambda
calculus. You don't have loops that can step a variable versus bind.
You have to use recursion in order to evaluate the same functions
multiple times, with different values of free variables that have
to be demonstrably captured.

I can see why it might not have been obvious to McCarthy that saving
and restoring variables isn't enough for lambda-calculus-like
computation.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.
Alan Bawden
2024-01-15 20:08:19 UTC
Permalink
Kaz Kylheku <433-929-***@kylheku.com> writes:

Even though LC is lexical, I suspect that examples of LC that break
under dynamic scope have to be either incorrect, or convoluted.

I'm having trouble understanding what you wrote here because I can't
figure out for the life of me what you mean by "LC". At first I thought
you meant LC as short for Lambda Calculus, but later you start talking
about "normal LC" vs. "dynamic LC", and since I have no idea what
"dynamic lambda calculus" could be, I'm stumped.

- Alan
Stefan Ram
2024-01-15 20:27:31 UTC
Permalink
Post by Kaz Kylheku
Even though LC is lexical, I suspect that examples of LC that break
under dynamic scope have to be either incorrect, or convoluted.
In Usenet, I prefer to use ">" to mark quotations, not indentation.

"LC" could also be "Lisp code".
Kaz Kylheku
2024-01-16 01:29:06 UTC
Permalink
Post by Kaz Kylheku
Even though LC is lexical, I suspect that examples of LC that break
under dynamic scope have to be either incorrect, or convoluted.
I'm having trouble understanding what you wrote here because I can't
figure out for the life of me what you mean by "LC". At first I thought
you meant LC as short for Lambda Calculus, but later you start talking
about "normal LC" vs. "dynamic LC", and since I have no idea what
"dynamic lambda calculus" could be, I'm stumped.
It has been observed that a lot of Lisp code works fine if we substitute
dynamic binding for lexical.

So we can translate that into the lambda calculus domain. We need to
imagine a lambda calculus that has dynamic binding.

Some regular lambda calculus examples would still work. Some that don't
work could be repaired just by renaming clashing variables.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.
Lawrence D'Oliveiro
2024-01-16 01:55:30 UTC
Permalink
Post by Kaz Kylheku
It has been observed that a lot of Lisp code works fine if we substitute
dynamic binding for lexical.
I tend to use function factories a lot. And Python allows class factories
as well. None of that would work.
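
For example, a minimal Common Lisp sketch of a function factory
(MAKE-ADDER is a hypothetical name) shows why it needs lexical binding:

  (defun make-adder (n)
    (lambda (x) (+ x n)))

  (funcall (make-adder 5) 10)   ; => 15 under lexical binding

  ;; Under dynamic binding, the N in the returned lambda is looked up
  ;; at call time, after MAKE-ADDER's binding of N has been unwound,
  ;; so the call signals an unbound-variable error (or picks up some
  ;; unrelated dynamic N).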
Alan Bawden
2024-01-15 20:23:34 UTC
Permalink
***@zedat.fu-berlin.de (Stefan Ram) writes:

Stoyan wrote:

|McCarthy admits that he underestimated the problem. "I must
|confess that I regarded this difficulty as just a bug and
|expressed confidence that Steve Russell would soon fix it..."

Stoyan commented:

|The funarg event proves that McCarthy was now far from his
|LISP. He won the theoretical battle and searched for the next
|field. The mathematical theory of computation, time sharing,
|and proof checking were his new interests. LISP lost first
|priority.

It's always important to keep in mind that McCarthy didn't add LAMBDA to
Lisp because he was trying to build a programming language based on
Lambda Calculus. What he _was_ trying to do was build a programming
language based on first order predicate calculus. He introduced LAMBDA
because he needed it in order to define recursive functions. I can't
find it right now, but someplace in his writings he apologized for the
fact that LAMBDA made it possible to write code that was definitely
_not_ first order, which is not something that he intended!

McCarthy was a really smart guy. So smart that some of his mistakes are
more interesting than many other smart people's best ideas.

- Alan
Jeff Barnett
2024-01-16 00:25:08 UTC
Permalink
I will never understand how these myths perpetuate.
Maybe people confuse Common Lisp with Emacs Lisp, which was historically
purely dynamically scoped. I don't know if it is still that way, not
counting Guile Emacs. I never used Maclisp or Lisp 1.5 so I don't know
how their scoping worked. But dynamic scope is convenient for
simple-minded implementations.
Now you have me wondering about the scoping rules of Church's lambda
calculus, or if the concept even applies. At first impression it seems
lexical.
In the old days, Lisps were LOCALLY scoped and included special
variables (dynamic scoped) too. Closures were available in some
implementations for a specified set of special variables. N.B. that by
LEXICAL scope we normally mean that nested function definitions see
local bindings wherever they are executed - not so for LOCAL binding.

Lexical bindings were desired by most of us, but we had neither the
address space nor the memory to make effectively fast, usable
implementations. ALGOL had enough extra structure rules and no specials,
so it could do LEXICAL, as shown by the Burroughs Algol machines that
existed decades before the various Lisp machines and were primarily used
in the banking business.
--
Jeff Barnett
Lawrence D'Oliveiro
2024-01-16 00:37:53 UTC
Permalink
Post by Jeff Barnett
Lexical bindings were desired for most of us but we neither had the
address space or memory to make effectively fast, usable
implementations.
Basically, call frames (or parts of them) go on the heap. I think Python
manages some efficiency gains by only including referenced variables.
Jeff Barnett
2024-01-16 18:26:04 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Jeff Barnett
Lexical bindings were desired for most of us but we neither had the
address space or memory to make effectively fast, usable
implementations.
Basically, call frames (or parts of them) go on the heap. I think Python
manages some efficiency gains by only including referenced variables.
The optimization code itself needed to fit in small computer
environments. The early Lisp 1.5s, for the most part, were developed on
computers with a few hundred thousand bytes (in today's memory units),
with no virtual addressing, and the later ones more or less followed
with compatible languages on newer hardware.

To see the effect of such memory limitations, consider the GC:
structures were folded or compacted, and under-the-table erasure lists
were used. There wasn't enough address space or memory to do copying
collections and flips.

The technology was understood but the hardware wasn't up to it.
--
Jeff Barnett
Stefan Monnier
2024-01-16 23:16:02 UTC
Permalink
In the old days, Lisps were LOCALLY scoped and included special variables
(dynamic scoped) too. Closures were available in some implementations for
a specified set of special variables. N.B. that by LEXICAL scope we normally
mean that nested function definitions see local bindings wherever they are
executed - not so for LOCAL binding.
See for example the definition of "closures" in the context of
Lisp-Machine-Lisp at https://hanshuebner.github.io/lmman/fd-clo.xml#closure

Basically, they're functions that remember the value of dynbound
variables at the time that the closure was created and then rebind those
dynbound vars to those values around the evaluation of their body.

In ELisp you could implement it as follows:

(oclosure-define (lml-closure
                  (:predicate lml-closurep))
  bindings function)

(defun lml-closure (varlist function)
  "Create a \"closure\" over the dynamic variables in VARLIST."
  (oclosure-lambda
      (lml-closure (bindings (mapcar (lambda (v) (cons v (symbol-value v)))
                                     varlist))
                   (function function))
      (&rest args)
    (cl-progv (mapcar #'car bindings) (mapcar #'cdr bindings)
      (apply function args))))


-- Stefan
Jeff Barnett
2024-01-17 20:12:26 UTC
Permalink
Post by Stefan Monnier
In the old days, Lisps were LOCALLY scoped and included special variables
(dynamic scoped) too. Closures were available in some implementations for
a specified set of special variables. N.B. that by LEXICAL scope we normally
mean that nested function definitions see local bindings wherever they are
executed - not so for LOCAL binding.
See for example the definition of "closures" in the context of
Lisp-Machine-Lisp at https://hanshuebner.github.io/lmman/fd-clo.xml#closure
Note that above I was talking about Lisps that predated the Lisp Machines.
Post by Stefan Monnier
Basically, they're functions that remember the value of dynbound
variables at the time that the closure was created and then rebind those
dynbound vars to those values around the evaluation of their body.
There was no "rebinding". Rather the original binding was used by all
who could see it vis a vis the scoping rules. Since multiple closures
(including the one where that binding occurred) could reference the same
lexical binding, creating multiple rebinding would screw the semantics.
Post by Stefan Monnier
(oclosure-define (lml-closure
                  (:predicate lml-closurep))
  bindings function)
(defun lml-closure (varlist function)
  "Create a \"closure\" over the dynamic variables in VARLIST."
  (oclosure-lambda
      (lml-closure (bindings (mapcar (lambda (v) (cons v (symbol-value v)))
                                     varlist))
                   (function function))
      (&rest args)
    (cl-progv (mapcar #'car bindings) (mapcar #'cdr bindings)
      (apply function args))))
--
Jeff Barnett
Kaz Kylheku
2024-01-17 20:37:43 UTC
Permalink
Post by Jeff Barnett
Post by Stefan Monnier
In the old days, Lisps were LOCALLY scoped and included special variables
(dynamic scoped) too. Closures were available in some implementations for
a specified set of special variables. N.B. that by LEXICAL scope we normally
mean that nested function definitions see local bindings wherever they are
executed - not so for LOCAL binding.
See for example the definition of "closures" in the context of
Lisp-Machine-Lisp at https://hanshuebner.github.io/lmman/fd-clo.xml#closure
Note that above I was talking about Lisps that predated the Lisp Machines.
Post by Stefan Monnier
Basically, they're functions that remember the value of dynbound
variables at the time that the closure was created and then rebind those
dynbound vars to those values around the evaluation of their body.
There was no "rebinding". Rather the original binding was used by all
who could see it vis a vis the scoping rules. Since multiple closures
(including the one where that binding occurred) could reference the same
lexical binding, creating multiple rebinding would screw the semantics.
By the way, I once made a (proprietary, closed source) Lisp dialect
which had dynamic scope, but with proper closures!

The dialect used deep binding for the dynamic scope. Basically it's
the same as lexical scope in a recursive interpreter, except that the
environment chain crosses the function call boundary.

Closures worked by actually walking the environment chain and capturing
it.

To make the closures lexical, the function activation frames had a
special marker in them to identify them. Thus the dynamic closure
capture mechanism was able to stop at that marker and not capture
beyond.

When calling a closure, the captured environment got spliced in, so that
the search for a dynamic variable would look at the function parameters
first, then the captured environment, then the calling environment.
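
A toy Common Lisp sketch of that lookup-and-capture arrangement
(hypothetical names; not the actual dialect's code): the environment is
a chain of frames (alists), with a :function-boundary marker pushed at
each function activation.

  (defun env-lookup (sym env)
    "Search the frame chain ENV for SYM, skipping boundary markers."
    (dolist (frame env (error "unbound variable: ~S" sym))
      (unless (eq frame :function-boundary)
        (let ((cell (assoc sym frame)))
          (when cell (return (cdr cell)))))))

  (defun capture-closure-env (env)
    "Copy the frames up to, but not including, the nearest boundary."
    (loop for frame in env
          until (eq frame :function-boundary)
          collect frame))

  (defun closure-call-env (param-frame captured-env calling-env)
    "Parameters first, then the captured frames, then the caller's
dynamic environment."
    (cons param-frame (append captured-env calling-env)))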

This dialect also had threads.

I remember it was quite nice to be able to override special variables
around lambdas that executed in a separate thread.

(let* ((*some-param* (whatever))
       (thr (create-thread (lambda ...))))
  ...)

I was using that for testing APIs on an embedded target, where some of
the test functions could be controlled by various special-variable
parameters.

I /think/ I may also have had a dlambda construct that captured the
full dynamic scope across the function boundary, so this was possible:

(defun foo ()
  (dlambda () ...))

The returned lambda would see bindings of dynamic variables as they
existed at the time foo was called.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.
Lawrence D'Oliveiro
2024-01-17 23:33:40 UTC
Permalink
Post by Kaz Kylheku
By the way, I once made a (proprietary, closed source) Lisp dialect
which had dynamic scope, but with proper closures!
I’m just a humble Python programmer, but it seems to me there are easier
ways of doing such things: create and instantiate a class.

Then you can define methods that access and update the internal state,
coupled with a method that says “do the actual work”. You can even manage
that internal state via assignable “properties”, instead of explicit
getter/setter method calls.

And for added flavour, if you define a “__call__()” method that does the
actual work, you can call the class instance as though it were a function.
Kaz Kylheku
2024-01-18 00:06:03 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Kaz Kylheku
By the way, I once made a (proprietary, closed source) Lisp dialect
which had dynamic scope, but with proper closures!
I’m just a humble Python programmer, but it seems to me there are easier
ways of doing such things: create and instantiate a class.
Whether or not that is true, that dialect didn't evolve to the point
it had objects. I quit that company and trained for a marathon for 3
months instead of working anywhere. 5 days a week of weights, 6 days
of running. I hit a time of 3 hours and 19 minutes, with no natural
talent or anything, and well over thirty.

I caught word from the manager that he also bailed two weeks later
and the project folded.

While I was training for the marathon, the resumes I had sent out at the
beginning of the three month period started to gain traction. I got hired at a
new place, and my starting day was exactly the Monday after the Sunday marathon
race.

Good times!
Post by Lawrence D'Oliveiro
Then you can define methods that access and update the internal state,
coupled with a method that says “do the actual work”. You can even manage
that internal state via assignable “properties”, instead of explicit
getter/setter method calls.
I think all that would have been verbose compared to the dynamic closures,
but of course it provides tighter encapsulation.
Post by Lawrence D'Oliveiro
And for added flavour, if you define a “__call__()” method that does the
actual work, you can call the class instance as though it were a function.
In TXR Lisp that is called lambda. No ugly underscores, because the
lambda symbol is properly packaged: you can have my:lambda in your own
package called "my" or whatever which doesn't clash with usr:lambda.

1> (defstruct functor ()
     (:method lambda (me arg) (put-line `@me: @arg!`)))
#<struct-type functor>
2> (new functor)
#S(functor)
3> [*2 42]
#S(functor): 42!
t

There is also a lambda-set which will let the call expression
denote a place that can take assignments:

4> (defmeth functor lambda-set (me arg new-val)
     (put-line `@me: @arg set to @{new-val}!`))
(meth functor
      lambda-set)
5> (set [*2 42] :blah)
#S(functor): 42 set to :blah!
:blah
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.
George Neuner
2024-01-19 04:39:26 UTC
Permalink
On Wed, 17 Jan 2024 23:33:40 -0000 (UTC), Lawrence D'Oliveiro
Post by Lawrence D'Oliveiro
Post by Kaz Kylheku
By the way, I once made a (proprietary, closed source) Lisp dialect
which had dynamic scope, but with proper closures!
I’m just a humble Python programmer, but it seems to me there are easier
ways of doing such things: create and instantiate a class.
Closures ARE objects - the major difference vs conventional OOP being
that closures - at runtime - can associate an arbitrary set of data
with an arbitrary set of functions.
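
A minimal Common Lisp sketch of that idea, as a let-over-lambda
"object" with private state and a small, hypothetical message protocol:

  (defun make-counter (&optional (count 0))
    (lambda (op &rest args)
      (ecase op
        (:get count)
        (:increment (incf count (or (first args) 1)))
        (:reset (setf count 0)))))

  ;; (defparameter *c* (make-counter))
  ;; (funcall *c* :increment)   ; => 1
  ;; (funcall *c* :get)         ; => 1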
Post by Lawrence D'Oliveiro
Then you can define methods that access and update the internal state,
coupled with a method that says “do the actual work”. You can even manage
that internal state via assignable “properties”, instead of explicit
getter/setter method calls.
And for added flavour, if you define a “__call__()” method that does the
actual work, you can call the class instance as though it were a function.
Programming with closures is more like using "prototype OO". Prototype
systems don't have classes, but rather ANY object can be modified to
change its set of instance data and/or methods, and can be cloned to
create new objects of that same "type".
https://en.wikipedia.org/wiki/Prototype-based_programming
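
A toy Common Lisp sketch of that prototype style (hypothetical names):
an object is just a table of slots, some holding functions; cloning
copies the table, and any clone can then be modified independently.

  (defun make-object (&rest plist)
    (let ((obj (make-hash-table)))
      (loop for (k v) on plist by #'cddr do (setf (gethash k obj) v))
      obj))

  (defun clone (obj)
    (let ((new (make-hash-table)))
      (maphash (lambda (k v) (setf (gethash k new) v)) obj)
      new))

  (defun send (obj msg &rest args)
    (apply (gethash msg obj) obj args))

  ;; (defparameter *point*
  ;;   (make-object :x 1 :y 2
  ;;                :magnitude (lambda (self)
  ;;                             (sqrt (+ (expt (gethash :x self) 2)
  ;;                                      (expt (gethash :y self) 2))))))
  ;; (send *point* :magnitude)              ; => 2.236068
  ;; (setf (gethash :x (clone *point*)) 10) ; a clone can diverge freely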


Python OO is class based. Python objects can be modified by
reflection and can be cloned ... but that is not the normal way of
defining and creating objects in Python.

There is a Python library which redefines classes and objects to
enable some prototype like behavior ... but there still is a
distinction between "class" and "object".
Lawrence D'Oliveiro
2024-01-19 05:42:33 UTC
Permalink
Post by George Neuner
Programming with closures is more like using "prototype OO". Prototype
systems don't have classes, but rather ANY object can be modified to
change its set of instance data and/or methods, and can be cloned to
create new objects of that same "type".
That’s true of Python, too.
George Neuner
2024-01-19 17:11:13 UTC
Permalink
On Fri, 19 Jan 2024 05:42:33 -0000 (UTC), Lawrence D'Oliveiro
Post by Lawrence D'Oliveiro
Post by George Neuner
Programming with closures is more like using "prototype OO". Prototype
systems don't have classes, but rather ANY object can be modified to
change its set of instance data and/or methods, and can be cloned to
create new objects of that same "type".
That’s true of Python, too.
You can clone/copy objects in Python, but - barring use of reflection
- you can't easily change an object's *declaration*: ie. its
properties (data fields) and methods (functions). Those are tied to
the object's class.

In prototype OO you can modify any object to add, delete, change or
replace its data fields or methods. Every object then is a template
for creating new objects having the same set of fields and methods.
[You could say that every object is both a class and an instance of
that class, but really the whole notion of what it means to be a class
or a member thereof disappears.]
Lawrence D'Oliveiro
2024-01-19 20:35:14 UTC
Permalink
Post by George Neuner
On Fri, 19 Jan 2024 05:42:33 -0000 (UTC), Lawrence D'Oliveiro
Post by Lawrence D'Oliveiro
Post by George Neuner
Programming with closures is more like using "prototype OO". Prototype
systems don't have classes, but rather ANY object can be modified to
change its set of instance data and/or methods, and can be cloned to
create new objects of that same "type".
That’s true of Python, too.
You can clone/copy objects in Python, but - barring use of reflection
- you can't easily change an object's *declaration*: ie. its
properties (data fields) and methods (functions). Those are tied to
the object's class.
No they are not.
George Neuner
2024-01-21 00:20:46 UTC
Permalink
On Fri, 19 Jan 2024 20:35:14 -0000 (UTC), Lawrence D'Oliveiro
Post by Lawrence D'Oliveiro
Post by George Neuner
On Fri, 19 Jan 2024 05:42:33 -0000 (UTC), Lawrence D'Oliveiro
Post by Lawrence D'Oliveiro
Post by George Neuner
Programming with closures is more like using "prototype OO". Prototype
systems don't have classes, but rather ANY object can be modified to
change its set of instance data and/or methods, and can be cloned to
create new objects of that same "type".
That’s true of Python, too.
You can clone/copy objects in Python, but - barring use of reflection
- you can't easily change an object's *declaration*: ie. its
properties (data fields) and methods (functions). Those are tied to
the object's class.
No they are not.
Yes they are.

Python does share some implementation details with prototype systems:
runtime type information (including object structure) is kept as a map
(dict). However, when you modify an object with __addattr__ et al,
the object's *class* information is copied and then modified to create
a new anonymous class, and the object returned is an instance of that
new class. The original class remains unmodified and continues to be
used as long as instances of that class remain.

But Python does not actively promote the notion that objects should be
manipulated in this way. The *preferred* way to create a new type is to
define a new named class.

In contrast, in a prototype system manipulating an object's structure
and cloning it is the ONLY way to create a new type of object. Every
object IS its own class, so when/if you modify it, you truly are
modifying the class itself.
Lawrence D'Oliveiro
2024-01-21 00:32:55 UTC
Permalink
Post by George Neuner
However, when you modify an object with __addattr__ et al,
the object's *class* information is copied and then modified to create
a new anonymous class ...
Shut up already. And try this:

class ExampleClass :

    def method(self) :
        print("I am the true method.")
    #end method

#end ExampleClass

def false_method() :
    print("I am the impostor method.")
#end false_method

inst1 = ExampleClass()
inst2 = ExampleClass()
inst2.method = false_method

inst1.method()
inst2.method()
del inst2.method
inst2.method()

Output:

I am the true method.
I am the impostor method.
I am the true method.
George Neuner
2024-01-21 00:54:12 UTC
Permalink
On Sun, 21 Jan 2024 00:32:55 -0000 (UTC), Lawrence D'Oliveiro
Post by Lawrence D'Oliveiro
Post by George Neuner
However, when you modify an object with __addattr__ et al,
the object's *class* information is copied and then modified to create
a new anonymous class ...
print("I am the true method.")
#end method
#end ExampleClass
print("I am the impostor method.")
#end false_method
inst1 = ExampleClass()
inst2 = ExampleClass()
inst2.method = false_method
inst1.method()
inst2.method()
del inst2.method
inst2.method()
I am the true method.
I am the impostor method.
I am the true method.
That is not changing the object's declaration at all - it is simply
rebinding a member to a new function in one instance.

Create a new instance of ExampleClass() and it will have the original
method, not the "change" you made to inst2.


How about you try to learn something?
Alan Bawden
2024-01-21 04:30:29 UTC
Permalink
Lawrence D'Oliveiro <***@nz.invalid> writes:

...

Shut up already. And try this:

Calm down. The point that George was trying to make is that neither
Python nor Common Lisp is a prototype OO system. Prototype OO was once
(in the early 80s) a serious contender for how to design an
object-oriented language. Before Common Lisp had an object system there
were several proposals floating around, some of which were prototype
based. But I don't think any programming language ever actually wound
up using such a system.

It is true that in Python you can do this:

class ExampleClass :
    def method(self) :
        print("I am the true method.")

def false_method() :
    print("I am the impostor method.")

inst1 = ExampleClass()
inst2 = ExampleClass()
inst2.method = false_method

inst1.method()
inst2.method()
del inst2.method
inst2.method()

Output:

I am the true method.
I am the impostor method.
I am the true method.

But consider the following:

Python 3.10.5 (v3.10.5:f37715, Jul 10 2022, 00:26:17) [GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> class Frob:
...     def __len__(self):
...         return 17
...
>>> x = Frob()
>>> len(x)
17
>>> x.__len__()
17
>>> x.__len__ = lambda: 23
>>> x.__len__()
23
>>> len(x)
17

Why didn't that work? Because the Python implementors knew that
assigning new methods like that was not the _normal_ way to make objects
with new behaviors. They _could_ have made that work (which would
arguably be more consistent), but in the name of efficiency they chose
not to. In a _true_ prototype system, where all new object behavior
comes from cloning followed by modification, they would not have made
that choice.

- Alan
Kaz Kylheku
2024-01-21 04:57:09 UTC
Permalink
Post by Alan Bawden
...
Calm down. The point that George was trying to make is that neither
Python nor Common Lisp is a prototype OO system. Prototype OO was once
(in the early 80s) a serious contender for how to design an
object-oriented language. Before Common Lisp had an object system there
were several proposals floating around, some of which were prototype
based. But I don't think any programming language ever actually wound
up using such a system.
Today, JavaScript is said to be prototype-based. I have no idea where
the goalposts are: is it True Scotsman's prototype-based or not.

(All I know is that I hate every aspect of the approach, in
pretty much any shape. It is a copout from doing the gruntwork of
implementing a proper object system, and that shows at every corner.)
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.
Lawrence D'Oliveiro
2024-01-21 05:14:32 UTC
Permalink
... return 17 ...
x = Frob()
len(x)
17
x.__len__()
17
x.__len__ = lambda: 23
x.__len__()
23
len(x)
17
Why didn't that work?
Not sure what you are saying didn’t “work” in this case. Monkey-patching
the member of the instance does “work”: access the member on the instance,
and you get the monkey-patched value.
Stefan Ram
2024-01-21 10:15:56 UTC
Permalink
Post by Alan Bawden
Why didn't that work?
Implicit invocations of special methods are only guaranteed
to work correctly if defined on an object's type, not in
the object's instance dictionary. (See: The Python Language
Reference, Release 3.13.0a0; 3.3.12 Special method lookup).
Stefan Ram
2024-01-21 11:46:31 UTC
Permalink
Post by Stefan Ram
Post by Alan Bawden
Why didn't that work?
Implicit invocations of special methods are only guaranteed
to work correctly if defined on an object's type, not in
the object's instance dictionary. (See: The Python Language
Reference, Release 3.13.0a0; 3.3.12 Special method lookup).
That was a rhetorical question!

To use "prototypes", you need to write a custom clone method anyway
(I just posted one to "comp.lang.python"). So, while you're at it,
you can then as well define your custom global "len" function:

def len( x ): return x.__len__()

. (It might still be some work to get the rest of the
standard library to use this definition, but now we are
talking about a library and not about a language. Yes,
the Python standard library assumes class-based objects.)
Stefan Ram
2024-01-21 17:35:34 UTC
Permalink
Post by Stefan Ram
the Python standard library assumes class-based objects.)
Since the thread is about "binding": Python can be very
explicit when it comes to binding (should you need it).

main.py

import types

x = 2

def f():
    return x

# prints "2"
print( f() )

g = types.FunctionType( f.__code__, { 'x': 4 } )

# prints "4"
print( g() )

h = types.FunctionType( f.__code__, globals() )

# prints "2"
print( h() )

def a():
    x = 3

    # prints "2"
    print( f() )

    # prints "3"
    print( types.FunctionType( f.__code__, locals() )() )

    # prints "2"
    print( types.FunctionType( f.__code__, globals() )() )

a()
Alan Bawden
2024-01-21 21:18:22 UTC
Permalink
Post by Stefan Ram
Post by Alan Bawden
Why didn't that work?
Implicit invocations of special methods are only guaranteed
to work correctly if defined on an object's type, not in
the object's instance dictionary. (See: The Python Language
Reference, Release 3.13.0a0; 3.3.12 Special method lookup).
That was a rhetorical question!

I'm glad you recognized that. Yes, I was well aware of that quote from
the Python documentation. The real question I was asking is _why_ did
the Python implementors make that choice. It's because they weren't
trying to implement a prototype-based language!

To use "prototypes", you need to write a custom clone method anyway
(I just posted one to "comp.lang.python"). So, while you're at it,
you can then as well define your custom global "len" function:

def len( x ): return x.__len__()

. (It might still be some work to get the rest of the
standard library to use this definition, but now we are
talking about a library and not about a language. Yes,
the Python standard library assumes class-based objects.)

And you have to also define:

def add(x, y): return x.__add__(y)

And convince everybody to stop using infix notation. Or you have to
start with a base class like:

class PrototypeObject(object):
    def __add__(self, x):
        return self.add(x)
    def __radd__(self, x):
        return self.radd(x)
    # As well as definitions for __sub__, __rsub__, __mul__, __rmul__,
    # etc.

And convince everybody to use this class instead of the built-in
`object' class -- so that behavior modifications can be made by doing
`x.add = ...' etc.

Yes, you can do that. In fact, it works out pretty well if you really
need to do prototype-based programming for some reason. You can do
something similar in Common Lisp.

But that doesn't mean that Common Lisp and Python _are_ prototype-based
systems. Both are class-based and were engineered that way from the
very beginning. They both allow you to _emulate_ prototype-based
object-orientation, but is not the natural way to do things in either
language.

We started down this rat hole because of the following exchange:

From: Lawrence D'Oliveiro <***@nz.invalid>
Date: Fri, 19 Jan 2024 05:42:33 -0000 (UTC)
Post by George Neuner
Programming with closures is more like using "prototype OO". Prototype
systems don't have classes, but rather ANY object can be modified to
change its set of instance data and/or methods, and can be cloned to
create new objects of that same "type".
That’s true of Python, too.

Lawrence's brief statement can be read as saying: "By that definition,
Python is prototype-based." That is how I read it initially, but I
think it's pretty clear that Python is not prototype-based, but
class-based.

But you can also read it as making the weaker claim: "Python can easily
support prototype-based programming." Which is true, as we've
demonstrated above, but perhaps not as interesting a claim, since it is
also true of some other class-based programming languages.

In any case, I think we've beaten this thread to death now.

- Alan
Lawrence D'Oliveiro
2024-01-21 21:33:37 UTC
Permalink
... but I think it's pretty clear that Python is not prototype-based,
but class-based.
Or maybe Python doesn’t easily fit into these neat pigeon holes you have
for conventional categories of “object-oriented” programming languages.
See the descriptor concept
<https://docs.python.org/3/reference/datamodel.html#descriptors>
for an illustration of what I mean.

Kaz Kylheku
2024-01-21 04:53:53 UTC
Permalink
Post by George Neuner
However, when you modify an object with __addattr__ et al,
the object's *class* information is copied and then modified to create
a new anonymous class ...
I used Python for a total of about four or five days, yet I
managed to attach a new property at run-time to the object
coming out of Argparse (or whatever that argument parsing
thing is called).

I can't remember exactly why, but it was expedient.
The code was converted from ad-hoc argument parsing to
Argparse, and the technique helped represent some feature
of the original logic.

Python has a mechanism to attach new properties to objects
dynamically. In that mechanism, their names appear
as strings, but then end up visible in the normal
member access syntax as obj.name.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.
Stefan Ram
2024-01-20 10:13:40 UTC
Permalink
Newsgroups: comp.lang.lisp,comp.lang.python
Post by Lawrence D'Oliveiro
Post by George Neuner
Programming with closures is more like using "prototype OO". Prototype
systems don't have classes, but rather ANY object can be modified to
change its set of instance data and/or methods, and can be cloned to
create new objects of that same "type".
That’s true of Python, too.
Yes that's true. Forgive me guys if that's too "off topic"
in comp.lang.lisp, but it might not be obvious how to create
an object in Python and then attach fields or methods to it.
So here I humbly submit a small example program to show this.

main.py

from types import *

# create a new object
def counter_object(): pass

# attach a numeric field to the object
counter_object.counter_value = 0

# define a named function
def increment_value( self ): self.counter_value += 1

# attach the named function to the object as a method
counter_object.increment_value = \
MethodType( increment_value, counter_object )

# call the method
counter_object.increment_value()

# attach an unnamed function to the object
counter_object.get_value = \
MethodType( lambda self: self.counter_value, counter_object )

# call that method
print( counter_object.get_value() )

This program will then print "1" and a line terminator.

Newsgroups: comp.lang.lisp,comp.lang.python
Lawrence D'Oliveiro
2024-01-20 21:23:15 UTC
Permalink
Post by Stefan Ram
from types import *
Death to wildcard imports!
Kaz Kylheku
2024-01-21 04:51:15 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Stefan Ram
from types import *
Death to wildcard imports!
I was regarded as a maverick/lunatic in the 1990's when, in the context
of C++, I called out that wholesale imports like "using namespace std;"
are an incredibly bad idea, only suitable at best for throwaway test
programs.

Mostly, the C++ world came around to it.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.
Lars Brinkhoff
2024-01-18 06:06:33 UTC
Permalink
Post by Kaz Kylheku
By the way, I once made a (proprietary, closed source) Lisp dialect
which had dynamic scope, but with proper closures!
Didn't Interlisp do that with spaghetti stacks?

"One of the most innovative of the language extensions introduced by
Interlisp was the spaghetti stack. The problem of retention (by
closures) of the dynamic function-definition environment in the presence
of special variables was never completely solved until spaghetti stacks
were invented." The Evolution of Lisp, Steele and Gabriel, 1993.

Well done with your marathon training!
Kaz Kylheku
2024-01-18 19:06:43 UTC
Permalink
Post by Lars Brinkhoff
Post by Kaz Kylheku
By the way, I once made a (proprietary, closed source) Lisp dialect
which had dynamic scope, but with proper closures!
Didn't Interlisp do that with spaghetti stacks?
Well, yes. I think I consciously had spaghetti stacks in mind when
working on that implementation.

IIRC (I don't have the code) it may be that the variable frames were
actually just kept in the regular run-time stack, and not dynamically
allocated. Only the closure-capture mechanism created dynamic copies.

Also, I think, when taking a closure, I didn't just blindly replicate
the chain of frames up to the capture delimiter, but flattened it down
to one environment frame.
Post by Lars Brinkhoff
"One of the most innovative of the language extensions introduced by
Interlisp was the spaghetti stack. The problem of retention (by
closures) of the dynamic function-definition environment in the presence
of special variables was never completely solved until spaghetti stacks
were invented." The Evolution of Lisp, Steele and Gabriel, 1993.
Okay, so there was an awareness about spaghetti stacks solving a more
general problem; it wasn't just for the sake of continuations.

So that's what I did in that implementation: a more limited form
of the spaghetti stack technique that wouldn't support resumption
of a continuation, only visibility into dynamic environments.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.
Lawrence D'Oliveiro
2024-01-17 23:26:07 UTC
Permalink
In ELisp you could implement it as follows ...
Luckily, you can now specify it for the contents of a .el file with a
header comment:

;;; -*- lexical-binding: t; -*-
Stefan Monnier
2024-01-16 23:01:21 UTC
Permalink
It is seen as useful in the Scheme community, where it gets reinvented
as fluid-let (SRFI 15) or the really bizarre concoction of "parameter
objects" (SRFI 39).
Most of SRFI-39 strikes me as intuitive rather than bizarre (it's
basically plain-old dynamic scoping just using objects instead of
symbols to "name" the "variables"), but I do find the `converter` part
rather odd.

Does anyone know why that `converter` was added to SRFI 39?

I also can't see how to define something equivalent to `progv` on top of
SRFI-39.
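
For reference, a minimal Common Lisp illustration of what PROGV does,
the sticking point being that both the variable list and the value list
are computed at run time:

  (defvar *a* 0)
  (defvar *b* 0)

  (progv (list '*a* '*b*) (list 1 2)
    (list *a* *b*))      ; => (1 2); the bindings are undone on exit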


Stefan