[Pharo-project] pharo closures
eliot.miranda at gmail.com
Thu Mar 24 20:17:57 EDT 2011
On Thu, Mar 24, 2011 at 4:46 PM, Toon Verwaest <toon.verwaest at gmail.com>wrote:
> I've just been working the whole day on the OPAL decompiler, getting it
> almost to run (already works for various methods). Now ...
> could someone please explain to me why the current implementation of
> closures isn't a terribly bad idea?
No, I can't. Since I did it, I naturally think it's a good idea. Perhaps,
instead of denigrating it without substantiating your claims, you could
propose (and then implement, and then get adopted) a better idea?
> I can see why it would pay off for Lisp programmers to have closures that
> run like the Pharo closures, since it has O(1) access performance. However,
> this performance boost only starts paying off once you have at least more
> than 4 levels of nested closures, something which, unlike in LISP, almost
> never happens in Pharo. Or at least shouldn't happen (if it does, it's
> probably ok to punish the people by giving them slower performance).
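To make the access-cost claim above concrete, here is a minimal Python analogy (not Pharo or Smalltalk code; the names are hypothetical). Reading a free variable through a chain of enclosing activations costs one hop per level of nesting, which is exactly the per-read cost a flat, Lisp-style capture scheme avoids by giving O(1) access:

```python
# Illustrative sketch: a naive outer-context lookup walks one link of
# (value, enclosing_frame) per nesting level, so reads cost O(depth).
# A flat capture scheme copies the value into the closure, so reads are O(1).

def chained_read(frame, hops):
    """Walk `hops` links of (value, enclosing_frame) pairs."""
    for _ in range(hops):
        frame = frame[1]
    return frame[0]

# One outermost frame holding 42, wrapped in three nested frames.
frame = (42, None)
for name in ("a", "b", "c"):
    frame = (name, frame)

value = chained_read(frame, 3)  # three hops out to reach 42
```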
Slower performance than what? BTW, I think you have things backwards. I
modelled the Pharo closure implementation on Lisp closures, not the other
way around.
> This implementation is pretty hard to understand, and it makes
> decompilation semi-impossible unless you make very strong assumptions about
> how the bytecodes are used. This then again reduces the reusability of the
> new bytecodes and probably of the decompiler once people start actually
> using the pushNewArray: bytecodes.
Um, the decompiler works, and in fact works better now than it did a couple
of years ago. So how does your claim stand up?
> You might save a teeny tiny bit of memory by having stuff garbage collected
> when it's not needed anymore ... but I doubt that the whole design is based
> on that? Especially since it just penalizes the performance in almost all
> possible ways for standard methods. And it even wastes memory in general
> cases. I don't get it.
What has garbage collection got to do with anything? What precisely are you
talking about? Indirection vectors? To understand the rationale for
indirection vectors you have to understand the rationale for implementing
closures on a conventional machine stack. For Lisp that's clear: compile to
a conventional stack as that's an easy model, in which case one has to store
values that outlive LIFO discipline on the heap, hence indirection vectors.
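The transformation can be sketched in Python as an analogy (this is illustrative only, not Pharo/Smalltalk code, and the names are hypothetical). A variable that is both written and captured cannot stay in a stack slot that may be popped, so it is hoisted into a small heap array shared by the method and the closure:

```python
# Sketch of the indirection-vector idea: the captured, mutated variable
# lives in a heap-allocated one-slot vector rather than in the enclosing
# activation's stack frame, so it survives after that frame is popped.

def make_counter():
    indirection_vector = [0]          # slot 0 holds the counter value

    def increment():
        indirection_vector[0] += 1    # closure writes through the vector
        return indirection_vector[0]

    return increment                  # frame is gone; vector lives on

counter = make_counter()
counter()  # 1
counter()  # 2
```

Python's own closure cells play much the same role here; the explicit list just makes the shared heap cell visible.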
Why you might want to do that in a Smalltalk implementation when you could
just access the outer context directly has a lot to do with VM internals.
Basically it's the same argument. If one can map Smalltalk execution to a
conventional stack organization then the JIT can produce a more efficient
execution engine. Not doing this causes significant problems in context
management. The explanation is all in my write-up on Context Management in
VisualWorks.
But does a bright guy like yourself find this /really/ hard to understand?
It's not that hard a transformation, and compared to what goes on in the
JIT (e.g. in bytecode to machine-code pc mapping) it's pretty trivial.
> But probably I'm missing something?
It's me who's missing something. I did the simplest thing I knew could
possibly work re getting an efficient JIT and a Squeak with closures
(there's huge similarity between the above scheme and the changes I made to
VisualWorks that resulted in a VM that was 2 to 3 times faster depending on
platform than VW 1.0). But you can see a far more efficient and simple
scheme. What is it?