Guille and I went to the MoreVMs 2022 workshop (https://2022.programming-conference.org/home/MoreVMs-2022) this week.
Here is a summary of the presentations we saw.

Interpreter Register Autolocalisation: Improving the performance of efficient interpreters
This presentation shows an optimisation for interpreters that merges AST nodes into supernodes, the same idea as superinstructions but at the AST level. It is implemented on the Truffle framework. For now, candidate supernodes are selected manually, although they have tooling to help with this task. The results on the presented benchmarks show better performance (reduced run time) than without supernodes.
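To illustrate the idea (this is a hand-written toy AST interpreter in Python, not Truffle; the node names `Mul`, `Add`, and `MulAdd` are made up for the example), a supernode fuses a common tree pattern into a single node, so the pattern executes with one dispatch instead of several:

```python
# Toy AST interpreter: generic nodes vs. a "supernode" that fuses the
# common pattern x * y + z into a single node, saving node dispatches.

class Var:
    def __init__(self, name):
        self.name = name
    def execute(self, env):
        return env[self.name]

class Mul:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def execute(self, env):
        return self.left.execute(env) * self.right.execute(env)

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def execute(self, env):
        return self.left.execute(env) + self.right.execute(env)

class MulAdd:
    """Supernode fusing Add(Mul(x, y), z): one execute() call total."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z
    def execute(self, env):
        return env[self.x] * env[self.y] + env[self.z]

env = {"x": 2, "y": 3, "z": 4}
plain = Add(Mul(Var("x"), Var("y")), Var("z"))  # 5 execute() dispatches
fused = MulAdd("x", "y", "z")                   # 1 execute() dispatch
assert plain.execute(env) == fused.execute(env) == 10
```

The fused node computes the same result while removing the per-node dispatch overhead, which is where the speedup comes from in efficient interpreters.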
This work presents a Mutation Testing framework for languages built using the Truffle framework. They offer common language-agnostic mutators, plus the possibility to implement language-specific mutators. They then perform Mutation Testing using the Truffle API to determine the quality of a suite of unit tests.
I’m not sure why they call it “JIT” Mutation Testing. The abstract mentions the capability to mutate compiled code, gather dynamic information at run time, and use it to determine the utility of mutants so that some can be skipped, all techniques that impact the efficiency of mutation testing.
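As a rough illustration of what a language-agnostic mutator does (this sketch uses Python's `ast` module as a stand-in for Truffle AST nodes; it is not their framework), a classic mutator swaps `+` for `-` and the test suite is then re-run against the mutant:

```python
# Sketch of an arithmetic-operator mutator: parse the source, replace
# every `+` with `-`, recompile, and check whether the tests notice.
import ast

class SwapAddSub(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()  # the mutation: a + b becomes a - b
        return node

source = "def total(a, b):\n    return a + b\n"
tree = SwapAddSub().visit(ast.parse(source))
mutant = compile(ast.fix_missing_locations(tree), "<mutant>", "exec")
ns = {}
exec(mutant, ns)
# A test asserting total(2, 2) == 4 would "kill" this mutant, since the
# mutated function now returns 0; a suite that misses it is too weak.
assert ns["total"](2, 2) == 0
```

The quality of a test suite is then measured by how many such mutants its tests kill.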
Sophie studied Ruby on Rails applications to see how many call sites are mono-/poly-/megamorphic, with the objective of observing whether they present phased behaviour. Phased behaviour means that the application's behaviour changes at some point during its execution; that moment is a phase change.
The thing is that call-site caches are usually global to the entire lifetime of the application. However, it may happen that a call site is monomorphic towards one method for some time, then monomorphic towards another method for some other time. A normal PIC will accumulate two entries instead of resetting the cache and keeping the call monomorphic.
She then studied how phased behaviour could positively impact some applications. They manually marked the start of new phases and observed whether the degree of polymorphism was effectively reduced.
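The effect above can be sketched in a few lines (this is a toy model of a polymorphic inline cache, not a real VM; the class and method names are invented for the example):

```python
# Toy polymorphic inline cache (PIC): entries accumulate for the whole
# lifetime of the call site, so two monomorphic phases look polymorphic
# unless the cache is reset at the phase boundary.

class CallSiteCache:
    def __init__(self):
        self.entries = []              # list of (receiver type, method)
    def lookup(self, receiver):
        t = type(receiver)
        for cached_type, method in self.entries:
            if cached_type is t:
                return method          # cache hit
        method = getattr(t, "speak")   # slow path: resolve and cache
        self.entries.append((t, method))
        return method
    @property
    def degree(self):
        return len(self.entries)       # 1 = monomorphic, 2+ = polymorphic

class Dog:
    def speak(self): return "woof"
class Cat:
    def speak(self): return "meow"

cache = CallSiteCache()
for animal in [Dog()] * 3:             # phase 1: only Dogs at this site
    cache.lookup(animal)(animal)
assert cache.degree == 1               # monomorphic during phase 1

for animal in [Cat()] * 3:             # phase 2: only Cats at this site
    cache.lookup(animal)(animal)
assert cache.degree == 2               # globally polymorphic, even though
                                       # each phase was monomorphic

cache.entries.clear()                  # resetting at the phase boundary...
for animal in [Cat()] * 3:
    cache.lookup(animal)(animal)
assert cache.degree == 1               # ...keeps the site monomorphic
```

This is exactly the tension in the talk: a lifetime-global cache reports degree 2, while a phase-aware reset would let each phase stay monomorphic.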
PyTorch is a tool for scripting machine-learning applications in Python that uses the GPU. The problem with PyTorch is that it eagerly performs a call to the GPU for every call in Python, which causes lots of useless round trips. He presented a tracing compiler that creates linear execution traces that are sent all at once to the GPU to avoid that overhead.
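The difference between eager dispatch and trace-and-batch execution can be sketched like this (a hypothetical toy model, not the presented compiler; `TracingDevice` and its methods are invented for the example):

```python
# Toy model of eager vs. traced execution: eager mode pays one device
# round trip per operation, while recording a linear trace and flushing
# it pays a single round trip for the whole batch.

class TracingDevice:
    def __init__(self):
        self.roundtrips = 0
        self.trace = []
    def eager(self, op, *args):
        self.roundtrips += 1           # one round trip per call
        return op(*args)
    def record(self, op, *args):
        self.trace.append((op, args))  # no round trip yet
    def flush(self):
        self.roundtrips += 1           # one round trip for the whole trace
        results = [op(*args) for op, args in self.trace]
        self.trace = []
        return results

dev = TracingDevice()
ops = [(lambda a, b: a + b, (1, 2)), (lambda a, b: a * b, (3, 4))]

eager_results = [dev.eager(op, *args) for op, args in ops]
assert dev.roundtrips == 2             # one round trip per operation

dev.roundtrips = 0
for op, args in ops:
    dev.record(op, *args)
assert dev.flush() == eager_results == [3, 12]
assert dev.roundtrips == 1             # the whole trace in one round trip
```

Both modes compute the same results; the traced mode simply amortises the communication cost across the batch.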