[Pharo-users] parameters of the virtual machine

stepharo stepharo at free.fr
Wed May 25 02:46:01 EDT 2016


Clément,


you should add that to a simple blog post :)


Stef


On 24/5/16 at 19:35, Clément Bera wrote:
> Hi,
>
> Sorry, the mail is quite long... I CC'd Nevena for Section II; she
> used the VM caches for type inference.
>
> Section I. VM parameters
>
> 46      size of machine code zone, in bytes
>
> Well, the size of the machine code zone :-). To speed up execution,
> Cog internally uses a JIT compiler which translates frequently used
> bytecoded methods to machine code and runs that machine code instead
> of interpreting them. The machine code zone is an executable memory
> zone containing all the machine code versions of methods.
>
> The default size should be between 1 MB and 2 MB for standard
> applications. If memory is a constraint, it can be lowered to 750 kB;
> below that you should use the Stack VM, otherwise performance starts
> to decrease drastically.
>
> This setting is useful to tune your application's performance. For
> example, on compilation-intensive benchmarks, 750 kB is the optimal
> machine code zone size. On large benchmarks, 2 MB, maybe more, is
> better, but then one may see machine code garbage collection pauses.
> On small benchmarks all the methods fit into this zone, so it doesn't
> really matter.
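>
> As a concrete illustration (a minimal sketch; it assumes the standard
> Pharo accessor VirtualMachine>>parameterAt:), you can read the current
> machine code zone size from the image:
>
> "Answer VM parameter 46: the size of the machine code zone, in bytes."
> Smalltalk vm parameterAt: 46.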
>
> Growing the machine code zone:
> - increases the number of methods that can be present there, hence
> decreasing machine code zone garbage collection frequency and method
> JIT compilation;
> - degrades how well the machine code zone maps onto the CPU
> instruction cache;
> - slows down machine code zone garbage collection.
>
> To set the machine code zone size you need to use another parameter
> (47, I think) and restart the VM + image.
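>
> For example (a rough sketch, assuming 47 is indeed the right parameter
> index and that VirtualMachine>>parameterAt:put: accepts it as a
> writable parameter in your image):
>
> "Request a 2 MB machine code zone; takes effect only after the image
> is saved and the VM restarted."
> Smalltalk vm parameterAt: 47 put: 2 * 1024 * 1024.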
>
> 59      number of check event calls since startup (read-only)
>
> If I remember correctly, the number of times the VM has checked for OS 
> events since start-up.
>
> 60      number of stack page overflows since startup (read-only; Cog
> VMs only)
> 61      number of stack page divorces since startup (read-only; Cog
> VMs only)
>
> These are related to stack page management. Read the Cog blog to
> understand how it's done: see the section "Stack pages" at
> http://www.mirandabanda.org/cogblog/2009/01/14/under-cover-contexts-and-the-big-frame-up/
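>
> These counters can be read like any other VM parameter (again a small
> sketch assuming VirtualMachine>>parameterAt:):
>
> Smalltalk vm parameterAt: 59.   "check event calls since startup"
> Smalltalk vm parameterAt: 60.   "stack page overflows since startup"
> Smalltalk vm parameterAt: 61.   "stack page divorces since startup"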
>
> Section II. Accessing the caches
>
> Is there a way to get access to the caches? How many caches does Cog
> have? I guess it has
>         - a method call cache (associating (object, symbol) ->
> compiled method)
>         - a cache for the JIT-compiled methods, right?
>
> Are there other caches? How can I access them? In particular, to
> query how full they are.
>
> There is indeed a global look-up cache, mapping (receiver type,
> symbol) -> (compiled method, primitive function pointer). In the
> jitted methods there are per-send-site look-up caches, also known as
> inline caches.
>
> The most useful are the inline cache values: they provide the
> receiver types met at each send site and are fairly reliable. There
> are ways to access those caches. I personally use those cache values
> for the runtime optimizing compiler, but more recently I worked with
> Nevena Milojkovic (I CC'd her) and she was able to use those cache
> values for type inference, even though she is not a VM expert. We
> wrote an IWST paper about that; hopefully it'll get accepted and you
> will be able to read it. In the paper I sum up Eliot's implementation
> of inline caches and how the VM provides the types.
>
> In short, to access the caches:
> 1) add these primitives:
> VirtualMachine>>allMachineCodeMethods
>    "Answer all the methods currently compiled to machine code."
>    <primitive: 'primitiveAllMethodsCompiledToMachineCode' module: ''>
>    ^ #()
>
> CompiledMethod>>sendAndBranchData
>    "Answer the inline cache and branch counter data recorded for this method."
>    <primitive: 'primitiveSistaMethodPICAndCounterData' module: ''>
>    ^ #()
> 2) add glue code to map the cache values from bytecode pc to the AST,
> something like this:
> CompiledMethod>>astWithCacheAnnotations
>    "Annotate each send node of the AST with the inline cache data
>    recorded at its bytecode pc."
>    | sendAndBranchData |
>    sendAndBranchData := self sendAndBranchData.
>    sendAndBranchData
>       ifEmpty: [ 'No data' logCr.
>          ^ self ast ];
>       do: [ :cacheEntry |
>          (self sourceNodeForPC: cacheEntry first)
>             propertyAt: #cacheInformation
>             put: cacheEntry allButFirst ].
>    ^ self ast
> 3) compile the VM with the SistaCogit and SistaVM=true (Slang
> compilation settings).
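>
> As a rough usage sketch (assuming the primitives above are installed,
> the VM was built with the Sista settings, and the RB AST answers
> #sendNodes and #propertyAt:ifAbsent: as in current Pharo; the method
> chosen here is arbitrary), you can then inspect the receiver types
> recorded at each send site:
>
> | method ast |
> method := OrderedCollection >> #add:.   "any method that has been jitted"
> ast := method astWithCacheAnnotations.
> ast sendNodes do: [ :node |
>    (node propertyAt: #cacheInformation ifAbsent: [ nil ])
>       ifNotNil: [ :cache |
>          Transcript show: node selector; show: ' -> ';
>             show: cache printString; cr ] ].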
>
> I guess you could read our IWST paper; that would help. I don't know
> if that's possible before the reviewers answer.
>
> Don't build production tools using those values though: the VM is
> evolving and the cache values may change in the future with the Sista
> optimizer.
>
> Regards,
>
> Clement
>
> On Tue, May 24, 2016 at 5:39 PM, Alexandre Bergel
> <alexandre.bergel at me.com> wrote:
>
>     Hi,
>
>     The Cog virtual machine exposes some very useful parameters.
>     However, some of them are a bit obscure to me. For example, what
>     are the following parameters? What do they mean?
>
>             46      size of machine code zone, in bytes
>             59      number of check event calls since startup (read-only)
>             60      number of stack page overflows since startup (read-only; Cog VMs only)
>             61      number of stack page divorces since startup (read-only; Cog VMs only)
>
>     Is there a way to get access to the caches? How many caches does
>     Cog have? I guess it has
>             - a method call cache (associating (object, symbol) ->
>     compiled method)
>             - a cache for the JIT-compiled methods, right?
>
>     Are there other caches? How can I access them? In particular, to
>     query how full they are.
>
>     Cheers,
>     Alexandre
>     --
>     _,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
>     Alexandre Bergel http://www.bergel.eu
>     ^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.
>
>
>
>
