It is really difficult to follow a technical presentation with code examples when the camera changes so quickly. Sticking to the slide and presenter picture-in-picture view would be a lot easier to follow. 😃
Agreed! I was watching and had to go back and pause several times to see what he was trying to show.
I see V8 is following the Java HotSpot team in writing an interpreter in low-level code instead of C++; the only difference is that HotSpot's interpreter is written directly in assembly :P
Hi there, I'm wondering which file in the source code actually receives the JS input before it gets interpreted. Also, how can we debug this?
Thanks
It's been more than 5 years already. Does anybody know the current state of affairs? Is Google still sticking with Ignition, or have new approaches emerged?
TL;DR: The current pipeline consists of Ignition (interpreter) > Sparkplug (non-optimizing compiler) > TurboFan (optimizing compiler)
Quick recap: Full-codegen and Crankshaft were retired in 2017, and TurboFan was changed to compile directly from Ignition's bytecode. Room for optimization shrank because Ignition was already quite fast as an interpreter, and making TurboFan compile faster would have meant cutting optimization passes, hurting the peak performance of the compiled code.
As a compromise, with version 9.1, a non-optimizing compiler called Sparkplug was introduced between them. Sparkplug is, at its core, just a transpiler that spits out machine-code snippets for the individual bytecode instructions generated by Ignition, which conveniently has already done the hard work like register allocation. To be precise, Sparkplug "serializes" Ignition's execution, effectively baking what Ignition would do dynamically into memory as a static machine-code sequence.
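To picture what "serializing the interpreter" means, here's a toy model in plain JavaScript (not V8's actual API; the opcode names merely resemble Ignition's, and the state object is made up): an interpreter dispatches on each bytecode every run, while a baseline compiler walks the bytecode once and emits a straight-line sequence of the same handlers with the dispatch baked out.

```javascript
// Toy handlers for a few Ignition-like bytecodes (hypothetical state shape).
const handlers = {
  LdaSmi: (st, arg) => { st.acc = arg; },   // load small integer into accumulator
  AddSmi: (st, arg) => { st.acc += arg; },  // add small integer to accumulator
  Star0:  (st)      => { st.r0 = st.acc; }, // store accumulator into register r0
};

// Interpreter: looks up the handler for every instruction on every execution.
function interpret(bytecode, state) {
  for (const [op, arg] of bytecode) handlers[op](state, arg); // dynamic dispatch
}

// Baseline "compiler" in the spirit of Sparkplug: resolve each opcode ONCE
// and bake the handler calls into a fixed sequence. (Real Sparkplug emits
// actual machine code, not closures.)
function compileBaseline(bytecode) {
  const steps = bytecode.map(([op, arg]) => (st) => handlers[op](st, arg));
  return (state) => { for (const step of steps) step(state); };
}

const bc = [["LdaSmi", 40], ["AddSmi", 2], ["Star0"]];
const st = { acc: 0, r0: 0 };
compileBaseline(bc)(st);
console.log(st.r0); // 42
```

The point of the sketch: the per-instruction lookup cost moves from run time to compile time, which is why this kind of tier can be both cheap to produce and faster than interpreting.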
Because of this, Sparkplug is not only extremely fast at compiling but can also be run on a whim, since it has almost no real work to do. I don't know exactly when Sparkplug is invoked, but it's certainly invoked a lot more often and a lot earlier than TurboFan. Hence, you get non-optimized machine code pretty quickly, possibly long before TurboFan has stable information about object shapes.
From that point onward, once code becomes hot, TurboFan kicks in with its speculative optimization and performs on-stack replacement of the Sparkplug code with its own.
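If you want to watch the tier-up happen yourself, a hot function like the one below is enough to trigger it; running it under d8 or Node with the `--trace-opt` flag prints a line when TurboFan picks the function up (flag behavior may vary across V8 versions, so treat this as a sketch, not a guarantee):

```javascript
// A monomorphic function: it only ever sees small integers, so TurboFan's
// speculative optimization has stable type feedback to work with.
function add(a, b) { return a + b; }

let sum = 0;
// A hot loop: enough invocations for V8's profiler to mark `add` (and this
// loop, via on-stack replacement) as candidates for optimized compilation.
for (let i = 0; i < 100000; i++) sum += add(i, 1);
console.log(sum); // 5000050000
```

With `--trace-deopt` added you can also see the reverse: pass `add("a", "b")` afterwards and the speculation about integer operands gets invalidated.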
@Asatruction Thank you very much! It was quite helpful
thanks for sharing this
what's with the oversized nametags?
A very interesting talk only to be destroyed by extremely poor editing. Why would anyone think that changing scenes all the time would be appropriate for a technical talk? Display the slides *at all times* and add a PiP of the speaker at the bottom right or outside the slide frame, it's as simple as that, no more than 1 camera, no fancy tricks.
Please keep the focus on the slides, the face of the speaker is not that important to understand the presentation. Have a split screen or something if you really want to show the speaker.
Jesus. The editing of this. I had to keep pausing or rewinding because of the ridiculous scene changes away from the slides.