10x faster - taking charge of the compiler backend - Folkert de Vries - Oct 26, 2023

  • Published on Nov 5, 2023
    A talk from the Rust meetup "Rust at TU Delft", Delft (NL), October 26, 2023, organized by the Rust Nederland meetup group and hosted at the university in Delft.
    www.meetup.com/rust-nederland
    Slides of the talk:
    github.com/rustnl/meetups/blo...
    About the speaker:
    Folkert de Vries
    github.com/folkertdev
  • Science & Technology

Comments • 7

  • @ajinkyax
    @ajinkyax 2 months ago +2

    I still didn't get how to use this with Rust
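For anyone wondering the same thing: on a recent nightly toolchain, Cranelift can be selected as the codegen backend for debug builds roughly like this (a sketch based on the rustc_codegen_cranelift README and Cargo's unstable `codegen-backend` feature; component and option names may change):

```toml
# Cargo.toml — opt in to the unstable codegen-backend feature (nightly only)
cargo-features = ["codegen-backend"]

[profile.dev]
codegen-backend = "cranelift"
```

after installing the backend with `rustup component add rustc-codegen-cranelift-preview --toolchain nightly`. Release builds keep using LLVM unless you also override `[profile.release]`.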

  • @xNul
    @xNul 7 months ago +3

    Cranelift reduces compile time significantly, but maybe he said it was not great because his standards are super high

    • @coderedart
      @coderedart 7 months ago +3

      LLVM is a low bar to cross because of how slow it is. We should compare with golang (which is pretty fast), as the author seems to be talking about a vertically integrated compiler.

    • @faisalxd369
      @faisalxd369 7 months ago

      @@coderedart Golang compiler trades off a lot of code optimizations for faster compile times.

    • @coderedart
      @coderedart 7 months ago

      @@faisalxd369 Yes, both Cranelift and Go sacrifice optimisations compared to LLVM. But I think sometimes you just want (really) fast compilation instead of optimized binaries, e.g. iterating while debugging/coding, building on CI, testing, hot-reloading wasm in a local browser, etc.
      Then, finally, we can use LLVM when deploying to production.

    • @mateuszschyboll2310
      @mateuszschyboll2310 7 months ago +3

      @@faisalxd369 That is what we are already doing for debug builds anyway, and that is what we are talking about here. For release builds, I suspect we won't replace LLVM any time soon.

    • @bytefu
      @bytefu a month ago

      @@mateuszschyboll2310 Right. I've implemented a lousy compiler for a toy language from scratch, and for that I had to read a bunch of books, articles and papers about SSA form, linking and whatnot. Even then I didn't actually produce ELF binaries, but rather implemented my own rudimentary linking and ran the resulting RISC-V (chosen because it is much simpler than x86) machine code via an emulator library.

      Implementing many optimizations on an IR in SSA form is rather straightforward, because you get to design an IR with a nice graph representation that is very convenient to work with, and many algorithms that operate on SSA graphs are already available, most of which have been ironed out and optimized for decades. But some of those optimizations are fairly expensive and have non-linear complexity. And when you get to machine code optimization (which I didn't dare to touch), it's not merely a bunch of well-defined algorithms, like with SSA graphs, but hundreds of various rules and heuristics with interplay that is sometimes very difficult to predict.

      Implementing a production-grade optimizing compiler is an insanely huge undertaking; that's why it's basically only LLVM these days: very few people in the world are crazy and motivated enough to attempt that.
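The point about SSA making optimization straightforward can be sketched in a few lines. This is a toy IR invented for illustration (not Cranelift's or LLVM's): because every value is defined exactly once and definitions precede uses, constant folding is a single forward pass that looks up already-folded operands.

```rust
// Toy SSA-style IR: each instruction defines one value, v0, v1, ...
#[derive(Clone, Copy, Debug, PartialEq)]
enum Inst {
    Const(i64),        // v_n = constant
    Add(usize, usize), // v_n = v_a + v_b (operands are value indices)
}

/// Constant folding over SSA: one forward pass suffices, since an
/// operand's (unique) definition always appears before its use.
fn const_fold(code: &mut [Inst]) {
    for i in 0..code.len() {
        if let Inst::Add(a, b) = code[i] {
            if let (Inst::Const(x), Inst::Const(y)) = (code[a], code[b]) {
                code[i] = Inst::Const(x + y);
            }
        }
    }
}

fn main() {
    // v0 = 2; v1 = 3; v2 = v0 + v1; v3 = v2 + v0
    let mut code = vec![
        Inst::Const(2),
        Inst::Const(3),
        Inst::Add(0, 1),
        Inst::Add(2, 0),
    ];
    const_fold(&mut code);
    assert_eq!(code[2], Inst::Const(5)); // folded in the same pass...
    assert_eq!(code[3], Inst::Const(7)); // ...so later uses fold too
    println!("{:?}", code);
}
```

With mutable variables instead of SSA, the same pass would need dataflow analysis to know which definition reaches each use; the single-definition property is what collapses that to a lookup.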