Intro to JAX: Accelerating Machine Learning research

  • Published on Nov 28, 2024

Comments • 62

  • @domenicovalles2498 · 2 years ago · +35

    This guy is so epic. He looks like he's enjoying every second of life.

  • @iskrabesamrtna · 3 years ago · +90

    NumPy on steroids

  • @EnricoRos · 3 years ago · +89

    This video maximizes dInsights/dtime, is well written and easy to understand! I want to see more videos from Jake!

    • @SinDarSoup · 3 years ago · +5

      JAke X

    • @oncedidactic · 3 years ago · +3

      EduTube needs a like button for specifically this metric 🤜🤛

    • @nrrgrdn · 3 years ago · +2

      It maximizes Insights/time, not the derivative

    • @ilyboc · 3 years ago

      @@nrrgrdn Yeah, maybe that's better, but I think he means you gain continuously more insights as you advance through the video.

  • @pablo_brianese · 3 years ago · +7

    I burst out laughing at the ExpressoMaker that overloads the + operator.

  • @OtRatsaphong · 2 years ago · +5

    Thank you for this good intro to JAX. Very easy to follow and understand, Jake. Definitely going to add this to my toolkit. 👍🙏

  • @emiljanQ3 · 3 years ago · +8

    Looks great! I tend to default to NumPy when I want to do something that isn't fully supported in Keras or PyTorch, and if I can get GPU parallelization this easily, that's perfect!

  • @lacasadeacero · 3 years ago · +3

    I have a question: what's the purpose of making so many frameworks? Time? Efficiency? Because I don't see it.

  • @karansarkar1710 · 3 years ago · +4

    This sounds very good, especially the grad and vmap functionality. I think more libraries would have to be released to compete with PyTorch.

  • @joshuasmith2450 · 3 years ago · +1

    How are you going to compare Torch to TF/JAX when they run on different GPUs? There's no way you can argue the two GPUs are comparable; they will be faster/slower at different types of computation regardless of the software used. You should have compared the three on a common GPU if for some reason Torch couldn't be run on the TPU v3.

  • @Shikalegend · 3 years ago · +2

    This typically looks like a problem that could be easily solved with a language that supports multi-stage programming, with meta-programming as a first-class citizen, which is not really the case with Python. Like Rust, or Elixir via the Nx library, which is actually directly inspired by JAX.

  • @valshaev1145 · 1 year ago

    Thanks! It helps a lot! Being a C/C++/Python developer, somehow I had left behind such an important framework/library.

  • @subipan4593 · 3 years ago · +5

    JAX seems to be more similar to PyTorch, i.e., a dynamic graph instead of a static graph as in TensorFlow.

    • @bender2752 · 3 years ago · +2

      There's something called AutoGraph in TensorFlow actually

    • @geekye · 2 years ago

      That's Flax. Jax is more like the backbone of that
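A side note on this thread: JAX is eager by default, like PyTorch, until you opt into compilation with `jax.jit`, which traces the function into a static graph for XLA. A minimal sketch (the function `f` is just an illustrative example):

```python
import jax
import jax.numpy as jnp

def f(x):
    # An ordinary Python function on JAX arrays; runs eagerly, op by op.
    return jnp.sum(x * 2.0)

print(f(jnp.arange(3.0)))       # eager execution

f_fast = jax.jit(f)             # traced and compiled by XLA on first call
print(f_fast(jnp.arange(3.0)))  # same result, from the compiled version
```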

  • @TohaBgood2 · 3 years ago · +5

    Ok, this is seriously cool. Is this brand new? Haven't seen it before.
    Also, in the first code sample did you mean to import vmap and pmap instead of map, or is that some kind of namespace black magic I don't understand?

    • @enricoshippole2409 · 3 years ago · +2

      It has been around for over 2 years now I believe

    • @linminhtoo · 3 years ago · +1

      ya it's a typo, there's no magic

  • @srinivastadepalli9431 · 2 years ago

    Awesome intro!

  • @sitrakaforler8696 · 1 year ago

    Great content! BRAVO and THANKS!

  • @markoshivapavlovic4976 · 3 years ago · +1

    Nice talk, that will be interesting.

  • @gaborenyedi637 · 3 years ago · +2

    Why do you need a new lib? TensorFlow can do 90+% of this, can't it? Is it a good idea to make a completely new thing instead of extending the old one?
    One more question: do/will you have Keras support?

  • @brandomiranda6703 · 3 years ago · +2

    What is the difference between numerical and automatic differentiation?

    • @amitxi-y5q · 3 years ago · +1

      Numerical differentiation computes f'(x) by evaluating the function around x: (f(x+h) - f(x-h)) / 2h with a small h. Automatic differentiation represents the function's expression or code as a computational graph; it looks at the actual code of the function. The final derivative is obtained by propagating the values of local derivatives of simple expressions through the graph via the chain rule. The simple expressions are functions like +, -, cos(x), exp(x), for which we know the derivatives at a given x.
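The two approaches described above can be compared directly in JAX; `jax.grad` performs automatic differentiation, while the finite-difference version below is a hand-rolled numerical approximation (the function `f` is just an illustrative example):

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * x**2

# Automatic differentiation: JAX traces f and applies the chain rule exactly.
df_auto = jax.grad(f)

# Numerical differentiation: central difference with a small step h.
def df_numeric(x, h=1e-4):
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.5
print(df_auto(x))     # exact up to float precision
print(df_numeric(x))  # approximate, sensitive to the choice of h
```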

  • @L4rsTrysToMakeTut · 3 years ago · +2

    Why not use the Julia language?

  • @AlphaMoury · 3 years ago · +4

    I thought JAX ran by default in TensorFlow; am I missing something here?

  • @AJ-et3vf · 2 years ago

    Awesome video! Thank you!

  • @CharlesMacKay88 · 10 months ago

    2:14 Why is `inputs` reassigned but never used in the predict function? Shouldn't it be outputs = np.tanh(outputs)?
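For context, the predict function shown in the talk follows a common JAX MLP pattern along these lines (a reconstructed sketch, not the slide's exact code). The reassigned `inputs` is not dead code: it is consumed by the next loop iteration, and the final layer is deliberately left linear:

```python
import jax.numpy as jnp

def predict(params, inputs):
    # params is a list of (weights, bias) pairs, one per layer.
    for W, b in params:
        outputs = jnp.dot(inputs, W) + b
        inputs = jnp.tanh(outputs)  # feeds the next iteration of the loop
    # The last tanh is discarded on purpose: the final layer stays linear.
    return outputs
```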

  • @HibeePin · 3 years ago · +2

    Active: Jax enters Evasion, a defensive stance, for up to 2 seconds, causing all basic attacks against him to miss.

    • @dl8083 · 3 years ago

      I knew this was going to come up lol

  • @hfkssadfrew · 3 years ago

    Seems like TensorFlow is fast enough?

  • @matthewpublikum3114 · 3 years ago

    Is this much better than SIMD?

  • @nightwingphd8580 · 3 years ago

    This is wild!

  • @eddisonlewis8099 · 1 year ago

    Interesting Stuff

  • @brandomiranda6703 · 3 years ago · +1

    Does this support Apple's GPUs in the M1 Max?

    • @toastrecon · 3 years ago

      I also wonder if they utilize the neural processors, too?

  • @markoshivapavlovic4976 · 3 years ago

    Nice framework.

  • @marcosanguineti2710 · 3 years ago

    Really interesting!

  • @bicarrio · 3 years ago · +3

    It says "from jax import map", but it seems it should be vmap?

    • @boffo25 · 3 years ago

      from jax import map as vmap
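The joke aside, the actual import is `vmap`, which vectorizes a function written for single inputs over a batch axis; a minimal example (the function `scalar_fn` is just for illustration):

```python
import jax.numpy as jnp
from jax import vmap

def scalar_fn(x):
    # Written as if x were a single scalar.
    return x ** 2 + 1.0

batched_fn = vmap(scalar_fn)       # maps over the leading axis automatically
print(batched_fn(jnp.arange(3.0))) # squares plus one: 1, 2, 5
```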

  • @RH-mk3rp · 1 year ago

    Something's wrong with the audio. His voice gets so soft it's hard to hear at the end of some sentences.

  • @kuretaxyz · 3 years ago · +1

    Seeing JAX on the TensorFlow channel, now I'm scared they'll mess up this codebase too. Please don't, k thx.

  • @rickhackro · 3 years ago

    Amazing!

  • @chrisioannidis2295 · 3 years ago · +2

    Imagine if it had a real weapon

  • @satwikram2479 · 3 years ago

    Amazing👏

  • @RoyRogersMusicShop · 1 year ago

    Google's Bard sent me here. Anyone know why?

  • @captainlennyjapan27 · 3 years ago · +1

    Top Jax OP

  • @harryali4601 · 3 years ago

    Is it just me, or does the backend technology of JAX sound very similar to the one in TensorFlow?

  • @brandomiranda6703 · 3 years ago · +1

    I don't get it. Why do we need this if PyTorch and Keras/TF already exist?

    • @simonb.979 · 3 years ago · +3

      I mean, it is kind of niche, but suppose you solve a problem that heavily relies on many custom functions, e.g., a very specific algebra like quaternion operations. Then you can write super-fast basic operations and compose them to build a complicated loss function that, as a whole, you can jit-compile and let get optimized. Or differentiate it, or vectorize it, all with a tiny decorator.

    • @tclf90 · 3 years ago

      Torch and Keras are "slow" and only meant for the development phase. Not sure by how much JAX can outperform them.
      edit: "slow" as in computation/inference time

    • @MrAmgadHasan · 1 year ago

      @@tclf90 So what frameworks are "fast"?
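The workflow described in the first reply of this thread, composing custom primitives and then jit-compiling, differentiating, or vectorizing the whole thing, can be sketched as follows; the quaternion algebra here is purely illustrative:

```python
import jax
import jax.numpy as jnp

def quat_mul(q, r):
    # Hamilton product of two quaternions (w, x, y, z): a custom "basic operation".
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return jnp.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

@jax.jit
def loss(q, target):
    # A composed loss built from the custom primitive, JIT-compiled as a whole.
    return jnp.sum((quat_mul(q, q) - target) ** 2)

grad_loss = jax.grad(loss)        # differentiate it...
batched = jax.vmap(loss, (0, 0))  # ...or vectorize it over batches
```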

  • @mominabbas125 · 3 years ago

    Wow! 🏋️

  • @jakewong6305 · 2 years ago

    JAX came out because of Torch

  • @AnimeshSharma1977 · 3 years ago · +1

    My Call Jax Son #AI ?

  • @sashanktalakola · 5 months ago

    1:14 lol they compared TPU runtimes with GPU runtimes

  • @HealthZo · 9 months ago

    😮😮😮😮 0:28

  • @millco-.- · 3 years ago

    it's tiresome