10x Faster Than NumPy (GPU Magnet Simulation)

  • Published Jan 9, 2025

Comments • 23

  • @markusbuchholz3518
    @markusbuchholz3518 1 year ago +17

    You are an exceptional and distinctive individual, and your channel has been instrumental in helping numerous people appreciate advanced concepts in physics and mathematics. Although my primary area of expertise is in software applied in the robotics field, your channel has opened my mind to think beyond my usual scope. Thank you for your contributions, and I wish you all the best in achieving your goals. Have a good day!

  • @wafflenuno
    @wafflenuno 1 year ago +5

    Super cool series so far! I've wanted to get into GPU computing for a while, but as a grad physics student, the books on this seemed so dense and lacking in scientific computing. This series is a whole lot easier to get through. Thank you for the great video!

  • @Nabla_Squared
    @Nabla_Squared 1 year ago +3

    I love this series. I made a 2D Ising model in C++ with Metropolis, but with a different implementation, and for 300x300 spins and 300,000 time steps it takes about 20 minutes to run. So it's impressive how much faster this Python implementation is. Maybe I'll use this code for the final project of my simulation course.
    Thanks for all this work you do for us!

    • @Ricocossa1
      @Ricocossa1 1 year ago

      You can also do CUDA/OpenCL with C++ if you like C++. Torch is nice for doing all that under the hood.
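
      (An illustrative sketch, not from the video: with torch, plain array-style code runs on the GPU once the tensors are created on a CUDA device, so the CUDA kernels stay under the hood; the 300x300 lattice size is just an example.)

      import torch

      # Torch dispatches to CUDA kernels under the hood once tensors live on the GPU,
      # so no hand-written CUDA/OpenCL is needed for array-style code like this.
      device = "cuda" if torch.cuda.is_available() else "cpu"

      spins = torch.randint(0, 2, (300, 300), device=device) * 2 - 1  # random ±1 lattice
      neighbor_sum = (torch.roll(spins, 1, 0) + torch.roll(spins, -1, 0)
                      + torch.roll(spins, 1, 1) + torch.roll(spins, -1, 1))
      energy = -(spins * neighbor_sum).sum() / 2  # nearest-neighbor Ising energy with J = 1
      print(device, energy.item())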

    • @Nabla_Squared
      @Nabla_Squared 1 year ago

      @Ricocossa1 Yeah, I know, but I don't know anything about CUDA yet, and this Python code taps the same power in a pretty simple way. So until I learn CUDA, this is the best option.

  • @haldanesghost
    @haldanesghost 1 year ago

    The Ising model sometimes makes its way into evolutionary biology (spin glasses in general tbqh). Thanks for covering this in general along with the implementation!

  • @panoskotoulas759
    @panoskotoulas759 1 year ago +3

    Me crying while my laptop is burning and the kernels are dying when he mentions that "most people have access to a GPU these days"

    • @MrPSolver
      @MrPSolver 1 year ago +2

      Use Google Colab 😉

    • @panoskotoulas759
      @panoskotoulas759 1 year ago

      @MrPSolver but then I couldn't write the joke :((

  • @stephenowinoomondi-si8mk
    @stephenowinoomondi-si8mk 1 year ago

    Cool science and coding. I love your channel, man. Keep doing this; we learn a lot.

  • @varunahlawat9013
    @varunahlawat9013 1 year ago

    4:14
    Bro is hitting gym daily!

  • @jasperpostema7098
    @jasperpostema7098 9 months ago

    How could you use the kernel convolution in the case of the random-bond Ising model, where the bonds are +J or -J according to some probability distribution?
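
    (One possible approach, sketched with NumPy under assumed names J_right and J_down for the quenched bonds: replace the single uniform kernel with one bond array per lattice direction and build the local field from bond-weighted, shifted copies of the lattice; swapping np for cp would move it to the GPU.)

    import numpy as np

    N, J, p = 64, 1.0, 0.5  # lattice size, bond strength, probability of +J (example values)
    spins = np.random.choice([-1, 1], size=(N, N))

    # Quenched ±J disorder: one bond array for each lattice direction.
    J_right = J * np.random.choice([-1, 1], size=(N, N), p=[1 - p, p])
    J_down = J * np.random.choice([-1, 1], size=(N, N), p=[1 - p, p])

    # Local field h_i = sum_j J_ij * s_j over the four neighbors (periodic boundaries).
    # A single convolution kernel no longer works because every bond has its own sign.
    field = (J_right * np.roll(spins, -1, axis=1)                        # right neighbor
             + np.roll(J_right, 1, axis=1) * np.roll(spins, 1, axis=1)   # left neighbor
             + J_down * np.roll(spins, -1, axis=0)                       # neighbor below
             + np.roll(J_down, 1, axis=0) * np.roll(spins, 1, axis=0))   # neighbor above

    dE = 2 * spins * field  # energy change for flipping each spin, used in the Metropolis test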

  • @HitAndMissLab
    @HitAndMissLab 1 year ago

    Can this simulation be set up to actually match real-world magnetic materials?
    What would one need to do to simulate how various materials behave in a magnetic field?

  • @renzostefanmp7937
    @renzostefanmp7937 1 year ago

    Can we do this on a Colab premium account?

  • @ItumelengS
    @ItumelengS 1 year ago

    Not going to use it, but my word, this was some interesting topic

  • @AJ-et3vf
    @AJ-et3vf 1 year ago

    Great video 📷📸 thank you 👍😊

  • @jdcrunchman999
    @jdcrunchman999 1 year ago

    I made it to the end of the video, and I'm just trying to understand the math. It's a little beyond me, so it looks like I've got to visit 3Blue1Brown some more to learn more math. I'm using an M1 Mac; not sure how fast it will run.
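
    (A hedged aside: Apple-silicon Macs have no CUDA, but recent PyTorch builds expose the M1 GPU through the "mps" backend, so a device check like the sketch below is one way to run GPU code there.)

    import torch

    # Pick the best available device: Apple's Metal backend on an M1 Mac,
    # CUDA on an NVIDIA machine, otherwise the CPU.
    if torch.backends.mps.is_available():
        device = torch.device("mps")
    elif torch.cuda.is_available():
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")

    x = torch.rand((300, 300), device=device)
    print(device, x.mean().item())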

  • @WasimAbdul_1994
    @WasimAbdul_1994 1 year ago

    I tried i::4, j::4, but that does not work. Can you help me understand why?
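
    (Hard to say without the exact code, but one common pitfall, sketched below with NumPy: the four stride-2 offsets (0,0), (0,1), (1,0), (1,1) tile the whole lattice and keep simultaneously updated spins from being nearest neighbors, while the same four offsets with stride 4 leave three quarters of the spins untouched, so a sweep written around ::2 slices breaks when the stride is changed to ::4.)

    import numpy as np

    N = 8
    covered2 = np.zeros((N, N), dtype=bool)
    covered4 = np.zeros((N, N), dtype=bool)

    # The four stride-2 sublattices cover every site, and no two sites inside
    # one sublattice are nearest neighbors, so each can be updated in parallel.
    for di in (0, 1):
        for dj in (0, 1):
            covered2[di::2, dj::2] = True
            covered4[di::4, dj::4] = True

    print(covered2.all())   # True  -> every spin gets visited
    print(covered4.mean())  # 0.25  -> stride 4 with the same offsets skips 3/4 of the lattice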

  • @alizia1847
    @alizia1847 29 days ago

    Bro. Use cupy!

  • @frankkoslowski6917
    @frankkoslowski6917 11 days ago

    Implemented CuPy in favor of NumPy as far as possible.
    Apart from integrating arrays the way SciPy does, CuPy is proving to be a neat way of using the Tesla M4 card elegantly, without extra instructions like `.to(device)`. When installed correctly, CuPy's background routines are smart enough to know that the graphics hardware is to be used. Here we can see how easy it is to rescript NumPy to CuPy.
    The code below turned out to be 100 ms faster than its functionally similar cousin using torch-cuda:
    import cupy as cp  # drop-in GPU replacement for numpy
    N = 300  # lattice size (value assumed; not given in the comment)
    init_random = cp.random.random((N, N))
    lattice_n = cp.zeros((N, N))
    lattice_n[init_random >= 0.75] = 1
    lattice_n[init_random < 0.75] = -1
    init_random = cp.random.random((N, N))
    lattice_p = cp.zeros((N, N))
    lattice_p[init_random >= 0.25] = 1
    lattice_p[init_random < 0.25] = -1
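
    (An illustrative, hypothetical timing harness in the same drop-in spirit, not the commenter's actual benchmark: the same roll-based neighbor sum runs with NumPy and with CuPy, with an explicit GPU synchronization before the clock stops.)

    import time
    import numpy as np
    import cupy as cp

    N, steps = 300, 1000

    def neighbor_sum(xp, lattice):
        # Core array work of an Ising sweep: sum of the four nearest neighbors.
        return (xp.roll(lattice, 1, 0) + xp.roll(lattice, -1, 0)
                + xp.roll(lattice, 1, 1) + xp.roll(lattice, -1, 1))

    for xp, name in ((np, "numpy"), (cp, "cupy")):
        lattice = xp.ones((N, N))
        t0 = time.perf_counter()
        for _ in range(steps):
            neighbor_sum(xp, lattice)
        if xp is cp:
            cp.cuda.Stream.null.synchronize()  # wait for queued GPU kernels before timing
        print(name, time.perf_counter() - t0)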

  • @fxtech-art8242
    @fxtech-art8242 1 year ago

    cool stuff

  • @arunsebastian4035
    @arunsebastian4035 1 year ago

    😍