Ben Andrew
"Fluid" simulation
Messed up an implementation of the Lattice-Boltzmann method for fluid dynamics simulation, leading to strange geometric behaviour.
"The method used involves a "propagation" step, where the fluid pressure is propagated outwards in eight directions, and a "collision" step working out the pressure changes due to the collisions between molecules. I forgot to add the collision step, and so the initial pressure wave propagates outwards infinitely in the eight directions (wrapping at the edges), giving the strange overlapping patterns seen in the video."
GitHub repo: github.com/benmandrew/lattice-boltzmann/tree/geometric
Views: 393
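
As a rough illustration of the "propagation" and "collision" steps described above, here is a minimal D2Q9 lattice-Boltzmann sketch in Python (this is an assumption-laden sketch, not the code in the linked repository; the grid size, relaxation time and initial pressure pulse are made-up values):

```python
import numpy as np

# Minimal D2Q9 lattice-Boltzmann sketch (illustrative only).
# Nine populations: one at rest plus eight moving directions.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)   # lattice weights
tau = 0.6                                   # relaxation time (made-up value)
nx, ny = 128, 128

def equilibrium(rho, ux, uy):
    """Local equilibrium distribution (second-order expansion)."""
    feq = np.empty((9, nx, ny))
    usq = ux**2 + uy**2
    for i in range(9):
        cu = c[i, 0]*ux + c[i, 1]*uy
        feq[i] = w[i] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    return feq

# Start from rest with a small density ("pressure") pulse in the middle.
rho = np.ones((nx, ny))
rho[nx//2 - 2:nx//2 + 2, ny//2 - 2:ny//2 + 2] += 0.1
f = equilibrium(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))

for _ in range(1000):
    # Streaming ("propagation"): shift each population along its direction,
    # wrapping at the edges.
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

    # Collision (the forgotten step): relax towards local equilibrium (BGK).
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau
```

Removing the collision block leaves only the streaming step, which matches the behaviour described above: the initial pulse just keeps propagating outwards and wrapping at the edges.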

Videos

Rendering Geometric Patterns
Views: 381 · 10 months ago
Retrospective of a mini-project I did, inspired by "Islamic Patterns: An Analytical and Cosmological Approach" by Keith Critchlow. Presentation to Tarides colleagues on the 13th March 2024. Blog post: www.benmandrew.com/articles/rendering-geometric-patterns Project repository: github.com/benmandrew/mirror LinkedIn: www.linkedin.com/in/benmandrew/
Parallelising an OCaml Software Ray-tracer
Views: 392 · 2 years ago
Retrospective of a mini-project I did to learn about OCaml 5 and the domainslib parallel processing library. Presentation to Tarides colleagues on the 3rd November 2022. Blog post: benmandrew.com/articles/parallelising-a-software-ray-tracer Project repository: github.com/benmandrew/Otorus LinkedIn: www.linkedin.com/in/benmandrew/
Porting OCaml to the nRF52 Microcontroller
Views: 302 · 3 years ago
A talk given to Tarides colleagues and others in the OCaml ecosystem on the 28th September 2021, to demonstrate the work and research I had done over the internship. Abstract: In this talk I will explain how we can run OCaml bytecode on a memory-constrained bare-metal device, using the RIOT operating system as a base layer. Website: www.benmandrew.com/ LinkedIn: www.linkedin.com/in/benmandrew/ ...
Forward and Deferred Rendering - Cambridge Computer Science Talks
Views: 27K · 3 years ago
A talk given to my fellow Cambridge computer science students on the 27th January 2021. Abstract: The visuals of video games and films have deep influences on our culture, from Shrek to Garfield Racing. The modern history of real-time rendering is deeply tied to the architecture of GPUs and what they allow us to do. How have our approaches to rendering changed over time, and what may the future...
Unity Water Ripple Effect Demonstration
Views: 1.6K · 4 years ago
Explained here: www.benmandrew.com/articles/custom-post-processing-effects-in-unity Excuse the lag, my recording software is bad.

Comments

  • @Zettoman
    @Zettoman a month ago

    What did Crysis use?

  • @lukecronquist6003
    @lukecronquist6003 a month ago

    Man, universities have fallen so far. This kind of rendering is really slow and incapable of anything that compute can do. Yeah, let's make the pipeline even bigger and add even more draw calls lmao.

  • @machinefannatic99
    @machinefannatic99 a month ago

    Forward rendering looked cleaner and sharper; deferred just looks blurred.

  • @donnyjoe7653
    @donnyjoe7653 2 months ago

    Thank you! :)

  • @soulroll-nz
    @soulroll-nz 3 months ago

    I just tried forward rendering in Unreal Engine with MSAA and I feel like I am born again without needing glasses. Who decided it was such a great idea to go crazy with the AA smudge tools?

  • @Codec264
    @Codec264 4 months ago

    "this will change in the near future" Nope, we just get games with worse and worse fps! Thanks for the great video

  • @A3azel
    @A3azel 4 months ago

    Thanks for the video!

  • @GeorgeTsiros
    @GeorgeTsiros 6 months ago

    Why do you repeat "moving the g-buffer"? It doesn't need to move. It's only ever allocated once. It is written once for each frame. Once it has been written, it is read-only. I do not understand. Do we transfer its contents between RAM and VRAM frequently or something?

  • @MrTomyCJ
    @MrTomyCJ 6 months ago

    7:25 In WebGPU, the vertex and fragment shader code is provided to the pipeline. This means that a pipeline can only execute one fragment shader. So to render the scene we wouldn't just have the nested loops lights > objects, but rather materials > lights > objectsWithThisMaterial, setting a different pipeline for each material. Am I missing something here? Is that pipeline-per-material the intended way to draw the objects for this case? 19:30 In WGSL it doesn't seem to be possible to use some samplers inside branching code. Is there a way around that?

  • @anoomage
    @anoomage 7 months ago

    Thank you so much for sharing your knowledge sir, I learned a lot from your presentation!

  • @charactername263
    @charactername263 9 months ago

    But surely you just put your lights into a GPU buffer and then you can sample the buffer whilst drawing meshes. That makes it just M draw calls for M meshes, with sampling into the buffer for N lights, which is really not any different from deferred, other than that deferred avoids redrawing fragments - but even a depth prepass on forward solves that issue.
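
For readers unfamiliar with the approach this comment describes, here is a hypothetical sketch of a forward fragment loop over a single buffer of N lights; plain Python stands in for shader code, and the light, position and material values are made up:

```python
import numpy as np

# Sketch of the comment's point: all N lights live in one buffer, and each
# drawn fragment loops over that buffer once, so the work is M draw calls
# (one per mesh) with an N-light loop per fragment, not M x N draw calls.

lights = np.array([           # position (xyz) and colour (rgb), made-up values
    [ 2.0, 3.0,  1.0,  1.0, 0.9, 0.8],
    [-1.0, 2.0,  4.0,  0.2, 0.3, 1.0],
    [ 0.0, 5.0, -2.0,  0.9, 0.1, 0.1],
])

def shade_fragment(position, normal, albedo):
    """What a forward fragment shader would do with the light buffer bound."""
    colour = np.zeros(3)
    for light in lights:                      # single loop over the buffer
        light_pos, light_col = light[:3], light[3:]
        to_light = light_pos - position
        dist = np.linalg.norm(to_light)
        n_dot_l = max(np.dot(normal, to_light / dist), 0.0)
        attenuation = 1.0 / (1.0 + dist * dist)
        colour += albedo * light_col * n_dot_l * attenuation
    return colour

# One fragment of one mesh; in a real renderer this runs per covered pixel.
print(shade_fragment(np.array([0.0, 0.0, 0.0]),
                     np.array([0.0, 1.0, 0.0]),
                     np.array([0.8, 0.8, 0.8])))
```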

  • @JuanesChiwirosky
    @JuanesChiwirosky 10 months ago

    So that's how the pattern of my socks was made...

  • @Malenbolai
    @Malenbolai 10 months ago

    Where is the "messed up" part?

    • @benmandrew
      @benmandrew 10 months ago

      The method used involves a "propagation" step, where the fluid pressure is propagated outwards in eight directions, and a "collision" step working out the pressure changes due to the collisions between molecules. I forgot to add the collision step, and so the initial pressure wave propagates outwards infinitely in the eight directions (wrapping at the edges), giving the strange overlapping patterns seen in the video. I'm still working on it, and mostly have the correct behaviour now, but this looked interesting.

  • @rtyzxc
    @rtyzxc 11 months ago

    Can't wait for the return of MSAA and sharp graphics again!

    • @wojciechcwynar4781
      @wojciechcwynar4781 6 months ago

      th-cam.com/video/EIWHvPP0U64/w-d-xo.html

    • @Argoon1981
      @Argoon1981 5 months ago

      Exactly!

  • @thhm
    @thhm a year ago

    Definitely still a heady topic for me, but thank you for explaining it. Especially for the emerging trends and outlook at the end, definitely interesting.

  • @cafe_underground
    @cafe_underground a year ago

    Amazing explanation, I could finally grasp the pros and cons of each technique

  • @gordazo0
    @gordazo0 a year ago

    excellent

  • @slothsarecool
    @slothsarecool a year ago

    Shrek?? no way 😅 that’s great. Awesome talk, thanks

  • @StealthMacaque
    @StealthMacaque a year ago

    Unbelievably good explanation. I cannot thank you enough!

  • @schmildo
    @schmildo a year ago

    Thanks mate

  • @lx2222x
    @lx2222x a year ago

    Memory bandwidth > shader complexity. Memory bandwidth is way more important than shader complexity; you can see this in games like Zelda TOTK, where lag spikes exist only because of memory bandwidth. I would highly advise you to look into it; you can learn so much from that.

  • @SergioWolf843
    @SergioWolf843 a year ago

    Deferred rendering has the advantage of calculating lights per block and not per pixel, decreasing the GPU load, so it doesn't matter how many lights cross your blocks because it won't affect performance. If I'm not mistaken, Apple has TBDR patents and has been using it on the iPhone since 2017.

  • @BumBum_SM
    @BumBum_SM a year ago

    How do you make this? Can you please make a tutorial?

  • @rubenhovhannisyan317
    @rubenhovhannisyan317 a year ago

    Thanks a lot. Saved a lot of time and effort.

  • @TechDiveAVCLUB
    @TechDiveAVCLUB a year ago

    Can't believe such a perfect digestion of high-level information into actionable mental models exists. Thank you!

  • @santitabnavascues8673
    @santitabnavascues8673 a year ago

    It's curious how everybody who illustrates a depth buffer always uses the reverse depth approach, where white is closer and black is farther. More curious is that the reverse depth buffer distributes the depth precision much better than the original, 'forward' depth buffer, where closer objects have a depth close to 0 and far objects have a depth closer to 1 😊

    • @benmandrew
      @benmandrew a year ago

      Correct. For those interested, this is due to the non-linear perspective transformation (1/z) either combining with or cancelling out the somewhat-logarithmic distribution of points in IEEE floating point numbers. A really good explanation is on the Nvidia developer website: developer.nvidia.com/content/depth-precision-visualized.
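
A quick way to see the effect described in this reply is to map a few eye-space depths through a conventional (0 at the near plane) projection and a reverse-Z projection and look at the float32 spacing of the stored values. This is only a sketch; the near and far plane distances below are made-up values:

```python
import numpy as np

n, f = 0.1, 1000.0   # hypothetical near and far plane distances

def standard_depth(z):
    # Conventional projected depth: near plane -> 0, far plane -> 1.
    return (f / (f - n)) * (1.0 - n / z)

def reverse_depth(z):
    # Reverse-Z: near plane -> 1, far plane -> 0.
    return (n / (n - f)) * (1.0 - f / z)

for z in [0.2, 1.0, 10.0, 100.0, 900.0]:
    d_std = np.float32(standard_depth(z))
    d_rev = np.float32(reverse_depth(z))
    # np.spacing gives the gap to the next representable float32 value,
    # i.e. how coarsely depth is quantised around this stored value.
    print(f"z = {z:7.1f}  standard d = {d_std:.6f} (step {np.spacing(d_std):.2e})"
          f"  reverse d = {d_rev:.6f} (step {np.spacing(d_rev):.2e})")
```

With the conventional mapping, distant depths land near 1.0 where float32 steps are coarsest, compounding the 1/z compression; with reverse-Z they land near 0.0, where the quasi-logarithmic float spacing counteracts it.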

  • @penneywang6552
    @penneywang6552 a year ago

    Best video to explain them, from the history and hardware to how the GPU pipeline works. Thank you. Looking forward to more tutorials in this style.

  • @jeffg4686
    @jeffg4686 a year ago

    Have you checked out the "clustered forward renderer" in Bevy? Looks pretty nice. Don't know if there are any downsides. Says unlimited lights.

  • @carlosd562
    @carlosd562 a year ago

    Very good video!

  • @zugolf4980
    @zugolf4980 2 years ago

    And this is why you're at Cambridge University <3

  • @dan_perry
    @dan_perry 2 years ago

    Hmm, I thought the PowerVR/Dreamcast was the first tile-based deferred renderer?

  • @chenbruce3784
    @chenbruce3784 2 years ago

    Thank you

  • @egoinstart8756
    @egoinstart8756 2 years ago

    Excellent. Best video about this topic I've found. Thank you.

  • @gideonunger7284
    @gideonunger7284 2 years ago

    Why is forward always portrayed as lights x meshes? I have never written a forward renderer like that. Just put the lights in a buffer and then send the lights affecting a mesh as indices. That gives you one uniform branch for the loop, but that should be fine and way faster than multiple draw calls lol.
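
A hypothetical CPU-side sketch of what this comment describes (one global light buffer plus per-mesh light indices, culled against each mesh's bounding sphere); Python stands in for engine code, and all scene data is made up:

```python
import numpy as np

# All lights live in one buffer; per mesh, only the indices of lights whose
# range overlaps the mesh's bounding sphere are sent with the draw call.

light_positions = np.array([[0.0, 2.0, 0.0], [10.0, 1.0, 0.0], [-6.0, 3.0, 5.0]])
light_ranges = np.array([5.0, 8.0, 4.0])

meshes = [
    {"name": "crate",  "centre": np.array([1.0, 0.0, 0.0]), "radius": 1.0},
    {"name": "statue", "centre": np.array([9.0, 0.0, 0.0]), "radius": 2.0},
]

for mesh in meshes:
    # A light affects the mesh if the two bounding spheres overlap.
    dist = np.linalg.norm(light_positions - mesh["centre"], axis=1)
    indices = np.nonzero(dist < light_ranges + mesh["radius"])[0]
    # One draw call per mesh; `indices` would be uploaded as a small uniform
    # array that the shader's single light loop iterates over.
    print(mesh["name"], "draw call with light indices", indices.tolist())
```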

  • @glass1098
    @glass1098 2 years ago

    Thanks for the video, I had a lot of questions on the topic and this was an absolutely clear explanation of the differences.

  • @Lavimoe
    @Lavimoe 2 years ago

    "If you cannot explain something in simple terms, you don't understand it." This video really shows how deep an understanding you have of the shader topic. Thanks so much!

  • @Kalandrill
    @Kalandrill 2 years ago

    Thanks a lot for sharing, didn't expect a dive into the current state of things in games. It was a very pleasant surprise :)

  • @leeoiou7295
    @leeoiou7295 2 years ago

    Excellent talk. I did a little research and found out that you are just a young lad. Wish you all the best and thanks for such a great talk.

  • @haos4574
    @haos4574 2 years ago

    This is gold content. I watched several videos on the topic; this is the one that actually made me understand.

  • @pwhv
    @pwhv 2 years ago

    very well explained, loved it

  • @jgriep
    @jgriep 2 years ago

    Hands down the best, most straightforward explanation of forward vs. deferred rendering I have seen!

  • @jiayuezhu5848
    @jiayuezhu5848 2 years ago

    This is such a helpful video!

  • @donovan6320
    @donovan6320 2 years ago

    Should also mention that all lighting in Doom Eternal is dynamic. The "pre-processing" that is being done is called clustered forward rendering, in which a culling stage reduces the lights sampled in a specific part of the scene.

    • @benmandrew
      @benmandrew 2 years ago

      Yep, unfortunately had to cut out clustered rendering to keep the talk focused and under half an hour. It's a very cool technique explained in the Doom Eternal graphics study by Adrian Courrèges (one of my sources at the end).

    • @donovan6320
      @donovan6320 2 years ago

      @@benmandrew I figured; thought I should clarify for those that are curious about the technique and the "pre-processing" (technically correct, but I would have just called it a culling pass, since preprocessing implies a static, pre-runtime/serialised nature which the light culling pass does not have). Definitely beyond my paygrade, but it is really cool.
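
A toy version of the light-culling pass discussed in this thread, sketched in Python with made-up values; a 2D screen-space grid stands in for the 3D clusters a real clustered forward renderer would use:

```python
import numpy as np

# Toy light-culling pass in the spirit of clustered/tiled forward rendering:
# bin lights into a screen-space grid, so each fragment only shades the lights
# listed in its own cell. Real clustered renderers also slice along depth.

GRID_W, GRID_H = 16, 9            # number of cells across the screen
SCREEN_W, SCREEN_H = 1280, 720

rng = np.random.default_rng(0)
num_lights = 32
light_pos = rng.uniform([0, 0], [SCREEN_W, SCREEN_H], size=(num_lights, 2))
light_radius = rng.uniform(30, 150, size=num_lights)   # screen-space radii

# Culling pass: for every cell, record which lights' bounding boxes overlap it.
cell_w, cell_h = SCREEN_W / GRID_W, SCREEN_H / GRID_H
cells = [[[] for _ in range(GRID_W)] for _ in range(GRID_H)]
for i, ((lx, ly), r) in enumerate(zip(light_pos, light_radius)):
    x0 = max(int((lx - r) // cell_w), 0)
    x1 = min(int((lx + r) // cell_w), GRID_W - 1)
    y0 = max(int((ly - r) // cell_h), 0)
    y1 = min(int((ly + r) // cell_h), GRID_H - 1)
    for cy in range(y0, y1 + 1):
        for cx in range(x0, x1 + 1):
            cells[cy][cx].append(i)

# When shading, a fragment at (px, py) only loops over its own cell's list.
px, py = 640, 360
lights_here = cells[int(py // cell_h)][int(px // cell_w)]
print(f"fragment at ({px}, {py}) shades {len(lights_here)} of {num_lights} lights")
```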

  • @stephenkamenar
    @stephenkamenar 2 years ago

    thank you shrek

  • @onevoltten7352
    @onevoltten7352 2 years ago

    Thank you! Been going back and forth between deferred and forward, as it's a lot more effort using forward shading - requiring much more planning and optimising. I plan to force myself to use forward rendering during development and commit to a much more optimised game rather than go for dynamic lighting.

    • @donovan6320
      @donovan6320 2 years ago

      I mean you can use forward and have a lot of dynamic lighting... Doom Eternal uses all dynamic forward lighting.

    • @vitordelima
      @vitordelima a year ago

      @@donovan6320 A shader can loop over many light sources during the same rendering step, but many screen-space effects are compromised if you don't use deferred.

    • @donovan6320
      @donovan6320 a year ago

      @@vitordelima You aren't wrong?

    • @vitordelima
      @vitordelima a year ago

      @@donovan6320 Deferred is only important if you need some extra data from each separate rendering step that isn't easily generated by forward only, but lighting can be calculated in a single step for forward nowadays.

    • @vitordelima
      @vitordelima a year ago

      @@donovan6320 I found out more about it: modern hardware still supports a lot of light sources in forward mode simply by iterating over them, but there are methods to improve this via something similar to culling. If you use a method for global illumination that is good enough, deferred or forward doesn't matter that much, because the lighting is calculated in another rendering step.

  • @SSaka77
    @SSaka77 3 years ago

    Very useful! Thanks a lot.

  • @MikelBober
    @MikelBober 3 years ago

    😍😍😍😍 my favourite youtuber

  • @riley8010
    @riley8010 4 years ago

    What’s up?! Great content. Have you ever thought about using smzeus . c o m to promote your videos?!