Ansys Fluent Native GPU Solver Capabilities

  • Published Aug 15, 2023
  • Rand Simulation presented an overview of Ansys Fluent GPU (graphics processing unit) solver capabilities on Wednesday, August 16. We covered a range of simulations -- from internal and external aerodynamics to subsonic and low-supersonic flows -- providing a fair assessment of the newly added Fluent native GPU solver.
    The end goal is to better understand how the native GPU solver performs in real-world applications, when it makes sense to use it, and what its potential benefits are.
    Using real-world applications, we will learn:
    • The benefits and limitations of the Ansys Fluent native GPU solver
    • The simulation-time and hardware-cost differences between native GPU and CPU solves
    • The impact of the native GPU technology in the industries that are adopting it
    • What the GPU solver can do for you and the steps involved to make it happen
    #gpu #nvidia #engineering
  • Science & Technology

Comments • 3

  • @TudorMiron
    @TudorMiron 9 months ago +1

    Thanks for the video! What are the system RAM requirements of the multi-GPU solver, or does it rely on GPU memory only? OK, I figured that one out - obviously it uses GPU memory only.

    • @randsimulation
      @randsimulation  9 months ago

      The Fluent native multi-GPU solver relies on both system RAM to load the data and GPU RAM to run the simulation. We advise about 2 GB of RAM per 1E6 cells that you intend to run. So if you intend to run 8E6 cells, having 16 GB of system RAM and 16 GB of GPU RAM should work well. Some specific physics/setups may require a bit more RAM, but the recommendation above should work for most cases.
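      The sizing rule of thumb in the reply can be sketched as a quick calculation (the 2 GB-per-million-cells ratio comes from the reply; the function name and structure are illustrative, not part of any Ansys tooling):

      ```python
      def estimate_ram_gb(cells, gb_per_million=2.0):
          """Rough memory estimate for the Fluent native multi-GPU solver,
          using the ~2 GB per 1e6 cells rule of thumb from the reply.
          The same figure applies to both system RAM (loading the data)
          and GPU RAM (running the simulation)."""
          return cells / 1e6 * gb_per_million

      # Example from the reply: an 8-million-cell case
      print(estimate_ram_gb(8e6))  # → 16.0 (GB each of system and GPU RAM)
      ```

      Specific physics or setups may need more than this estimate, so treat it as a lower bound when specifying hardware.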

    • @TudorMiron
      @TudorMiron 9 months ago

      @@randsimulation Many thanks for your reply. One more thing - does NVLink technology have anything to do with the multi-GPU solver? I mean, is it helpful when using (for example) multiple V100 GPUs on one server? Also, where can I learn/read more about speeding up the coupled solver (double precision) with GPUs? I understand that it is not as straightforward as with the native GPU solver. The information I've found so far is contradictory, and many people report worse performance when using GPU acceleration with the CPU-based solver.