Awesome Jorge! Thank you for the very informative video. Quick question though: do you know if GPU acceleration supports UELs or VUELs developed in Fortran as of today (2024)? Thank you!
This was an extremely interesting video on a topic that is not covered well by Dassault/Simulia (at least publicly). Thanks so much, insta-subscription from me!
Would you please take some time to elaborate on what kind of problems you ran into trying to use AMD/OpenCL? I am partial to AMD hardware and would always prefer supporting them first, if possible, due to NVIDIA's widely documented history of anti-competitive, anti-consumer and dishonest behavior. Your input would help me spec out my next accelerator purchase (the K40 can be had very cheaply on eBay now, but maybe the FirePro W8100 or S9150 would be more to my preferences, seeing that they achieve similar FP64 and are from a similar era...).
On a side note, did you by chance try experimenting with GPU acceleration of CalculiX (an open-source FE solver that takes the same input style as Abaqus)? That would be a killer combination if it works: you'd get 80% of the Abaqus quality for 0% of the Abaqus price. You could do all the design-iteration work in CalculiX and then, at the very end, validate your work in the very expensive Abaqus. Just some thoughts... Rant off... :)
Awesome content. Love the detail. Have you experimented at all with the new RTX A series? An A5000 with maybe a 5950X or 7950X looks pretty attractive. Unfortunately, there is not a lot of good published benchmark data out there.
Hello! Thanks a lot for this very helpful video. I have built a PC to use for Abaqus simulations for my thesis, but I haven't bought a GPU yet. I have been doing some research and narrowed down my choices to a Quadro GP100 or a Titan V. The FP64 performance indicates that the Titan is better, but on the other hand the Quadro has more VRAM and a 4096-bit bus compared to the 3072-bit bus on the Titan. Could you recommend one or the other?
Hi, how big are your models in terms of elements or DOFs? I would definitely go for the Titan V; the GP100 is good, but I think it is still more expensive, and unless you are working with huge models in your thesis, the Titan V will do a better job. Thanks for watching!
For my thesis I'll be dealing with smaller models, but recently I started doing some simulations for a team getting ready for a competition, designing a small UAV. For that specific job, plus because I would always like to have the option of acceleration (it is beautiful after all), I am running 1M+ element simulations. One more thing I haven't been able to verify is whether the Titan V has all the bells and whistles that a Quadro card has. Finally, how severe will the performance drop be if you run out of VRAM in a simulation? By the way, I would like to thank you a lot, because out of all the videos and forums I've been through, you are the only one who has answered me!
Hey man! Great video, thanks! I am building a new PC to (almost) solely run ABAQUS analyses. I am running dynamic explicit analyses with models having ca. 200,000 elements. I am thinking of getting an AMD Ryzen 7 5800X, and would love your recommendation for a graphics card to accelerate the sh... out of it :-)
Hello Seventh, I'm glad that you like the video. Regarding your question, what does your budget look like? If you are not interested in gaming at all and have a small budget, I would recommend you get a GTX Titan Black, the good old Kepler. If you have a little bit more budget, go for the Quadro K6000. If money is not a question at all, get yourself a Quadro GV100 or GP100. Because of their double-precision compute capabilities, those are the best options. I hope it helps you :)
@@JorgeMoralesAv Thanks Jorge for the video! Really love the explanation. I was wondering where the Titan V comes into it for Abaqus, since it has the same core as the GV100 and roughly the same double-precision performance, but with a slightly narrower bus (4096-bit -> 3072-bit) and less HBM2 memory (32 GB -> 12 GB). Any idea how that affects simulation times?
Thank you Sir. That was a very informative video!
I too get roughly a halving in sim times or so, although it varies wildly with problem type. However, for some reason I haven't figured out yet, I can't view my results when the sim is run on the GPU, even though Abaqus reports the simulation was successful, and it runs and shows the results perfectly fine for the exact same simulations on the CPU. This randomly started happening after it initially worked on the GPU. I get the message: "The selected Primary Variable is not available in the current frame for any elements in the current display group". There are several explanations for this online, but they don't really fit my case, as the only difference is running it on different hardware, and as far as I know, it should produce the exact same result no matter the hardware. So there's nothing wrong with my model or mesh or anything like that, but somehow it seems something goes wrong with storing the results from the GPU run... 🤔
Hello Jorge, thank you for this very informative video. I have enabled GPU acceleration for my NVIDIA GTX 1650, and the task manager shows that the card's memory is being used during the substeps, but apparently the usage (even in the CUDA tabs) stays at 0. Consequently, there is no difference in completion time compared to using only CPUs. What could be the culprit here?
Great explanation. Thanks 👍
You're welcome!
You are the perfect man! Thanks a lot
Hi Jorge, can any GPU with CUDA cores be used to accelerate? Does it have to be the Quadro lineup? I have a GeForce RTX 3090, and I checked that it also has a significant number of CUDA cores. Do you think a GeForce will work too? Thanks
Hello, any GPU with CUDA cores will work. The 3090 will be perfect due to its high amount of memory as well; probably one of the best for Abaqus. Thanks for watching!
Hey, interesting video. I am using an XPS 13 with only Intel UHD graphics. I installed the CUDA toolkit and activated GPU acceleration, but my GPU utilization is less than 20%. Will this work on an integrated Intel graphics card?
Any comments on the new M1 SoC for running Abaqus?
Hello N G, not yet. I think it will be difficult to run Abaqus natively because of the different instruction set supported on the M1 chips. But I would love to try it if possible.
Hi! I don't have much budget to buy an NVIDIA Quadro; would the NVIDIA RTX 5000 be fine?
Hi Danisya, the RTX 5000 is also a Quadro graphics card; the line was just renamed last year. It will do the job more than fine; the RTX 5000 is an amazing GPU for Abaqus. Thanks for watching!
Great information
Subscribed
Thank you very much, Carlos! Greetings from Germany.
Hey, great video! The research group I am in has a workstation with dual CPU sockets and a Quadro P4000. I have installed the CUDA toolkit, but when I try to run standard.exe jobs I get the message: "warning cannot load the GPU solver library GPU acceleration is disabled". I have also edited the environment file to include the line:
os.environ["ABA_ACCELERATOR_TYPE"]="PLATFORM_CUDA" # Nvidia
Still, it doesn't work! Any ideas? Do I need to disable integrated graphics in the BIOS? I don't think our Xeon chips even have integrated graphics...
Hi Brian, if it is a dual-Xeon CPU, I don't think it has an integrated graphics card. But yes, that error happens mostly when you have more than one GPU in your system.
Excellent Video!
Thanks a lot for your explanation. Do I need to install the CUDA toolkit to activate the GPU acceleration capabilities of an NVIDIA graphics card?
Hello Sergio, no, you do not need to install the CUDA toolkit beforehand, but I always recommend installing it. It provides a set of libraries and instructions that Abaqus can take advantage of for advanced GPU configurations. Thanks for watching!
@@JorgeMoralesAv Thank you for your answer. Does this "GPU acceleration" apply to NVIDIA graphics cards inside laptops? After a full day working on this subject, I always get the following message: "GPU SOLVER ACCELERATION UNAVAILABLE. SEE JOB LOG FILE FOR MORE DETAILS."
Hello again, for laptops, I would say you also have an integrated Intel or AMD Radeon graphics card, right? If that is the case, install the CUDA toolkit and follow the next steps. Look in this directory (in my case I have Abaqus 2018; if you have 2017, 2018 or 2019 it will be the same):
C:\Program Files\Dassault Systemes\SimulationServices\V6R2018x\win_b64\SMA\site
and look for the file:
abaqus_v6.env
Open it with Notepad++ or a good text editor (not the basic Windows Notepad) and add the following instruction on a new line at the very end:
os.environ["ABA_ACCELERATOR_TYPE"]="PLATFORM_CUDA" # Nvidia
This environment variable tells Abaqus to look for the CUDA graphics card (a consolidated sketch follows below). Tell me if you still have problems :)
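For reference, a minimal sketch pulling the steps together. Abaqus environment files are plain Python; the job name and CPU count on the launch line are placeholders, and the gpus= option is what requests acceleration at run time:
# abaqus_v6.env -- Abaqus environment files are executed as Python.
import os
# Tell the Abaqus solver to use the NVIDIA CUDA platform.
os.environ["ABA_ACCELERATOR_TYPE"] = "PLATFORM_CUDA"  # Nvidia
Then launch the job with GPU acceleration requested, for example:
abaqus job=Job-1 cpus=6 gpus=1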
@@JorgeMoralesAv Thanks again for your detailed input. My case is a laptop with integrated Intel graphics + NVIDIA, as you suggested. Well, I also tried to edit that abaqus_v6.env file, and noticed that it changed a line in the Windows command line interface when opening Abaqus CAE. But once again, when selecting a job with 6 CPUs + 1 GPU, the "GPU SOLVER ACCELERATION UNAVAILABLE (...)" message popped up.
I also performed some tests on a fast job, to check whether it was working despite that message. But no success; times with and without the GPU were exactly the same.
What I did once on my workstation was to disable the integrated graphics card in the BIOS. Mine was a Dell Precision; they make it very easy to change this setting. Try it if your computer allows you, and I'm pretty sure it will work afterwards. I always used to have this problem on laptops with integrated graphics.
Hello, I have 2 GPUs in my PC. I can run one job with GPU acceleration well. However, when I tried to run a second parallel job, also with CPU=16 GPU=1, Abaqus wouldn't assign the second GPU to the second job; it kept using the first GPU. Do you know how to make each job use its own GPU?
Thank you!
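One possible approach, offered as an untested sketch: CUDA applications normally honor the CUDA_VISIBLE_DEVICES environment variable, so launching each job with a different value may pin each job to its own GPU. Whether the Abaqus GPU solver respects this variable is an assumption worth verifying; the job names and CPU counts below are placeholders.
# Hypothetical sketch: pin each Abaqus job to its own GPU via
# CUDA_VISIBLE_DEVICES before launching it. This is a standard NVIDIA
# mechanism; verify that the Abaqus GPU solver honors it on your system.
import os
import subprocess
jobs = {"Job-1": "0", "Job-2": "1"}  # job name -> GPU index
procs = []
for job, gpu in jobs.items():
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = gpu  # this process sees only one GPU
    # On Windows, "abaqus" is a batch file; you may need shell=True
    # or the full path to abaqus.bat.
    procs.append(subprocess.Popen(
        ["abaqus", "job=" + job, "cpus=16", "gpus=1", "interactive"],
        env=env))
for p in procs:
    p.wait()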
Hi, how do you check the DOFs of an Abaqus job? I can check the number of elements, but how can I see the DOFs? Thanks
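In case it helps: Abaqus/Standard writes a problem-size summary (number of elements, number of nodes, and the total number of variables, i.e. DOFs plus any Lagrange multipliers) into the job's .dat file. Here is a small sketch to pull those lines out; the search strings match typical output but may vary between versions, so adjust them if needed:
# Hedged sketch: scan an Abaqus .dat file for the problem-size summary.
# The exact wording of these lines can differ between Abaqus versions.
def print_problem_size(dat_path):
    keys = ("NUMBER OF ELEMENTS IS",
            "NUMBER OF NODES IS",
            "TOTAL NUMBER OF VARIABLES IN THE MODEL")
    with open(dat_path) as f:
        for line in f:
            if any(key in line for key in keys):
                print(line.strip())
print_problem_size("Job-1.dat")  # placeholder job name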
Great Video!
Does it work for Abaqus explicit dynamics?
Unfortunately, GPU acceleration still does not work with explicit analyses :/ It is planned for later releases.
@@JorgeMoralesAv For 2020 or 21?
As of today's release, 2021, it is still not implemented :/