10:40 Shawn was totally correct (and I was wrong), C4D noises are not yet supported in Arnold GPU.
Why? I can't understand why.
Maybe it's not supported, but I got it working in your Spiderverse texture tutorial. At the time you said it didn't work, and there was a YouTube comment saying it had recently become possible. I tested it and it was working. I wonder what the limitations are.
NVLink is the ability to pool memory across GPUs, and it also depends on the application supporting it. The 2070 and the Quadro RTX 4000 both have half-speed NVLink at 50 GB/s, while the 2080 Ti and the Quadro RTX 5000 or better use the full-speed 100 GB/s. Using NVLink also comes with a slight performance hit, but it's still much better than going out-of-core on GPU engines. It also isn't a perfect doubling of VRAM: it's "conceptually" double, but in real-world scenarios, just like with any GPU, the OS/CUDA allocates some amount of the total, so for example 2x 2080 Ti isn't 22 GB, it's closer to 19 GB of VRAM (a quick peer-access sketch is below this comment).
When we were testing the Arnold beta in the studio, we also felt very much how you do. It's a great lookdev start, but the CPU version is still where it's at. It needs a lot more work, especially in speed when compared to the other GPU engines. Arnold is using RTX support to accelerate, and it's still slower than the other GPU engines without RTX. But it's great to see more competition.
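As a rough illustration of the memory-pooling point above (not Arnold's actual code, just a minimal CUDA runtime sketch, with device indices 0 and 1 assumed for a two-GPU box), this is roughly how an engine can check whether two GPUs can see each other's VRAM and enable it:

```cpp
// Minimal sketch: checking whether two GPUs can pool memory via
// peer-to-peer access (NVLink or PCIe P2P) using the CUDA runtime API.
// Device indices 0 and 1 are assumptions for a two-GPU machine.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);  // can GPU 0 read GPU 1's VRAM?
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);  // and the other direction?

    if (canAccess01 && canAccess10) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  // let GPU 0 map GPU 1's memory
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);  // and vice versa
        std::printf("Peer access enabled: both VRAM pools are addressable.\n");
    } else {
        std::printf("No peer access: each GPU is limited to its own VRAM.\n");
    }
    return 0;
}
```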
You commented about 30 hours per frame at Pixar. Please note that these render times are for a single thread. In practice, it means 2-3 hours per frame depending on the render node.
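For anyone curious how a "30 hours per frame" single-thread figure turns into 2-3 hours of wall-clock time, here's a tiny back-of-the-envelope sketch; the core counts are just assumptions about typical render nodes, not Pixar specs:

```cpp
// Back-of-the-envelope: a quoted single-thread time divided by the core
// count of a render node, assuming near-ideal scaling. Core counts are
// illustrative assumptions, not actual Pixar hardware specs.
#include <cstdio>

int main() {
    const double coreHoursPerFrame = 30.0;   // quoted single-thread figure
    const int nodeCores[] = {10, 12, 16};    // assumed cores per render node
    for (int cores : nodeCores) {
        std::printf("%2d cores -> ~%.1f h per frame\n",
                    cores, coreHoursPerFrame / cores);
    }
    return 0;
}
```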
Good to know! I remember reading that somewhere a long time ago.
Thanks guys for breaking this down. I thought that Arnold GPU was taking so long because they were creating a hybrid GPU/CPU renderer. Imagine your Threadripper and 2080s working in tandem, all at 100% usage. I'm sure this may come down the road. But why not just make it Arnold Renderer, "which uses both GPU and CPU"? I know it may not be possible, but it was what I anticipated. Thanks again for this.
The only renderer I know of that supports CPU/GPU without restrictions is FinalRender: you can use CPU, GPU, or both combined. It's for 3ds Max, and now in open beta for Maya. Arnold's goal is to combine CPU+GPU in the future too. So far, Arnold GPU, so good!
And Cycles.
I'm hoping Otoy releases RTX support soon; if it really is a 3-4x performance boost, animation is going to be speedy.
It's coming in Octane 2019. I'd expect that to be released later this year. Performance boosts will be 3x iirc
Would love to hear your take on how to make a workhorse of a laptop, like using an eGPU and then sending it to a Threadripper render node for the final. Or is a dual-GPU render better than a Threadripper?
Shawn it's been 4 months since your last tutorial, we miss u!
I know sorry! More coming soon!
Thanks guys!!
Hello Mister Chad! Thanks for sharing your awesome energy in passing along information like this! It's pure gold melting in my ears and eyes!
We just started adding Arnold to our C4D pipeline a couple of days ago, and then, boom, this news about GPU being available! Mind blowing!
Anyhow, here is a little question. I am working on a shot that requires some CG re-lighting enhancement, and I was very happy with my setup rendering in CPU mode on the farm.
But given our past experience with Octane and then Redshift, we have a little render farm of 3 machines (4 GPUs each), so I thought I might as well kick some ass with GPU power!
Well, apparently the Arnold object tag's Camera (primary) visibility option, which keeps the emissive light object from being seen by the camera, works fine in CPU mode (an 8-minute render), BUT not in GPU mode. Sad... it would have been a 34-second render.
But it's okay, like you described, given the practicality of a cheap CPU render farm versus GPU monster machines.
Again, THANKS SO MUCH to you and Chris and Nick and the rest of your awesome team!
Yann
One takeaway I had from SIGGRAPH 2018 was that GPUs have a few caveats. The Arnold GPU lead engineer mentioned that GPUs aren't as good as CPUs at "splitting", and they were discussing internally whether to expose a few more dials/checkboxes to users. They were also running every Standard Surface material as its own program, which is not what Nvidia expected, and the time spent compiling real-world production shaders was taking forever when Solid Angle followed the API rules. The coolest thing they showed, though, was how fast the RTX cores were: it almost looked like a flat line compared to the GV100.
Beast unleashed! Need a Titan RTX badly!!
Cycles (in Blender) also has seamless CPU-to-GPU switching.
The guy that built Cycles moved to the Arnold team to make the switch from CPU to GPU.
@@Myleh That's actually really interesting.
@@Myleh Yeah, from 2013 to early 2017 if I'm not mistaken, and then he joined the Blender dev team full time from mid-2017 for the Code Quest.
Right, forgot about Cycles, which is fantastic BTW.
@@Myleh He made Cycles after he left Arnold. He was one of the lead devs of Arnold and made Cycles based on the things he didn't like about Arnold and other render engines.
22:38 Now that's some synchronicity.
@Shawn: Please tell us in what instances you'd use Corona over Arnold. Thanks.
Right now I've mostly been using Arnold because I require a fisheye lens, which is not yet in Corona but will be soon! But when I need something to look amazing with minimal effort, my first pick is Corona. Unfortunately, though, there are many features missing in Corona that I wish had made the final release, like X-Particles support for one. But for much of my work Corona rocks and is faster than Arnold in many situations. It really depends on your needs.
Hi, does the EMC (Everyday Material Collection) work with the Arnold GPU renderer?
Can you test the speed with NVLink on a scene over 11 GB of VRAM? Redshift doesn't support NVLink so far; V-Ray gets slower but works...
Can it work on my old GTX 660, just like Redshift and Octane?
They have a list of supported cards on their site. Thanks for watching
Anyone know if Arnold 5.3 will allow networked GPUs to be utilized for rendering in IPR mode?
What happened to the giveaway!???
Winners were announced in the last video.
Have you guys tried FStorm? It's very fast.
Yeah, it's very fast; unfortunately there's only a Max version...
Wish they had a C4D plugin!
Don't forget RS and Octane are not yet RTX-accelerated. Once they are, the speed differences are going to grow.
I wonder about total time; RT is only one influence. I'm also very curious about Nvidia's statements about a driver update "next month" to "enable ray tracing on certain (recent, higher-end) GTX cards". There's very little info beyond saying that not all of the ray-tracing features will be available on GTX, from what I read. Octane/Otoy might not even bother writing their own Octane driver to address this. One wonders whether owning a bunch of 1080 Tis and 2080s will be much faster before Vulkan is applied. Wondering, wondering, wondering... but I'm now tired of buying each subsequent Ti, especially at the 2080 Ti's cost!
Does it work on AMD cards?
No, Nvidia only
This is what I know!
The basic difference between SLI and NVLink: in SLI, one graphics card is the master and all the others are slaves, so the master transfers the workload at a transfer speed of only about 2 GB/s. With NVLink, all the cards are independent and the transfer rates are much higher, since there is no master/slave concept. In simple terms, yes, NVLink can use all the available card memory as one pool, because the software treats all the cards as a single GPU (a small topology-query sketch is below this comment).
Hope this helps.
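A small, hedged sketch of the peer topology described above: with NVLink there is no master/slave, so every GPU pair can be queried symmetrically. This only probes the CUDA runtime for peer-to-peer support and link performance rank; nothing here is SLI- or engine-specific.

```cpp
// Sketch: enumerate GPUs and query peer-to-peer support between every pair.
// With NVLink the result is symmetric (no master/slave); the performance
// rank hints at link quality. Purely a driver query.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    for (int src = 0; src < deviceCount; ++src) {
        for (int dst = 0; dst < deviceCount; ++dst) {
            if (src == dst) continue;
            int supported = 0, rank = 0;
            cudaDeviceGetP2PAttribute(&supported, cudaDevP2PAttrAccessSupported, src, dst);
            cudaDeviceGetP2PAttribute(&rank, cudaDevP2PAttrPerformanceRank, src, dst);
            std::printf("GPU %d -> GPU %d: P2P %s (perf rank %d)\n",
                        src, dst, supported ? "yes" : "no", rank);
        }
    }
    return 0;
}
```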
I'd be interested to know what made you say that "a CPU farm is so much more cost-effective than a GPU farm"... How were you thinking of measuring that? Dollars per unit of physical space? The only reasonable way of measuring that for a company is total cost of ownership of the hardware across its entire lifespan, which is mostly power bills. You'd be VERY hard pressed to demonstrate that a CPU farm would be more cost-effective.
This information was based on talking to a friend who owns a data center.
@@cgpov Well, that's a very different scenario. If you own a general-purpose data center and rent its usage for anything any customer may want, that's one thing. But if you're a VFX studio looking at your yearly expenses, and you're able to shift any massively parallelizable workload from many CPU cores to far fewer GPUs, your yearly expenses should drop dramatically over time. It's a big initial investment, for sure, but it ends up easily paying for itself in the long run. It may still be early days, but if in 5 or 10 years from now there hasn't been a massive shift of rendering and simulation workloads to GPUs on render farms in VFX, I'd be very surprised.
@@AinurEru Another friend who owns a render farm would agree with me: "A CPU farm is more cost effective. A GPU farm requires a CPU farm PLUS a bunch of power-hogging, buggy, expensive, hotbox GPUs."
@@cgpov I don't know what kind of experience your friends have had, but it sounds like they had a bit of bad luck and/or insufficient cooling.
I didn't say a GPU is cheap. I said that IF you have parallelizable workloads that can run on either GPUs or CPUs, and can run on GPUs more efficiently, then across the entire lifespan of a GPU farm you'll spend less money on power than you would for a CPU farm sized to produce equivalent throughput across its entire lifespan.
It's simple, really: if one GPU can do the same amount of work that 10 CPUs would in the same amount of time, while drawing a fraction of the combined power those 10 CPUs would, then over time powering that GPU farm costs less, not more (a rough sketch of the power math is below this comment). You also have fewer machines and fewer components to worry about, compared to, say, 10 times as many servers, so maintenance is also reduced, not increased.
I know that my personal experience may not be very representative, as I work at Weta Digital, but our data center has tons of CPUs as well as many GPUs, and we use both heavily. I wouldn't say that our GPU farm is more buggy or anything like that. Our department is one of the heaviest users of the GPU farm, and we monitor our usage daily. It's under heavy load almost 24/7, has hundreds of GPUs in it, and is rock-stable.
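To make the power argument concrete, here's a rough sketch of the math. Every number (wattage, throughput ratio, electricity price, lifespan) is a made-up assumption for illustration, not a measured figure from any farm:

```cpp
// Back-of-the-envelope power-cost comparison over a hardware lifespan.
// Every constant below is an illustrative assumption, not a measured figure.
#include <cstdio>

int main() {
    const double pricePerKwh = 0.15;    // assumed $/kWh
    const double years = 5.0;           // assumed hardware lifespan
    const double hours = years * 365.0 * 24.0;

    // Assumption: one GPU node delivers the throughput of ~10 CPU render nodes.
    const double gpuNodeWatts = 600.0;  // e.g. a host with 2 GPUs
    const double cpuNodeWatts = 350.0;  // a dual-socket CPU render node
    const double cpuNodesNeeded = 10.0;

    const double gpuCost = gpuNodeWatts / 1000.0 * hours * pricePerKwh;
    const double cpuCost = cpuNodeWatts / 1000.0 * hours * pricePerKwh * cpuNodesNeeded;

    std::printf("GPU node power over %.0f years:             $%.0f\n", years, gpuCost);
    std::printf("Equivalent CPU nodes power over %.0f years: $%.0f\n", years, cpuCost);
    return 0;
}
```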
@@AinurEru I think what we have here is a difference of opinion from two very different (but valid) points of view. My buddy runs a medium-sized render service that offers nearly every DCC on the market. I'm sure this is VERY different from Weta's farm. Sounds like both points of view have good reasons for thinking one is more cost-effective. Thank you for sharing your findings! I'll share this with my friend. I love Weta, btw; keep up the great work!
Finally
Will it later come to AMD, or is it Nvidia-only?
Unfortunately, Arnold GPU will only ever work on Nvidia cards because it uses Nvidia's OptiX code.
Arnold GPU is significantly slower than Octane, but it can handle more polygons than Octane.
beta testing "for eeeevverr" ;))))) hahahahahaa!! awesome !
Blender gets daily updates and Autodesk is like 100 times bigger. I don't know what the problem is.
You've answered your own question :)
What makes you think that because a company is larger, it should be able to release more often/quicker? If anything, it'd be less often... Politics, bureaucracy, corporate culture, etc. always make everything go slower. It easily nullifies any advantage that can be gained by having more people working.
@@AinurEru Because Autodesk products seem to be on life support.
I LIKE CORONA RENDER
Me too!!! I hope these guys do some testing with Corona. Feel free to visit my Corona work here: th-cam.com/video/ikPJgQIaNe8/w-d-xo.html
Corona is coming!
@@shawnastrom Where? It's already there.
@@rampally07 Well, I'm planning on making many more Corona tutorials soon!
@@shawnastrom Oh good, I have watched a couple of your YouTube videos; they are good.
Cycles in Blender does that ;)
And in C4D.
The main problem with Arnold is the price! Too expensive for me. I'm a happy Corona user.
Corona is amazing!