+wiser3754 Yeah, preemption is the only "dynamic task scheduling" they support; according to a recent slide that's one third of the async scheduling methods, and the least beneficial, especially in full GPU load scenarios (i.e. most AAA games most of the time). There is absolutely no concurrent support; they can apparently use preemption to schedule parallel compute, but it requires explicit task switching regardless, which actually decreases performance in full-load scenarios. If you've seen the AotS tests 8 months after async was introduced, Nvidia was still attempting to push its preemption, and it still had a negative performance delta of like 2-5% with async enabled. It would not surprise me if their "driver fix" for existing cards, when they just can't improve it, is to get the drivers to skip async: bam, "driver fix", no gain, but also no loss. Knowing the level of driver-side optimization, they could even wait for the async option to be enabled to trigger some completely unrelated driver optimizations (i.e. shader replacement, which is extremely common) so there's a positive performance delta improvement, smoke and mirrors style...
Also I should add that Doom is a hard game to further optimize in drivers: it has multiple render paths and loads of vendor-specific optimizations in the engine (holdovers from Carmack's optimizations, as idTech 6 is an iteration of idTech 5, an engine that fully utilizes all of Nvidia's exclusive OpenGL instruction set, with ARB assembly shaders etc.). Ironically, in OpenGL the Nvidia advantage is to a measurable extent hardware based, much like the Vulkan advantage is for AMD; it's less about drivers and more about hardware-level instruction set optimizations that AMD simply lacks. (I think a lot of these OpenGL features were primarily developed to give Nvidia its real-time performance edge in its Quadro cards vs FirePro, because let's be honest, where has OpenGL been in mainstream gaming in the last 14 years? Linux users :'( lol.) Blaming AMD drivers exclusively, as some people have been doing, is ignorant of the entire OpenGL history, especially in highly optimized id Software games. It's more a case of an optimal game than optimal drivers (driver-side optimization will always have an overhead compared to game-level optimization anyway; when there is a vendor bias in a game it's hard to reverse in drivers, case in point Hitman, and conversely basically every Nvidia GameWorks game).
Jnan Mckenzie Yeah... Trying to pinpoint it myself... Could be interesting... Then what? What is the difference between reality and virtual reality, especially as we develop more interactive forms of engaging the virtual construct, i.e. virtual headsets, gaming gloves, gaming vests?... Idk... Things are changing far more than I think people realize, and technology is showing us how quickly things can change... Food for thought: what's reality, once we can completely create one that we believe in just as much as we believe in this one?...😐😯😔🙏
Getting over 100fps with a Fury X and my FX 8-core at 4.4GHz; pretty nice given what I paid for it. The Fury X can be found for $400 on sale atm. That's with Vulkan, running 1080p high settings.
BustyChicks FTW It runs it at probably 120fps average, 100fps is for when the big stuff starts happening. But ya it is a CPU bottleneck of some kind anyways. It's even worse for Dota 2.
It's definitely your CPU or something in your system, cuz I'm getting the same 120fps average at 2K with an i5 6600K at 4.6. I would say it's a problem if you are using a 1080p 144Hz monitor or something, thus you don't need to waste your money.
I don't do a whole lot of pc gaming, so how do people normally deal with screen tearing without adding the input lag from v-sync? Without buying a G-sync monitor that is
Vulkan allowed me to overclock my 780 Ti and reach a stable 60fps and above, while on OpenGL the overclock didn't help. Vulkan is amazing! Game changer for sure.
I have a fairly budget AMD build (FX-6300, R9 270X OC 4GB and 8GB RAM) and I run the game with Vulkan at High between 60 and 80 FPS. Before Vulkan I was at Medium around 50 FPS. I highly recommend you check out the older AMD GPUs. It's quite amazing. I can get it going at Ultra at 50 to 60 FPS but it often freezes. Vulkan is the future.
Performance for me with 2x GTX 980s in SLI in Vulkan was WORSE by about 10 FPS or MORE! I'm not sure if it's Nvidia's drivers or the hardware that isn't optimized for Vulkan or asynchronous compute? It looks like AMD is designed better for this right now. I wish the game supported SLI in some form so that I would at least get some kind of fps boost with my hardware. I'm actually getting tired of shelling out all this money for fancy Nvidia cards and seeing almost no performance boost over an equivalent AMD card, especially now that Vulkan drivers are finally being used. I'm wondering, though, if OpenGL will eventually update to 5.0 to allow better performance on Nvidia cards?
OpenGL is dead; there will be no more development of the API. Those people are now making Vulkan, so expect games to gradually transition completely to Vulkan/DX12 and future versions of the two APIs.
What are the GTX 1070 numbers with OpenGL? Because if the GTX 1070 is doing better in OpenGL than Vulkan, then I wouldn't care much at all about the 1070's numbers here.
Where is R9 380 vs GTX 960, or R9 390 vs GTX 970, or R7 370 vs GTX 770 vs GTX 950 vs GTX 750 Ti? Can you please do some benchmarks that actually matter... where is Kepler now? I really want to know how many people bought a GTX 1070, GTX 1080 or R9 Fury X vs a GTX 970/R9 390/290.
The gains are also down to "shader intrinsic functions", not just async compute. See radeon.com/doom-vulkan. It is part of the GPUOpen initiative and provides a way for game developers to directly access graphics hardware instructions in situations where those instructions would normally be abstracted by an API. This approach has been used successfully on gaming consoles to extract more performance from the GPU.
So does the 1070. The 980 Ti is the original Fury X competitor; the 1070 is from a completely different generation, and AMD's counterpart to it is not even out yet. P.S. Doom runs better on a 980 Ti (compared to a 1070), Vulkan or not.
The API on consoles is already somewhat similar to Vulkan. It also gives games full control over the GPU's hardware. Also, Vulkan itself, along with DX12, has been designed to work similarly to the console APIs, to make porting games easier. All that being said, I don't think Vulkan is going to come to consoles, unless that's what the new generation of consoles is going to use.
If I remember correctly, Vulkan is also available on consoles, or pretty much on everything but Apple products. That's why I was curious whether they will create a Vulkan API for consoles.
Agus Darmawan they're basically the same price here in USA right now at some storefronts, if you can find the cheap 1070s in stock that is. Might just be a temporary sale but newegg has them for $419, the fury x that is. And yeah, I did forget about the pro duo, but who would even buy one anyway when 2 fury xs would be so much cheaper?
So THAT's why I felt like the game was stuttering in Vulkan. I typically use adaptive V-Sync, and the sudden transition to full V-Sync when comparing with OpenGL was absolutely jarring. I guess I'll keep using OpenGL until the issue is resolved :/
It'd be interesting to see budget PC (750 Ti/R7 360) vs. PS4 frame-rate test again, using Vulkan. I know both 750 Ti and 360 are weaker GPUs than PS4, but still.
Can this be true?? Nvidiamark "async compute". So yeah... 3DMark does not use the same type of asynchronous compute found in all of the recent game titles. Instead, 3DMark appears to be specifically tailored to show Nvidia GPUs in the best light possible. It makes use of context switches (good, because Pascal has that improved pre-emption) as well as the dynamic load balancing on Maxwell, through the use of concurrent rather than parallel asynchronous compute tasks. If parallelism were used then we would see Maxwell taking a performance hit under Time Spy, as admitted by Nvidia in their GTX 1080 white paper and as we have seen from AotS.
Don't understand why we are seeing big gains for the 1070/1080 at 1080p with Vulkan (not as much as the R9 300 series or Fury/Fury X), but at 4K there are no gains whatsoever. Anyone know what is happening there? Is a fix coming with a driver update, or does the problem go deeper?
It has everything to do with GPU utilisation (not GPU load). If the GPU is utilised 100% and isn't bottlenecked by anything else, it's running at its full potential... Of course driver optimizations can increase performance, but only when there is still something left on the table. The developers worked with AMD and Nvidia for a long time before releasing the Vulkan patch, so I don't think there are many gains still possible (but that's my assumption).
voodoochild346 I'm not bashing the Fury X, it is a great card. The thing is, 4K 60fps means 4K 60fps, DF messed up a little with the title of the video.
If the company put the effort in to properly optimize the game, then yes. Take Dota 2 as an example: it's on par with Windows, on both AMD and Nvidia. The Talos Principle is also similar in performance under both Vulkan and OpenGL between Windows and Linux, but its Vulkan renderer is currently only a wrapper for DX11. That means it uses DX11 internally (yes, even on Linux) and then translates its calls to Vulkan. If they had fully implemented it instead of using a DX11 wrapper, we would probably see performance matching or exceeding that of DX11 on Windows.
I play Doom on ultra on my Fury X/FX-8350 system and get 130+ FPS at 1080p, on my 144Hz 1ms response monitor. It's smooth as butter on a baby's ass. Love my damn Fury X. When I upgrade to Ryzen it's nice to know I'll be able to squeeze even more performance out of the Fury.
So where is the "Async Compute" driver? that nVidia said they are going to release 4 months ago to fix the pathetic performance of my 970SSC in Ashes and some other dx12 games.
I wish you would touch on the aspect of Nvidia using closed-source GameWorks to cripple AMD graphics and prevent them from releasing optimized drivers for GameWorks games, vs open-source Vulkan.
Surprised to see the Fury X outpace the 1070 here, especially given Nvidia's input on Vulkan; heck, the first showing of the 1080 was running Doom with Vulkan! Strange too that with DX12 it's the complete opposite: the 1070 stomps the Fury X in the Time Spy benchmark by around 800-1000 points. That's the same as the difference between the RX 480 and the Fury X!
Why surprised? Both Nvidia and AMD cards are performing as they should. Fury X has more computational power aka TFLOPS than 1070 and closer to 1080 and it shows here.
Md Ray I was surprised only based on the DX12 results, which I'd seen before I saw this test. DX12 and Vulkan are fairly similar. Having seen the Fury X lose out in the only neutral (non-AMD-optimised) DX12 benchmark, I'd expected similar results with Vulkan. I was wrong.
pSyk I get that benchmarks aren't representative of games in general. But trends in game performance do generally fall in line with benchmark performance.
They ran below 60fps intentionally since their FCAT software didn't support Vulkan. That means they couldn't capture it with FCAT; they had to use a console capture card, then analyze the footage. Since the console capture card only captures up to 60fps, they had to bring the game below 60fps to actually benchmark it against other cards.
+Dez fuk-u are you legitimately fucking asking what's wrong with playing an fps on a fucking pc with a controller when you have a perfectly fine mouse and keyboard in front of you
Why are you minimizing AMD's advantage so much? You almost present their API advantage as a disadvantage, lol.
When it's about Nvidia's drivers, you're like "OMG THE OPTIMIZATIONS ARE INCREDIBLE" but when you're talking about AMD's performance, you're saying that "Well you can see that AMD's performance is very poor on OpenGL".
Why didn't you say that Nvidia has disastrous performance on every new API, and that it's laughable that the Fury X beats a 1070 by 30% and matches the 1080? A 1-year-plus-old card vs the new generation?
You let us down with every video. Try to be objective. We know Nvidia pads your pockets, but try to be more subtle about it.
Because an analysis of the results shows that AMD on the new API is doing what Nvidia is doing on the new API; the HUGE GAINS come from AMD's OpenGL performance absolutely choking when it comes to CPU time. *Go back and check their original Vulkan benchmarks: CPU time on AMD and Nvidia in Vulkan is 6ms, Nvidia on OpenGL is 6-7ms, AMD on OpenGL is like 10-11ms.*
Let's use some numbers to illustrate the point.
Say with OpenGL your CPU time is 10ms and your GPU time is 6ms: your total render time is 16ms, which means you can render 62.5 frames per second.
Say with Vulkan your CPU time and GPU time are both 6ms: your total render time is then 12ms, which means you can render 83.33 frames per second.
83.33 / 62.5 = 1.3333; in other words, shaving just 4ms off your CPU time is a 33.33% increase in your framerate.
That is exactly what is happening to AMD moving from OpenGL to Vulkan.
Nvidia's CPU time has been 6ms in both OpenGL and Vulkan, i.e. the time didn't improve because it was already good = no gain.
Given the 4 combinations of OpenGL vs Vulkan and AMD vs Nvidia, AMD's OpenGL CPU time is the clear odd man out, i.e. AMD's OpenGL performance was really bad.
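The arithmetic above assumes CPU and GPU work happen back to back, so frame time is simply their sum. A minimal C++ sketch of that serial model, reproducing the numbers quoted (the 10ms/6ms figures are the commenter's illustrative values, not measurements):

```cpp
#include <cstdio>

// FPS under the simplified assumption that CPU and GPU work run
// back-to-back, so total frame time = CPU time + GPU time.
double fps(double cpu_ms, double gpu_ms) {
    return 1000.0 / (cpu_ms + gpu_ms);
}

int main() {
    double opengl = fps(10.0, 6.0);  // AMD on OpenGL: 10ms CPU + 6ms GPU
    double vulkan = fps(6.0, 6.0);   // AMD on Vulkan: 6ms CPU + 6ms GPU
    std::printf("OpenGL: %.2f fps, Vulkan: %.2f fps, gain: %.1f%%\n",
                opengl, vulkan, (vulkan / opengl - 1.0) * 100.0);
    // Prints: OpenGL: 62.50 fps, Vulkan: 83.33 fps, gain: 33.3%
}
```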
If AMD's large performance gains are due to them being hindered by OpenGL drivers, then the move to Vulkan should have just evened the odds and brought cards like the Fury X and the 1070 to similar performance levels. The only thing that explains the difference in performance is the use of async compute shaders, but it still seems like too much of a performance gain even considering the async shaders.
didn't you hear how many times wichard kept pwaising AMD and saying how wemakable AMD's pef-womance is with Vulkan and async?
With respect, anybody talking about how "bad" AMD's OpenGL drivers are is tech-illiterate. It's not a case of "bad drivers", it's a case of the API simply not being able to utilise AMD's hardware setup properly. There's no driver solution possible that's going to enable asynchronous compute and the usage of AMD ACEs via OpenGL. It simply can't be done. AMD have bet the farm on newer bare metal APIs since GCN's inception, designing it entirely around performance in (what were at the time theoretical) APIs such as Mantle, Vulkan and DX12. Now, you can criticise that choice of hardware design if you wish, as it's certainly hurt their performance in older APIs like OpenGL and DX11, but there's no valid criticism to be made of driver support. The hardware simply isn't designed for OpenGL performance, and no software can ever fix that.
***** Alright, I did misstep on that point, but it is still the case that AMD's OpenGL performance is the clear odd man out. iancu vlad was going on a paranoid delusion, accusing Digital Foundry of taking money to puff up Nvidia, but the truth is clear: AMD's OpenGL performance just wasn't there.
Tech-illiterate seems quite harsh; this shit is crazy complicated. I spend time reading architecture breakdowns in GPU reviews and SemiEngineering articles that are only a slightly dumbed-down crash course in CPU and GPU architectures, even a few actual white papers on GPU architecture. There is a mountain to digest, and it's easy to still end up in a minefield of sounding like a complete idiot from misunderstanding.
"We downclocked our Fury X to 800MHz on the core and overclocked our GTX 1070 to 1.9GHZ yet we still cant make our 1070 win on Vulcan, so we ripped off the watercooler on the Fury X then LN2 OC'ed our 1070 to 2.2GHz... The 1070 edged out the Fury X by 2FPS, so nVidia is really the clear winner here, nVidia has really nailed the drivers and Vulcan performance"
*-Digital Foundry 2016*
Made my day.
Haha 😂
You make yourself, and many of those who love AMD hardware, look bad. Congrats.
+someone Can't be worse than novideo fangirls.
+someone I'm obviously kidding; don't be too much of an idiot, you'll just get yourself embarrassed. I literally just copied and pasted this comment from the comments section of Hardware Unboxed's video because I thought it was funny.
id Software = God tier developer
+Brent Posy fun game
***** I enjoyed it but understand many didn't and id Software have proven their pedigree with the idTech 6 engine and Vulkan implementation.
+Lirik H Fun doesn't mean it's good. FO4 is fun; is it good? NO.
It is surprising that even after all these years they are still the best in the business at optimizing PC games.
The new people from Crytek and the departure of Crazy Carmack made the new engine leagues better than id5.
Whenever I see 60 FPS in a game, my clothes fly off.
And I've seen it. Seen it all.
www.mycitybynight.co.za/wp-content/uploads/2013/03/nerd-on-PC1.jpg ?
So what happens when you see fps over 120
Roman Dougherty
i know the feeling. im running that currently
Once you go higher than 60hz, you never want to go back.
Project Scorpio will have stable 30 FPS
Oh i didn't know you work on it? :O
This joke is damn old, man, let it die!!!
yawn...
stable 30 at 4k yep. Amazing isn't it? And all these people who doubted LOL
this is pc mah boi
Would using Virtual Super Resolution have given you a similar result? Setting it to 4K on a 1080p screen?
It would use the GPU to the same extent; you would get identical (or at least very close to identical) framerates, but downscaling the resolution is essentially the best possible form of anti-aliasing.
But I've tested Doom, downscaling 1440p to 1080p on max settings, and _holy shit it looks fucking stunning_. I've got a 7970 (flashed to a 280X) and I can actually play the game at 1440p maxed out at ~30 FPS. At 1080p I get a constant >60fps maxed out. And all that on a five-year-old mid-range card.
Nice! I'm still on a GTX 660, but eyeing up the 480, or more likely the 470 once prices settle down a bit :)
Nvidia still doesn't know what Async is?
Well, he was busy getting a tattoo of his dead company.
+kakasedfg Yep.
www.pcper.com/files/imagecache/article_max_width/news/2012-02-23/nvidia_ceo_tattoo.jpg
Wolfang102 Hahahahahahha i'm dying!!!
Wolfang102 He will be spending his Founders Edition money on laser tattoo removal.
at the present they don't really need it.
Kepler performance in Vulkan is ridiculously bad
Now I'm curious. How bad?
Quico Gil My 780 performs at less than 40 fps with everything set to ultra; when I set it to high, guess what? The fps doesn't change a bit. Typical Nvidia marketing.
Mitkin Kerman
That's weird, my old 780 Ti had 70-100 most of the time on Ultra when the game came out.
Quico Gil When the game came out it didn't have Vulkan; it only supported DX11, which is running fine on my 780.
+Quico Gil It was running OpenGL "when the game came out", not Vulkan. That is the point of comparing OpenGL with Vulkan.
Future looks bright for Vega
We destroyed him though ;)
GDI skum!
hype for Vega!
There is a good chance that the Pascal Titan will equal or outperform Vega on sheer power alone. Nvidia has faced horrible yields on their GP100 die, but it is absolutely massive. We will see a fight at the ultra high end. Vega will likely still be ahead of the 1080 Ti in price to performance, though.
+Brandon Casey DirectX 12, Vulkan = Vega; OpenGL, DirectX 11 = Pascal
Played this yesterday on ultra at 1080p (R9 280X, i5 @ 3.30GHz) with the Vulkan API enabled, and the game ran so much better.
Same. Got a 7970 flashed to 280x, and I can now play through the entire game on Ultra. I was barely hitting 60FPS on medium on OpenGL.
Damn... I played the demo but didn't want to buy the game. I too have a 280X and the performance was barely hitting 60fps; now I want to buy the game. Too bad this update came after the Steam sale.
It is worth a purchase, but you have to enable Vulkan, then restart the game.
Just searched a Brazilian site and found the game was still on discount; on Steam the game is ~$70, I got it for ~$33 :D Now comes the long download.
Another reason to make the switch to GCN. I currently have 970s in SLI and have had them since they launched. I have been and still am happy with my performance. But it's apparent that GCN is made for Vulkan/DX12 low-level API performance gains, and with true hardware async compute AMD has a major lead in performance. So I am skipping Pascal this discrete GPU gen and going back to AMD when Vega hits the market.
Smart choice; I'm actually thinking about doing the same exact thing and just building a whole new system for 4K gaming (hopefully 120Hz monitors are released by then).
someone Make sure it's FreeSync at 4K, as games keep getting more and more demanding.
John Mellinger True, I will make sure to get a sweet FreeSync monitor, and the good thing is it doesn't add to the price like G-Sync.
The future is bright!
I'd do the same except the games I play are old and I don't touch modern triple-A titles very often. So Nvidia would be very relevant for me still.
Wichard is quite wemarkable.
*wemarkable
We love Rich really
+Scon Videos OoOoOoOoOoOoOoOoO...I'm gonna edit that :p
And yes, we love wichard ^_^
We weally love wichard fwom digital foundwy
wichard OG is my man
It's wabbit season, and I'm hunting wabbits, so be vewy, vewy quiet!
Cool! John and Richard, the best of DF on the same video!
Wichard*
Go test it with your console-level PC running with the i3 and R7 360! :)
That would be really interesting to see..
or better, the RX 460 (costs $99)
+someone Rx 460 is not out yet.
NiDz3 i know but it will be out soon
I tried the demo a while back with an i3-3240 3.2GHz, 8GB RAM and an R9 270 2GB OC, and it ran terribly. I couldn't get a stable 60fps no matter what resolution or settings I used. Reminded me of Wolfenstein: The New Order, which had the same problem on low-end PCs.
Doom is so optimized and smooth, it's way ahead of anything on the market.
I am very disappointed with Nvidia. Their 900 series didn't even get a boost, not even talking about the 600 or 700 series cards. On AMD's side even the 7000 series got a boost. I wanted to buy an Nvidia card, but now I am thinking of grabbing an R9 380 4GB.
Nvidia GPUs can't get a boost since they don't support async compute.
Dude, even the 10 series failed at that task; the 1080 and 1070 got just a ~10% boost. Considering that DirectX 12 and Vulkan are the future, going with AMD makes more sense.
Yep, most Nvidia cards (except the 1080) are limited by their poor feature support and slow VRAM.
You should go for an RX 470 8GB, which will come out in 1 or 2 weeks.
+pSyk As far as I know the RX 470 will feature 4GB? That's enough for me tbh.
I had 60 fps at 4K on an overclocked Titan X on day one of the game's release. Disappointed with Nvidia? LOL!
I have a new level of respect for DF for the amount of work, quality of analysis, and very good commentary that went into this very quiet, not-talked-about issue that we Nvidia/AMD gamers really wanted to know about. Keep up the great work, DF!
I hear Wichard, i thumb up
Quite a bit of screen tearing going on there. Is that about V-Sync? Sorry, am a noob.
AMD's cards will no longer be held back by bad DX11 and OpenGL drivers. Their hardware is super fast but was unfortunately held back.
Held back by amd's shitty opengl drivers.
It was AMD's own shitty drivers, not really DX11's and OpenGL's fault.
mcpooface Exactly what I said.
Marcus Antonius You made it sound to me like it was DX11's and OpenGL's fault, not AMD's.
mcpooface Maybe I wasn't clear enough. I said DX11/OpenGL *drivers* though, and it's AMD which makes drivers so I thought it was implied.
But yes, it was AMD's lack of effort on the software side which was holding back their hardware.
Yeah but you guys are running it in 4k. I have 1080p and no desire to waste power or money trying to obtain a higher resolution for a long time. If I want higher fidelity I'll up AA for now.
Older Nvidia cards WILL NOT gain fps in Vulkan, unlike old AMD GPUs. Why? Because Nvidia killed driver support for the GTX 700 series and older a while ago (you know, the Kepler gimp?). Anyway, good guy AMD is still supporting people using cards like the HD 7970; I hear those guys are seeing some ridiculous improvements.
It's not that Nvidia killed the support; it's that older Nvidia cards have a hardware limitation which causes the lack of gains in DX12.
Perseides
No, Kepler and older have fallen behind in the DX11 API too.
The same happens with AMD: my 5870 has enough horsepower to play Fallout 4 (on modest settings), but the drivers put glitches all over in certain lighting conditions.
It's just a matter of limited resources; they can't be expected to keep doing updates for old gear forever.
However, Nvidia disabling SLI on the 1060 is pure greed.
Have to keep in mind that the HD 6000 series and older from AMD/ATI aren't using GCN, whereas the HD 7000 series does (granted, an older version), and driver-wise it isn't much of a jump to back-port to the older GCN cards since the architecture is similar. GCN was also optimized from the ground up for async compute, but AMD kinda hoped it was going to take off a lot sooner.
Meaning it's only now that we see these cards being used how they were meant to be. Only now do we get to see the 8,600 GFLOPS of the $650 Fury X vs the $1000 Titan X with 6,100, or the 4,000 GFLOP 7970 vs the 3,000 GFLOP GTX 680, actually show up in use. It's like if, 10 years ago, Nvidia had been making super strong single-core CPUs while AMD was making *technically* stronger quad-core CPUs: yeah, the weaker per-core performance would make games of that time run like garbage, but later down the line quad cores became a minimum for serious gaming.
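Those GFLOPS figures line up with the usual rule of thumb for peak FP32 throughput: shader count × 2 (an FMA counts as two ops) × clock speed. A quick sketch using commonly published specs; the clocks are approximate reference values, so treat the outputs as ballpark:

```cpp
#include <cstdio>

// Peak FP32 throughput in GFLOPS: shaders * 2 ops (fused multiply-add) * clock in GHz.
double gflops(int shaders, double clock_ghz) {
    return shaders * 2.0 * clock_ghz;
}

int main() {
    // Published shader counts and (approximate) clocks for the cards mentioned.
    std::printf("Fury X:  %.0f GFLOPS\n", gflops(4096, 1.050)); // ~8602
    std::printf("Titan X: %.0f GFLOPS\n", gflops(3072, 1.000)); // ~6144 (Maxwell)
    std::printf("HD 7970: %.0f GFLOPS\n", gflops(2048, 0.925)); // ~3789
    std::printf("GTX 680: %.0f GFLOPS\n", gflops(1536, 1.006)); // ~3090
}
```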
That's right. I have an AMD 7970 GHz Edition and I keep seeing great performance improvements in my games, unlike the competing Nvidia generation.
Testing the later levels (the last 2) would be interesting, as those levels seem to stress the GPU a lot more.
ROFL! Listen to NVIDIA FOUNDRY! XD
The RX 480 under Vulkan in Doom is just 15% slower than a GTX 1070, but the RX 480 costs LESS THAN HALF the price! You can get TWO RX 480s for £350, which is STILL £50 less than a single GTX 1070. And yet in this video, which SHOULD be ALL ABOUT how AMD is now DESTROYING NVIDIA in the next-gen APIs, all I can hear is "AMD has amazingly SHIT OpenGL drivers! Nvidia's OpenGL drivers are so great compared to AMD's!" HAHHAHAHAHA
Meantime, the RX 480, R9 390, even the old R9 285 are showing 40-50% FREE EXTRA PERFORMANCE just by using a new API, while ALL Nvidia cards, including the supposedly (lol) VULKAN/DX12-capable "Pascal" cards, show a pathetic 5% increase at BEST.
Fucking Nvidia Foundry strikes again! You can't hide the truth, no matter how much money you take from Nvidia!
In 4-6 months' time, when ALL games are Vulkan/DX12, what are you gonna say then? Will you still be benchmarking DX11? HAHAHAH
*****
Dude, it's Q3 2016! By Q1 2017 most games will be DX12/Vulkan. And once all 3 new consoles hit, with their Polaris/Vega-based APUs, then ALL games will be Vulkan/DX12. That is in 6-9 months' time!
The RX 480 is going to get SUPERB as the weeks pass by. With performance boosts of 50% or higher (for FREE), putting it up there with the 980 Ti/1070 (£400+ cards), it's just insane! And GPU scaling from the second card in CrossFire is 90% on average? UNREAL! XD
The best card released since the GeForce4 Ti 4200, without a doubt!
TheVanillatech I upgrade my GPU every 2 years; my 1080 will crush everything AMD has or will have this architecture, just because it's a beast. DX12 or Vulkan doesn't matter; even Nvidia gets a boost in performance, it's just that AMD was soooo shit it's way more noticeable. How long do you keep your GPU, a lifetime? When DX12 and Vulkan become the norm in 2-3 years, what good is an old, 'optimised for them' 480? lol. Y'all AMD fanboys got some voice; y'all a little happy now, you can't believe your eyes how your piece of turd got this boost in performance. Next architecture and you're gonna cry again; meanwhile I can play the 99.9% of games that are DX11 at a huge advantage over your AMD meatball, and it's not like this AMD architecture, even with Vulkan, will beat Nvidia. Look at this video: it barely hits 60fps, while my 1080 is over that 99% of the time. Vulkan or not, raw muscle.
TheVanillatech btw how is that 970 performance from 2014 on that new 480 lmao.
TheVanillatech And where do you get your numbers? A 480 is $240, one 1070 is $400; two 480s are about $500. Even though CrossFire is shit and doesn't work in 1 in 2 games, lmao. And even then it won't beat a 1070: Vulkan doesn't support CrossFire, and in DX11 it's basically the same performance, but sadly only in 1 in 2 games for the 480.
The move to DX12/Vulkan will be much faster than you think; it won't be like the move from DX9 to DX11. Nvidia gave themselves a big handicap with the 10 series (async compute). So I hope AMD stays competitive and makes sure Nvidia needs to drop prices. This is how I want the hardware market to work.
What expensive capture card? Is it a non-commercial product not available to the public?
Something that can capture raw uncompressed 4K/60fps.
Like what? I WANT ONE!
Most likely this one www.ebay.com/itm/Datapath-VisionSC-DP2-New-in-box-3-year-warranty-we-are-the-US-Datapath-dist-/261867472780
Or some other Datapath card.
The cheapest 4K@60 one would probably be Avermedia : www.av.company/en/video-capture/6800032-4k-hdmi-20-hybrid-pcie-capture-card-ce511-hn.html
Just got an R9 Fury Tri-X for $240 brand new, and will be getting a Ryzen chip on the 28th... I don't think I could be happier! All this kinda reminds me of when Uncharted 2 came out on PS3.
Jake Lofgren You lucky man, I was about to upgrade my rig when the GPU market hit full overheat.
I have a very similar configuration: i7-6700K overclocked to 4.66GHz, with Corsair Vengeance LPX 2666MHz on an Asus Sabertooth Z170 Mark 1. I have an XFX Fury X overclocked to 1150MHz without issues. My question is, at which frequency was your Fury X running in the test? Thanks!
Not so sure why you said "the i7 is light years ahead of the FX-8350"... I mean, I own the FX-8350 and I know that the i7 is better (the new ones also use DDR4 memory, quad channel and all...), but in gaming, if games are well optimised and make more use of the GPU (most games, basically), the gain in performance is quite minimal.
The only games that are a real issue for my FX-8350 are those that rely heavily on the CPU, like Arma 3 or Total War; otherwise there is not that much of a gap at all.
My CPU costs 160€ compared to 335€... Of course, if you are going to do other things like rendering and "play" benchmarks, the i7 destroys the FX-8350... but since I don't "play benchmarks" to look at numbers or do rendering, the FX is good enough for gaming (a CPU from 2012 against one from 2016...).
At the moment, if I look at gaming benchmarks with my GPU (Asus Strix GTX 970) and CPU compared to what everyone is using (i7 4770 or i7 6700), the performance is quite the same. Not a single noticeable bottleneck. Go figure.
Funny thing though... I'll upgrade to Zen once they come out, if they are good enough, since the architecture is going to be the same as or quite similar to Intel's, and it will use DDR4 memory as well. Even if they are not as good as Intel, if the price is on point I'll go for them.
I'm a little bit tired of all the crap that AMD is receiving (mostly from non-AMD users)... but since I use whatever I need regardless of brands and such, I really care about the money and the use I'm going to give that device. Nothing more.
***** New and more expensive i5s and i7s... that's the thing... with more memory bandwidth and so on.
It is quite obvious that performance will be better (it should be; if not, they won't sell)... but the performance gain is not that much. We are talking about 5 to 10 FPS, give or take (perhaps more depending on the game).
I am not having issues with my CPU at the moment in any game aside from those I mentioned before. I know I need to upgrade soon, though, but I'm not in a hurry or seeing massive bottlenecks that would make me desperate or anything close to that.
That's why "light years ahead" doesn't make any sense to me.
Now, if you want to use your PC for work (rendering, Photoshop, editing, etc.), I recommend an i7 any day, that's for sure.
For gaming it will always depend on the budget you have.
For me, today, the most important thing is the GPU, and it will be like that from now on. A decent CPU with enough cores will do the job, since the new APIs are designed for that.
Let's see the new gen CPUs from AMD and then do the real comparisons.
It's because of the recent crop of shittily-optimized games with engines that fail to use more than 2/4 cores _(looks at Fallout 4)_, even when it would improve performance. Those would be the only instances where an FX would lose to an i3. Anyone remember Crysis 3?
cdn.overclock.net/7/7d/900x900px-LL-7d31c35c_proz.jpeg
How about Battlefield 4?
cdn.overclock.net/0/04/500x1000px-LL-04d6fd37_bf4proz.jpeg
Although Zen should make these results much better.
@BustyChicks FTW
No, it's because we really care what our money goes on.
BustyChicks FTW
I switched from an FX 6300 to an i5 2320 because Saints Row (on Windows 10) is not optimized for AMD, so don't worry.
LOL, the 6700 costs twice as much as the best AMD FX! Really, the AMD FX can only be compared to an i3, as anything else from Intel is just much more expensive, and the FX beats the i3 in almost any game today; in the new APIs the FX is comparable to the newest i7 anyway.
Fun times: DX12, async compute, and the new video cards. Good times, I say.
Of course it means a lot of benchmarks and tests, but these improvements are good for all of us.
No Nightmare. Guys, I have two Radeon Pro Duos (a total of 4 AMD Fiji XT GPUs in CrossFire) and I want to let you know that the author of this video (paid by AMD) did not tell you that this is not maxed out: Doom cannot run on AMD at "ABSOLUTE REMARKABLE MAXIMUM VISUALS" because Nightmare mode cannot be enabled due to the 4GB of HBM. It shows an error message: sorry, but you don't have enough VRAM. When the author said it is absolutely maxed out, that was a FALSE STATEMENT. Guys, do not buy the AMD Fury X or you won't be able to run most new games at 5K, and soon at 4K. I have both: two Radeon Pro Duo video cards and 4x Titan X SLI. The Fury X has a big disadvantage: its 4GB of VRAM. It is not going to run Battlefield 1 at 4K at all, not because there isn't enough juice in the GPU but because there isn't enough VRAM. No matter how many Fury Xs are in CrossFire, it is not going to run Battlefield 1 at 4K maxed out, and this is just the beginning. I am sharing some real information as a gamer and hardware enthusiast. I am not a fanboy; I have both AMD and Nvidia top-end 4-GPU setups in CrossFire/SLI.
It's a game issue; if you mod the game to allow Nightmare, there is no difference, thanks to HBM.
Hxdoom G The Nightmare settings are not a mod. It is the highest setting beyond Ultra for some options. The Radeon Pro Duo or Fury X cannot run it at 1080p, not even at 4K, because they run out of VRAM with those settings.
Yes, they do. You clearly don't own Radeon Pro Duos in CrossFire, as you would know they get >90 fps at 4K.
No, they do not. I recorded a video about it where you can clearly see this. I own two Radeon Pro Duos, not one video card but two. I tested every configuration. In my video you can clearly see that when you switch to Nightmare, Doom's idTech 6 engine says there is not enough VRAM. So shut up. There are also no Nightmare videos with a Fury, Fury X or Radeon Pro Duo available on the internet. I own the physical hardware and you own the virtual hardware. I tell people the truth, and you are half troll and half broke AMD fan (AMD has been a garbage company since their last king, the R9 290X). I have facts to prove it and you have nothing to prove, except being an ignorant troll who confuses and misinforms people so they make wrong choices in life, just like you did in your shitty life.
Try it with one pro duo, I don't think 4 cards is doing the game any favours, also in steam you can do a command to ignore the nightmare 5gb limit.
Actually, you can use the Nightmare settings on the Fury. With HBM it makes up the difference of not having 6GB, tested it myself, works perfectly. Just need to add the argument to the launch options to force allow all settings.
Is there a specific scene you'd prefer? Or just from getting off the table, to outside?
Sure, no worries, I'll post a link to imgur when I'm back from work. In case you'd like to try it yourself, add this line to the launch options in the settings for the game on Steam: +menu_advanced_AllowAllSettings 1 , and it'll let you force Nightmare Texture Paging. 4GB of HBM is NOT 4GB of GDDR5, so don't be worried about trying it out. If it doesn't work for you, the worst you'll have to do is turn it back down.
Cool to know that it works decently. I got 2 R9 Fury Nitros when they dropped to $350 for Amazon Prime Day.
Question: do you think Fermi-based GPUs could outperform Maxwell in compute or async compute in games and applications?
i.e. GTX 570 vs GTX 680 vs GTX 970, or what have you.
I get 120-130 fps on my R9 290 at 1080p ultra settings. Epic... great job AMD!!
What if you took an overclocking tool to each card and cut their clock speeds in half? Would the game then run at exactly half the fps? Same relative performance, only less "magnified"? Or is it not that simple?
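Probably not that simple. As a toy model (the millisecond figures below are invented purely for illustration, not measured): GPU time per frame scales roughly with GPU clock, but CPU time per frame doesn't, and the frame rate is gated by whichever side is slower.

```cpp
#include <algorithm>
#include <cstdio>

// Toy frame-time model: assumes CPU and GPU work are pipelined, so fps is
// limited by whichever side takes longer. Real engines are messier, but
// this shows why halving the GPU clock doesn't always halve the fps.
double fps(double cpuMs, double gpuMs) {
    return 1000.0 / std::max(cpuMs, gpuMs);
}

int main() {
    // GPU-bound case: halving the GPU clock (~doubling GPU time) halves fps.
    std::printf("gpu-bound stock: %5.1f fps\n", fps(6.0, 10.0));  // 100.0
    std::printf("gpu-bound half:  %5.1f fps\n", fps(6.0, 20.0));  //  50.0

    // CPU-bound case: the same downclock barely moves the needle.
    std::printf("cpu-bound stock: %5.1f fps\n", fps(12.0, 6.0));  //  83.3
    std::printf("cpu-bound half:  %5.1f fps\n", fps(12.0, 12.0)); //  83.3
}
```

So the "magnification" only stays proportional while both cards remain purely GPU-bound at both clocks; the moment one of them hits a CPU limit, the relative gap changes.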
You should test the performance of Vulkan on a GTX 960, GTX 970, R9 380 and R9 390. It would be cool to see how midrange GPUs handle Vulkan and whether either company has a noticeable edge.
What are you using to capture the footage???
I'd be interested in how my 270X would stack up against the GTX 760 when running Doom with Vulkan. Back then I had to decide between those two.
The 270X will destroy it by far; a 7850 kills a 780 Ti :D
HHAHAHAHAHA LOL
The 270X will easily beat the GTX 760. As they said, the Kepler cards didn't gain much, and I've seen on Reddit that people got a 100-120% boost with older AMD hardware such as the 270X.
Where is the 3DMark Time Spy DX12 benchmark analysis?
When I watch a video at 4K 60 FPS, my i5 4460 suffers a lot.
Just use hardware acceleration....
EDIT: never mind, I also hit 85% CPU load (4670K @ 4.2GHz). Yeah, I can see your 3.1GHz CPU struggling.
Dude, can you tell me how to turn hardware acceleration on or off in Windows 10? I can't find it :-/
When I watch a video at 4K 60 FPS, my 1Mbps internet suffers a lot.
My laptop's i7 6700HQ can just about do it at 100% load, though there is very occasional slight stutter. I thought hardware acceleration was supposed to fix this...
There actually is a difference, since higher resolutions have a higher bitrate on youtube, though of course the difference it will make gets smaller and smaller the higher you go.
I propose a new drinking game - every time one of them says "actually", you drink!
To John and Richard at Digital Foundry, Async Compute is not enabled under Vulkan with Nvidia hardware and Bethesda are currently working with them to enable it.
Hence the performance rift.
Because, shock surprise, Nvidia doesn't actually have hardware async; the best and closest Nvidia hardware can do is pre-emption, which ironically is most likely already implemented for Nvidia hardware.
How can they "fix" something they don't have? Any solution would be software, and that will never be as good.
+Grimm Zane While you're forgetting dynamic load balancing, which allows compute workloads to run in the graphics queue when the graphics queue goes idle waiting for compute work to complete, I'm not going to touch on it here.
Rather, I'd find it dishonest that Bethesda states it is "working with Nvidia to enable async compute" when it should state that "current Nvidia hardware does not support asynchronous compute, so it is not available under Vulkan".
If there are no hardware-based schedulers for running and halting processes for concurrent graphics and compute workloads on Nvidia hardware, then by all means, Bethesda, just say it. It didn't stop Oxide or AMD from disclosing it, without any repercussions.
+wiser3754 Yeah, pre-emption is the only "dynamic task scheduling" they support; according to a recent slide, that's one of the three async scheduling methods, and the least beneficial, especially in full-GPU-load scenarios (i.e. most AAA games most of the time). There is absolutely no concurrent support; they can apparently use pre-emption to schedule parallel compute, but it requires explicit task switching regardless, which actually decreases performance in full-load scenarios.
If you look at the AotS tests eight months after async was introduced, when Nvidia was still trying to push its pre-emption, it still had a negative performance delta of around 2-5% with async enabled. It wouldn't surprise me if their "driver fix" for existing cards, since they just can't improve it, is to have the drivers skip async entirely: bam, "driver fix", no gain, but also no loss.
Knowing the level of driver-side optimization they do, they could even wait for the async option to be enabled to trigger some completely unrelated driver optimizations (e.g. shader replacement, which is extremely common), so there's a positive performance delta, smoke-and-mirrors style...
Also, I should add that Doom is a hard game to optimize further in drivers: it has multiple render paths and loads of vendor-specific optimizations in the engine (holdovers from Carmack's work, as id Tech 6 is an iteration of id Tech 5, an engine that fully utilizes Nvidia's exclusive OpenGL extensions, ARB assembly shaders, etc.). Ironically, in OpenGL the Nvidia advantage is to a measurable extent hardware-based, much like the Vulkan advantage is for AMD: it's less about drivers and more about hardware-level instruction-set optimizations that AMD simply lacks. (I suspect a lot of these OpenGL features were primarily developed to give Nvidia its real-time performance edge in Quadro vs FirePro, because let's be honest, where has OpenGL been in mainstream gaming in the last 14 years? (Linux users :'( lol))
Blaming AMD drivers exclusively, as some people have been doing, ignores the whole history of OpenGL, especially in highly optimized id Software games. It's a case of an optimal game more than optimal drivers (driver-side optimization always carries overhead compared to game-level optimization anyway; when a game has a vendor bias it's hard to reverse in drivers: case in point, Hitman, and conversely basically every Nvidia GameWorks game).
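For anyone wondering what "hardware async" looks like from the API side: Vulkan has no async-compute switch at all; a game just submits work to separate queues, and the hardware either overlaps it or doesn't. Here's a minimal sketch (function name mine, error handling omitted) of how an engine finds a dedicated compute queue family, the thing GCN's ACEs service:

```cpp
#include <vulkan/vulkan.h>
#include <vector>

// Look for a queue family that supports compute but not graphics. Work
// submitted to such a queue can genuinely execute alongside the graphics
// queue on hardware with independent compute engines; on hardware without
// them, the driver just time-slices it onto the same units.
int findDedicatedComputeQueueFamily(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i) {
        if ((families[i].queueFlags & VK_QUEUE_COMPUTE_BIT) &&
            !(families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT)) {
            return static_cast<int>(i);  // dedicated compute family
        }
    }
    return -1;  // no dedicated compute family exposed
}
```

Whether two queues actually run concurrently is entirely down to the hardware and driver, which is exactly the GCN-vs-Maxwell difference being argued about here.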
How long until we hit true-to-life realism in graphics? Thoughts?
Jnan Mckenzie Yeah, trying to pinpoint it myself... Could be interesting. Then what? What's the difference between reality and virtual reality, especially as we develop more interactive ways of engaging with the virtual construct, i.e. VR headsets, gaming gloves, gaming vests? Idk, things are changing far more than I think people realize, and technology is showing us how quickly things can change... Food for thought: what is reality, once we can create one that we believe in just as much as we believe in this one? 😐😯😔🙏
Getting over 100fps with a Fury X and my FX 8-core at 4.4GHz; pretty nice given what I paid for it. The Fury X can be found for $400 on sale atm. That's with Vulkan, running 1080p high settings.
BustyChicks FTW
Waiting for Zen, hopefully AMD doesn't force me to buy an X99 system
Right, it's only more than 100fps. You need 500 to keep from losing your eyesight.
BustyChicks FTW Oh, I see what you mean. In that case, his CPU is likely holding back the FPS.
BustyChicks FTW
It probably runs at 120fps average; 100fps is when the big stuff starts happening.
But yeah, it's a CPU bottleneck of some kind anyway. It's even worse in Dota 2.
It's definitely your CPU or something in your system, because I'm getting the same 120fps average at 2K with an i5 6600K at 4.6GHz. I'd only call it a problem if you're on a 1080p 144Hz monitor or something; otherwise you don't need to waste your money.
I'm curious about Vulkan scaling on dual-core processors as well...
Any news about how to fix the stuttering when using Vulkan (vs OpenGL)? Any news about being able to frame limit the game while in Vulkan?
Imagine if every game was optimised like Doom... Such a dream.
I don't do a whole lot of pc gaming, so how do people normally deal with screen tearing without adding the input lag from v-sync? Without buying a G-sync monitor that is
A lot of screen tearing at the beginning
Vulkan allowed me to overclock my 780 Ti and reach a stable 60fps and above, while on OpenGL even the overclock didn't help. Vulkan is amazing! Game changer for sure.
Who the hell with a Fury X is going to choose OpenGL anyway???
Without G-Sync, Doom is screen-tear central. Are you using a FreeSync monitor?
I have a fairly budget AMD build (FX-6300, R9 270X OC 4GB and 8GB RAM) and I run the game with Vulkan at High at between 60 and 80 FPS. Before Vulkan I was at Medium at around 50 FPS. I highly recommend checking out the older AMD GPUs; it's quite amazing. I can get it going at Ultra at 50 to 60 FPS, but then it often freezes. Vulkan is the future.
Performance for me with 2x GTX 980s in SLI under Vulkan was WORSE by about 10 FPS or MORE! I'm not sure if it's Nvidia's drivers or the hardware that isn't optimized for Vulkan or asynchronous compute. It looks like AMD is designed better for this right now. I wish the game supported SLI in some form so I'd at least get some kind of FPS boost from my hardware. I'm actually getting tired of shelling out all this money for fancy Nvidia cards and seeing almost no performance boost over an equivalent AMD card, now that Vulkan is finally being used. I do wonder, though, whether OpenGL will eventually update to 5.0 to allow better performance on Nvidia cards.
Um, Doom does not have SLI.
They said way back in late May that it was going to be patched in.
OpenGL is dead; there will be no more development of the API. Those people are now making Vulkan, so expect games to gradually transition completely to Vulkan/DX12 and future versions of the two APIs.
A Vulkan 4K test: HD 6990, HD 7990, R9 295X2 & Fury, Fury X or Nano. It would be interesting to see the difference and compare against a 1080.
What are the GTX 1070 numbers in OpenGL?
Because if the GTX 1070 is doing better in OpenGL than in Vulkan, then I wouldn't care much at all about the 1070's numbers here.
Where's the music from? Especially the E1M1 cover at the start?
Where is R9 380 vs GTX 960, or R9 390 vs GTX 970, or R7 370 vs GTX 770 vs GTX 950 vs GTX 750 Ti? Can you please do some benchmarks that actually matter? Where is Kepler now?
I really want to know how many people bought a GTX 1070, GTX 1080 or R9 Fury X vs a GTX 970/R9 390/290.
Temporarily running a GTX 660 here, and Vulkan just chews that poor card up and spits it out. It doesn't work, unfortunately.
Are we going to see Vulkan on consoles as well?
Xbox is going to use DX12, and PS4 is going to use its own API, so no, as far as I know.
+someone Isn't Vulkan an upgraded version of OpenGL? As far as I know, PS4 games are developed using OpenGL.
Tom Riddler They do, but they will use their own low-level API in the future rather than Vulkan.
Honestly, I have no idea why they didn't use Vulkan.
Vulkan is Mantle 2.0
Glad this is running with TSSAA. If the game runs SMAA, it won't use async.
The gains are also down to "shader intrinsic functions", not just async compute; see radeon.com/doom-vulkan. It's part of the GPUOpen initiative and provides a way for game developers to directly access graphics hardware instructions in situations where those instructions would normally be abstracted away by an API. This approach has been used successfully on consoles to extract more performance from the GPU.
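Mechanically (and I'm not claiming this is exactly id's code, just how such intrinsics are exposed in general), they come in as ordinary Vulkan device extensions that a developer opts into at device creation. The extension names below are real AMD extensions; treat the rest of the snippet as a sketch:

```cpp
#include <vulkan/vulkan.h>

// Opting in to AMD shader-intrinsic extensions at device creation.
// Availability should be checked first with
// vkEnumerateDeviceExtensionProperties; omitted here for brevity.
static const char* kAmdIntrinsicExtensions[] = {
    "VK_AMD_gcn_shader",
    "VK_AMD_shader_trinary_minmax",
    "VK_AMD_shader_explicit_vertex_parameter",
};

VkDevice createDeviceWithIntrinsics(VkPhysicalDevice gpu,
                                    const VkDeviceQueueCreateInfo* queueInfo) {
    VkDeviceCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    info.queueCreateInfoCount = 1;
    info.pQueueCreateInfos = queueInfo;
    info.enabledExtensionCount = 3;
    info.ppEnabledExtensionNames = kAmdIntrinsicExtensions;

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(gpu, &info, nullptr, &device);  // error handling omitted
    return device;
}
```

Once enabled, shaders compiled against the matching SPIR-V extensions can use GCN-specific instructions the core API would otherwise hide, which is the console-style trick the GPUOpen page describes.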
DigitalFoundry Why didn't you compare the Fury X with the 980 Ti?
Because the 980 Ti only gets about a 1% boost from Vulkan.
So does the 1070. The 980 Ti is the original Fury X competitor; the 1070 is from a completely different generation, and AMD's counterpart to it isn't even out yet.
P.S. Doom runs better on the 980 Ti (compared to the 1070), Vulkan or not.
Try different CPUs with Vulkan :D
Are they gonna add Vulkan support for consoles too?
The API on consoles is already somewhat similar to Vulkan; it also gives games full control over the GPU hardware. And Vulkan itself, along with DX12, was designed to work similarly to the console APIs, to make porting games easier.
All that being said, I don't think Vulkan is going to come to consoles, unless that's what the next generation of consoles ends up using.
If I remember correctly, Vulkan is also available on consoles, or pretty much on everything but Apple products.
That's why I was curious whether they will bring the Vulkan API to consoles.
I'm even more excited to see what the Zen CPU can do now. Nvidia and Intel's pricing is getting out of order.
There's no point showing 4K on YouTube; it will be limited to 1080p at the end of the day.
I love how AMD fanbois fail to realize the fury x is a top tier card while the 1070 is mid range.
BWAHAHAHAHAHA
The 1070 is around $600 and the Fury X is $520 in my country.
The top tier is the Radeon Pro Duo, and you can use the FirePro driver with it.
The Fury X is doing all of this without all the bloat of the 1080: accomplishing a lot more with less.
Agus Darmawan They're basically the same price here in the USA right now at some storefronts, if you can find the cheap 1070s in stock, that is. Might just be a temporary sale, but Newegg has them for $419 (the Fury X, that is).
And yeah, I did forget about the Pro Duo, but who would even buy one when two Fury Xs would be so much cheaper?
DF, I would like to see a performance comparison on an FX-8350 / i3 / i5.
Project Scorpio WILL HAVE a GTX 1065.
Can you please do the same tests with budget cards like the GTX 750 Ti or GTX 950?
Doom was never optimised for Vulkan! They just plugged it in later! AMD was always better hardware!
So THAT's why I felt like the game was stuttering in Vulkan. I typically use adaptive V-sync, and the sudden transition to full V-sync when comparing with OpenGL was absolutely jarring. I guess I'll keep using OpenGL until the issue is resolved :/
So which is better, the 1070 or the Fury X?
Can we get some R9 390 performance tests? It is one of AMD's most popular graphics cards after all...
It'd be interesting to see the budget PC (750 Ti/R7 360) vs. PS4 frame-rate test again, using Vulkan. I know both the 750 Ti and the 360 are weaker GPUs than the PS4, but still.
The PS4 isn't using Vulkan!!!
+nfs Wow... -_- I know that. I want them to re-test budget PC GPUs vs. the PS4 using the Vulkan API.
+nfs Consoles use their own APIs that are even more low-level than anything on PC.
+nfs The PS4 uses the GNM API.
+pSyk true
Lol is that first intro clip taken from Polygon's gameplay video :D kek
Aymen Fayçal R.I.P
AMD is using Vulkan 1.0.17
and Nvidia is using the older 1.0.8
Does that affect performance?
I doubt it because the problem with Nvidia is the hardware itself
Can this be true??
Nvidiamark "async compute"
So yeah... 3DMark does not use the same type of asynchronous compute found in recent game titles. Instead, 3DMark appears to be specifically tailored to show Nvidia GPUs in the best light possible. It makes use of context switches (good for Nvidia, because Pascal has improved pre-emption) as well as dynamic load balancing on Maxwell, by issuing concurrent rather than parallel asynchronous compute tasks. If parallelism were used, we would see Maxwell taking a performance hit under Time Spy, as admitted by Nvidia in their GTX 1080 white paper and as we have seen in AotS.
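To put the concurrent-vs-parallel vocabulary in concrete terms (Time Spy's internals aren't public in this detail, so this is purely illustrative; all handles are assumed to be created elsewhere):

```cpp
#include <vulkan/vulkan.h>

// Parallel async: graphics and compute are submitted to different hardware
// queues and may genuinely execute side by side (what GCN's ACEs handle).
void submitParallel(VkQueue graphicsQueue, VkQueue computeQueue,
                    const VkSubmitInfo* gfx, const VkSubmitInfo* comp) {
    vkQueueSubmit(graphicsQueue, 1, gfx, VK_NULL_HANDLE);
    vkQueueSubmit(computeQueue, 1, comp, VK_NULL_HANDLE);
}

// Concurrent-on-one-queue: both workloads share the graphics queue, so the
// GPU interleaves them. Idle gaps can be filled (or work pre-empted), but
// nothing forces true simultaneous execution, which suits Maxwell/Pascal.
void submitInterleaved(VkQueue graphicsQueue,
                       const VkSubmitInfo* gfx, const VkSubmitInfo* comp) {
    vkQueueSubmit(graphicsQueue, 1, gfx, VK_NULL_HANDLE);
    vkQueueSubmit(graphicsQueue, 1, comp, VK_NULL_HANDLE);
}
```

A benchmark that sticks to the second pattern will never expose the hardware scheduling gap the first one does, which is the core of the complaint above.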
I love Vulkan. Even on my Nvidia card the Vulkan API improved my GPU's performance.
I don't understand why we're seeing big gains for the 1070/1080 at 1080p with Vulkan (not as big as the R9 300 series or the Fury/X), yet at 4K there are no gains whatsoever. Anyone know what is happening there? Is a fix coming with a driver update, or does the problem go deeper?
It has everything to do with GPU utilisation (not GPU load). If the GPU is utilised 100% and isn't bottlenecked by anything else, it's already running at its full potential...
Of course driver optimizations can increase performance, but only when there's still something left on the table. The developers worked with AMD and Nvidia for a long time before releasing the Vulkan patch, so I don't think there are many gains left (but that's my assumption).
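The same kind of toy frame-time arithmetic as earlier makes the utilisation point concrete (the millisecond figures are invented, purely for illustration): Vulkan mostly shaves CPU-side submission time, and that only shows up in fps when the CPU is the longer side of the pipeline.

```cpp
#include <algorithm>
#include <cstdio>

// fps is limited by the slower of the CPU and GPU per-frame times.
double fps(double cpuMs, double gpuMs) {
    return 1000.0 / std::max(cpuMs, gpuMs);
}

int main() {
    // 1080p: GPU is quick, so cutting CPU time is pure gain.
    std::printf("1080p OpenGL: %5.1f fps\n", fps(8.0, 6.0));   // 125.0
    std::printf("1080p Vulkan: %5.1f fps\n", fps(5.0, 6.0));   // 166.7

    // 4K: GPU time dwarfs CPU time; the same CPU saving changes nothing.
    std::printf("4K    OpenGL: %5.1f fps\n", fps(8.0, 16.0));  //  62.5
    std::printf("4K    Vulkan: %5.1f fps\n", fps(5.0, 16.0));  //  62.5
}
```

If that's the right picture, the 4K numbers aren't something a driver update can claw back: the GPU is simply already full.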
PLEASE do a weak-CPU benchmark in Vulkan.
It rarely hits 60fps, while my 1080 is above that 95% of the time.
It's a $700 card from this gen. The 1070 and fury x are around the same price hence the comparison.
voodoochild346
I'm not bashing the Fury X; it's a great card. The thing is, 4K 60fps means 4K 60fps. DF messed up a little with the title of the video.
Your 1080 costs twice as much as a Fury X, though.
It rarely hits 60fps... at 4K. That's remarkable.
They were actively trying to keep it below 60 fps. HTH.
Can't wait for Rage X to come out :D. Yes, most games are on the old APIs, but in 2017 all games will be on DX12 or Vulkan :) And all hail GCN.
What CPU was this with?
I use an FX 6300 clocked to 4.6GHz and an RX 480 graphics card. I get around 100fps in Doom at 1080p maxed settings.
Project Scorpio for the win: 4K 60 fps.
What about Vulkan support for Linux? Will Linux's performance match Win 10 performance?
If the company puts the effort in to properly optimize the game, then yes. Take Dota 2 as an example: it's on par with Windows, on both AMD and Nvidia. The Talos Principle is also similar in performance under both Vulkan and OpenGL between Windows and Linux, but its Vulkan renderer is currently only a wrapper for DX11. That means it uses DX11 internally (yes, even on Linux) and then translates its calls to Vulkan. If they had fully implemented it instead of using a DX11 wrapper, we would probably have performance matching or exceeding that of DX11 on Windows.
It would be nice if that happens :)
I would assume the 4GB of VRAM would hold the Fury X back.
I play Doom on ultra on my Fury X/FX 8350 system and get 130+ FPS at 1080p, on my 144Hz, 1ms-response monitor. It's smooth as butter on a baby's ass. Love my damn Fury X. When I upgrade to Ryzen, it's nice to know I'll be able to squeeze even more performance out of the Fury.
So where is the "async compute" driver that Nvidia said they were going to release 4 months ago to fix the pathetic performance of my 970 SSC in Ashes and some other DX12 games?
I've seen a 10fps boost on my laptop's 970M. DOOM runs perfectly at 1080p Ultra, where before I used Vulkan it was mostly sub-40fps.
I wonder how the 295X2 performs?
I wish you would touch on Nvidia using closed-source GameWorks to cripple AMD graphics performance and prevent AMD from releasing optimized drivers for GameWorks titles, vs the open-source Vulkan.
Surprised to see the Fury X outpace the 1070 here, especially given Nvidia's input on Vulkan; heck, the first showing of the 1080 was running Doom on Vulkan! Strange, too, that with DX12 it's the complete opposite: the 1070 stomps the Fury X in the Time Spy benchmark by around 800-1000 points. That's the same as the difference between the RX 480 and the Fury X!
Benchmarks aren't games; also, Nvidia has been caught cheating in benchmarks multiple times.
Why surprised? Both the Nvidia and AMD cards are performing as they should. The Fury X has more computational power (TFLOPS) than the 1070 and is closer to the 1080, and it shows here.
Md Ray I was surprised only because of the DX12 results, which I'd seen before this test, and DX12 and Vulkan are fairly similar. Having seen the Fury X lose out in the only neutral (non-AMD-optimised) DX12 benchmark, I expected similar results with Vulkan. I was wrong.
pSyk I get that benchmarks aren't representative of games in general, but trends in game performance do generally fall in line with benchmark performance.
What was the reason they couldn't run above 60 fps?
4K.
They ran below 60fps intentionally, since their FCAT software didn't support Vulkan. That means they couldn't capture with FCAT; they had to use a console capture card and then analyze the footage. Since the console capture card only captures up to 60fps, they had to bring the game below 60fps to benchmark it against the other cards.
Is it sad that I get that fps on my stock R9 290 on ultra at 4K?
Don't know about you, but the Fury X costs 800 CAD and the 1070 is 600 CAD...
The Fury X is about 430 dollars now. In fact, Newegg was selling them last week for $400. The cheapest 1070 is $430.
I'm very happy with the Vulkan gains, but I just need 1-3 more games to support Vulkan, especially Overwatch; that would be game-changing.
Why is this guy using a controller on PC?
I think they use a setup like this to control two PCs at once, so they play the game on different hardware in exactly the same way/scene.
Ahh, like benching software, cool cool
What's wrong with that?
+Dez fuk-u Are you legitimately fucking asking what's wrong with playing an FPS on a fucking PC with a controller when you have a perfectly fine mouse and keyboard in front of you?
You know, you're right. What gives this guy the right to play games the way that's most comfortable for him? This guy is destroying lives.
Wichard vewy good job. I'm pwoud of you my lad.