They'd probably rather just send their multi-thousand-dollar *waste of mo*... good investment back under warranty; I don't think they'd be able to keep it.
It's not a coincidence that most of those reports come from EVGA users. If you didn't know, there are 400/500/1000 W BIOSes for EVGA cards floating around the web, which isn't the case for most other brands; my reference design, for example, only has a "grey" 390 W BIOS, which I tried and have since removed. You can think "no one is going to use those," but people are clearly downloading and using them. In most games nothing happens, because they either have low FPS or are FPS-limited in some way, but an RTX 3090 in a game that is not FPS-limited will go to max power instantly: I click launch and my GPU is at 390 W and won't budge from it. Whatever experience you have with previous-gen cards can't be compared to what an RTX 3090 does; you've got new, different problems, problems that aren't seen in unrealistic low-FPS benchmarks, especially 4K benchmarks. Try 1080p with unlimited FPS and the gates of hell open. I disabled vsync without thinking about it and without monitoring FPS, and the room temp was 38°C/100°F a few hours later lol. That's when I opened the Afterburner/RivaTuner OSD and was like "oh...."
I don't get why no other game was able to fry cards. Plenty of games don't have frame limits, and people are blaming New World's frame limit? There should be nothing a developer can do to fry hardware, ever.
I think it's not like the past, where the focus was on a single core or thread; when all threads work together at roughly the same load, I had a feeling something like this was going to happen. I did look at the core design, and I suspected as much. With AMD or Intel CPUs, the first core is usually the only one doing the main task, not all of them at once, so measures were probably never taken for such a scenario.
Could just be a bad combination. At any rate, I think no user-mode program should be able to blow up hardware. You have the drivers, and then everything on the card, to stop a user program from doing a bad thing.
Here's some more speculation... Users reported that their GPUs died due to a combination of abnormally high frame rates (1000+ FPS) and high power draw. They also reported that the "not connected" error LED lit on one of the PCIe connectors after the card failed (so a fuse blew for sure).

The GPU's power state switching might just be fast enough to follow these thousands of load changes per second (at 1000+ FPS), meaning that the VRM will switch entire phases on and off a lot. During the time it takes to switch on additional phases, the power drawn by the GPU necessarily gets supplied by the few phases that are already running and the PCIe power connector(s) they're connected to, resulting in those phases and connectors being overloaded for a short period of time. This is normally not a problem because the condition doesn't last very long. However, if the GPU switches back and forth between different VRM phase configurations all the time, this temporary imbalance in power distribution across the phases/connectors might result in significantly more current being drawn from one of them, and its fuse pops (or the power stage itself dies).

Maybe EVGA organized its main GPU core phases something like this?
Connector 1: Phases 1-4
Connector 2: Phases 5-8
Connector 3: Phases 9-12

Then, if only phases 1-4 run in a low-power state, there will be a transient overload on connector 1 while the card's VRM transitions to a higher power state (and switches on additional phases). If this transient overload happens with a high enough frequency (i.e. with absurdly high FPS), connector 1 gets permanently overloaded and blows its fuse.

I really hope EVGA (or someone else) will explain what happened, I'm really curious! If you have a 3090, maybe you could put current clamps on all three PCIe connectors and see if there's a significant imbalance while running a super high FPS load? (Triangle of death, maybe?)
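To make the phase-imbalance speculation concrete, here's a toy model in Python. The phase-to-connector mapping and all numbers are hypothetical (taken from the guess above), and real VRM current sharing is far more complex; this only illustrates the arithmetic behind the claimed transient.

```python
# Toy model of the speculated transient: total board current is shared only
# by the VRM phases that are currently switched on. The phase-to-connector
# mapping and all numbers here are hypothetical, purely for illustration.

PHASES_PER_CONNECTOR = {1: range(1, 5), 2: range(5, 9), 3: range(9, 13)}

def connector_currents(total_current_a, active_phases):
    """Split total current evenly across active phases, then sum per connector."""
    per_phase = total_current_a / len(active_phases)
    return {
        conn: sum(per_phase for p in phases if p in active_phases)
        for conn, phases in PHASES_PER_CONNECTOR.items()
    }

# Steady state: all 12 phases on, ~390 W on the 12 V rail (~32.5 A total).
steady = connector_currents(32.5, set(range(1, 13)))

# Transient: full load hits while only the low-power phases (1-4) are running.
transient = connector_currents(32.5, {1, 2, 3, 4})

print(steady[1], transient[1])  # connector 1 carries 3x its steady-state share
```

Under this (invented) model, connector 1 jumps from about a third of the total current to all of it for the duration of the phase transition, which is the overload the comment is describing.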
@G E T R E K T 905 It’s not just uncapped frame rates. It’s a combination of frame rates and an abnormal surge in power usage if you read the entire comment.
Sorry if I'm totally off base here, I'm just starting college for electrical engineering. Switching these phases on and off rapidly would cause extremely high current spikes, right? After all, inductance is a thing, and from the limited amount I know about inductors, can't this cause voltage spikes as well? I think this is even more reason for the fuses to blow...
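For what it's worth, the inductor point can be sanity-checked with the basic relation V = L · dI/dt. The component values below are made-up but plausible orders of magnitude for a GPU VRM, just to show the kind of transient a fast current step could produce:

```python
# Back-of-envelope check of the inductive-spike point: V = L * dI/dt.
# All values are invented, plausible orders of magnitude only.

L = 150e-9       # 150 nH output inductor
delta_i = 30.0   # 30 A current step when a phase group switches
delta_t = 1e-6   # over 1 microsecond

v_spike = L * delta_i / delta_t
print(v_spike)   # 4.5 V transient on top of the rail
```

A few volts of transient on a ~1 V core rail would be very significant, which is why VRMs rely on output capacitance and careful layout to absorb exactly this.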
I get 1000+ FPS on load screens in Warzone and it's yet to blow my 3090. I played the Amazon New World alpha with my 3090 and also did not have any issues... these people are just morons. One argued "cards are meant to be overclocked": um no, overclocking is a user choice, and the cards are only designed to hit their advertised clocks/speeds.
@@goblinphreak2132 'I have never experienced X problem, therefore X problem doesn't exist and it's made up.' Also, read the entire comment; it's speculation about how the PCB is designed to handle fast power-draw changes, rather than the game's high FPS count on the menu being the problem. At most, the high FPS plus other factors can be a trigger for a design flaw.
It may be lower effort, but it's still relevant and people will be looking for an explanation. Can't think of anyone closer to a gpu hardware authority than you, so good work.
In general, the idea that an unprivileged application such as a video game killing hardware is in any way the fault of the application is funny. You can use graphics APIs from web browsers; would you really want a computer that can literally be blown up by a malicious web page?
@@halrichard1969 He has walked things back, he realized (eventually) how derpy his position was. Everyone shines somewhere somehow, I consider Jay’s strength to be teaching newbs. Between him and Greg Salazar I credit those two with guiding me through my first open custom loop.
I thought it was probably a fuse too. I can tell you a lot of the FTW3 3090s have power-balancing issues, with the PCIe slot drawing too much power. Mine is affected by this; it was a big deal on the EVGA forums for a while. They offered a replacement, but no guarantee of when they could replace it, so I just hung on to mine for now, as I can't be without it indefinitely. I figure if it dies they'll replace it immediately, and I'm fairly satisfied with it for now. To be specific, one 8-pin, in my case #3, will max out at 150 W, but 8-pins 1 & 2 will get stuck at around 110-120 W. It's like as soon as the card detects one 8-pin at the limit, it stops any further power draw from the other two. There's something fundamentally wrong with the design that can't be fixed by a BIOS or firmware flash; even with a higher-power BIOS it won't draw more power. And the PCIe slot is drawing 80-85 watts regularly when it shouldn't exceed 75 watts. From what I understand, the fuse on the PCIe slot power is good for roughly 120 watts, so it would have to really spike hard to pop it. I can't remember the fuse for the 8-pins, but it's around 200 watts I think, maybe 225. It's been several months since I've thought about it.
It's Nvidia's fault. The VRAM on the back isn't cooled, period, so it overheats, and when something gets hot the resistance goes down and then kablam, shit fries. Happens with Ethereum as well.
@@Android-ng1wn The VRAM doesn't throttle; the original spec says 95°C, but truly the spec is 90°C, and Nvidia runs the chips over 100°C. I'm sure someone will risk a 3090 and stick on an active backplate, a liquid-cooled one from say EK, and then we can see if it's the VRAM. Power draw goes up as resistance goes down, until it gets so low it shorts or blows a component's limit and pops it, such as a fuse.
The "first time OCP triggers, second time the fuse blows" sounds a LOT like it could be that the fuse is undersized in comparison to the OCP, or your theory, that the second spike is larger. But I think it's equally likely that the second time the fuse is already warm, and thus it manages to blow before the OCP triggers.

Yeah, there seem to be a lot of reports of it happening in menus, and multiple claims that hard-limiting the max FPS in the Nvidia driver settings avoids OCP triggering and greatly reduces power draw in this situation. And that kind of static content probably stresses cache and memory rather than Vcore proper (and there's no game-engine interaction limiting FPS either), so it kind of makes sense that it could be one of the smaller primary rails.

Definitely not Amazon's fault, though the game should probably limit FPS in menus, especially since it was apparently reported to them during the alpha (so they had time before the beta). OTOH so should LOTS of other games; it's amazing how many games burn an insane amount of power in a completely static start menu, even if they don't trigger this. It's surprisingly common.

EVGA is definitely at fault. Nvidia COULD be partially at fault if they approved it, but the problem could well be in components that Nvidia couldn't audit (fuse size, shunt size, and OCP settings are all things EVGA could change without Nvidia being able to find out).
We're seeing reports of other cards having issues as well, including AMD cards allegedly. If those are confirmed, then that's a serious issue with the game and not only the cards.
I don't know about SMD fuses, but 'standard' fuses can be weakened by high currents that are too brief or too low to actually blow the fuse. They can survive several near-overload conditions, but they will eventually blow, especially when hot. So maybe the fuse ratings are on the low side, but they didn't pick it up in initial testing because you have to overload a fuse a few times before it will blow prematurely.
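The "fuses weaken before they blow" idea can be sketched as a crude cumulative-damage model. SMD fuses are commonly rated by an I²t melting energy; the rating and damage fraction used below are entirely invented, purely to illustrate how repeated sub-rating pulses could add up:

```python
# Crude sketch of fuse fatigue: fuses carry an I^2*t melting-energy rating,
# and repeated near-overload pulses can degrade the element even when no
# single pulse reaches it. The damage model (a fixed fraction of pulse
# energy counted as permanent wear) is a made-up illustration, not physics.

FUSE_I2T = 100.0       # hypothetical rating, A^2*s
DAMAGE_FACTOR = 0.15   # invented fraction of pulse energy that becomes wear

def pulses_to_failure(pulse_current_a, pulse_duration_s):
    energy = pulse_current_a ** 2 * pulse_duration_s
    if energy >= FUSE_I2T:
        return 1  # a single pulse blows it outright
    wear_per_pulse = energy * DAMAGE_FACTOR
    count, total = 0, 0.0
    while total < FUSE_I2T:
        total += wear_per_pulse
        count += 1
    return count

print(pulses_to_failure(25.0, 0.05))  # sub-rating spikes still add up
```

The point of the sketch is only qualitative: under any model where wear accumulates, a fuse that survives the first OCP event can still be close to failing on the second, which lines up with the "first time OCP trips, second time the fuse blows" pattern discussed above.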
I mean, fuses are always a bit random; EEVblog did a video testing fuse pop times and they were all over the place, but that is an expected part of their design. Could improper cooling of that area of the board cause these fuses to warm up as well? With Nvidia's new cooler design, is there some oversight by board makers that causes these fuses to warm up? Until we see a card that has popped, I guess all we can do is spitball. Oh, and sorry if I'm completely off base; I'm an aspiring electrical engineer (starting college next month) and I like to think about these kinds of things.
@@andrewcharlton4053 I think the worry is that the hardware of the card should NEVER allow more power than what it's rated for. The hardware of the card (voltage regulation, power monitoring, etc.) can't just get overridden; it's a layer before software. The software can't just say, "Give me more power, I want FPS." That's where the hardware comes in: the power delivery circuitry should always limit or stop any attempt at drawing too much power, no matter what the source. So in this case it seems to have allowed more through than rated.
When you're hitting the maximum current the circuit is designed for, you're stressing everything, not just the VRM MOSFETs. There is a minimum inductance needed to achieve a specific max current in a switching regulator, and when that max current is exceeded for that inductance, it puts extra stress on the inductors, capacitors and MOSFETs in the VRM. At such high switching frequencies I wouldn't be surprised if they blew a cap, or blew a VRM right off the board, as well as the fuse. If they designed this down to the minimum spec of the OCP with the minimum acceptable inductors, then they certainly could end up blowing the VRM, especially if the components have also gotten hot and haven't had time to cool. If they designed it for a current spec higher than OCP, then they can easily get away with triggering OCP again and again. So it would likely be a design flaw of under-spec'd components, and my guess is the inductance isn't high enough for that load at that frequency, causing dead MOSFETs in the VRM...
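The "minimum inductance" point follows from the standard buck-converter ripple relation ΔI = (Vin − Vout) · D / (L · f_sw), with duty cycle D = Vout / Vin. A quick sketch with generic VRM-ish numbers (not any real card's values) shows how an under-spec'd inductor pushes the peak inductor current up at the same load:

```python
# Buck-converter ripple sketch backing the "minimum inductance" point:
# inductor ripple current dI = (Vin - Vout) * D / (L * f_sw), D = Vout/Vin.
# Peak inductor current is the DC load plus half the ripple. All numbers
# are generic VRM-ish values for illustration, not any real card's design.

def peak_inductor_current(vin, vout, inductance_h, f_sw_hz, i_load_a):
    duty = vout / vin
    ripple = (vin - vout) * duty / (inductance_h * f_sw_hz)
    return i_load_a + ripple / 2

# 12 V in, 1.0 V out, 300 kHz switching, 30 A per phase:
ok = peak_inductor_current(12, 1.0, 220e-9, 300e3, 30)        # adequate inductor
marginal = peak_inductor_current(12, 1.0, 100e-9, 300e3, 30)  # under-spec'd one

print(ok, marginal)  # the smaller inductor gives a noticeably higher peak
```

The MOSFETs, inductor saturation rating, and output caps all have to survive that peak, not the average, which is why shaving the inductor down to the bare minimum eats into every margin at once.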
My guess would be memory voltage regulator. That GDDR6X seems very experimental in the first place so I would not be surprised if they did not give it enough power budget.
Thank you for not blaming the game! The exact same condition was created in my system while playing Call of Duty Cold War. Specs: EVGA RTX 3090 FTW3 Ultra, AMD 5900X, 64 GB Corsair 3600, EVGA 1300 W PSU, Asus TUF X570 Gaming motherboard, Lian Li O11 Dynamic XL case with a total of 11 case fans, displaying on a Samsung Odyssey G7 monitor. I did notice the GPU was running at 57°C under full load while playing the game. It had been having a few random crashes here and there before I heard the pop sound that bricked the card. I was actually loading out of the menu into a game when it failed. My opinion: the VRM is 100% to blame. I already capped my voltage on the custom curve in Precision X1 and it should be fine.
They already have Kingpin, who I am sure can design a great card; it's just the price and margin constraints that make it a big problem. I would love for AHOC to get a shot at that. ❤️
I've definitely never seen my system as hot as when I played New World today. Watercooled cpu/gpu/motherboard loop with 2 radiators, normally sits at 50C under load. My 2080ti was 75C with the GPU at 100% usage the whole time. It never dipped below that until I lowered settings aggressively and limited to 60 fps. I'm not suggesting that the game killed hardware, but it's definitely very demanding, much more so than you'd expect given the graphics in the game.
Apart from the menu thing (and a lot of games do that), it's not that uncommon for a game to use 100% of the GPU all of the time. I don't understand this; I don't think people have been using their cards right if this is now a new thing.
@@Jeffcrocodile Since it's not straightforward to see if liquid is moving in my loop, I usually have a temp monitor running on my second monitor, and this is not common across the mostly MMOs that I play, including WoW, FFXIV, ESO, and GW2, at least on my system. GPU usage will generally linger above 70% utilization and will frequently spike to 100% when there is a lot going on, but it dies down after that. In New World, no matter what was on my screen, including the many fetch quests where you're running for 5 minutes straight with no mobs on screen, I was still pegged at 100%. I am not suggesting that this is bad, but on my pretty high-end setup, I wasn't interested in finding out how hot my GPU could get.
It actually vaporizes the copper and when it condenses and solidifies it forms those copper balls. It's more common in industrial power component failures (I've worked in power distribution for years). It's like copper dew drops.
You have been killing it lately with your reaction content; it's been great to see, and I am glad it gives you a procrastination break from your planned vids :P
Thank you for actually giving information; I subscribed. Most tech YouTubers are like: "OMG!!!! Amazon's game is destroying your hardware," with both arms in the air.
I saw a streamer playing this like an hour ago and did not know there was an issue. It bricked my 3080 Ti after 20 minutes of playing. It's a real thing. The system had been stable for over a month, and I played multiple other games today / last week.
Should NOT be able to happen. Totally the fault of the graphics card manufacturer/Nvidia, NOT Amazon. There ought to be no software configuration that can damage the card's hardware; the BIOS should make sure that just cannot happen. And if this is happening by accident, it is only a matter of time before a malicious actor creates software to deliberately damage graphics cards. BIOS updates are needed as a priority for the 3090 series to prevent this type of fault from happening.
Cool, I'd use such software to then return the broken card to the seller and get a full refund, meaning I can buy better and better GPUs with the same money. Pretty much a free GPU.
Update on my Strix pulling 505 watts: it died 2 days later. It started artifacting, screens would lose signal and come back with no sound, and games couldn't get the card to boost anymore. The Asus rep that set up the RMA said they had a lot of calls about cards dying from New World.
If it is a fuse, BZ is right, the fuses are protection/diagnostic indicators for HARDWARE failure. If it is the fuses causing this issue...it shouldn't be.
The only thing that might show this is not a fuse, is that it seems the issue may be power supply OCP kicking in for some people. That indicates a short to ground on the PCIe power input of the card...that's a strange and terrible problem in itself.
@@Android-ng1wn Yes, I was just joking about what Rossmann always says about fuses in MacBooks: fuses aren't there to protect the component from failure, they're there to protect everything else from burning when the component does fail.
@@Android-ng1wn Nvidia's drivers have killed GPUs in the past; it's not impossible and it has happened. So yes, software gone bad can kill your GPU. Look up the 196.75 drivers from Nvidia, and I think that wasn't even the first time.
I appreciate you doing multiple takes to shorten the videos, I always watch at 2x speed and I'm able to finish them that way. My favorite quote was: "blowing up VRMs and Nvidia go hand in hand" :D
This seems to be a particular issue with some RTX 3090s from EVGA. I remember watching a dumbass YTber who IIRC said that his EVGA RTX 3090 literally *smoked, but worked fine until it black-screened and made a small pop*.
Glad you also take the position that software (other than firmware) should never be able to break your hardware, no matter how it misuses it. JayzTwoCents took the position of blaming Amazon for this. I can see that maybe they shouldn't have gone into open beta when they knew of this problem during the alpha, but it's still not their fault that Nvidia is allowing the cards to draw more power than they can handle.
While I agree with your comment, it's very bad software engineering to have your software trip OCP at random all the time. In this case it exposed a more serious problem on Nvidia cards, or more particularly on the EVGA FTW3 design. I think all three parties have shit to get together.
Max I've seen on mine is about 430w. I put a hybrid kit on it and got the normal and oc bios from the evga forum. I undervolt it most of the time atm, mainly to not heat my room so much. Even undervolted the performance is still pretty darn good
From what I have heard, it's the dedicated fan controller microcontroller IC that apparently died (more along the lines of exploded than just died). Something to do with a combination of high power load, very high refresh rates (bursts of 4-digit FPS), and how EVGA implemented their fan control, which can go into thermal runaway under those conditions.
It's not the first RTX 3090 design flaw. The fact that you could use the backplate to grill a steak, given the extremely high memory junction temperature, is unacceptable. In my opinion it should never be possible for software to 'break' hardware; if Furmark, mining, AI stuff, Prime95, etc. can destroy hardware, it's a design flaw! I'd really like to know whether Nvidia actually tested the memory properly with a synthetic load, a stress test specifically designed to 'burn' the memory... I bet they don't! Testing hardware in a 'realistic' scenario because that works for 99% of people is not an excuse. Hardware should not be designed to run so damn close to the red line!

On an RTX 3090 FE with an EK Quantum Vector waterblock (front WB only) plus glued-on passive coolers and a fan on the backplate, I can STILL push the memory junction temperature to 90°C with a non-gaming workload. A good example is VRAM-demanding AI stuff or mining Ethereum. I have two 3090 FEs, and without the watercooling both started to thermal-throttle due to the VRAM being 110°C hot... in an open case... fans sounding like an airplane taking off at 100%... ramping up the fans did not even help, maybe minus 2°C? Obviously they are on the front and can't cool the back properly. At least the waterloop dropped the peak temperatures by 20°C. An actively water-cooled *sandwich* design would be the best way to cool these cards. But you know what that will cost you for a single RTX 3090 FE? Front 330€ + back 250€ = 580€, just for the waterblocks on ONE card, to fix a problem that would not even exist if it was designed better!

That doesn't mean the software couldn't do better. An FPS cap in the game engine should be enabled by default; 150 fps for a non-shooter game should be good, but it should NOT be hardcoded, for the sake of people with 144Hz+ monitors. Nobody needs a menu rendered at 1000 fps!
I'm just gonna point out Nvidia has a history of cooking cards... Do we want to talk about how shit the VRM on the GTX 590 was? How about the reference 780 Ti and 980/980 Ti VRMs... A bit different from letting the VRAM fry, but on their "reference" PCBs they have a history of being perfectly happy with borderline bullshit... Recent reference VRMs are at least now almost reasonably good, not just borderline... but yeah, they have a history of this shit.
Just a half-educated guess from my side: some older graphics cards (I mean really old, around 2005) would produce really annoying coil whine when a game ran at hundreds of FPS. I guess this could be caused by VRM switching frequencies and/or other side effects when the GPU processes so much data?

My point is that IF that's true (higher frequencies in the DC circuitry during high FPS), then this might be a problem of rogue AC currents on the board. When dealing with power regulation, these ripple currents may only flow through certain paths, and detecting those can be very difficult. These currents can (and will) damage fuses they pass through. So if there are AC currents running wild, and if the fuse really did blow, this might be the culprit.

At my job we had a similar issue. Although we deal with high voltage, our system also runs on DC power, and we had one particular fuse blowing all the time. It wasn't related to the part we were testing. After weeks of investigation we found exactly this scenario: the power stage of a DC/DC converter was pulling 20 A DC, but at a specific switching frequency there was an AC ripple (around 600 A AC!) between this component and the terminal block in the power distribution unit. For some reason (our electrical engineers aren't sure why; guesswork was in the realm of standing vs. travelling waves, etc.) this ripple was passed over to an inactive component, passing through its fuse. It was this fuse that was destroyed over and over again.

I think it would be interesting to see this tested by someone. GPUs are quite extreme these days in their power consumption and their demand for stable voltages, so AC ripple could be a problem.
Also, could you please look at the board differences between the EVGA 3090 FTW rev 0.1 vs rev 1.0? EVGA seems to have changed things on their boards, since a lot of rev 0.1 cards seemed to just randomly die due to the same issue.
I had 2 of these die after a handful of hours gaming on them back at launch. Refunded the 2nd one and haven't been able to get my hands on a card since.
@@crisnmaryfam7344 I respectfully disagree with your take. These GPUs are designed with safeties in mind that are supposed to prevent scenarios like this from happening; it's what this video is all about.
From my POV, this is very simple: if a piece of generic downloadable software can fry a mass-produced consumer GPU then the GPU design has a fault somewhere (be it at the firmware, driver or hardware-level). All Nvidia AIB partner designs are approved by Nvidia. Nvidia is the only one to blame here for charging higher and higher GPU prices while delivering shoddy designs. I would not be surprised to see lawsuits coming out of this in the near future - we've seen lawsuits go through for less. As for frame-limiting, while I do agree that there should be a default frame limit at the driver level config just to avoid wasting power consumption for your normal end-users, for me, uncapped Furmark is a mandatory test to ensure a graphics card's cooling solution is able to handle extreme power draw for an extended amount of time, which can occur naturally in some gaming instances.
There is something wrong with EVGA cards. My RTX 3080 FTW3 ULTRA stopped working properly after just 2 hours of playing Cyberpunk; after that, with every game I tried, instant black screen.
Yeah, I also think this should happen in other games too; it's just a matter of time. And since New World is so hyped, messages about dead cards started to appear publicly.
I wonder if this has something to do with the cards crashing when they launched. Maybe they fixed this quick and dirty back then by turning the OCP down a bit.
According to Igor's Lab, the card dies because the fan IC fails (burns). With a frame cap set, the cards do not die; the ultra-high frame rates in this game cause this.
Cards should have limiters so no game can kill them; if not, then it's the manufacturer's fault. Funny thing is that people started to limit FPS on whatever card they have after those cards fried, even though this most likely only happens on the 3080 (Ti)/3090, and they're afraid to keep the game on the main menu screen in the background.
Frame rates don't magically kill cards, sorry. FPS has nothing to do with the fan controller; not even close to being related. Igor doesn't have a clue. This time, he's being fucking stupid. Generally he's pretty smart, but not this time...
@@ole7736 Yeah, don't care. EVGA will simply replace the cards, because as a business they just do shit like that; they want to keep their loyal customers, because those customers always buy EVGA, so EVGA goes "out of their way" to keep them happy. It's business 101. As far as him talking to EVGA, they aren't going to tell him anything... most likely they gave him some generic response. I read Igor's website, which literally says "he doesn't know," but magically it's the fan controller. RIP. He GUESSES that it's the fan controller. In fact, everyone on the Reddit threads complaining that their card died said their fans ran at 100%... if the fans are running at 100%, then the fan controller isn't the issue.
Exactly, Buildzoid. Great explanation! I was thinking the same yesterday. No software developer (or Amazon) is responsible for your hardware "blowing up." Looks like EVGA has power-limited this FTW3 card too high, tripping OCP, which no software at "factory settings" should be able to trip (an OCP which also seems calibrated incorrectly vs. the fuse, which is very bad). Any "consumer grade" hardware should be able to hit OCP many times without hardware issues, and should never hit OCP at stock/factory settings. Seems like the 3090 chip is tough to handle; it just eats power... when the software can take advantage of the chip's potential. I think this issue will turn into an expensive situation for Nvidia/EVGA (maybe a recall of cards).
As far as the "little copper balls" go, yes, what you're seeing is the aftermath of the copper going liquid, and then an effect either the same as or functionally similar to surface tension in water happens. As a long-time welder, it is extremely easy to observe this effect when welding lighter materials like aluminum, and I have seen it occur with copper. I'm unsure if the weight plays a key role or if it's more the metallurgical composition, as steels seem to drip and run apart rather than pull together.
Pretty sure it correlates with higher-power-limit cards being at risk. Probably power surges, with the game generating a load similar to Furmark or something. Especially considering EVGA's cards have pretty mediocre VRMs compared to the Nvidia FE while carrying a much higher power limit.
Yeah, if the fault is caused by overloading MSVDD or VMEM, then primarily the 400W+ cards, which have basically bare-minimum MSVDD/VMEM rails, would be affected.
@@ActuallyHardcoreOverclocking which are the EVGA FTW3 cards coincidentally....this is just something silly that shouldn't even be possible in the first place.
I have a 3090 FTW3 Ultra, and while mine didn't die completely, I was able to swap to the lower-TDP BIOS and it seems to be working fine, although now the BIOS switch does not switch back to the other BIOS.
Yeah, if a game can kill the card, it's definitely the card manufacturer's issue. Curious to see how many dead 3080/90s we will see in the near future because of memory overheating.
Yea I replaced the utterly garbage Gigabyte VRAM thermal pads on my 3080 and saw 30C+ VRAM temp drops. I can't imagine VRAM temps of 100C+ for sustained periods will work out well over time.
As an aside, my guess is those little balls of metal when a power stage blows are probably what's left of the QFN/etc package's lead frame... would probably fail a lot sooner than the board's copper plane, there's not much area/mass there.
I haven't tried New World on my FTW3 Ultra 3090 but I got the black screen fans spinning to 100% while playing American Truck Simulator, it happened a few times. I had to turn the power off and on again from the socket. It hasn't done it in a while but it did happen a few times. Thanks for the video BZ.
Please do an updated version of how to tune your memory with the DRAM calculator, especially since Ryzen 5000 series performance depends so much on timings. I see a lot of videos of people rambling and cutting the videos short; I'd like to see one by someone who knows what they're doing and can explain it as they go through the process.
A few months ago I returned a Gainward RTX 3090 because it was constantly triggering OCP on multiple games at stock clock. I swapped it for an ASUS RTX 3080ti and haven't had any issues since.
That’s actually an interesting thing to mention. The 3090 FTW3 Ultra has a 500W XOC BIOS. However, there’s a channel called Griffin Gaming who saw his 3090 FTW3 Ultra die like this on day one playing Doom Eternal I believe.
You have to factor in that it is normal that about 2% of any GPU model will be faulty and require a return. This has happened to me twice (different generations) and it took a few days in each case for the fault to manifest. The replacements, like for like, worked fine the whole time I had/have them.
It may be hopping on a hot-button issue, but getting a clear understanding of what's going on is key, I think, and you are the kind of source that can go a long way toward dispelling people's concerns.
I have a 3090 FTW3 ULTRA, v1.0 replacement, and I've seen a power limit violation before, IN 3DMARK PR! It still under-draws on 8-pin #3, so I run a vBIOS mod that lets #1 & #2 go over 150 W so the card can hit its 500 W limit. These days it obeys that. However, there was a short-lived update to 3DMark a couple of months ago that caused low PR scores on 30-series cards, and I noticed the card hitting 520 W. After the hotfix, back to normal. So the card's power limit can be violated by application code.
These smaller VRM sections, like the fans etc., surely aren't running the same drivers/MOSFETs. If improperly configured (e.g. component shortage -> improper substitution?), insta-slamming all the fans could cause a very significant inrush current. If it's hot-side fused, the fuse may blow first. Even super-fast fuses are still mechanical weak links; their failure mode isn't hard binary. If OCP triggered the first time, it may still have materially degraded the fuse element, even if it (barely) didn't sever. The failure mode for a tiny flat-pack 10-20 A fuse can be pretty brutal on the PCB underneath, especially if there happens to be a large, beefy chunk of metal on top of the fuse. All those ragequitting pixies gotta go _somewhere._
Love these videos. For someone who doesn't know all the acronyms, I'd love if you quickly explain them the first time they come up. I wasn't familiar with MSVDD and it took me a surprising amount of googling before I came across something explaining it was connected to core cache.
I have one of the 3090s in question. I undervolt it to 825 mV @ 1800 MHz, underclock the VRAM by 500 MHz, then set the power target to 70%. It downclocks the sweat on my balls by at least 30%.
This sounds like Sin part 2, for those that weren't around or just didn't care about gaming stuff when it happened. Back at the turn of the century, there was a fairly hyped-up game called Sin that began to kill video cards. It wasn't really the game's fault; the cards had just never had anything stress them so thoroughly, and they would burn themselves out over the course of a couple of hours. After that fiasco, cards were produced with much better thermal protections.
Out of curiosity, I had a Strix 3080 blow up on me a few weeks ago. A R005 resistor blew and seemingly burnt the PCB as when I RMAed it they sent me a new card back. I'm wondering what would cause this sort of thing to happen? I was just using the card on stock settings in a game of Valorant, so not even anything intensive. Just a faulty component? They sent me a picture of the board in case anyone was curious: imgur.com/h2WqigT
It's theoretically possible it is doing asynchronous shader business at too high a tick rate and is allowing the GPU to cook itself, sometimes called a 'heat virus'. The GPU shouldn't allow this, but at the same time, the game should never need to run off thread gpu math without reasonable throttle.
I would have thought this is caused in part by some kind of harmonics with the current sense circuitry, maybe worsened by the very fast current slew rate, as such the current spikes so high before the current sense circuitry picks up on it and pulls down the clocks... If you get this multiple times per frame I'd expect you end up with a lot more current going to the card than intended...
@@alouisschafer7212 anything with a feedback loop is susceptible to harmonics at certain frequencies. The main thing is ensuring that the intended/expected operation will not have any fluctuations in the frequencies that can potentially trigger the harmonics.
I still think this goes back to the issues found at launch with crashes due to the mix of high- and low-quality caps on cards. EVGA was one of the ones who used more of the lower-quality caps. Founders and ASUS cards have not popped up as issues, and both opted for more of the higher-quality caps, with ASUS using all high-quality caps.
Uncapped FPS in menus is honestly just bad game design. At least cap it to the monitor's refresh rate by default and let the user decide if they want to uncap it. Otherwise the card is burning unnecessary power and dumping heat if you leave it uncapped, and many people leave their game running in the background.
Of course it's good practice to not ask any more of the hardware than is strictly needed, but no matter what a game asks of the API, the hardware shouldn't die.
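For what it's worth, the menu frame cap the comment above asks for is only a handful of lines. A minimal sketch in Python (the function name and numbers are mine, not from any real engine):

```python
import time

def run_capped(render_frame, cap_hz=60.0, duration_s=0.25):
    """Render in a loop, sleeping off leftover frame budget so we never exceed cap_hz."""
    budget = 1.0 / cap_hz
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration_s:
        t0 = time.perf_counter()
        render_frame()  # a menu "frame" that does almost no work
        leftover = budget - (time.perf_counter() - t0)
        if leftover > 0:
            # uncapped, this loop would spin at thousands of fps on a trivial scene
            time.sleep(leftover)
        frames += 1
    return frames

fps = run_capped(lambda: None) / 0.25  # lands near 60 instead of CPU-limited spinning
```

The point is just that there is no engineering reason for a static menu to render as fast as the GPU can go.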
@@miyagiryota9238 I never said that. I specifically did not put the blame on any of the parties involved, because the cause wasn't quite clear at the time. If you read Igor's Lab's latest story about this, it actually seems that the fan controller on EVGA cards is the culprit - at least for 3090s made by EVGA.
Regulator OCP only applies to the output of the power stages, in my experience. Each of them could well be at some load below its individual OCP limit while the total is well over what can be supplied by the input, which pops a fuse or burns a power plane. I agree the board current limits should kick in, but given it's software-driven, a sag in supply voltage could well lock up the GPU, preventing board protection from kicking in.
Would love to know why it's mainly FTW3 models. I seriously don't think it's the game; don't know how it could be. Seems like EVGA's bad design that they refused to fix so they could sell Kingpin cards.
@@Jeffcrocodile Manufacturers should have tested their cards with FurMark and made sure they pass 100%. Their 2080 Ti cards had issues with FurMark before, so they should have learned the lesson.
lol, 100% agree with you. Manufacturers have been pulling this BS for a long time, like with FurMark: trying to convince us that a "stress test" should be a controlled, specific load defined by them, then detecting software like FurMark and blocking it to cover their design flaws. I mean, does the word "stress" mean something else now? lol. Maybe they shouldn't call them stress tests, but "just enough" tests.
That should be caught on so many levels, from silicon design through software and drivers, before ever hitting overcurrent protection - not to mention blowing a fuse on its own PCB.
A guy I worked with who worked for a power company would collect balls of aluminum formed when a substation got so hot it melted the structure it was sitting on. When I asked about it, he told me "it happens when the water in the air we breathe forces the aluminum into a ball shape by boiling off super quickly". I don't know how true that is, but... he had a full shelf of them, and most were the size of softballs.
Underrated creator. OMG bruh, I just entered a whole new world just to find out about this GPU breakdown. Like me in safety work, he went in depth to assess, analyse, benchmark and provide the root cause of what happened. Even though I'm not good at electronics - great explanation for the layman.
I like your explanation better than any other channel's, because it is absolutely stupid to blame Amazon for this. The cards are supposed to run any software without failing. If an RTX 3090 or whatever card fails because some game (which is just software) runs the card like crazy, that's not supposed to happen at defaults. They should have thought of every single scenario before selling hardware to the end user and tested it well in each one (extremely high FPS, extremely high power draw, and everything else I can't think of because I'm not fucking Nvidia). The end user should not be able to blow up the card at its stock settings, no matter what game or software they run, unless they play with the card's clock speed or voltage or whatever. This is 100% Nvidia's fault. I absolutely hate Amazon for other reasons, but blaming Amazon for this is ridiculous. Nvidia should either replace all the failed cards or compensate the customers directly (not EVGA or the other AIBs, but Nvidia itself) by refunding the full amount. It's 100% Nvidia's fault for approving a PCB that wasn't tested well.
Buildzoid each time he sees the length of his videos going a bit over what he’d like them to be: “Damn, the god that developed me really forgot to implement the Over Rambling Protection! Let’s start slamming right into it!”
There might be more to it. I have a 3090 XC3 Ultra, water-cooled (both sides). It had issues freezing with a black screen and the entire computer locking up until I pulled the plug. After trying a whole bunch of solutions, what made it run stable again was replacing my ~7-year-old 1200W Supernova with a brand-new Dark Power Pro of the same wattage. Maybe the power delivery on these cards is skewed somehow in the way it draws power from the PSU.
It was reported that this is happening with both current and previous generation AMD GPUs as well. This is a VERY strange series of events. One of the ways (that I can think of) to work out why this is happening is to monitor the game and the GPU at the same time and wait for the shutdown. Then dig through the game logs and the GPU logs around the time the event took place. Do it with several different GPUs and the answer will show itself. Getting the game Developers and the Card Creators in the same room could be a bit of a logistical nightmare. Anything's worth a try though right?
You can fry the RTX 3090 in virtually any game. Particle effects, for example, drop the FPS considerably and will eventually cause the card to die. I did it recently with my 3090 XC3 Ultra Gaming on WoW.
Sounds like the same issues they had at launch that they "fixed" by patching the drivers to pull less power. Seems like they should have actually fixed the problem then instead of putting a bandaid on it.
@@basshead. But I assume they have to get approval from Nvidia to have it released, right? So it must have been within the 3090's spec as designed by Nvidia.
My 3090 just blew the same way but while i was using a different game. I had just increased voltage in Afterburner from 100% to 112%. Pop... no extreme overclocking at all on the card.
If one game is killing cards now, more games will do it later! The Amazon game uses CryEngine; Star Citizen also uses it (Amazon's fork), and Far Cry uses an older fork of it or something.
Been playing many games and even Star Citizen and they run fine on max settings. New World destroyed my $2k video card. Luckily it's under warranty but I have a feeling there will be legal issues following this issue.
@@marcusborderlands6177 Amazon uses Lumberyard, which is a fork of CryEngine. Amazon paid Crytek for it; I'm sure Amazon has done some custom stuff to it, but it's still derived from CryEngine. en.wikipedia.org/wiki/Amazon_Lumberyard
If stress-tests like FurMark can run into power limits immediately and throttle down to prevent any sort of OCP shutdown, or damage, then what's the Amazon Lumberyard Engine doing to sneak past these protections, and blow fuses on stock VBIOS? Shouldn't these GPUs be designed in a way where OCP can't be reached by any kind software load to begin with, assuming stock configuration?
people coming out of their gaming lair after the GPU burnt, looking out into the sunlight: "it's a NEW WORLD"
lol
Nice pic!
That’s really funny
LOL
@Phoenix :D
Please, someone send a dead/fried GPU to Buildzoid, for the sake of sciences, education and rambling...
They'd probably rather just send their multi-thousand-dollar *waste of mo* ... good investment back for warranty. I don't think they'd be able to keep it.
or at least high-resolution shots of the PCB without the cooling system
@@mirage8753 it would void the warranty
@@АртёмКучеренко-ь8г It generally should not; manufacturers that stop honoring the warranty after that are probably breaking the law.
@@АртёмКучеренко-ь8г at least people can kindly ask the service centers for those photos
Uncapped FPS in menus isn't particularly rare; the design of these cards is just faulty.
It's not a coincidence that most of those reports come from EVGA users. If you didn't know, there are 400/500/1000-watt BIOSes for EVGA cards floating around the web, and not for most other brands; my reference design, for example, only has a "grey" 390W BIOS, which I tried and have since removed. You can think "no one is going to use those", but people are clearly downloading and using them. In most games nothing happens, because they either have low FPS or are FPS-limited in some way, but an RTX 3090 in a game that is not FPS-limited will go to max power instantly: I click launch and my GPU is at 390W and won't budge from it. Whatever experience you have with previous-gen cards can't be compared to what an RTX 3090 does. You've got new, different problems - problems that aren't seen in unrealistic low-FPS benchmarks, especially 4K benchmarks. Try 1080p with unlimited FPS and the gates of hell open. I disabled vsync without thinking about it and wasn't monitoring FPS; the room temp was 38°C/100°F a few hours later, lol. That's when I opened the Afterburner/RivaTuner OSD and went "oh...."
@@fredEVOIX Did you even watch the video lol, the cards aren't made properly.
@@lookitsrain9552 It's affecting other cards though...
@@sophiethemasochisticninja7655 Holy shit. I remember Neverwinter Nights frying my Geforce back in the day. I didn't know it was a "thing".
@@fredEVOIX well I can confirm a streamer was trying to bench the game on a water cooled FTW 3090 dead stock and it fried his card.
The Way It's Meant To Be Fried.
These cards deserve to die if software is able to kill them.
nice one Baltcranck, lol.
+melted
I thought of the female voice actor saying “Nvidia”
Novideo strikes again
I don't get why no other game was able to fry cards. Plenty of games don't have frame limits, and people are blaming New World's frame limit? There should be nothing a developer can do to fry hardware, ever.
I think it's not an issue if the focus is on a single core or thread, like in the past. But when all threads work together at roughly the same load, I had a feeling something like this would happen. I did look at the core design and had the same feeling. That's why, on AMD or Intel CPUs, the first core is usually the only one handling the main task - not all of them at once. So measures were probably never taken for a scenario like this.
post hoc, ergo propter hoc
Could just be a bad combination. At any rate, I think no user mode program should be able to blow up hardware. You have the drivers then everything on the card to stop a user program doing a bad thing.
Frame uncapped plus high load vs low load and high fps I guess.
Furmark can completely fuck ur computer up. Same with badly coded games.
Here's some more speculation...
Users reported that their GPUs died due to a combination of abnormally high frame rates (1000+ FPS) and high power draw. They also reported that the "not connected" error LED lit on one of the PCIe connectors after the card failed (so a fuse blew for sure).
The GPU's power state switching might just be fast enough to follow these thousands of load changes per second (at 1000+ FPS), meaning that the VRM will switch entire phases on and off a lot. During the time it takes to switch on additional phases, the power drawn by the GPU necessarily gets supplied by the few phases that are already running and the PCIe power connector(s) they're connected to, resulting in these phases and connectors being overloaded for a short period of time. This is normally not a problem because the condition doesn't last very long. However, if the GPU switches back and forth between different VRM phase configurations all the time, this temporary imbalance in power distribution across the phases/connectors might result in significantly more current being drawn from one of them and its fuse pops (or the power stage itself dies).
Maybe EVGA organized its main GPU core phases something like this?
Connector 1: Phases 1-4
Connector 2: Phases 5-8
Connector 3: Phases 9-12
Then, if only phases 1-4 run in a low-power state, there will be a transient overload on connector 1 while the card's VRM transitions to a higher power state (and switches on additional phases). If this transient overload happens with a high enough frequency (i.e. with absurdly high FPS), connector 1 gets permanently overloaded and blows its fuse.
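Purely to illustrate the speculation above (the phase-to-connector mapping is hypothetical, as the comment says), the arithmetic as a tiny Python sketch:

```python
def connector_currents(power_w, phases_on, phase_map, vin=12.0):
    """Split the card's input current evenly over the phases that are switched on,
    then total it per 8-pin connector."""
    i_total = power_w / vin
    active = [p for ps in phase_map.values() for p in ps if p in phases_on]
    i_phase = i_total / len(active)
    return {c: i_phase * sum(1 for p in ps if p in phases_on)
            for c, ps in phase_map.items()}

# hypothetical FTW3-style layout from the comment above
phase_map = {1: [1, 2, 3, 4], 2: [5, 6, 7, 8], 3: [9, 10, 11, 12]}

# steady state: 400 W spread over all 12 phases -> ~11 A per connector, fine for a 20 A fuse
steady = connector_currents(400, set(range(1, 13)), phase_map)

# transient: the load is already 400 W but only phases 1-4 have switched on yet
transient = connector_currents(400, {1, 2, 3, 4}, phase_map)
# connector 1 momentarily carries the full ~33 A -- well past a 20 A fuse if it lasts
```

Obviously the real VRM controller is far more complicated than an even split, but it shows why a brief phase-configuration mismatch concentrates current on one connector.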
I really hope EVGA (or someone else) will explain what happened, I'm really curious! If you have a 3090, maybe you could put current clamps on all three PCIe connectors and see if there's a significant imbalance while running a super high FPS load? (Triangle of death, maybe?)
@G E T R E K T 905 It’s not just uncapped frame rates. It’s a combination of frame rates and an abnormal surge in power usage if you read the entire comment.
Sorry if I'm totally off base here, I'm just starting college for electrical engineering.
Switching these phases on and off rapidly would cause extremely high current spikes, right? After all, inductance is a thing, and from the limited amount I know about inductors, can't this cause voltage spikes as well? I think this is even more reason for the fuses to blow...
I get 1000+ FPS on load screens in Warzone and it has yet to blow my 3090. I played the Amazon New World alpha with my 3090 and also did not have any issues... these people are just morons. One argued "cards are meant to be overclocked" - um, no, overclocking is user choice, and the cards are only designed to hit their advertised clocks/speeds.
@@goblinphreak2132 "I have never experienced X problem, therefore X problem doesn't exist and it's made up". Also, read the entire comment: it's speculation about how the PCB is designed to handle fast power-draw changes, rather than the game's high FPS count on the menu being the problem. At most, the high FPS plus other factors can be a trigger for a design flaw.
@@Penkazo34 keep reaching. A few twats on some forums crying about dead cards doesnt mean issue exists.
It may be lower effort, but it's still relevant and people will be looking for an explanation. Can't think of anyone closer to a gpu hardware authority than you, so good work.
Wrong, look at Igor's Lab
Nvidia after using shunt resistors for OCP of GTX 590 : 'it just works'
money flows
People Buy
Nvidia after letting their cards blow up: It's safe to upgrade now my Ampere friends!
In general, the idea that an unprivileged application such as a video game being able to kill hardware is in any way the fault of the application is funny. You can use graphics APIs from web browsers, would you really want to have a computer that can literally be blown up by a malicious web page?
Yeah Jayztwocents didn’t get this memo
yo lets do this to all the crypto miners
@@sandysand3097 Crypto miners undervolt and underclock their cards. Sometimes it's VRM wear they're worried about, but sometimes it's just power efficiency.
@@depth386 Jay's two cents is all about ego, in case you hadn't heard.
@@halrichard1969 He has walked things back, he realized (eventually) how derpy his position was. Everyone shines somewhere somehow, I consider Jay’s strength to be teaching newbs. Between him and Greg Salazar I credit those two with guiding me through my first open custom loop.
I thought it was probably a fuse too. I can tell you a lot of the FTW3 3090s have power balancing issues and the PCIE slot is drawing too much power. Mine is affected by this, it was a big deal on the EVGA forums for awhile. They offered a replacement, but not a guarantee when they could replace it, so I just hung on to it for now as I can’t be without it indefinitely. I figure if it dies they’ll replace it immediately and I’m fairly satisfied with it for now.
To be specific, one 8-pin, in my case #3, will max out at 150W, but 8-pins #1 and #2 will get stuck at around 110-120W. It's like as soon as the card detects one 8-pin at the limit, it stops any further power draw on the other two. There's something fundamentally wrong with the design that can't be fixed by a BIOS or firmware flash - even with a higher-power BIOS it won't draw more power. And the PCIe slot is regularly drawing 80-85 watts when it shouldn't exceed 75 watts. From what I understand, the fuse on the PCIe slot power is good for roughly 120 watts, so it would have to spike really hard to pop it. I can't remember the fuse rating for the 8-pins, but it's around 200 watts I think, maybe 225. It's been several months since I've thought about it.
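As a toy model of the balancing bug described above (the wattages are the ones from this comment; the logic is my guess at the *observed* behavior, not EVGA's actual firmware):

```python
def total_at_first_rail_limit(rail_draws_w, rail_limit_w=150.0):
    """Model the reported behavior: total 8-pin power stops rising as soon as ANY
    single rail hits its limit, instead of shifting load to rails with headroom."""
    if max(rail_draws_w) >= rail_limit_w:
        return sum(rail_draws_w)  # the card freezes total draw here
    return None  # every rail still has headroom

# readings like the ones above: #3 maxed at 150 W, #1/#2 stuck low
stuck = total_at_first_rail_limit([110.0, 120.0, 150.0])
# stuck == 380 W from the 8-pins, even though 3 x 150 W = 450 W is nominally available
```

A balanced design would equalize the rails before any one of them pins at its limit; this sketch just shows how much headroom the reported behavior leaves on the table.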
EVGA uses 20A fuses on the 8pin power connectors, don't know the delay rating
@@punktkomma9489 so that's 240W
A graphics card shouldn't fatally fail because of software. To me it's Nvidia's or EVGA's fault.
Really seems like EVGA's fault, as other manufacturers don't seem to have that problem.
EVGA are not the only ones anymore. And it isn't just 3090s.
Its Nvidia’s fault
The VRAM on the back isn't cooled, period, so it overheats; when something gets hot the resistance goes down, and then kablam, shit fries.
Happens with Ethereum as well
@@Android-ng1wn Oh grow up already. I said it wasn't ONLY EVGA cards that are blowing up. That doesn't make EVGA any less culpable.
@@Android-ng1wn the VRAM doesn't throttle; the original spec says 95°C, but really the spec is 90°C, and Nvidia runs the chips over 100°C.
I'm sure someone will risk a 3090 and stick an active backplate on it - a liquid-cooled one from, say, EK - and then we can see if it's the VRAM.
Power draw goes up as resistance goes down, until it gets so low that something shorts or exceeds a component's limit and pops it, such as a fuse.
inb4 Amazon has accidentally found a voltage limit bypass
So... this could be good news?
Amazon have said they will patch in a menu frame limiter.
It's only the 3090. It should affect other models if that were the case.
... or Amazon snuck Bitcoin mining onto high-end GPUs, expecting people not to notice while playing the game.
@@saywhat9158 smarts.
"i reshot this video because i wanted to make it shorter but i failed" Classic buildzoid
should have just swapped relive for nvenc, would have made the content better
The "first time OCP triggers, second time the fuse blows" sounds a LOT like it could be that the fuse is undersized in comparison to the OCP or your theory, that the second spike is larger. But I think it's equally likely that the second time the fuse is already warm and thus the second time it manages to blow before the OCP triggers.
Yeah, there seem to be a lot of reports of it happening in menus, and multiple claims that hard-limiting the max FPS in the Nvidia driver settings avoids triggering OCP and greatly reduces power draw in this situation. And that kind of static content probably stresses cache and memory rather than Vcore proper (and there's no game-engine interaction limiting FPS either), so it kind of makes sense that it could be one of the smaller primary rails.
Definitely not Amazon's fault, though the game should probably limit FPS in menus, especially since it was apparently reported to them during the alpha (so they had time before the beta). OTOH, so should LOTS of other games; it's amazing how many games burn an insane amount of power in a completely static start menu, even if they don't trigger this. It's surprisingly common.
EVGA is definitely at fault; Nvidia COULD be partially at fault if they approved it, but the problem could well be in components that Nvidia couldn't audit (fuse size, shunt size, and OCP settings are all things EVGA could change without Nvidia being able to find out).
That was informative
Thank you so much
We're seeing reports of other cards having issues as well incl AMD cards allegedly. If they're confirmed then that's a serious issue with the game not only the cards.
I don't know about SMD fuses, but "standard" fuses can be weakened by high currents that are too brief or too low to actually blow the fuse. They can survive several near-overload events, but they will eventually blow, especially when hot.
So maybe the fuse ratings are on the low side, but they didn't pick it up in initial testing, because you have to overload them a few times before they will blow prematurely.
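That "weakened fuse" idea maps onto the usual I²t (Joule-integral) fuse metric. A hedged back-of-envelope in Python - the melting-I²t budget, the derating factor, and the spike numbers are all invented for illustration, not from any datasheet:

```python
def i2t(current_a, duration_s):
    """I^2 * t of a constant-current pulse -- the standard fuse let-through metric."""
    return current_a ** 2 * duration_s

MELT_I2T = 40.0   # hypothetical melting I2t budget (in A^2*s) for a small 20 A SMD fuse

spike = i2t(65, 0.005)       # one 65 A, 5 ms transient: about 21 A^2*s

survives_when_new = spike < MELT_I2T          # under budget -> the element holds, barely
blows_when_fatigued = spike >= MELT_I2T / 2   # with half the budget fatigued away, it pops
```

So a pulse that a fresh fuse shrugs off can sever an element already degraded by earlier near-overloads, which matches "OCP trips the first time, the fuse blows the second".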
I mean, fuses are always a bit random - EEVblog did a video testing fuse pop times and they were all over the place, but that's an expected part of their design. Could improper cooling of that area of the board cause these fuses to warm up as well? With Nvidia's new cooler design, is there some oversight by board makers that causes these fuses to run warm? Until we see a card that has popped, I guess all we can do is spitball.
Oh and sorry if I'm completely off base, I'm an aspiring electrical engineer (starting college next month) and I like to think about these kinds of things.
@@andrewcharlton4053 I think the worry is that the hardware of the card should NEVER allow more power than it's rated for. The hardware (voltage regulation, power monitoring, etc.) can't just get overridden - it's a layer below software. The software can't just say, "Give me more power, I want FPS." That's where the hardware comes in: the power delivery circuitry should always limit or cut any attempt at drawing too much power, no matter the source. In this case, it seems to have allowed more through than rated.
When you're hitting the maximum current the circuit is designed for, you're stressing everything, not just the VRM MOSFETs. There is a minimum inductance needed to achieve a specific max current in a switching regulator, and when that max current is exceeded for that inductance, it puts extra stress on the inductors, capacitors, and MOSFETs in the VRM. At such high switching frequencies I wouldn't be surprised if they blew a cap or blew a power stage right off the board, as well as the fuse. If they designed this down to the minimum spec of the OCP, with the minimum acceptable inductors, then they could certainly end up blowing the VRM, especially if the components have also gotten hot and haven't had time to cool. If they designed it for a current spec higher than OCP, then they can easily get away with triggering OCP again and again. So it's likely a design flaw of under-spec'd components, and my guess is the inductance isn't high enough for that load at that frequency, causing dead MOSFETs in the VRM...
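To put a rough number on that inductance argument, here's the standard per-phase buck ripple calculation as a Python sketch. All component values are illustrative, not from any real card:

```python
def peak_phase_current(i_avg_a, vin_v, vout_v, f_sw_hz, l_h):
    """Peak inductor current in one buck phase: average load plus half the ripple.
    Ripple di = Vout * (1 - Vout/Vin) / (f_sw * L)."""
    ripple = vout_v * (1 - vout_v / vin_v) / (f_sw_hz * l_h)
    return i_avg_a + ripple / 2

# illustrative: 12 V in, 1.0 V core rail, 500 kHz per phase, 150 nH, 30 A average
i_pk = peak_phase_current(30, 12.0, 1.0, 500e3, 150e-9)
# ripple is ~12 A, so the phase peaks near 36 A even though the average is only 30 A;
# halve the inductance and the peak climbs past 42 A with the same average load
```

The MOSFETs, inductor saturation rating, and OCP threshold all see that peak, not the average, which is why skimping on inductance narrows the margin everywhere at once.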
"We got our best man on the case!"
My man's dissecting this like the Chernobyl system failure report.
My guess would be memory voltage regulator. That GDDR6X seems very experimental in the first place so I would not be surprised if they did not give it enough power budget.
Thank you for not blaming the game! The exact same condition was created in my system while playing call of duty Cold War.
Specs: EVGA RTX 3090 FTW3 Ultra, AMD 5900X, 64GB Corsair 3600, EVGA 1300W PSU, ASUS TUF X570 Gaming motherboard, Lian Li O11 Dynamic XL case with a total of 11 case fans, displaying on a Samsung Odyssey G7 monitor. I did notice the GPU was running at 57°C under full load while playing the game. It had been having a few random crashes here and there before I heard the pop sound that bricked the card. I was actually loading out of the menu into a game when it failed. My opinion: the VRM is 100% to blame. I had already capped my voltage on the custom curve in Precision X1, and it should be fine.
EVGA should hire Buildzoid for their next 4090 series so we can have a proper EVGA 4090 Buildzoid FTW Edition
they already have Kingpin, who I'm sure can design a great card; it's just the price and margin constraints that make it a big problem.
I would love AHOC to get a shot at that.❤️
Not the first time nV has had cards die from uncapped menu FPS. Star Trek Online did it with one of their dual-GPU cards.
probably the 590 or 295.
I blame the board maker
@@ActuallyHardcoreOverclocking Wasn't the 295 AMD? Unless there was an NVidia version on their 200 series. :)
yes, GTX 295
@@bluedragon219123 zoidberg doesn't make mistakes
I've definitely never seen my system as hot as when I played New World today. Watercooled cpu/gpu/motherboard loop with 2 radiators, normally sits at 50C under load. My 2080ti was 75C with the GPU at 100% usage the whole time. It never dipped below that until I lowered settings aggressively and limited to 60 fps. I'm not suggesting that the game killed hardware, but it's definitely very demanding, much more so than you'd expect given the graphics in the game.
That sounds insane load
omg that sounds fucking terrible!
Yikes man.
Apart from the menu thing (and a lot of games do that) it's not that uncommon to have a game using 100% of the gpu all of the time. I don't understand this, i don't think people have been using their cards right if this is now a new thing.
@@Jeffcrocodile Since it's not straight forward to see if liquid is moving in my loop, I usually have a temp monitor running on my second monitor, and this is not common across the mostly MMOs that I play, including WoW, FFXIV, ESO, and GW2, at least for my system. GPU usage will linger generally above 70% utilization, but will frequently spike to 100% when there is a lot going on, but it will die down after that. In New World, no matter what was on my screen, including the many fetch quests where you're running for 5 minutes straight with no mobs on the screen, I was still pegged at 100%. I am not suggesting that this is bad, but on my pretty high end setup, I wasn't interested in finding out how hot my GPU could get.
It actually vaporizes the copper and when it condenses and solidifies it forms those copper balls. It's more common in industrial power component failures (I've worked in power distribution for years). It's like copper dew drops.
You have been killing it lately with your reactionary content, its been great to see and I am glad it helps give you a procrastination break from your planned vids :P
Thank you for actually giving information, I subscribed.
Most tech youtubers be like : ''OMG !!!! Amazon's game is destroyings your hardware.'' With both arms in the air.
It is indeed the fuses blowing. I've seen pictures of that on Tweakers, a Dutch hardware website
I saw a streamer playing this like an hour ago, did not know there was an issue. Bricked my 3080 TI -- after 20 minutes of playing. It's a real thing. System has been stable for over a month, Played multiple other games today / last week.
Nvidia: We have shunt resistors!
Amazon: Tell me more.
Amazon: STOP RESISTING!
Shunt resistors: 'OK"
Loved the video. New subscriber here. Awesome explanation, style and great voice.
Should NOT be able to happen. Totally the fault of the graphics card manufacturer/Nvidia, NOT Amazon. There ought to be no software configuration that can damage the card's hardware; the BIOS should make sure that just cannot happen. And if this is happening by accident, it is only a matter of time before a malicious actor creates software to deliberately damage graphics cards. BIOS updates are needed as a priority for the 3090 series to prevent this type of fault.
Cool, i'd use such software to then return the broken card to the seller and get a full refund meaning i can buy better and better GPUs with the same money. Pretty much a free GPU.
Update on my Strix pulling 505 watts: it died 2 days later. It started artifacting, the screens would lose signal and come back with no sound, and games couldn't get the card to boost anymore. The ASUS rep who set up the RMA said they'd had a lot of calls about cards dying from New World.
If it is a fuse, BZ is right, the fuses are protection/diagnostic indicators for HARDWARE failure. If it is the fuses causing this issue...it shouldn't be.
The only thing that might show this is not a fuse, is that it seems the issue may be power supply OCP kicking in for some people. That indicates a short to ground on the PCIe power input of the card...that's a strange and terrible problem in itself.
@@Android-ng1wn those are Apple fuses, and as we know Apple fuses never blow lmao
@@Android-ng1wn Yes, I was just joking about what Rossmann always says about fuses in MacBooks: fuses aren't there to protect the component from failure, they're there to protect everything else from burning when the component does fail.
Just a reminder that nvidia's drivers have killed GPUs in the past.
It isn't always hardware failure, software can kill GPUs.
@@Android-ng1wn Nvidia's drivers have killed GPUs in the past, it's not impossible and has happened.
So yes, software gone bad can kill your GPU.
Look up the 169.75 drivers from nvidia, and I think that wasn't the first time.
I appreciate you doing multiple takes to shorten the videos, I always watch at 2x speed and I'm able to finish them that way. My favorite quote was: "blowing up VRMs and Nvidia go hand in hand" :D
This seems to be a particular issue with some RTX 3090s from EVGA. I remember watching a dumbass YTber who IIRC said that his EVGA RTX 3090 literally *smoked, but worked fine until it black-screened and made a small pop*
Glad you also take the position that software (other than firmware) should never be able to break your hardware, no matter how it misuses it. Jayztwocents took the position of blaming Amazon for this, which I can get that maybe they shouldn't have gone into open beta when they knew of this problem during the Alpha, but it's still not their fault that NVidia is allowing the cards to draw more power than they can handle.
While I agree with your comment, it's very bad software engineering to have your software trip OCP at random all the time. In this case it exposed a more serious problem on Nvidia cards, or more particularly on the EVGA FTW3 design.
I think the 3 parties have shit to get together.
EVGA>NVIDIA>New World devs
This is the blame order probably.
Since software can _directly_ control the hardware, it is inevitable that software can (and will) damage hardware.
Great video. Just to let you know, the new 3090 FTW3 cards come pre-installed with the 500-watt BIOS that they released as an update👍
yeah.. and most people still can't get over 450W on the FTW3 cards even with that BIOS. I love EVGA but this gen is =(
500 watts for a video card is insane. Not a big surprise some of them have problems.
@@joemehnert7590 mine only gets 448W, and the power rails are completely unbalanced: 110W/130W/90W while pulling 80W+ out of the PCIe slot
@@aaronjessome1032 Isn’t PCI-E supposed to be 75W max? That’s out of spec and stressing your Motherboard/Mainboard.
Max I've seen on mine is about 430w. I put a hybrid kit on it and got the normal and oc bios from the evga forum. I undervolt it most of the time atm, mainly to not heat my room so much. Even undervolted the performance is still pretty darn good
From what I have heard it's the dedicated fan-controller µC IC that apparently died (more along the lines of exploded than just died).
Something to do with a combination of high power load, very high refresh rates (bursts of 4-digit fps) and how EVGA implemented their fan control, which can go into thermal runaway under those conditions.
It's not the first RTX 3090 design flaw. The fact that you could use the backplate to grill a steak because of the extremely high memory junction temperature is unacceptable. In my opinion it should never be possible for software to 'break' hardware. If FurMark, mining, AI stuff, Prime95, etc. can destroy hardware, it's a design flaw! I'd really like to know whether Nvidia actually tested the memory properly with a synthetic load, a stress test specifically designed to 'burn' the memory ... I bet they didn't! Testing hardware in a 'realistic' scenario because that works for 99% of people is not an excuse. Hardware should not be designed to run so damn close to the red line!
On an RTX 3090 FE with an EK Quantum Vector waterblock (front WB only) + glued-on passive coolers and a fan on the backplate I can STILL push the memory junction temperature to 90°C with a non-gaming workload. A good example is VRAM-demanding AI stuff or mining Ethereum. I have two 3090 FEs and without the watercooling both did start to thermal-throttle due to the VRAM being 110°C hot ... in an open case ... fans sounding like a starting airplane at 100% ... ramping up the fans did not even help, maybe minus 2°C? ... obviously they are on the front and can't cool the back properly. At least the waterloop did drop the peak temperatures by 20°C. An actively water-cooled *sandwich* design would be the best way to cool these cards. But you know what that will cost you for a single RTX 3090 FE?
Front 330€ + Back 250€ = 580€ only for ONE waterblock to fix a problem that would not even exist if it was designed better!
That doesn't mean the software could do better.
An fps cap should be enabled by default in a game engine.
150 fps for a non-shooter game should be good. But it should NOT be hardcoded for people with 144Hz+ monitors.
Nobody needs a menu rendered at 1000 fps!
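For what it's worth, a default frame cap is only a few lines of engine code. A minimal sleep-based sketch (function names are hypothetical, not from any real engine):

```python
import time

def run_frame_limited(render_frame, max_fps=150.0):
    """Call render_frame() in a loop, sleeping so we never exceed max_fps.

    render_frame should return False to signal exit. Sleeping instead of
    spinning means the GPU (and CPU) idles between frames rather than
    rendering a menu at 1000 fps.
    """
    frame_budget = 1.0 / max_fps  # seconds allowed per frame
    while True:
        start = time.perf_counter()
        if render_frame() is False:  # renderer requested exit
            break
        elapsed = time.perf_counter() - start
        if elapsed < frame_budget:
            time.sleep(frame_budget - elapsed)  # give back the leftover time
```

Real engines use higher-precision waits and pace against the swap chain, but the idea is the same: the cap costs nothing when the game is already slower than the limit.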
I'm just gonna point out Nvidia has a history of cooking cards... Do we want to talk about how shit the VRM on the GTX 590 was? How about the reference 780 Ti and 980/980 Ti VRMs... A bit different from letting the VRAM fry, but on their "reference" PCBs they have a history of being perfectly happy with borderline bullshit... At least reference VRMs are now almost reasonably good, not just borderline... But yeah... they have a history of this shit
Just a half educated guess from my side:
Some older graphics cards (I mean really old, around 2005) would start a really annoying coil whine when a game would run at hundreds of fps. I guess this could be caused by VRM switching frequencies and/or other side effects when the GPU processes so much data?
My point is that IF that's true (higher frequencies in the DC circuitry during high fps), then this might be a problem of rogue AC currents on the board. When dealing with power regulation, these ripple currents may only flow through certain paths. Detecting those can be very difficult. These currents can (and will) damage fuses they are passing through. So if there are AC currents running wild and the fuse really did blow, this might be the culprit.
At my job we had a similar issue. Although we are dealing with high voltage, our system also runs on DC power. We had one particular fuse blowing all the time. It wasn't related to the part we were testing. After weeks of investigation we detected exactly this scenario. The power stage of a DC/DC converter was pulling 20A DC. But with a specific switching frequency there was an AC ripple (around 600A! AC) between this component and the terminal block in the power distribution unit. For some reason (our electrical engineers aren't sure why; guesswork was in the realm of standing vs travelling waves, etc.) this ripple was passed over to an inactive component, through its fuse. It was this fuse that was destroyed over and over again.
I think it would be interesting to see this tested by someone. GPUs are quite extreme these days with their power consumption and demand in regard to stable voltages. So AC ripple could be a problem.
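To illustrate why a ripple like that is so destructive: heating in a fuse or trace follows the RMS current, so a large AC ripple riding on a small DC load dominates completely. A quick sketch using the commenter's 20 A DC / 600 A ripple numbers (treating the 600 A as the peak of a sinusoidal ripple is my assumption; the comment doesn't say peak or RMS):

```python
import math

def rms(samples):
    """Root-mean-square of a list of instantaneous current samples (amps)."""
    return math.sqrt(sum(i * i for i in samples) / len(samples))

# 20 A DC with a 600 A peak sinusoidal ripple riding on it,
# sampled over one full ripple period:
n = 1000
dc = 20.0
ripple_peak = 600.0
samples = [dc + ripple_peak * math.sin(2 * math.pi * k / n) for k in range(n)]

# Heating is set by the RMS, which the ripple completely dominates:
print(round(rms(samples), 1))  # ~424.7 A, vs only 20 A of DC
```

Analytically this is `sqrt(I_dc² + I_peak²/2)`; a fuse sized for the 20 A DC load never stood a chance.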
Amazon Sales: "Damn, GPU cards are stacking up in the warehouse."
Amazon Game Development "Ok, hold my beer.."
Dude, I was just thinking this too.
not realistic but I was laughing 😁😂😂
If only there were GPU's stacking up in warehouses... I paid more for my 3080 than the listed retail of the 3090- and I got it cheap compared to most.
also could you plz look at the Board differences between Evga 3090 ftw rev 0.1 vs rev 1.0 . evga seemed to have changed things on their boards since a lot of rev 0.1 seemed to just randomly die due to the same issue
do we know the general timeframe for when rev 1.0 was released?
@@stephenta9992 around February from what I recall , could be march
I had 2 of these die after a handful of hours gaming on them back at launch. Refunded the 2nd one and haven't been able to get my hands on a card since.
should have paid attention to your settings. People who run limiters aren't frying cards... you let it run to 4k fps... yeah, shit's gonna break.
@@crisnmaryfam7344 both died running a PS2 emulator capped at 60 Hz. Try again.
@@crisnmaryfam7344 Did you literally not watch the video or were you thinking about something else while Buildzoid was talking?
@@crisnmaryfam7344 I respectfully disagree with your take. These GPUs are designed with safeties in mind that are supposed to prevent scenarios like this from happening; it's what this video is all about.
@@crisnmaryfam7344 you missed the whole point of this video...
From my POV, this is very simple: if a piece of generic downloadable software can fry a mass-produced consumer GPU then the GPU design has a fault somewhere (be it at the firmware, driver or hardware-level).
All Nvidia AIB partner designs are approved by Nvidia.
Nvidia is the only one to blame here for charging higher and higher GPU prices while delivering shoddy designs.
I would not be surprised to see lawsuits coming out of this in the near future - we've seen lawsuits go through for less.
As for frame-limiting, while I do agree that there should be a default frame limit at the driver level config just to avoid wasting power consumption for your normal end-users, for me, uncapped Furmark is a mandatory test to ensure a graphics card's cooling solution is able to handle extreme power draw for an extended amount of time, which can occur naturally in some gaming instances.
There is something wrong with EVGA cards; my RTX 3080 FTW3 ULTRA stopped working properly after just 2 hours of playing Cyberpunk. After that, with every game I tried: instant black screen.
yeah, I also think this would happen in other games too, just a matter of time. And since NW is so hyped, messages about dead cards started to appear publicly.
I wonder if this has something to do with the cards crashing when they were launched. Maybe they fixed this quick and dirty back then by turning the OCP a bit down.
According to Igor's Lab, the card dies because the fan IC fails (burns).
With a frame cap set, the cards do not die; ultra-high frame rates in this game cause this.
cards should have limiters so no game can kill them; if not, then it's the manufacturer's fault. Funny thing that people started limiting fps on every card they have after those fried, even though this most likely happens on 3080(Ti)/3090s. And are afraid to leave the game on the main menu screen in the background.
th-cam.com/video/wEJ0u6wMAgM/w-d-xo.html
frame rates don't magically kill cards, sorry. fps has nothing to do with the fan controller. Not even close to being related. Igor doesn't have a clue. This time, he's being fucking stupid. Generally he's pretty smart, but not this time.....
@@goblinphreak2132 He talked to someone from EVGA...
@@ole7736 yeah, don't care. EVGA will simply replace cards because as a business they just do shit like that. They want to keep their loyal customers, because they always buy EVGA. So EVGA goes "out of their way" to keep customers happy. It's business 101.
As far as him talking to EVGA: they aren't going to tell him anything... most likely they gave him some generic response. I read Igor's website, which literally says "he doesn't know", but magically it's the fan controller. RIP. He GUESSES that it's the fan controller. In fact, everyone on the Reddit forums complaining that their card died said their fans ran at 100%... if the fans are running at 100% then the fan controller isn't the issue.
Exactly, Buildzoid. Great explanation! I was thinking the same yesterday. No software developer (or Amazon) is responsible for your hardware "blowing up".
Looks like EVGA has power-limited this FTW3 card too high, tripping OCP, which no software on "factory settings" should be able to trip (OCP which also seems calibrated incorrectly vs the fuse, which is very bad). Any "consumer grade" hardware should be able to hit OCP many times without hardware issues, and never hit OCP on stock/factory settings.
Seems like the 3090 chip is tough to handle. It just eats power... when the software can take advantage of the chip's potential. I think this issue will turn into an expensive situation for Nvidia/EVGA (maybe a recall of cards).
Good work. I have been in New World with my 6800 XT, no cap on the fps, and no issues. I really think if there is a problem, it's a hardware problem.
Other people with AMD and Nvidia have been in New World with no cap. Most people playing it in fact.
As far as the "little copper balls" go: yes, what you're seeing is the aftermath of the copper going liquid and then an effect either the same as or functionally similar to surface tension in water taking over. As a long-time welder it is extremely easy to observe this effect when welding lighter materials like aluminum, and I have seen it occur with copper. I'm unsure if the weight plays a key role or more so the metallurgical composition, as steels seem to drip and run apart rather than pull together.
Pretty sure it correlates to higher power limit cards being at risk. Probably power surges with the game generating a load similar to furmark or something. Especially considering EVGA's cards have pretty mediocre VRMs compared to the Nvidia FE when it has a much higher power limiter.
yeah, if the fault is caused by overloading MSVDD or VMEM then primarily the 400W+ cards, which have basically bare-minimum MSVDD/VMEM rails, would be affected.
@@ActuallyHardcoreOverclocking which are the EVGA FTW3 cards, coincidentally... this is just something silly that shouldn't even be possible in the first place.
I have a 3090 FTW3 Ultra and while mine didn't die completely, I was able to swap to the lower-TDP BIOS and it seems to be working fine, although now the BIOS switch does not switch back to the other BIOS.
yeah, if a game can kill the card it's definitely the card manufacturer's issue. Curious to see how many dead 3080/90s we will see in the near future because of memory overheating.
Yeah, I replaced the utterly garbage Gigabyte VRAM thermal pads on my 3080 and saw 30°C+ VRAM temp drops. I can't imagine VRAM temps of 100°C+ for sustained periods will work out well over time.
@@Battleneter it's insane how big companies can make such big mistakes on such expensive products. Thermal pads are nothing in the price of a 3080/3090.
@@mirage8753 planned obsolescence
As an aside, my guess is those little balls of metal when a power stage blows are probably what's left of the QFN/etc. package's lead frame... it would probably fail a lot sooner than the board's copper plane; there's not much area/mass there.
Counterfeit power stage components found their way into supply chain. That's my guess =)
I hope you're wrong, but I'm sure it's happened whether or not it's the cause of this.
@@OGBhyve I think other people much smarter than myself have already provided superior explanations ;)
I haven't tried New World on my FTW3 Ultra 3090, but I got the black screen with fans spinning to 100% while playing American Truck Simulator; it happened a few times. I had to turn the power off and on again at the socket. It hasn't done it in a while, but it did happen a few times. Thanks for the video BZ.
Please do an updated version of how to tune your memory with the DRAM calc, especially since Ryzen 5000 series performance depends so much on timings. I see a lot of videos of people rambling and cutting the videos short. I'd like to see one by someone who knows what they are doing and can explain it as they go through the process.
A few months ago I returned a Gainward RTX 3090 because it was constantly triggering OCP on multiple games at stock clock. I swapped it for an ASUS RTX 3080ti and haven't had any issues since.
That’s actually an interesting thing to mention. The 3090 FTW3 Ultra has a 500W XOC BIOS. However, there’s a channel called Griffin Gaming who saw his 3090 FTW3 Ultra die like this on day one playing Doom Eternal I believe.
You have to factor in that it is normal that about 2% of any GPU model will be faulty and require a return. This has happened to me twice (different generations) and it took a few days in each case for the fault to manifest.
The replacements, like for like, worked fine the whole time I had/have them.
How did the EVGA FTW3 3090s and 3080s get approved when they have power-balance issues?
i don't think nvidia really cares or investigates the designs that much. they just want the $$$.
Its probably repeated spiking rather than one big spike that caused this.
It may be hopping on a hot-button issue, but getting a clear understanding of what's going on is key, I think, and you are the kind of source that can go a long way toward dispelling people's concerns.
Now, EVGA becomes famous for cutting cornors and actually making those cards explosive. (MSI is on the edge of that, but hasn't crossed it)
meanwhile I own an MSI 3090 and I've played New World and my 3090 hasn't blown up.... facepalm.
"Cornors"
I have a 3090 FTW3 ULTRA, v1.0 replacement, and I've seen power-limit violation before IN 3DMARK PR! It still under-draws on 8-pin #3, so I run a vBIOS mod that lets #1 & #2 go over 150W so the card can hit its 500W limit. These days, it obeys that. However, there was a short-lived update to 3DMark a couple of months ago that caused low PR scores on 30-series cards, and I noticed the card hitting 520W. After the hotfix, back to normal. So, the card's power limit can be violated by application code.
the drivers or the card itself should've been able to prevent it from dying
These smaller VRM sections, for fans etc., surely aren't running the same drivers/MOSFETs. If improperly configured (e.g. component shortage -> improper substitution?), insta-slamming all the fans could cause a very significant inrush current. If it's fused on the hot side, the fuse may blow first.
Even super-fast fuses are still mechanical weak links; their failure mode isn't hard binary. If OCP triggered the 1st time, it may still have materially degraded the fuse element, even if it (barely) didn't sever. The failure mode of a tiny flat-pack 10-20A fuse can be pretty brutal for the PCB underneath, especially if there happens to be a large, beefy chunk of metal on top of the fuse. All those ragequitting pixies gotta go _somewhere._
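A rough way to think about that cumulative damage is the fuse's I²t (melting integral) budget: a spike that doesn't blow the element still consumes part of it, and datasheets derate the budget heavily for repeated pulses. A toy bookkeeping model (all numbers illustrative, not taken from any real fuse datasheet):

```python
def pulse_i2t(current_a, duration_s):
    """I^2*t energy-equivalent of a square current pulse."""
    return current_a ** 2 * duration_s

def surviving_fraction(pulses, melting_i2t, derating=0.25):
    """Crude fuse-life bookkeeping: each pulse 'uses up' part of the melting
    I^2*t budget. The derating reflects that repeated pulses fatigue the
    element at levels well below the single-shot melting point."""
    budget = melting_i2t * derating  # usable budget for repetitive pulses
    used = sum(pulse_i2t(i, t) for i, t in pulses)
    return max(0.0, 1.0 - used / budget)

# hypothetical fuse with a melting I^2t of 200 A^2*s,
# hit by four 30 A spikes lasting 5 ms each:
spikes = [(30.0, 0.005)] * 4
print(surviving_fraction(spikes, melting_i2t=200.0))  # well below 1.0 already
```

Real pulse-derating curves are empirical rather than a flat 25%, but the qualitative point stands: a fuse that "survived" the first OCP event may already be mostly spent.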
After my expensive EVGA PSU went kaboom, I don't view them as quality products.
Which PSU blew up on you?
@@jtland4842 it was the G2 1000w
Love these videos. For someone who doesn't know all the acronyms, I'd love if you quickly explain them the first time they come up. I wasn't familiar with MSVDD and it took me a surprising amount of googling before I came across something explaining it was connected to core cache.
If I ever get a gpu that uses way more than 200w I'm gonna undervolt it out of fear.
I have one of the 3090s in question. I undervolt it to 825mV @ 1800MHz, underclock the VRAM by 500MHz, then put the power target at 70%. It downclocks the sweat on my balls by at least 30%
Use a global fps limiter. It is beneficial to stay within g-sync range, anyway.
@@TanteEmmaaa If you are like me and play AAA games with high-highest settings you're never gonna hit that VRR ceiling anyway.
This sounds like Sin part 2. For those that weren't around or just didn't care about gaming stuff when it happened: back at the turn of the century there was a fairly hyped-up game called Sin that began to kill video cards. It wasn't really the game's fault; the cards had just never had anything stress them so thoroughly. The cards would burn themselves out over the course of a couple of hours. After that fiasco, cards were produced with much better thermal protections.
Out of curiosity: I had a Strix 3080 blow up on me a few weeks ago. An R005 resistor blew and seemingly burnt the PCB, as when I RMAed it they sent me a new card back. I'm wondering what would cause this sort of thing to happen? I was just using the card on stock settings in a game of Valorant, so not even anything intensive. Just a faulty component? They sent me a picture of the board in case anyone was curious: imgur.com/h2WqigT
It's theoretically possible it is doing asynchronous shader business at too high a tick rate and is allowing the GPU to cook itself, sometimes called a 'heat virus'. The GPU shouldn't allow this, but at the same time, the game should never need to run off thread gpu math without reasonable throttle.
I would have thought this is caused in part by some kind of harmonics in the current-sense circuitry, maybe worsened by the very fast current slew rate; as such the current spikes very high before the current-sense circuitry picks up on it and pulls down the clocks... If you get this multiple times per frame I'd expect you'd end up with a lot more current going into the card than intended...
Very much a possibility.
But a VRM especially should never, ever be able to oscillate...
@@alouisschafer7212 anything with a feedback loop is susceptible to harmonics at certain frequencies. The main thing is ensuring that the intended/expected operation will not have any fluctuations in the frequencies that can potentially trigger the harmonics.
@@wewillrockyou1986 yes that is true.
I still think this goes back to the issues found at launch with crashes due to the mix of high- and low-quality caps on cards. EVGA was one of the ones who used more of the lower-quality caps. Founders and ASUS cards have not popped up as issues, and both of those opted for more of the higher-quality caps, with ASUS using all high-quality caps.
Uncapped fps in menus is honestly just bad game design. At least keep it capped to the monitor's refresh rate by default and let the user decide if they want to uncap it. Otherwise the card is generating unnecessary heat and drawing unnecessary power if you leave it uncapped, and many people leave their game running in the background.
Of course it's good practice to not ask any more of the hardware than is strictly needed, but no matter what a game asks of the API, the hardware shouldn't die.
Of course it's not the fault of Nvidia! ROFL!
@@miyagiryota9238
I never said that. I specifically did not put the blame on any of the parties involved because the cause wasn't quite clear at the time. If you read Igor's Lab latest story about this, it actually seems to be that fan controller on EVGA cards is the culprit - at least for 3090s made by EVGA.
Regulator OCP only applies to the output of the power stages, in my experience. Each of them could well be at some load below its individual OCP limit while the total is well over what can be supplied by the input, which pops a fuse or burns a power plane.
I agree the board current limits should kick in, but given it's software-driven, a sag in supply voltage could well lock up the GPU, preventing board protection from kicking in.
Man every day I'm glad I wasn't able to get a 30 series card.
Side note hopefully this brings about some change to all cards moving forward.
It will just be more hard limits, enforced stricter. Not necessarily a good thing.
The game makes CPUs and GPUs glowing hot! My case got so warm you could barely touch it. And it started to smell burned.
Would love to know why it's mainly FTW3 models. I seriously don't think it's the game; don't know how it could be. Seems like EVGA's bad design that they refused to fix so they could sell Kingpin cards.
The pins of the 8-pin power sockets on the front of the card shown in the TechPowerUp photo could use a bit more solder 2:47 ... or doesn't it matter?
actually, if that game causes this issue I would like to have it as a stress test!
just use FurMark, it can't be worse; it shouldn't go past 100% lol
@@Jeffcrocodile Manufacturers should have tested their cards with FurMark and made sure they pass at 100%, because their 2080 Ti cards had issues with FurMark before, so they should have learned the lesson.
lol, 100% agree with you. Manufacturers have been pulling this BS for a long time, like with FurMark, trying to convince us that a "stress test" should be a controlled, specific load defined by them, thus detecting software like FurMark and blocking it, to cover their design flaws. I mean, does the word stress mean something else now? lol, maybe they shouldn't be calling them stress tests, but "just enough" tests.
@@Jeffcrocodile I'm almost sure that if FurMark could cause it we would already know, because everybody uses it for testing.
I've run New World on my setup, which includes a 3090, and I didn't face any issue or crash during a 10-hour straight session. I'm using a PNY 3090.
I suspect that somebody fat-fingered one zero too many in the controller configuration, setting it to 1000A OCP instead of 100A.
what made the fuse blow the second time was that it already had residual heat from the first run; I see this a lot.
When I heard this you were the first person I thought of.
That should be caught on so many levels.
From silicon design through software and drivers, before even hitting overcurrent protection, not to mention blowing a fuse on its own PCB.
I'm sure it's Amazon trying to take over the world with an AI net using your GPU's extra power. The name of the game kind of says it all. 😁
A guy I worked with who worked for a power company would collect balls of aluminum that formed when a substation got so hot it melted the structure the substation was sitting on. When I asked about it, he told me "it happens when the water in the air we breathe forces the aluminum into a ball shape by boiling off super quickly". I don't know how true that is but... he had a full shelf of them and most were the size of softballs.
EVGA's VRMs blowing again...
It's apparently the fuses, which makes me think you should be concerned about what it's doing to the cards without those fuses.
Underrated creator. OMG bruh, I just entered a whole New World just to find out about this new GPU breakdown. Like me in safety, he went in depth to assess, analyse, benchmark and provide the root cause for what happened. Even though I'm not good with electronics, great explanation for the layman.
Back then: Can it run crysis?
Now: can it run new worlds?
It can. Once. Briefly.
I like your explanation better than any other channel's, because it is absolutely stupid to blame Amazon for this. The cards are supposed to run any software without failing. If an RTX 3090 or whatever card fails due to some game (which is software) running the card like crazy, that's not supposed to happen by default. They must think through every single scenario before selling any hardware to the end user and test it really well in each one (extremely high fps, extremely high power draw and everything else I can't think of because I'm not fucking Nvidia). And the end user should not be able to blow up the card at its stock settings no matter what game or software they run, unless they play with the card's clock speed or voltage or whatever. This is 100% Nvidia's fault. I absolutely hate Amazon for other reasons, but blaming Amazon for this is ridiculous. Nvidia should either replace all the failed cards or compensate the customers (not EVGA or other AIBs, but Nvidia directly) by refunding the full amount. It's 100% Nvidia's fault for approving a PCB that wasn't tested well.
Buildzoid each time he sees the length of his videos going a bit over what he’d like them to be:
“Damn, the god that developed me really forgot to implement the Over Rambling Protection! Let’s start slamming right into it!”
There might be more to it. I have a 3090 XC3 Ultra, water-cooled (both sides). It had issues freezing with black screens and the entire computer locking up until I pulled the plug.
After trying out a whole bunch of solutions, what made it run stable again was replacing my ~7 year old 1200W Supernova with a brand new Dark Power Pro of the same wattage.
Maybe the power delivery on these cards is skewed somehow in the way they draw power from the PSU.
Funny, I got an email today saying I was going to get charged for a pre-order of Amazon's New World when I have never, nor will ever, pre-order software.
prob phishing, I hope you didn't click it.
@@M0EJ0EKING No, it's not. I accepted their beta test offer years ago and wonder if they had that buried somewhere?
It was reported that this is happening with both current and previous generation AMD GPUs as well.
This is a VERY strange series of events. One of the ways (that I can think of) to work out why this is happening is to monitor the game and the GPU at the same time and wait for the shutdown. Then dig through the game logs and the GPU logs around the time the event took place. Do it with several different GPUs and the answer will show itself. Getting the game Developers and the Card Creators in the same room could be a bit of a logistical nightmare. Anything's worth a try though right?
I barely understand what he is talking about, but I feel smarter by just listening
Get your lab coat on with all those pens in the pocket and listen with intent!🔬😆
You can fry the RTX 3090 in virtually any game. If you take particle GFX for example, it drops the fps considerably and will eventually cause the card to die. I did it recently with my 3090 XC3 Ultra gaming on WoW.
So basically, the 3090's bad design was exposed by the game?
Sounds like the same issues they had at launch that they "fixed" by patching the drivers to pull less power. Seems like they should have actually fixed the problem then instead of putting a bandaid on it.
EVGA's design*
@@basshead. But I assume they have to get approval from Nvidia to have it released, right? So it must have been within the 3090's spec as designed by Nvidia.
@@nathanyt It's impossible to test everything.
@@basshead. said a 2021 game dev, releasing a 0.000001 alpha game state to get money out of the sheep
My 3090 just blew the same way but while i was using a different game. I had just increased voltage in Afterburner from 100% to 112%. Pop... no extreme overclocking at all on the card.
from which manufacturer?
@@MegaChickenPunch Gigabyte 3090 Xtreme
If one game is killing cards now more games will do it later!
The Amazon game uses CryEngine; Star Citizen also uses it (Amazon's port), and Far Cry used an older version of it or something.
Uhhh. They use Amazon's own engine
Been playing many games, even Star Citizen, and they run fine on max settings. New World destroyed my $2k video card. Luckily it's under warranty, but I have a feeling there will be legal issues following this.
cryengine has word cry in its name for a reason :D
@@drganknstein I bet you once this is all analyzed there might be a class action against Amazon
@@marcusborderlands6177 Amazon uses Lumberyard, which is a branch of CryEngine.
Amazon paid Crytek for it; I'm sure Amazon has done some custom stuff to it, but it's still from Crytek.
en.wikipedia.org/wiki/Amazon_Lumberyard
If stress-tests like FurMark can run into power limits immediately and throttle down to prevent any sort of OCP shutdown, or damage, then what's the Amazon Lumberyard Engine doing to sneak past these protections, and blow fuses on stock VBIOS?
Shouldn't these GPUs be designed in a way where OCP can't be reached by any kind of software load to begin with, assuming stock configuration?
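For context on what that protection normally looks like: modern GPUs run a closed-loop power governor in firmware that samples board power and walks clocks down well before a hard OCP trip should ever be reached. A toy, purely illustrative sketch (all names, rates, and values are hypothetical, not any vendor's actual algorithm):

```python
def throttle_step(measured_power_w, clock_mhz, power_limit_w=350.0,
                  step_mhz=15, min_clock=300, max_clock=1900):
    """One iteration of a naive power-limit governor: drop the clock when
    over the limit, creep back up when there is clear headroom. Real GPUs
    do this in firmware at very high rates with filtering and voltage
    scaling; the numbers here are only for illustration."""
    if measured_power_w > power_limit_w:
        clock_mhz -= step_mhz          # back off before OCP ever trips
    elif measured_power_w < 0.95 * power_limit_w:
        clock_mhz += step_mhz          # recover performance when safe
    return max(min_clock, min(max_clock, clock_mhz))

# A burst over the 350 W limit, then the load eases off:
clock = 1900
for power in (400, 390, 380, 330, 300):
    clock = throttle_step(power, clock)
print(clock)  # 1885: throttled during the burst, recovering afterwards
```

The question in the comment above then becomes: why did this loop (or the telemetry feeding it) not react fast enough on the failing cards before the board-level fuse did.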