Beware! This game could brick your GPU! WARNING!
- Published Dec 14, 2024
If anyone with a 3090 is worried about this I'll swap you my GTX1060 for your card and $500 -- then you can game away to your heart's content, knowing that you won't have to worry about this issue. Yes, that's right -- for just your existing RTX3090 and $500 in cash, my GTX1060 could be yours! Act now, supplies are limited :-D
LMAOOO! And if they have a friend that is afraid as well, I have a 2060 to trade.
i have a 1660 ti, i will take your 3090 even without cash! limited offer
Better offer than @AresGodOfWar and @Kitulous Gamedev Channel .... I'd trade a 960, and I'd even throw in a 1070 with a blown voltage regulator lmao
I'll swap you 1660 ti and give you $250, how do ya like them apples
Stonks
Software should NEVER be able to affect hardware in a way that will cause a failure under normal operating conditions
In my opinion, that's clearly an issue with Nvidia's design for their boards. Imagine software (besides an intentional virus) that can literally fry/short parts of your hardware. I know that it looks like this, but I don't think the FPS boost on the menu is what's causing the problem; it's a lack of firmware power-delivery control. Even when overclocking to the max, you should never be able to pass the safe threshold.
100% this.
This is what makes me think Jay has been scalping 3090's. He knows that software should never be able to affect the GPU. And yet he frames this video as if it's the GAME'S fault and the GPU's are okay.
Jay probably scalps 3090's and this is hurting his bottom line.
The *SHOULD* part should be capitalized as well. Sometimes there are hardware and code path interactions that can cause a card to operate outside of its intended operational parameters. This happens relatively infrequently due to the protections put in place in both the electronics and the drivers, but when it *does* inevitably happen and it's discovered and investigated, the typical remediation is a software update, either to the GPU drivers or to particular cards' firmware, depending on what's most appropriate. Occasionally something like this happens though, and it'll take some engineering to figure out the exact cause of the issue and come up with some kind of remediation. Until then, the best you can do is avoid the game temporarily until Nvidia and/or AIB QA engineers can come up with a fix, and if it DOES happen to you, share your hardware loadout and run-time parameters with the QA engineers and the public so they have more data points for figuring out what's going wrong.
@@tron-8140 Show me another game that does this to any GPU and I will accept your garbage theory.
I'm already using a brick as a GPU, it's all I can afford.
when this craziness ends, get an RX 580, find the 8 GB version, at one point a used one was a hundred bucks
Same
@@dbunik44 you can get that now to hold you over. That or a 980ti , they're really good cards.
That defeats the purpose. Just use integrated.
Bro this comment is legit hilarious lol
You learn something new everyday from Jay's videos. I don't even own a 3090, but watched it anyway.
Lol. I don't even own a pc anymore and I'm here watching this
@@TheCrazierz are you telling me that you have willingly left PCMR, is that even possible?
@@mohammednurul9380 I never could afford a top of the line gaming PC so I had to settle with a decent PC that I also needed (justified to myself) for school. Now I make pretty decent money so I recently decided to start buying parts in the next few months and build one. Hopefully 3080s will drop a bit more by the end of the year and that will be the last time I bu
My requirements for PCs tend to be more low-power and reliability based. I have a Ryzen with a built-in GPU so that I can program in Vulkan, but that's about it. Still, it's always interesting to find these gotchas. I fixed a stacking problem in COBOL II on an IBM mainframe because I'd already solved it in MC68K assembler on the Commodore Amiga.
@@TheCrazierz I've been there, brother. Never could afford one at a time; worked my ass off to get a laptop first and then eventually built a system. Couldn't go all out, but got a mid tier. Thanks to Jay and some other YouTubers, I had the correct guidance to do it properly
This sounds like a job for GAMERS NEXUS MAN!!
F yeah, bring the pain!
Steve knows how to pop a cap in the ass of a manufacturer who tries to burn down people's houses and not care about it, NZXT anybody? If the cause of this issue is manufacturer related, in this case EVGA, Steve should go after them too. But if it's proven that the game itself is the cause, and no other games do it, then Amazon will need to pay up millions of dollars in a class action lawsuit to all the people who were affected; it's only right. It will be interesting to see who's at fault here in the end.
@@stellarproductions8888 it likely isn't an issue with EVGA specifically. Other AIB cards were affected as well.
Tech Jesus. Going to bring the pain.
Yeah because Jay only knows enough to say "I dont know could be anything yuk yuk" and yet his title is set to make out that the GAME is at fault. I bet Jay Scalps 3090's.
The sad part is that even bricked graphics cards are going for $800 on ebay.
Because they’re very fixable for people who have basic GPU mod/repair skills. But the design of the card is inexcusable.
lol
Wow
wrong. bricked card = MSRP
@@PwadigytheOddity more like the seller isn't capable of anything beyond the basics......
When Jay says it’s important before the sponsor ad, it’s important.
@VantageGamingYT ez pz fix, don’t own a 3090
LoL
It's a great way to get people to sit through the ad too
@@TortoiseCashFlow lmao if you sit through any of his ads you are a 🤡
@@destruct1214 totally not like he's partly making a living of those ads.
Why are you here again . . . ?
Ahhhh finally, a bonus to being a peasant.
Lol
same lmao just got a 1050ti let alone a fuckin 3090
🤣🤣 I fuck with this comment
@@oHunterr lmao you got a 1050ti
I got a gt 710 🥲
@@charles7075 lmao mine's almost as bad I've got a GT1030
When a bricked graphics card is still worth more on Ebay than its MSRP.
"If you have an RTX 3090..." HA! Alright.
Some people do, just because you can't get one doesn't mean nobody does lol
Yes I do
@@kyler247 mate just shut up lol it’s a little joke
@@HandsomeManNamedTonyHave an MSI one in my backup system (their fans are terrible) Though I now have a 2060 in my laptop.
I'm rocking with 3090
A lot of folks here talking about how this doesnt apply to most of us. This is still an educational video, not just a warning about the 3090
Indeed Cody! One of his best.
Fewer people are playing that game too. Never heard of it
@@calvintrinh4869 It just came into beta... And it was played by hundred thousands
Well the thing is, very few people have a 3000 series card. Even my budget was for a 2060 when the 3000 series came out.
@@Gaukh everyone using a 3090 is for mining, for better games, and for other computing. The ones using it for this game are rich streamers who use the 3090 to play among us and flex. Thankfully, there are only around 50 people who fall under that category
3090 with most games: “Balanced, as all things should be”
3090 with Amazon’s New World: “UNLIMITED POWER!!!!!!”
"UNLIMITED POWER....... until total detonation!" lol
Power Supply: "He is too dangerous to be left alive!"
3090 with Amazon’s NW: IM FUCKING INVINCIBLE😤
Jay: "If you have a 3090..."
Everyone: LoLz good one.
Real men have rx 570, 3090 is for boys
Edit : The 4gb version
I got one on my birthday lol
@@beliall1592 real men have the gtx 920m 😎
real men have intel hd4000 graphics
I got one.
"Let's burn 'em up, they'll buy them again." -Bezos probably
Yea let's all buy in amazon
As soon as he said Amazon, I thought that same thing.
Gotta fund that second space trip
ABAB
Fuck illegal monopolies, protected predatory capitalism, poor quality products, full price games that aren’t finished, FTP chaff plagued with Microtransactions, back door crypto mining and most of all the vampires at the apex of our societal pyramid.
Jay, as a person in software development and responsible for QA, just wanted to say thank you for pointing out the importance of overlooking nothing 100%
I work qa for cars, some of the worst cockups happened because of stuff as minor as a dinged spacer or flaky coating
I used to work QA for a body armor manufacturer.
@@Emanouche and?
Shot the boss during a demo trial?
@@riccardo1796 Haha, I wish. ;) I was often at odds with manufacturing; it was a start-up and the armor was being made by hand, so I often had to send back vests for minor defects, which frustrated them as they tried to speed up to reach certain quotas. But when you're dealing with people's lives, you can't really afford to say: "Oh there's a minor defect here... but what are the odds of the person getting shot right on that spot, right? riiiiight?" lol
It's certainly a 'New World' when GPUs die because of a problem with a single game. I wasn't aware this was even possible. Thanks for making this vid Jay.
yet all other gpus were fine......pppppssss it was the gpu.
@@hvdiv17 i hope that's a joke
It shouldn't be possible to kill GPUs through graphics APIs (DirectX / Vulkan). It's 100% the card's fault.
I had read this thing on Reddit. I didn't read but a few sentences but this guy was explaining similar issues he noticed with another game too
Graphics programmer here (with experience in OpenGL and Vulkan): FPS skyrocketing in menus is pretty normal, and uncapping FPS shouldn't cause damage to the hardware either. Without any limits, the GPU shouldn't even get anywhere near max load, since you're still limited by the CPU, which has to submit the command buffers to the GPU in the first place.
As well, GPUs will adjust voltage based on how difficult the task is. Rendering a menu is generally a really easy task without many shaders involved, so the GPU should be able to blaze through it so fast that the CPU can't keep up, hence FPS caps out at some value eventually.
The big thing that adds load to a GPU is complex shaders and geometry.
I could see the sudden introduction of complex shaders causing the GPU's power draw to spike, though. Combine that with power not being grounded properly and the card being overclocked, and the resulting voltage spikes could definitely kill some components, especially since the RTX 3090 series is known to have wild voltage spikes.
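The menu-FPS point above is easy to picture with a frame limiter: if the render loop sleeps off whatever time is left in each frame's budget, a trivial menu scene can't spin at thousands of FPS. A minimal sketch, not from any real engine (the `run_frames` helper and the numbers are illustrative):

```python
import time

def run_frames(render, target_fps, n_frames):
    """Run n_frames of `render`, sleeping so we never exceed target_fps."""
    frame_budget = 1.0 / target_fps
    start = time.monotonic()
    for _ in range(n_frames):
        frame_start = time.monotonic()
        render()                          # draw the frame (cheap for a menu)
        elapsed = time.monotonic() - frame_start
        if elapsed < frame_budget:        # finished early: sleep off the rest
            time.sleep(frame_budget - elapsed)
    return time.monotonic() - start

# Capping a trivial "menu" workload at 100 FPS: 50 frames take ~0.5 s
# instead of letting the loop spin as fast as the CPU can submit work.
total = run_frames(lambda: None, target_fps=100, n_frames=50)
```

Real engines usually busy-wait the last fraction of a millisecond for tighter pacing, but even this coarse cap keeps a near-empty workload from running flat-out.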
It's nice to see information like this. When Valorant was fairly new, people were saying that their PCs had been bricked because of it, and they say Genshin Impact's anti-cheat can get you banned. But as a person who has played both since their official release, I never got banned in Genshin nor got my laptop bricked, and it's been 2 years already.
it's gonna be a new world for people buying a gpu after their 3090 breaks
If a game breaks your gpu. It is the manufacturer's fault. Not the software developer. EVGA made shitty and defective 3090 GPU's.
@@Tiberious_Of_Elona my goodness.
Quit spamming the comments section with your opinion. Besides, you're being seriously narrow-minded by saying that. While it may very well be EVGAs fault, WE DONT KNOW FOR SURE YET. So quit acting like you are a Saint because you don't think it's the developer.
Yea my evga 3090 ftw3 did this a few months ago, while playing second extinction…
I’ll be glad when the 16GB 3080 super comes out
Dude that would suck so bad
Surely Nvidia’s fault. The drivers and hardware protection should never allow ANY software to damage a card
(Edit: I'm actually quite wrong about how this works but here's the original post for context:) "Its not really that simple as the way game engines work the Drivers have no authority or role to play in detecting game engine = runaway fps triggers throttling. Thermal throttling works but its a completely different part of the pipeline and any driver to Engine based solution would be Hyper specific and Nvidia would have to be responsible to tens of thousands of games a year. Its really Amazon's responsibility. This issue happened back when Starcraft II launched and its menu's had uncapped framerates for their 3d backgrounds."
That's not how that works at all.
I don’t think it’s NVIDIAs fault.
If just running a piece of software bricks your hardware, then the hardware was faulty all along.
Nvidia pretty much destroyed this market so id blame them for everything too.
"Be careful on reddit" GG Jay teaching people about how the internet works
(Josh Discovers the Internet)
"Donna! The internet people are going crazy."
"No kidding!"
Reddit, sometimes I feel like I should go there and join a community.
then I remember the dark side of reddit... And I don't.
This is impressively quick response, jay. You really don't slack man
The new meme: But Can It Run New World?
“Hello darkness my old friend.”
You know what they say about the Devil and his greatest tricks, right?
can it run off with New World
It can, but at what cost?
I’ve come to talk to you again
Yup
Dedicated power plugs FTW: My 3080 was first on a 750W with 2 plugs & 4 connectors, and it had a few near-unstable FPS drops running some benchmarks despite non-GPU power draw being low. Replaced the PSU with an 850W with 3 dedicated plugs & 6 connectors, got no drops and no instability even with OC!
Thats....not the issue here
@@youkosnake That.... could be the issue actually. Poor power supply mixed with too much demand
@@ErimlRGG it's really not. There's several people on the original reddit post who have used multiple psus of multiple different wattage, all with 3 individual 8pins.
I'm patiently waiting on Steve's video to make a judgment. It's not one specific brand, it's not a specific psu issue like that, it hasn't happened with other games. It's through and through related to new world
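One way to sanity-check the dedicated-plugs discussion above is simple arithmetic on connector ratings: the PCIe slot is specified for up to 75 W, each 8-pin PCIe power connector for 150 W, and each 6-pin for 75 W. A sketch (the helper name and example card are illustrative):

```python
# Back-of-the-envelope GPU power-delivery check. The PCIe slot is
# specced for up to 75 W; each 8-pin PCIe connector for 150 W, each
# 6-pin for 75 W. These are connector ratings, not measurements.
SLOT_W, EIGHT_PIN_W, SIX_PIN_W = 75, 150, 75

def max_rated_draw(eight_pins=0, six_pins=0):
    """Maximum power (watts) a card can pull within connector specs."""
    return SLOT_W + eight_pins * EIGHT_PIN_W + six_pins * SIX_PIN_W

# A triple-8-pin card like many 3090s is rated for up to 525 W total:
print(max_rated_draw(eight_pins=3))  # 525
```

This is also why daisy-chaining two connectors off one PSU cable is riskier than dedicated cables: each cable was sized with roughly one connector's worth of sustained draw in mind.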
Jay, I’m beginning to understand your frustration with people after reading the posts here. So few people hear, and even fewer listen. Thank you for putting in the time and effort you do so the 1% of us that actually pay attention can learn something important. 👍
Very true
To me this sounds like the game pushed the cards to their limits, but it would still be a problem with the manufacturing of the card, because there is no way that software kills hardware if the hardware was well made and the firmware is not modified.
Bingo. If that's true then those cards have SERIOUS firmware problems
Disagree. Hardware is nothing without software to utilize it. I've had unmodified Samsung phones brick from a software update issued by Samsung themselves, or computers that bricked from Windows updates.
People are reporting it happening even when running at mid-tier settings. Like Jay said it's still early so that could be people s***posting, but even at max settings there's not really any legit reason for a game to spike to 100% load all of a sudden.
@@doctorwhodude82 Both the phone and windows computers can be recovered by reinstalling their software. In case of the phone that may require additional tools. In any case, that's a software problem. It's highly unlikely for a game to brick hardware. The game does not get to interact with the hardware closely enough to do any damage at all.
Remember that hardware is still physical: electricity flows faster than OCP can react, so there will be damage every time OCP is triggered (with the OCP damaged as well, much like a circuit breaker can only handle so much abuse). If the software is telling the GPU to hoard as many resources as it can as fast as possible, that's the fault of the software. Personally I think all games should have a frame cap. I remember years ago (and they probably still do it) when Minecraft players would boast about their games running at 350fps at a time when 60Hz monitors were ubiquitous. All that power being completely wasted for absolutely no reason. Now that 144Hz monitors are the new 60Hz, why should any piece of software request more frames than the monitor attached to it can display? Especially here in the age of two-way communication between tech, when my computer knows exactly how fast my monitor is because the monitor tells the operating system what it is.
It's like when the flash is running down the street and trips over a soda can and dies
I can't wrap my head around the fact that unlimited FPS can damage a graphics card. There's no way. The hardware is supposed to have limits to not damage itself, like an engine has a rev limit. If the engine is not limited to a healthy rev, then it's the engine's fault, period, not the fault of the one who revs it to the max.
Too much of anything is bad…
Pure FPS shouldn't be the problem; like Jay said, it's probably to do with power. That being said, if you hit a limit like that for extended periods it will put strain on the hardware. Even in something like a car, if you constantly redline it, shit will go wrong. I've seen plenty of people learn that failsafes like limiters only work so many times.
@@ziggybrady5170 A car is not the same here; in a car you can hit those limits. GPUs have built-in limits that cannot be overridden even when OC'ing; the GPU shouldn't be allowing power draw beyond what the firmware allows. This means either the firmware is at fault (so EVGA is to blame), or the GPU's hardware quality wasn't "binned" high enough to support the firmware limits and it simply died due to poor hardware quality control (again EVGA's fault).
Software cannot exceed firmware limitations unless you remove said firmware limits or the firmware/hardware is faulty or poor quality. Amazon cannot be to blame here; it's 100% not possible for software to bypass firmware limitations.
@@123TheCloop Too many variables to pinpoint guilt, he said it himself we don’t know the exact hardware config and if the person is savvy, there’s plenty of virus that started as bugs that do just that software bypassing limitations. And I like the car analogy maybe not the same but that’s what analogies are for…
You're correct. This just brings out issues that aren't seen until the card is pushed to its limit
Capacitors make a pop when they go sometimes.
Even if a cap has blown (in a product that isn't old), something must have caused it, and it would also have caused severe damage to the expensive silicon bits.
Pops like a zit
"Too much fps is bad" never thought I'd hear that lmao.
We've peaked guys
Too many FPS have always been bad
For now... Maybe sometime around RTX 9000 Series we'll be trying to get to 40k frames at 20k res.
Something that never was a problem before the RTX 3080/3090, really. The amount of FPS those cards produce is insane; in 1080p, games that barely did 100fps on a 1080 Ti now run at 250-350fps. It's crazy and makes your card run at max power 100% of the time. I had to limit all games to vsync or my PC becomes an inferno
@@earthtaurus5515 it was a joke. I type words on the internet in hope to make someone exhale air out their nostril a bit faster. It was a joke, I was joking.
Earth Taurus don’t gotta lie to kick it
Cow farts aren’t going to end the world or cause “climate change”
The climate is constantly “changing”
That’s the only constant in life. CHANGE
gosh people are thick.
Glad EVGA has the best return and RMA policies, so they will get new cards
Just as soon as they can source you one ;)
My 2060 died twice. Cost me a fortune in shipping and import duty. Wasn't too bad though, as the 2nd 2060 was replaced with a 2070.
Actually with the 3090 they are making you pay tons shipping wise.
Fffft. Real gangstas run Furmark and Prime95 in the background while they play New World.
G shit
If this software can hurt your hardware, then you bought defective hardware.
Buncha pussies smh. I put nails through my GPU and in my cereal while doing that.
And also playing bdo on remastered simultaneously on a hot summer day
And roll their PC up in a thick heated blanket on max setting while doing so.
Nvidia releases cards: Yay!
Card prices go up: No!
Card that costs way too much starts to break from design flaws: WTF!
Gamers: Crypto mining is bad for your GPU
Amazon: I wonder....
This is Jeff's plan to get himself to Mars.
Prob mining in the background or something
@@crash6674 someone would've caught them by now.
Overwatch has 2 FPS caps: one fixed at 60 for the menu, and then whatever you set for the game. All games should do this; there's no reason for my CPU and GPU to run at the same load while the game is paused.
Something I've noticed since you brought that up is my XSX is a damn space heater if I'm playing a game or pause it and do other things. If the game is closed and you sit at the home screen the console runs cool.
Borderlands 2 fps cap starts as soon as the game loads. Love it. Unfortunately it doesn't go higher than 120 fps.
Mordakk, mate: NVIDIA is supposed to design cards that don't commit suicide when they see a random phone game! Stop blaming everybody else; it is squarely NVIDIA who fucked this up. Notice no other GPU manufacturers are seeing this.
Would love this to be standardized across the industry - as in an actual integrated game engine feature, because we can't rely on the devs to consider this kind of stuff, let alone implement it without problems.
I would keep my menus locked at 30 just so I didn't have to hear my system cranking air in menus.
No one is blaming anybody; it's a suggestion. Also, how in the hell were NVIDIA or EVGA or others supposed to know that Amazon would make a game that would create this situation? You're the one that is blaming NVIDIA.
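The Overwatch-style dual cap described in this thread is simple to express in code. A sketch, with made-up state names and values (nothing here reflects any real engine's API):

```python
# Per-state FPS cap like the one described for Overwatch: a hard
# ceiling in menus/pause, the user's own cap during gameplay.
# MENU_CAP, the state names, and the example numbers are illustrative.
MENU_CAP = 60

def effective_fps_cap(state, user_cap, monitor_hz):
    """Return the FPS target the render loop should honor."""
    if state in ("menu", "paused", "loading"):
        return min(MENU_CAP, user_cap, monitor_hz)
    # Never render faster than the attached display can show.
    return min(user_cap, monitor_hz)

print(effective_fps_cap("menu", user_cap=300, monitor_hz=144))      # 60
print(effective_fps_cap("gameplay", user_cap=300, monitor_hz=144))  # 144
```

The design point is that the cap is chosen per game state, so a paused game or a loading screen never asks the GPU for more work than a fixed low ceiling.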
“If you have an RTX 3090” Oh never mind. *clicks off video*
lolll
I thought the cards had protection against overheating.
VRM and core do, though; capacitors, resistors and small components don't
Not every part that gets hot has a temperature sensor. If one of those unmonitored ones overheats due to a badly designed layout... yeah, the card would have no way of knowing until it fails
It is the Lumberyard engine. Star Citizen had the same issue for a while till they rewrote the code.
The Lumberyard engine has these wild frame rate spikes that make zero sense.
I never heard of the spikes in Star Citizen being so bad it resulted in killed cards though. There's clearly been a massive oversight here by someone.
Someone said limiting the fps kinda keeps things working
@@squatmufu6117 On Star Citizen's release, I don't believe the 30 series was out. Nvidia has high power demands with the 3090 and 3080 Ti, so it's just a compounding issue
@@squatmufu6117 the massive oversight is on nvidia's end, they should have failsafes in the drivers and firmware that don't allow a frame spike to melt the card. jay should know this..
@@StrikeReyhi Agreed, at least if it has been ruled out that these users somehow modified their GPUs or did something else that helped create the situation.
Last time I was this early, GPU's were sold at msrp
Hey Jay, check "Son Of a Tech's" video. He had Vsync ON and he just changed the graphics quality from Very High to High and the PC shut down. So Vsync isn't the problem. He has the full recording on that video and stream on Twitch as well.
Dude, I just watched one of his videos and this wasn't there. You're fast :D
I was just watching that video when this one popped up. Crazy stuff.
That's true. How is that even possible, a game destroying your top dollar GPU...
I hit up Jay and Blindrun on Twitter about Blindrun being live and his VOD; they should get together and look at this, seeing as Jay does have the connections with EVGA
Hmmm. The vsync in the options may not take effect until you're in-game. I also think this has something to do with the card pushing itself to 1.081v, but I'm not sure and I'm definitely not experimenting with my 3080 Ti. I've heard of this occurring with other low-load games like Valorant. Not sure what's going on, but apparently new voltage regulators on revision 1 of EVGA's 3090 solve these issues.
Jay’s comparison to a diesel overrun is brilliant. As an engineer I appreciate this comparison. Someone forgot to install a relief valve mechanism.
Isn't that the same as a diesel runaway?
"If you got a RTX 3090..." Some definitely playing in a New World reality.
This is literally EVGA's fault for making defective GPU's. But everyone is trying to blame it on New World. Shameless.
@@Tiberious_Of_Elona Scroll to 2:52 and listen carefully
@@Tiberious_Of_Elona man didn't even watch the video
@@Tiberious_Of_Elona Did you even WATCH the video for longer than 60 seconds?
@@fallbright So far he is right. It is only affecting EVGA. My brother owns RTX 3090 ASUS TUF OC and my cousin owns MSI RTX 3090 Gaming Trio and none of them have any issues. They have been playing for over 10 hours on high settings. I on the other hand have the EVGA 3090 and it got bricked at 6:40am.
I read the reddit post carefully and it is so far EVGA GPUs who are getting bricked.
Jay: "This is important."
Video: **Is about a game frying your RTX 3090 GPU**
Me: Welp, I'm safe on this one... 😅
Also me: **Sad poor gamer noises**
Me: **Sad poor gamer noises along**
*sad poor agreeing nods*
Me: “Buy another ya rich cucks!”
Also me: **sad poor console gamer noises**
*sad poor gamer noises too*
Its nice that this doesn’t happen with my 970
''The garbage on reddit'' this couldn't be more accurate LOL
@Joshua N. that line should get you to go out and get some sun.
What is a reddit
I think it’s pretty much the same everywhere nowadays…
@Yosef Cardoso I go on Reddit like playing an FPS game, team up with the good fellas and shoot down the shitposts.
I love how Jay makes a video for all 3 owners of that card. Such a nice guy :P
I spent some of my unemployment from covid on a 3070, so it's *almost* like I got it for free lmao
Video starts with "If you got a 3090..."
99.98% "noooo i don't..."
0.02% "oh that's helpful"
Nvidia has sold more 3090s than the rest of the 30XX line combined.
@@RNG-999 i have a 3090 so this is good information to know. Even if the information is only relevant to a minority of people content creators shouldn’t withhold information as important as this.
@@RNG-999 not really. It's interesting news on its own. I don't even have an Nvidia GPU and I'm interested.
Doesn't make it any less important to be honest. Stuff like this SHOULD get this kind of attention.
@@RNG-999 Well, actually i have a 5700XT and still watched the whole video 'cause it's entertaining.
As a gamedev, an often overlooked cause is PCIe bandwidth.
I've hit HW bugs in the past when the PCIE bandwidth is fully saturated for prolonged time. Most games rarely do this.
If at the same time the card had sustained maxed out:
- RAM BW
- GPU compute / ALU
- PCIE BW
Then this could be a reasonable explanation.
That is assuming it's not a buffer overflow bug where the game or driver happens to overwrite a region of memory that controls the voltage of the card (which should never happen, but...)
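For scale on the PCIe-bandwidth point above, here's a rough calculator using the commonly quoted effective per-lane rates after encoding overhead (treat the exact figures as approximations):

```python
# Rough one-direction PCIe bandwidth by generation, to put "saturating
# the bus" in numbers. Per-lane rates (GB/s) are the commonly quoted
# effective figures after encoding overhead; they are approximate.
PER_LANE_GBS = {3: 0.985, 4: 1.969, 5: 3.938}

def pcie_bandwidth_gbs(gen, lanes=16):
    """Approximate usable bandwidth of a PCIe link in GB/s, one way."""
    return PER_LANE_GBS[gen] * lanes

# A Gen4 x16 slot (what Ampere cards use) moves roughly 31.5 GB/s:
print(round(pcie_bandwidth_gbs(4), 1))  # 31.5
```

Keeping tens of GB/s flowing continuously, on top of maxed RAM bandwidth and compute, is the sustained-everything scenario the comment describes as rare for games.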
Yeah, no, also a dev. What you are describing is basically the state in which consoles used to work (well, not quite, since archs have changed a ton, but everything busy at the same time for prolonged periods was the gold standard).

I don't really understand Jay's position here. Even if we go back to his example with the cars, the game can have its foot pegged on what it thinks is the gas pedal, but in the end that's not the gas pedal, it's just a system call. The system will try to "press the gas pedal" on the game's behalf if it can, but at any point the hardware can interrupt to say "oh hey, I can't continue like this", the OS will handle the reply, and if the game is extremely shitty, in Windows you'll get a BSoD. The fact that the card kills itself and doesn't generate the interrupt is already a fault of the card's design.

So it could be dirty power, could be that the card allows itself to run outside spec for too long, could be a ton of things, but you cannot do that much damage through software if the hardware doesn't allow you to, and that is already an issue for the hardware maker. So, it works fine with most cards except just a couple, and it's the game's fault? No, that ain't gonna fly. More credence to dirty power due to an incorrect 8-pin connection.
@@omi6816 I have support tickets of HW that went into constant TDR & BSOD because we were accidentally saturating the PCIe bus.
It was a low number of HW units, but it was happening.
We tried to get NV engineers to repro the problem, but no luck.
HW design is very wild out there, and it's not uncommon for MB vendors to bend or break specs to save costs, which is for example the reason VR-over-USB-only didn't happen or why lots of MBs can't support DirectStorage.
But I agree dirty power delivery is likely to blame, and I also don't understand Jay's stance on blaming the game. User space SW cannot break HW other than through expected wear (e.g. SSDs exceeding TBW). Period.
If it does, then it's a HW flaw, incorrect firmware default thresholds, or faulty installation
This is clearly an Nvidia issue. Their drivers could limit the extent of that easily by defining safe bounds for ripple/fps and then clamping the fps to that if software wants to exceed it. Nvidia cards have so many sensors for things, if they're exposed to the driver and the driver took protective actions, this would be a non-issue. Not to mention that it could log such events in the system log for debugging purposes
I've seen multiple people saying this was affecting their 3070s as well.
So far played ALOT of the NW beta and no problems at all. 🤞
My 3070 only hits 65 C when playing and streaming for me lol I’m good here.
I heard some loud sounds from my PC room as I was smoking. I don't know what they were and always thought it was my desk under the load of the PC... but this may be the problem
Nvidia has a frame rate limiter in the control panel like AMD, and I'm ALWAYS using Freesync/G-Sync with the frame rate limiter enabled, set to my monitor's max refresh rate minus 2... Example: if your monitor is 144Hz, set the limiter to 142 and you will be perfectly fine without going out of G-Sync/Freesync range ;)!
I love he explains this problem as it is, and then uses a car analogy I love and understand too.
Unfortunately the analogy he used is wrong! Jay has sent you down the garden path on how the software side of this stuff works.
@@MrAlternateTheory Makes me laugh, couldn’t be any wronger 😂
How does a game even brick your GPU? To me, it seems like software shouldn't be able to do physical damage to your card, since it has protections to stop the hardware from taking damage, of course excluding bios flashes and stuff like that. So how can a piece of software break your GPU?
It's happened before StarCraft 2 had this problem back in the day
Bugs and glitches cause overdraw from the PSU which the card cant handle, can also ramp up temps on loading screens etc. Its super rare, but apparently - it can happen
The software managed to set up a specific environment that the GPU could not handle, so it drew too much power, shorted, and bricked. As such it's not the fault of the game so much as it is the fault of the manufacturer not testing properly or not having enough protections, especially considering it's almost only the EVGA 3090 FTW3 that is bricking and very few other 3090 models. EVGA has as much to be blamed for as the game, if not more.
i can write any software i want to kill any hardware i want. cpu, ram, gpu, ssd.
you can force various settings to overload through a software to the hardware level exactly like what drivers or overclocking tools can do.
Obviously it's an Nvidia error since it affects only 3090 cards. Probably the drivers cause a major malfunction or something.
Undervolting 30 series cards is essential at this point.
Shit cards
@@laoch5658 Not shit cards, the matter were the card manufacturer had use those cards to mining bitcoins before they leave the factory, they claim it was stress test, my ass, they are overclocking the cards to barely dying and have them be ruined before any customer take them.
@@Saviliana sounds like a conspiracy theory
maybe do some checking on that?
That knock-sensor is the graphics driver and firmware/VBIOS. If the graphics driver is broken, it might happen. Same with firmware and VBIOS.
No other software should even be able to make the card go bonkers. I'm a software engineer and I have written low-level applications at one time; it IS possible to break the hardware if the firmware or driver allows it, but it shouldn't be possible for third-party software.
Agreed 100%. Many real-life products have built-in protections (overpressure, overheat, overvolt, etc.) to prevent damage. This is not Amazon's fault; this is NVIDIA's fault for leaving Amazon's software a path to this type of damage.
@@AarPlays I agree 110% with both of you. I read most of the forums and threads on the net about this issue, and it's scary how people blame the game. The game, no matter how poorly designed, should not be able to brick the card using normal API calls from user space.
This game just seems to highlight a defect that the EVGA 3090 FTW3 suffers from, possibly other cards too, but right now that seems to be to a lesser degree or just false positives.
This was my immediate thought. There is NO WAY for a game engine to manipulate the power/voltage limit of a GPU; that's on whoever wrote the VBIOS.
@@AarPlays God forbid it could be both at fault. I fail to see how the failures of one excuses the oversight of the other.
Jay said the people who got their cards bricked could have been overclocking them, and that could be on the VBIOS and firmware. I'm just saying, from experience with tech, y'all know better.
I have this EXACT Evga 3090 and about a week ago I had this same thing happen except it was with Microsoft Flight sim as I started a flight. It didn't kill my card but it did the PC shut down thing. It never happened again but it seems similar to this.
If it hasn't happened again, it was likely just a random power spike in that one particular instance. Even a 3090 wouldn't be hitting the thousands of FPS that New World is pushing cards into.
2 very different issues at play here.
That can be many things
Could be anything
The abbreviations used in these will include
OS: Operating System
CPU: Central Processing Unit
GPU: Graphics Processing Unit
PSU: Power Supply Unit
RAM: Random-Access Memory
(For those who are computer illiterate)
In some ways you had a similar experience. Your shutdown was caused by overload/overheating protection, which is basically the suicide hotline for your hardware. FS2020 is a very demanding game; I play it myself, so I know. In your case, either your CPU, GPU, RAM, or PSU hit an overload while rendering or got too hot, so to play it safe, your OS and CPU shared the collected info and instructed the system to shut down immediately. Under normal circumstances this won't physically break anything, but I suggest, in case it happens again, figuring out a better cooling solution for your hardware or even upgrading the PSU (if it isn't a good one), and if it still causes emergency shutdowns, lowering some of your display settings.
In this video he basically explains how the GPU had no restrictions. Simple point of view: you're an athlete who always wants to run as fast as possible, right? So that you don't physically hurt yourself, the races you run have speed restrictions, and a device on you makes sure you don't run faster than you can handle. The situation with New World is that the hosts of the race removed your restrictions, and now you try to run so fast your heart can't keep up with the blood load, so your body just gives up: veins popping, bones breaking, or even tearing a muscle so badly there's no connection left between the muscle cells. That's what happened to these 3090s: they had no restrictions on them, or a bug removed them. The GPU decides "okay, this is my time to shine," demands so much power, and puts so much load on itself that it is *_literally_* breaking itself apart.
The reason for this reply is to inform not only you but others as well. The situation you experienced is not the same; it's a case of the CPU and OS saving you thousands! So next time, be sure to thank your PC for not killing itself, and reward it by replacing the thermal paste and coolant and by cleaning out all that heat-trapping dust!
This is literally EVGA's fault for making defective GPUs. But everyone is trying to blame it on New World. Shameless.
Alternative title: "Don't buy that "cheap" used 3090 you saw on eBay, cause it's probably blown."
Too long for mobile, but nice :)
Thanks, don't have to waste 20 minutes of my life 👍
Tell that to Gladd who bought his new and it bricked his
Not at all what he explained in the video.
When mining, the memory on all 3090s reaches 105°C or more, which means the junction temps are hitting 110-120°C. So yeah... shit's gonna brick. It's almost the same with the 3080: the TUF version, which has really good VRAM cooling, still reaches 100°C.
Don't know if it has been mentioned, but EVGA is replacing any RTX 3090 toasted by New World.
I think it's actually Clippy, just hiding out and getting revenge.
Whew! Petting my brand new rx 6900 xt with fps limiter set on radeon software (as the Nvidia homeys go down in flames).
Gladd is a Destiny 2 streamer. I wouldn't call him a tech guru, but he's also not tech illiterate. Never expected to hear his name in a JayzTwoCents video, good day, even if it's for a bad thing
That's what I was thinking. My world's are colliding!
He’s mainly known for destiny 2 but he has branched out over the last year to a variety of other games. Gladds a good dude
@@blackandbluedress8500 true. Hard to full time destiny when there isn't a ton of new stuff to do haha
@@im2shady4u13 But to be fair. Gladd plays like 18 hours everyday, so no game has enough content for him. :)
@@AtlasNYC_ yeah it’s pretty cool😁
An alternative to turning on vsync like JayzTwoCents recommends would be going into the NVIDIA control panel and limiting the fps to whatever number you want.
There are a lot of games that don't benefit from 2 bajillion fps, like most MMOs, so lock that stuff down and let the system breathe while still seeing the eye candy.
There's no point running an application or game at a higher fps than your screen's actual refresh rate anyway; it serves no purpose apart from looking cool and straining your system more than necessary.
@@moonman1209 if you're capping your 3080 at 50 fps you don't deserve a 3080. Go back to a 1660 super ffs
@@moonman1209 If you've got a 60 Hz screen, play at 60 fps.
Son of a Tech reported his card was bricked while running VSync and using a water-cooled card.
So VSync doesn't fix the problem.
Not in the game; in the control panel of the GPU.
Source?
@@menzac8892 Son of a Tech is a YouTube channel. He has a recent video on this topic.
You shouldn't make a card that can physically fail doing what it's supposed to do. Yes, the developer should do something to stop the software from acting like a power virus (the same thing happened in other games, like RE7, which really annoyed me), but maybe they shouldn't build a card that can draw that much power in the first place if they can't build enough protection into it. So I'm putting the blame 100% on the card, either the manufacturer or Nvidia. Even if the consumer is overclocking the card, it's still the card's fault for not having a proper safety net. We're not living in the old days where you could push a CPU or GPU without care and end up damaging it. Unless the consumer tries to bypass that protection (like overclocking way off spec via a hardware or BIOS mod, or software hacks), neither the consumer nor the software is at fault here.
I hear what you're saying, but just like with Jay's car analogy, I don't think I agree. I blew my engine because I over-revved it, and it has a rev limiter; even though that limiter was in place, it still blew the engine. There is only so much a manufacturer can account for with fail-safes. Modern coding and engines being what they are, this is a rare occurrence, and I doubt this level of power draw (if that really is the issue) was foreseen by Nvidia or the AIBs. Add to that, like Jay said, the daisy-chaining and such, and you're really creating sub-optimal operating conditions for the card. Manufacturers also have to keep in mind that some people will want to overclock and will be disappointed if their expensive video card doesn't overclock well, so they leave some headroom and some performance uplift on the table so overclockers can make the most of it without going too far past safe limits. So I personally wouldn't put the blame there... more on the devs for ignoring an issue they were aware of.
Overcurrent protection already exists; this is a failure on the developer side. Game engines should never demand 100% load instantly while also drawing 5k+ frames. If you noticed at the start of the video, the streamer he talked about experienced a shutdown, then tried to launch the game again, and the card got bricked. Not to mention they've speculated that it was part of that safety net that died, not the GPU itself. I would also blame the card or AIB if this were happening in a multitude of games, but currently it seems to be just one game doing this. They knowingly let this issue out of Alpha and put it in Beta; they are equally responsible.
@@wadebrady7107 They can't demand more than the hardware allows. It's that simple. If the card allowed it to happen, then it's the card's fault. You see, it only happened on the higher-end card, not on the lower-end cards. You'd definitely hear people complaining about this problem if it happened on cards below the 3090, because 3090 owners are like less than 1% of users. If lesser cards can survive, why can't the 3090? If it can't handle a sudden current draw that a game or application can trigger, maybe they shouldn't have built it like that. This might be the one game that exposes it, but like I said, many games have similar behavior where, especially in the menu, they basically run like a power virus. My example was RE7, where in the menu I saw max power draw, a lot more than in the actual game, with the fans spinning at max. So this type of behavior is not exclusive to this new game. What might be different is that, being a new game, it might use something that ends up more taxing? But that doesn't diminish the fact that other games have similar behavior and can potentially cause the same issue. And yes, I wish developers would be smart enough to put a limiter on the menu screen, because it's really stupid when the GPU uses more power in the menu than in the actual game.
"If you've got an RTX 3090..."
Good joke.
He does 😂
@@AmirZeven I have a 3090.
I have one. It's the Asus Strix model.
Dude, it's a joke. Why are you guys so serious!! @@SIedgeHammer83 👎
all 11 people with this card who bought it for gaming deserve to have their GPU bricked.... just saying.
Imagine actually shelling out the money to get a 3090 in this market and having it just fucking explode when you try to play a game lmao
Absolutely hilarious
This market is fucked thanks to people buying these cards.
@@BonusCrook the whales are getting what they deserve
@Anthony Lawrence lol
It's a power spike. My overclocked 3090 almost killed two PSUs. It demands immediate power spikes of more than 40 A on the 12 V rail, which triggers OCP on many PSUs. Got the Corsair 1600W that has configurable OCP, and since then, zero issues. (edit) And that's a water-cooled card.
Whatever the problem is here.. it's NOT the code.
Hardware - especially for power delivery - has to prioritize safety and integrity over max performance at all times. If it doesn't, it's garbage.
If software is able to damage your hardware - under any condition - it's an engineering fail on the hardware (firmware) side not on the software side.
I don't think you realize how much the software can control. Yes, the firmware should have limits that prevent damage, but if Nvidia gave the wrong limit guidelines, the firmware could be wrong. And it was never noticed until this kind of edge case cropped up.
There's no reason for the launcher to start running your card like it's mining bitcoin as soon as you click it.
In my mind this is a continuation of the capacitor conspiracy that happened at launch of these cards...
The OS protects your hardware... if it wants to. Windows 95 had no protection at all on hardware access; you could kill hardware from software back then.
So I don't agree, even if nowadays software behavior is mostly safeguarded by your OS 😁
@@SquintyGears the game developer doesn't have that much access to the card - they're just making DirectX calls (or Vulkan/OpenGL/insert other API here). If the firmware is wrong... that's still controlled by the hardware manufacturer. The driver... still NVIDIA. If userspace software can kill a card, that's definitely an issue for the hardware manufacturers to handle.
@@SquintyGears Everything you said might be true (except the software control part, as I work in OS development), but still, in all cases this would be a failure by the hardware manufacturer (like the capacitors) or Nvidia, respectively, for providing the wrong parameters for the firmware. All of this is none of the game devs' business.
It's also true that the launcher shouldn't have hit the cards like that. But at MOST this should've crashed your machine, not damaged it.
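The point this thread keeps making — that a game can only *request* work through an API, while the driver/firmware below it must enforce the power limit — can be sketched with a toy model. Everything here is hypothetical illustration; these are not real NVIDIA or DirectX interfaces:

```python
# Toy model of the commenters' argument: the game submits work, and a
# correctly designed driver/firmware layer clamps the power draw no
# matter what the game asks for. All class and method names are made up.

class ToyDriver:
    POWER_LIMIT_W = 350  # board power limit the firmware should enforce

    def __init__(self) -> None:
        self.power_draw_w = 0

    def submit_frame(self, requested_w: int) -> bool:
        """Accept a frame's workload; clamp its power draw to the limit.
        Returns True if the request had to be throttled."""
        self.power_draw_w = min(requested_w, self.POWER_LIMIT_W)
        return self.power_draw_w < requested_w

driver = ToyDriver()
# An uncapped menu spamming thousands of frames asks for far too much:
throttled = driver.submit_frame(requested_w=600)
# The draw is clamped to 350 W; the game has no channel to raise the limit.
```

If a userspace game can push the card past this clamp, the failure is in the clamp, not in the request — which is the thread's whole point.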
This is a glorious start to Amazon’s development history
Nothing special. Blizzard did the same stuff... what happened to them? Nothing... same with Electronic Arts.
Hopefully it'll be the end of it too. :)
When Jay starts screaming about cars you know it's serious.
Other games are doing this too. The way I look at it, the hardware should be able to run at 100% load. If it can't handle 100%, then its limits should be lower. Granted, this game and other games should be optimized better so we aren't running our hardware so hard, but it is what it is. My 1080 Ti runs at the same temp in New World and Warzone.
Exactly, it's insane to blame the program for this.
Because new tech. The 3090 is bad; performance is higher, but at what cost? 360 W TDP, you can run two GPUs on that power. Technological advancement should mean energy efficiency, etc.
This sounds like a GPU design failure. No game should be able to "fry" a card without somehow bypassing safety protocols. Guess we won't find out for some time, but have we heard of anyone's 6800 XT or 6900 XT blowing?
It's not that the game is bypassing the safety protocols; when you put that much stress on electronics, eventually they give up and your card dies, no matter what. Too much current, volts, or watts will defeat any safety protocol; it's a matter of time.
There is no way an AMD product these days would allow for such a basic catastrophe to occur like that. Adrenaline drivers have a hard-coded power and FPS limit, it would not even run code that would fry the card.
This is squarely at Nvidias feet.
This is really an nvidia problem, no software should ever cause damage to hardware like this.
I agree, when I read this on Google news feed about an hour or so ago, I thought to myself "how is the card letting this happen to itself?".
No game should be able to cause a card to fry itself, hell no card should be able to fry itself these days, so the question is... how?
@@linuxpirate u did not watch even the first 5 mins huh
Yeah, I mean what is this, 1999? :D
@@linuxpirate less than 3 minutes in "So, its not exclusive to the EVGA card"
Nah, the chip itself is not the problem; rather, the game seems to have extreme load spikes, and cards without good-enough components get into trouble.
This is a sad day. It sounds like all three RTX 3090's ever made have now died. RIP, 3090.
I mean if you bought a 3090 (for games) your probably pure evil so...
@@BonusCrook muhahahahahaha
I don't play that game and I have a RTX3090.
@@igordasunddas3377 C'mon man...do it for science.
Lol I have an 3090 FE
Jacob Freman (EVGA) posted a thread that says Amazon has issued a patch that caps the FPS in the menus and at launch. This is not supposed to be an issue anymore.
I think the more productive question is how or why this issue doesn't come up when benchmark testing? Isn't that supposed to push the hardware to its physical limits?
Nah, because for whatever reason, if you don't cap your FPS in New World, the game is like "imma use ALL this GPU," and it's gonna murder the card in a way mining couldn't imagine. And for whatever reason, the 3090s, especially the EVGA ones, like to just outright explode in New World.
I think the true purpose of a benchmark is to test your hardware in a very controlled way based on whatever parameters you have set up. As Jay mentioned, with Furmark and a few others there have been problems in the past, but for the most part I think specific benchmarks are quite heavily tested before being released. If you, as a user, push your hardware past its intended limits and do damage, that's a different scenario altogether.
In my opinion, as someone who's been playing PC games pretty much since they were introduced, we are seeing a very bad trend in testing and squashing bugs before games go live to the masses. I realize games, and the hardware we use to run them, have gotten infinitely more complex in the last 25 years. It just seems like more and more developers are being forced to rush the product out the door without the adequate testing and polish we used to see (I'm looking at you, Cyberpunk).
It's all too common now to install a game and almost every time have to Google some kind of weird problem or find a workaround for something that isn't working properly. I lost count of how many times Warzone crashed my new PC; I ultimately just deleted it because I was worried it eventually wasn't going to boot back up. Literally hundreds of crashes over 6 months of playing.
I benchmarked myself: I can do 100 pushups... The benchmark didn't find that I can't do a situp.
This is a very good point.
Because the software is not the problem and jay is just spreading bs.
lol the title reminds me of an old Simpsons gag where the anchorman is like a specific brand of soda is POISONED, we'll tell you which one at 11!
Clickbait is as clickbait does
"I've been to Vietnam, Afghanistan, and Iraq... and I can say without hyperbole that this is a million times worse than all of them put together." ~Kent Brockman
cyberpunk devs are like "FINALLY!"
EVGA's VBIOS is at fault just as much as the devs.
I never had a problem with my 3080 in Cyberpunk on ultra-high graphics, temps consistent at 69°C. On New World at default settings my GPU is overclocking and temps hit 80°C.
@@soccerprocollinx I actually had a good time with Cyberpunk; the main story was good and a couple of side quests were OK, but I was done after 50 hours. New World is running great on my 3070: over 100 FPS and a normal 65 degrees. I believe Amazon patched the menu FPS, or so they say.
I love the mentality of "Well my computer just crashed so hard it literally powered off, better get right back to what I was doing immediately"
I was thinking the exact same thing! Why the FU@K would they do that? LMAO. Maybe it was a rich kid, or someone who doesn't know anything about computers and can afford to just go buy a new anything when their stuff breaks, IDK.
@@guyincognito82 Because 95% of the time literally nothing happens?
@@guyincognito82 Lol, why wouldn't they?! PCs crash due to bugs in drivers and OS. It doesn't damage them. Games don't damage PCs. This case is a very rare exception. NVIDIA fucked up (or people building the PCs).
Jay came thiiiiiis close to saying "retard the timing", which would have been fine to car people but he knew others wouldn't get it and would get triggered lol.
Surprised CommieTube even allowed it. Any time I've tried, it gets insta deleted. 🙄
the word "triggered" nowadays is also offensive so be cautious about using it 🤨 people are turning into bigger and bigger snowflakes every day
@@Kitulous TH-camrs do the great majority of deleting comments.
@@thezen9 not really. If you say a banned word, it will appear your comment went through. But it will be gone when you refresh. Most YTers cant keep up with the 1000 comments per minute they get through all their videos..
@@gotdangedcommiesitellyahwa6298 based username and comment
Overcurrent protection doesn't necessarily protect devices from load imbalance, if what you're describing is accurate. You could see an imbalance across the three power delivery plugs. I've seen VFDs get smoked by phase imbalance even with overcurrent protections in place.
I feel like there have been signs of this issue since the beginning of the 30 series launch. I've seen a lot of complaints from users saying they had to upgrade their 1kW PSU to a bigger one or a better brand one due to the 3080 and 3090 GPUs spiking much higher than their rating, causing shutdowns.
Maybe it's due to an overly aggressive stock profile? I don't have a 30 series, but I did have a similar issue with my 5700 XT when playing games at higher resolutions, until I adjusted my power limit.
If people would just make use of the tools they have (i.e. limiting the fps pushed out), this would be a non-issue. No one needs 500 fps in CS:GO to be competitive.
My 3090 strix can and does pull 400w on my 850w and I’ve had zero issues
@@mightymilkman the 3090 Strix has a board power limit of 480W when you crank it up (390W default) so it's not surprising you're hitting around 400W.
Really? I've got an 850 W PSU and never had an issue with my 3080, even when running 4K games at max settings (I never use vsync or cap fps).
"This vids important, to all you 7 people in the world who own a 3090"
Car engines have rev limiters, I find it astounding that there’s no mechanical or software safety system to stop you from causing catastrophic failure.
There are; however, they clearly failed.
The problem itself isn't getting your car to the max revs permitted by the limiter, but the friction and heat generated, which wear out the parts, never mind what heat does to metal and aluminium.
That rev limit actually is your safety system. On older cars it was controlled mechanically, for example by the fuel pump; nowadays it's controlled by your ECU, with software that cuts the fuel when that RPM is reached.
Imagine the catastrophic damage you would cause if none of these safety systems existed and the car kept revving up and up.
If I'm not mistaken you can set a hard fps limit in the nvidia control panel
But if the card makers added max FPS limits, people would then cry that they're locking their cards down.
@@pedrofalves I appreciate your effort but I do understand what rev limiters are and how they work and why they’re good but I was using them to compare a car’s engine to a gpu so I could ask “if car engines have had this protection for years then why don’t the worlds’s leading gpu’s have a similar protection?”
Me hiding in the corner with my 2080 Founder’s edition playing 𝓌o𝓌
Same, I've got my RTX 2080 Super. I was going to buy New World, but now, idk man, I love my GPU too much to lose it to that game.
@@x93Vanquish I'm gonna wait a solid few months. I'm in no rush. I had a friend sell his 2080 when the 3080s released. Boy, did he come to regret that. Only recently did he get his hands on one, and only after months of daily searches.
It is ridiculous that Amazon decided to ignore these issues reported in Alpha. That alone makes them complicit in the problem
It shouldn't be possible to kill GPUs through graphics APIs (DirectX / Vulkan). It's 100% the card's fault.
@@RegBinary It shouldn't be possible to be so negligent as to decide not to warn others of a known issue like this during the GPU shortage.
@@aj0413_ I'm just saying the responsibility to rectify this issue falls on Nvidia/EVGA, not on Amazon. Games can't destroy cards; bad cards destroy themselves.
@@RegBinary Well, of course, but Amazon became complicit in the issue affecting users when they decided to dismiss QA reports, so they deserve heat as well. Hardware validation and compatibility testing is part of QA.
Opinion from an electronics technician: with the knowledge gained from this video, I believe the draw is actually causing one of the capacitors on the GPU's board to pop. When you overload a board circuit, the capacitors absorb that extra charge. When you REALLY overload it, a capacitor is going to literally explode. That doesn't mean no other parts are damaged; you may have a trace burn out or fry a resistor, but unless they completely overlooked load protection, the processor on the card should be fine, and you might be able to have the card fixed if you find someone with access to the specific parts.
Also, for sure listen to Jay: don't pigtail these cards that spike with such high draw. I even ran two separate delivery wires to my 3070. It just makes no sense to take the risk. This is why 24-pin connectors are standard on motherboards: it gives the motherboard all the leeway in the world to draw spikes of power.
I have a question to Nvidia - HOW did you make the card the way it could be *destroyed by software*?
?!?!?!?!!?!
POOR BOARD DESIGN been saying it for ages.
It's only NVIDIA's fault if it's their reference design.
You can't blame a chip manufacturer if they say that this is how you should design the power delivery system, but some dude puts 220V/10 amps straight to the core.
@@FranksReactions Nvidia checks custom PCB designs and needs to approve them.
@@AlphaDango True, but again: if the power the card is getting from the power supply is WAY off spec, the best design wouldn't matter.
And I see I worded myself clumsily. I'm not trying to defend Nvidia for some of the things they do, but they can't control everything.
Amazon must be planning a big GPU sale.
Someone gets it.
This is probably more of Nvidia's fault. Imagine pressing F5 on your code and your pc explodes. Especially when a game engine is usually interacting with an API (Cuda from Nvidia). It's just hard to blame Amazon, even though I would love to. These cards need a lot of power and odds are that's what killed it in my mind.
It's only NVIDIA's fault if it's their reference design.
You can't blame a chip manufacturer if they say that this is how you should design the power delivery system, but some dude puts 220V/10 amps straight to the core.
@@FranksReactions Nvidia isn't just the chip/board designer, though. They are also the driver developer, and drivers interact with the card's hardware, which could be where the issue lies. Driver version should also be a data point collected on this issue.
@@Lowthar1425 Yeah, we need all kinds of data points and it might well be Nvidias fault.
I worded myself clumsily, but we need more data before blaming anyone, and not even the best driver and PCB would matter if the power coming from the PSU is totally off spec.
The card might have done exactly what it was supposed to do and blow a fuse or whatever to protect the GPU die. We just don't know yet.
As a mechanic I really appreciate the car analogy.
A product I don’t own, a game I’ve never heard of or care to learn about. I mean, I’ve got nothing better to do for the next 18 minutes.
Conspiracy: Amazon is using people's gpus to micro mine from the consumers pc. /s
thought the same XD
I wouldn't even be surprised.
Enough reasons for me not to put my 3080 Ti at risk either way though. Not playing this game!
Definitely my theory.
yeah i thought the same too
This wouldn’t surprise me in the least. Gotta pay for Bezos’ penis ship somehow
First of all, a game shouldn't be able to brick your card.
Second of all, because of heat and power and what not, I have RTSS running with a fps limit set to 140 fps so I can enjoy G-Sync.
The video goes out to the 1% of us having a 3090, and the probably 50% actually having a GPU at all bruh
I'm rather curious if this is a genuine issue with the game, or an oversight by manufacturers when it comes to vbios and firmware. Or potentially part choice / specification.
In theory, there should be NOTHING a user can do on a stock vbios to hurt the card. It should limit itself by power draw, temps, voltages, etc. A piece of software should not be able to damage physical hardware.
Pretty clearly a product issue if that can really happen.
As far as we know, this is the only game that bricks your GPU. I would say it's the game's fault.
@@michaelolsson5193 No. This shouldn't be possible with a good product.
exactly
@@Th1sUsernameIsNotTaken You can't take every piece of hardware into account when you program something. If a component on the card can be overloaded by maxing the card out, that's a design flaw. And EVGA especially has a history there.
EZ fix for the future:
Go to the NVIDIA Control Panel, 3D settings (global), and set the max FPS to 300 or so.
Thanks for that tip. Would hate to see my new 3080 pop less than 4 weeks after getting it.
That's good advice actually
I believe the FPS calculation is based on swapchain presents, so it's still possible to produce a 100-frame payload within a single displayed frame if the hardware can process it, and that won't be capped.
@@mikeycrackson Since it's set as a global setting, all games are locked to those frames, and in CS:GO, for example, you want 300 fps or even higher for the shortest input/output time possible. Any sort of sync introduces input lag, some more, some less, plus setting the cap to the monitor's refresh rate without any sync method can result in microstutters because of unstable frametimes.
Also, no one said use 300 and nothing else; I just suggested a limit where most people won't notice any difference in their games. Like I wrote, "300 or so", it's still up to you what you do.
If my explanation isn't clear enough, here are two great videos explaining the details:
th-cam.com/video/hjWSRTYV8e0/w-d-xo.html
th-cam.com/video/OX31kZbAXsA/w-d-xo.html
Hope you understand why I recommended 300 as a base value. Have a good one :)
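For anyone curious what a driver-level cap like this actually does under the hood: it boils down to sleeping off the unused frame-time budget after each frame so the GPU idles instead of spinning out thousands of menu frames per second. A minimal sketch (assuming a trivial render callback; `run_capped` is a made-up helper, not any real API):

```python
import time

def run_capped(render_one_frame, max_fps: float, duration_s: float) -> int:
    """Minimal frame limiter sketch: after each rendered frame, sleep off
    whatever remains of the per-frame time budget, then return how many
    frames were produced over the given duration."""
    budget = 1.0 / max_fps
    frames = 0
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        start = time.monotonic()
        render_one_frame()        # the "game" does its work here
        frames += 1
        leftover = budget - (time.monotonic() - start)
        if leftover > 0:
            time.sleep(leftover)  # idle instead of spinning
    return frames

# A trivial "menu frame" that costs almost nothing to draw: capped at
# 60 fps over half a second, we get on the order of 30 frames instead
# of the hundreds of thousands an uncapped loop would spin through.
frames = run_capped(lambda: None, max_fps=60, duration_s=0.5)
```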
Imagine buying a card for 2400 euros after 9 months of waiting and then getting it fried by a game that's not even out yet....
Hey Jay and community. The easiest way to prevent something like this from ever happening is to go into your NVIDIA Control Panel and, under 3D settings, set a custom global FPS cap. This makes it so your card never pushes itself to the limit to get 6000 worthless FPS. It'll still push to reach your cap, but *shouldn't* push itself to seppuku.
Hey, so in my case another game, F1 2020, gets 500 fps when I start it. Do you think that's dangerous for my card, or should I set a custom global FPS cap like you mentioned? Thanks..
@@hansun6940 Hi Hansel, personally I have my global fps capped to 200 fps, as I don't see the need for anything higher with a 144hz monitor, and I'm not a pro FPS gamer that may benefit from a few milliseconds of 'input lag'.
It may not be dangerous for your card, but keeping it cooler/not pushing it to the red line can theoretically extend the life of your card :)
@@88phunke Ah, good point. Thanks for the insight man.
As somebody that is both a professional automotive technician and a computer nerd, I sincerely appreciate your automotive metaphors!
I find it interesting to see that people who tend to be into cars and stuff also like computer stuff... like myself.
@@sammy_1_1 Well, considering modern cars can have over 70 different computers on them that are all networked together I think there's a lot of overlap to be had.
I always cap my FPS at 144fps. That's the refresh rate of my monitor so there's no reason to go above that.
New World doesn't have a frame cap in the load screen
@@critical_unknown Nvidia control panel - manage 3d settings - max frame rate - set it to your monitors max.
@@critical_unknown For AMD cards there's a similar thing; I just don't own one, so I can't spell it out like I can for NVIDIA. Plus, so far this seems to be an NVIDIA/Amazon New World thing; I haven't seen or heard of any AMD cards shitting the bed.
@@critical_unknown Cap it in the Nvidia control panel... you can do a global cap for all games or you can specify individual games to cap the framerate. I cap every game at 144fps.
@@ch0ketv_ that's true, its been a while since I've used an Nvidia Gpu, cheers for pointing that out 👍🏻
The shutting off happened to me when I turned off vsync, but I'm on a 1080 Ti.
Same
Try underclocking the GPU and lowering its limits (power limit, temp limit), and turn off your vsync.
“I don’t know gladd and his content” oh boy you’re in for a treat, the dude loves his hamster