EDIT2: I really do not want to put any more time and thought into this 12VHPWR adapter issue. I have way too many mobos and RAM to test.

EDIT: my statement that 12V can't cause arcing isn't entirely accurate. While it's technically true that 12 VOLTS alone can't cause an arc, inductive loads (like motors) powered by 12V can cause an arc when suddenly disconnected, because the inductance causes a temporary high-voltage spike. This however is not relevant to the discussion at hand, as you aren't gonna get an inductive voltage spike when one or several pins in the 12VHPWR lose contact. (You might get an inductive voltage spike if the connector is entirely disconnected.) Also, while the materials used for the connector are selected not to self-sustain a fire after ignition, it is not impossible for them to ignite something else that can sustain a fire. I would recommend that you don't leave cards powered by the Nvidia 12VHPWR adapter under load while unattended.

TL;DW
1. Plastic is an excellent thermal insulator. Pointing a thermal camera at the connector won't give you a good read of the internal temperature of the connector.
2. Try measuring the voltage drop across the connector in various states of bending.
3. The Nvidia adapter uses dual-split terminals that are less physically robust than other terminal options (probably why all the photos of melting connectors are with the Novideo adapter).
4. 3x PCIe 8-pins are just better than the 12VHPWR connector.
5. The connector can be melting internally without externally visible damage. So just because it looks fine on the outside doesn't mean it's fine on the inside.

BTW this video is even lower effort than the last one on this topic. I don't find unreliable high-end Nvidia GPUs particularly surprising or interesting.
@@user-bonk in Igor's article about the situation, it specifically says the issue is not the connector... PSU-provided cables use the new spec connector just fine. It's NVIDIA's specific adapter build quality that's the issue, not the GPU/card itself.
I've been trying to explain that (1 & 5) about plastic and people trying to read the temps with a thermal camera 😂 Like, bruuuh that's external temp!!... Not surprised Jay was one of those lol
Glad to see people helping an indie company with their issues. Hope this increases adoption of GPUs for Nvidia outside of their niche users. Their products seem to be so cheap and safe.
@@axeltaylor9389 A little longer ago than that. NVIDIA was already a dominant player by 2002-2003. By then they had bought out STB and 3dfx. Matrox had already faded, and it was down to a two-vendor race. Now, back in the NV1 and Riva 128 days, that's when they were an indie company who needed help.
even the thought that they go and make this dogshit connector instead of just giving the fucking card four 8-pins or two CPU 8-pins blows my mind, but I guess they want to cheap out on the PCB even more like the greedy bastards that they are
@@glitter_fart you increase the potential difference until it overcomes the resistance of an air gap and creates an arc. Current has nothing to do with it.
@@glitter_fart The only way you can get it to "arc" is if you are plugging live 12V into a "cold" device (one that has fully discharged its own capacitors) and expect stuff to run normally. In personal computers (yes, PCs; in servers it's a different story) you never EVER do that. You will either trip protection or straight up burn connections (near instant) or the chips connected there. I can get even 3V to arc at micron distances of air gap, but never in an assembled PC.
You're the first person who has actually said what seems obvious. If the problem was at the solder joints, why is the melting at the opposite end? It's because the problem is at the actual connection where the pins are making (poor) contact, and that's where the melting is happening.
Because if you lose a solder joint, you have the same amount of current going through a smaller number of terminals, leading to more heat generated in the terminals that already have suboptimal contact. More current, more heat.
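The "more current, more heat" point above follows from P = I²R: losing pins doesn't just raise per-pin heat linearly, it rises with the square of the current. A rough sketch (the 0.005 Ω contact resistance is an illustrative made-up value, not a measured one):

```python
# Model the six 12V pins of a 12VHPWR connector as parallel contacts
# carrying a fixed total current. contact_res_ohm is a hypothetical value.
def heat_per_pin(total_current_a, live_pins, contact_res_ohm=0.005):
    """Power dissipated per pin contact (P = I^2 * R), assuming the
    current splits evenly across the pins still connected."""
    i_per_pin = total_current_a / live_pins
    return i_per_pin ** 2 * contact_res_ohm

full = heat_per_pin(50, 6)      # 600 W / 12 V = 50 A shared by all 6 pins
degraded = heat_per_pin(50, 4)  # two joints lost: same 50 A over 4 pins
print(round(full, 2), round(degraded, 2))  # -> 0.35 0.78
```

Going from 6 pins to 4 raises per-pin dissipation by (6/4)² = 2.25x, which is why a couple of failed joints can tip marginal contacts into thermal runaway.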
@@Maekiii So losing connection on one of the terminals by breaking the solder joint would cause the rest of them to carry a bigger load. Yes - but how does it explain that melting is happening on supposedly disconnected terminal-pin connection?
Plus the fact that they are using garbage like thin metal both on the board and on the cable side of the actual connection. It's the same mistake that is often made when you try to build a cheap high-powered speaker. It doesn't make any difference that your cable is thick enough for the power drawn: if the connector on the speaker unit is poor, or the small wire carrying the signal to the coil is too thin, you are going to get too much heat. It's just a question of whether it's critical enough to do any damage.
@@skorpers Having followed Buildzoid since his Gamers Nexus videos, the conclusion is obvious; he is an irresistible force of nature. Confining him to a script would fracture space-time.
@@wargamingrefugee9065 He could employ the Gunnery SGT Hartman method which is to react genuinely to the situation at hand and the dialog becomes final.
@@DepressedMusicEnjoyer I have never heard of that being a thing on PCIe connectors. What is the rating for the number of connect/disconnect cycles on a PCIe connector? I am highly skeptical that you are correct here. There is flat out no way those are rated for 30 disconnects. I researched this for a while but couldn't find anything to support your statement. Why are you trying to defend a company that gives a flying F if they burn your house down unless it costs them money?
Thinner metal = higher resistance, improper contact = higher resistance, higher resistance = more heat = plastic melting, it's just that simple. Thanks for clarifying shit, been following on Reddit and YouTube, finally someone that knows their shit.
When a bus bar connects all the pins, current can flow through the other pins too, lowering the power dissipation on any one pin. But if uneven force leaves you with one lucky good connection, you have a problem, because it will take a bigger portion of the current and heat up more than the others. If that Nvidia adapter had an individual cable per pin, we wouldn't have this current-balance problem at this scale.
the irony here being that this particular youtuber turned out to be wrong. had nothing to do with the quality of the pins. turns out the issue was a couple of clowns not plugging the cable in all the way. Gamers Nexus had one of their 4090s running on just 1 live and 1 ground pin with a 600W (maximum allowed by BIOS) power draw, and it was fine because those 2 pins were plugged in correctly. more than 200,000 4090s and 4080s sold to date. only a dozen or so RMA'd cards according to the board vendors. the connector is fine. and i personally love it. makes SFF builds so much easier.
This explains why Jayztwocents couldn't reproduce the "fire": he was pointing a camera at the whole connector, while the only culprit is the pin inside the connector, which is impossible to isolate and check.
with the split pins, a bit of sideways force would cause the pin to open slightly, then you would only have good contact in 2 places, the front left and back right ... and the top + bottom, but a) they are only partial surfaces, b) the force would be left/right so would have better contact there.
I'm really interested in this plug, not really for the actual content (who owns a 4090 lol), but for the way things like this are handled and the level of trust and slack Nvidia is given by 3rd-party reviewers. Anything short of a full recall ASAP is incongruent with the standards we held NZXT to.
(for reference, for a 12 V voltage to create an arc, _in air_ , you'd need to bring the two connectors 4 _microns_ close to each other. About one tenth the thickness of a human hair)
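The 4-micron figure above can be sanity-checked from the rule of thumb that dry air breaks down at roughly 3 MV/m; treat this as an order-of-magnitude estimate, since real breakdown at micron gaps deviates from the linear rule (Paschen's law):

```python
# Estimate the air gap a given DC voltage can jump, using the
# approximate dielectric strength of dry air (~3e6 V/m).
BREAKDOWN_V_PER_M = 3e6

def arc_gap_m(volts):
    """Largest air gap (in meters) the voltage can break down, roughly."""
    return volts / BREAKDOWN_V_PER_M

gap_um = arc_gap_m(12) * 1e6
print(f"12 V can jump roughly {gap_um:.0f} microns of air")
```

For comparison, a human hair is on the order of 50-100 microns thick, which matches the comment's "about one tenth the thickness of a human hair".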
It was as I thought: somehow you get increased contact resistance in the female part of the connector, due to the sleeve failing, and it fails due to deformation of the sleeve. The heat will appear wherever the increased contact resistance appears and propagate from that point.
This connector would probably work fine if each crimp were terminated to a single cable. The fact that they are commoned by a rigid strip of metal in order to adapt the number of connections means that the bending moment placed on the outer pins is quite large. This combined with the split female crimps means you probably end up with the far pins making only partial contact at the tip and the base of the male pin. As you pointed out the thermal resistance of the plastic will be quite high so it won't take much additional power loss to cause overheating.
The bending of the cables is just stupid. It could easily be rectified if someone had actually used their brain when designing cables and how the connector was placed on the board.
@@mrdali67 You forgot one of the most glaring issues - STRAIN RELIEF. How TF there isn't any REAL strain relief material securely holding the wires is just absurd. Coming from a motorsports background, strain relief is a big deal because of all of the G forces & vibration, and to see such minimal strain relief on a connector of such fragile construction is just mind blowing. What Nvidia won't do to save a buck...
@@wassilia1234 The 3000 series cable did use the same dual-split female pins tho, which makes it seem more like a difference in the construction of the cables causing the issue rather than the pins themselves, since they are common between the two adapters, and we didn't have an issue drawing 450W+ over the adapter when connected to an RTX 3090 Ti.
@@RyTrapp0 Ya it's a huge problem, but you can easily make a connector that doesn't put strain on the cables. Put the darn connector on the back of the card and make 2 versions of an "L" connector, so it's possible to choose the right one depending on how you're mounting the card. Then you'd have no problem getting good cable management without bending the cables right at the connector. It's not that hard to figure out. Jesus... doesn't require an engineering degree 🙃. Again, it's because they didn't care and just botched a really bad solution that could have easily been avoided.
You might be right about Nvidia forcing the 12VHPWR on its add-in board partners; maybe that is why EVGA no longer makes GPUs. They didn't want to use the adapter either; they prefer the 8-pin connector.
27:52 starting from here, things go a bit wrong. The recommended terminal from the datasheet is the 10132447-121PLF HCC (High Current Capable) terminal, which for the terminal itself actually has a 12A rating with 16GA wire. The safety margin is nearly the same as a Mini Fit Jr 8-pin. 12A x 6 pins = 72A, and 72A x 12V = 864W.
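Running the numbers from the comment above (the 12 A per-pin rating for the HCC terminal and the six 12V pins are the commenter's figures; the totals follow from those):

```python
# Headroom of the 12VHPWR connector under the commenter's assumed
# per-pin rating, compared against its 600 W specification.
pin_rating_a = 12   # claimed HCC terminal rating with 16AWG wire
power_pins = 6      # 12VHPWR carries power on six 12 V pins
volts = 12

capacity_w = pin_rating_a * power_pins * volts
margin = capacity_w / 600 - 1   # headroom over the 600 W spec
print(capacity_w, f"{margin:.0%}")  # -> 864 44%
```

So under that rating the terminals top out at 864 W, about 44% above the 600 W the connector is specified to deliver.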
Thank you for understanding physics. The terminal was never designed to be side loaded which causes separation which causes increased heat which weakens the housings rigidity which leads to further separation and the cycle continues. ❤️
Doesn't EPS use a connector similar to the 8-pin while being rated for 300W? Thanks to Intel, we do have CPUs that can easily surpass 300W, and there haven't been any reports of widespread issues with that cable, right?
Have you actually read the Igor's Lab article (there is an English version)? He says the terminals get overloaded because of how the load is distributed over them. The outer terminals have to carry way more current than the inner ones, because of the way they soldered the cables onto the plug.
In theory you could pump a non-conductive fluid like mineral oil through the connector to cool it better. This would probably require modifying the connector a bit.
@@ActuallyHardcoreOverclocking there are PCBs with channels in the power planes for pumping coolant through lol. And there are some servers that use submersion cooling: instead of air in your case it's some sort of non-conductive fluid.
IIRC The issue with the solder isn't necessarily that the solder itself overheats, it's that those joints easily break and might tear the thin sheet connecting the 12v lines, causing the current to flow through fewer pins, causing greater thermal load.
Not sure if it matters, but the NV adapter uses 4 wires instead of 6, making it 456W, as the 4 middle pins share 1 wire per 2 plates. Interestingly, the pins with a single wire per plate, on the sides of the connector, are the ones that usually melt. Consider that those pins are fed by a cable consisting of 4 wires capable of 8.5A (300+W) each, being channelled into a single pin designed to handle 9.5A. Maybe if one of the 2 middle ones loses its soldering, or its pin's structural integrity gets compromised, the one on the side of the connector has to take up the extra load. It can, as it has over 100% safety margin, but then we overload that pin to 3+ times the amps it was designed to handle.
This is why the 24-pin ATX12V 2.x standard has a 3.3V sense wire going through the connector from the supplied 3.3V and back up to the PSU. This allows the PSU to keep track of whether the cable is plugged in and what kind of voltage drop it sees along the entire cable, including connectors. If PCI-SIG had used this approach for the 12VHPWR connector, it would have been fine, regardless of the number of splits or joins. It would have needed compliance on the PSU side though, but however the load was distributed, it would still show increased loss if there were any bad connections.
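The sense-wire scheme described above boils down to comparing what the PSU drives against what comes back on the sense return. A minimal sketch of that logic; the 0.2 V threshold and the readings are hypothetical illustration values, not from any spec:

```python
# Sketch of ATX-style 3.3 V sense monitoring: the PSU drives a known
# voltage down the cable and reads it back on the sense return. An
# excessive round-trip drop implies a high-resistance (bad) connection.
def connection_ok(driven_v, sensed_v, max_drop_v=0.2):
    """Return True if the measured drop is within the allowed budget."""
    return (driven_v - sensed_v) <= max_drop_v

print(connection_ok(3.3, 3.25))  # healthy cable, small drop -> True
print(connection_ok(3.3, 2.90))  # resistive joint, large drop -> False
```

The appeal is that this catches any bad contact anywhere along the run, without needing to know how the current is split across pins.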
If you look at the pic at 16:02, you will notice that one side of the pins has been flared out. It almost looks like someone took a screwdriver and "pushed" that side out. There is something seriously wrong with the plugs on these cards.
One thing that's worth noting is the NVIDIA connector takes a lot less force to insert than a real Molex Microfit 3.0, I haven't had the chance to feel a third party cable but I'd bet there's a notable difference in insertion force so the use of those split terminals is a smoking gun for me, contact force on the dimples is probably lower than spec leading to high resistance, poor current sharing and thus thermal runaway.
Many people have already pointed out that arcing happens at 12V just as well. The voltage only affects the distance the initial arc can strike, nothing else. Arc welders operate at 10-40V and can draw quite long arcs. The intermittent connections and sparks you mentioned: that does involve arcing.

The current return path: most of that will be going back through the same connector, as required by the PCIe specification. The PCIe slot is wired to handle up to 75W and will have a lot of problems if you try to push 200W through it.

The specs also are not as you make them out to be. The PCIe specs say which connector must be used and what electrical characteristics it must have. That means a PCIe 8-pin power connector is rated at 150W / 13.5A maximum for the entire connector. The physical connectors themselves can be rated a lot higher, but they are not required to be; there is no inherent higher safety margin there. A manufacturer could very well use a connector that is only rated at 4.5A/pin and that would be within the ATX specs. (Even worse: the table already gives a hint, but the specific crimp terminal that YOU CHOSE to show is rated to be used all the way down to AWG24, where they don't even give a minimum required current handling capability. So you could use that connector with AWG24 and have it burn up at 1A already; so much for the supposedly high safety margin.)

Now why are you comparing the specification of just the crimp terminal used in the 8-pin against the entire connector for the 12-pin? cdn.amphenol-cs.com/media/wysiwyg/files/documentation/datasheet/boardwiretoboard/bwb_minitekpwr3.0_hcc.pdf Here is the ACTUAL specification of the crimps used in the 12-pin connector. Oh look at that, they are 12A crimp terminals. And they are also available in 8A, 8.5A, and 13A variants. And as we have now seen, the problem is not that the connector is designed badly but that the cables are not attached properly.
The individual pins are designed to hold individual crimped cables, not to be shoddily soldered to a way-too-thin foil. You yourself went on and on about how the plastic on the outside of the connector is getting actively cooled and how the metal pins are decent thermal conductors... Now hear me out: what happens if there is a bad, high-resistance connection near the solder joints, where the outside plastic is cooled? Oh, the plastic around the joint will get very hot but look fine from the outside, and the heat will be carried into the pins and the cables, you say? It is likely the combination of both: with the bad soldering the current is already concentrated and more heat is produced, and then the bad crimps get bent and generate even more heat.
Well done. Measuring voltage drop under load would be my choice for proper testing. This is also the preferred way of testing circuits in automotive diagnostics.
Yeah, hence the hard 450W limit on the FE card. The irony of Nvidia "playing it safe" by sticking to the PCIe spec and cramming in a 4th 8-pin, when they may have avoided all of this drama by simply using that adapter, is just too good. I got tired of looking for a pic to see if it used double-split terminals, but I assume so. If the 3090 Ti balanced the load across the pins better, that may be a big factor, but I really want to see tests on a 4090.
@@JJFX- if they were so set on this connector, they should have just used two of them and run 300W through each, fed by two 8-pins each. Sure, it'd take up more space, but it would be better than 4x 8-pins.
@@p.e.r.c.y More space? Do you mean the card connector being 2 merged 8-pins, not the 16-pin connector adapter? As much as I'm for not sticking to the 150W spec, I understand why they wouldn't want to push it to 300W for widespread use. Using 3, like all the 3rd-party PSU adapter cables do, gives plenty of margin for error and more room for overclockers to play with. I also don't understand why they aren't including the PCIe slot in the power spec for these cards anymore, like they did when they were just using 8-pin connectors. They didn't seem to care when the 1080 FE max limit included ~70W over PCIe.
@@JJFX- It is the same double split female pin, if you check the Techpowerup review of the 3070 it shows a pic of the connector, you can clearly see its dual split pin (albeit with half the pins populated because it's a 3070). The reviews of the higher end cards didn't appear to have pictures of the 12-pin end of the adapter.
@@JJFX- There is still a little draw from the slot (15-20W at full load based on various reviews), but the main reason that most of the power comes from the PCIe power cable connection is for ATX 3.0 PSUs and their ability to communicate directly with the GPU about power requirements, the ATX 3.0 PSUs have no way to communicate about the power draw over the slot from the motherboard.
What I don't understand is why the bad connection still gets a large amount of current going through it. Igor's teardown seemed to show all the pins in a row being connected together, which should mean that if one of the pins gets a higher resistance it should start passing less current and the rest should take up the slack. Or is this because the 12V pins on the card side are not all connected to the same power plane?
If the terminal gets hot, the heat will travel up the wire a short distance and you'll see it on a thermal camera. Overall I agree. Yes my background is in electronics and yes I own a thermal camera.
Yeah, that's also what I'm thinking... how can a disconnected pad cause the melting of pins? The amount of current going through the pins doesn't change, only the current going through the cables, which aren't failing. Even IF both outer pads rip, for many PSUs 300W on an 8-pin is still within spec... unless you plug in a daisy-chain cable with both outer plugs, and then all the current goes through one cable to your PSU... But in that case it's not the pins or the GPU that fails, it's the PSU...
He is saying you cannot arc 12 volts. He sounds like someone with a rudimentary understanding of electricity. I have seen voltages under 24 vdc arc like crazy due to inductive loads.
@@mohibdosani6957 OK, when a split pin starts to open, intermittent contact will arc. I have seen this so often with field connections on Automation where connector bodies are fixed to a structure when they shouldn't be. This exact issue is one of the highest causes of electrical failures I have seen in nearly 2 decades as an engineer designing electrical systems on automation. The point is when you have current draw and start to open the circuit, any inductance in the load will force current draw increasing voltage and causing arcing. I am just saying it is inaccurate to say a 12 volt load cannot arc.
but then it would melt at the solder joints not at the terminals. The damage is almost entirely localized to the terminals. The ugly soldering isn't what's getting hot.
@@SuperUltimateLP I think GPUs are required to detect PCIe slot power and, if applicable, supplementary power. I don't know if the lack of PCIe slot power would prevent a GPU from turning on, but if it doesn't, the 75W max that could be supplied from there would have to go through the crappy cable and would definitely cause issues. For example, my 3090 will not draw more than 160W per 8-pin connector, and where possible it won't draw power from the slot. The most I've seen is 60W from the slot, and the average is normally 15W, with the rest via 2x 8-pins.
Is there any reason why we only see 6-pin connectors used on power boards? You could basically fit 4 of them in the same area as 3 8-pins, while delivering the same power as long as all 3 12V pins are present.
Thanks for even more information on this topic, man. The more I hear about this stupid connector and how badly Nvidia screwed up, the more I want to switch to AMD, because it has been confirmed that with their RX 7xxx series GPUs AMD is NOT going this route; they are keeping the 8-pin connectors that we know and love.
I think the double split terminals aren't helping, but since in most cases the outer connectors melt first what is probably happening is that bending causes the small part between the solder joint and the pin on the outer pins to break/rip/crack, BUT not all the way through just enough to reduce the cross-section and thus increasing resistance and heat.
Thanks for explaining this in a manner that isn't sensational and just looks at hard facts. As for the choice to go with the 12-pin, I think it really has to do with selling it to OEMs and the like. Even though we could draw 600 watts from 2 8-pins, as NVIDIA I can't really go to an SI like Dell or HP and say, "yeah, I am going to sell you a product that draws power out of spec" (even though it's perfectly fine). It's a lot easier to say, "hey, this plug right here is certified to draw up to 600 watts." For a DIY customer it makes sense to just have 8-pins, but from a business/legal perspective you can't knowingly sell something out of spec and not expect it to come back and bite you. Not saying this faulty adapter won't cause some harm either. I say this as someone who used to be at HP Enterprise; on paper, items matter a lot even when they don't make sense or aren't realistic.
In Igor's article on the subject, it specifically says the issue is not the connector... PSU-provided cables use the new spec connector just fine. It's NVIDIA's specific adapter build quality that's the issue, so theoretically, if SI builders just used PSUs compliant with the new ATX 3.0 standard, they should be fine.
Please, someone tell me why we can't use a 2-pin connector that can handle 600 watts? Or even a 4-pin connector that can handle 600 watts? In the radio control hobby we use robust connectors and larger-gauge wires, and there are very few problems running 150+ amps at literally over 40V. I don't understand why something like this could not be adapted to work for video cards. 600W at 12V is only 50 amps, and the connector and wires are very manageable. Please help me understand this.
People probably bend the cables much more than on the 3090 Ti because the card is so much larger, and in many cases where the largest cards from last generation were fine, the 4090 sits right against the panel.
OMG, thank you thank you thank you. I thought I was the only one who spotted the 100%-to-10% safety/hamfistedness margin reduction. That's a brutal cut in headroom to dump on your users without warning. Both connectors were rated for the same 30 mating cycles, but a 100% safety margin makes that far less important than 10% does. PCI-SIG should toss this crap to the curb. You want 600W? Then design a 1200W connector and de-rate it. Also: 3090 Ti adaptor = expensive 12x crimped cables. Zero cheaparse soldering in sight.
So, assuming I have a Seasonic Prime PX (which uses 18 AWG wires) and I also use Seasonic's adapter (12 --> 2x8pin) it means that they are basically doing what AMD and operate at a safety overhead of 2%?
I think the important thing missing from all the explanatory videos is: why doesn't the housing of the receptacle prevent this excessive movement when the wires are bent? It should be possible to both allow some movement, so the pin can seat well, and prevent the receptacle from moving too much.
Didn't the 3090 Ti use the same adapter but without the 4 sense pins? And since the 3090 Ti has the same connector as the 4090, did anyone try the adapter on a 3090 Ti to see what happens? Do the sense pins work on a 3090 Ti?
How are people fitting these cards with their connectors into normal ATX cases and closing them? I run my case open because I didn't like the connectors pressed up against the side panel, and I have a 1080 Ti. Short of a case that puts the card on a riser and mounts it vertically, I can't think of a normal case you could close with this in it without bending the cable significantly; even just trying to close it would pry the terminal spades apart. It's absolutely ridiculous.
Why don't they just use a big ol' 2-pin DC connector like RC LiPo batteries use, which is actually designed for high amps and doesn't care about cable bending? It would take about the same amount of space on the PCB as that 12-pin.
Hi, I've never commented on one of your videos before, but I have to say something about the solder side of the connector. It looks to me like each of the solder pads on the connector is separated by a split. If the applied solder only connects to a single solder pad, then bending of the 14AWG wires could put more tension on the outer pin interconnects than on the middle four, which are all soldered together. Nvidia should've made the solder pad one piece and only had one split in the pin, as you mentioned.
Which side again is the ground side? The side with the side-channels, or the other? Seems like the burnt ones are on the same side? Maybe that is a clue?
32:13 Load balancing might reduce the current on each of the 3 sections further than a bad connection alone would. So if the card measures a voltage drop on one section, it can reduce the load on it. To achieve that, the balancing parameter should be the voltage drop multiplied by the current draw, so that the physical value being balanced is the power dissipation.
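The balancing criterion suggested above can be sketched in a few lines: for each monitored pin group, compute the power burned in the connector itself, P = ΔV × I, and throttle the group dissipating the most. The drops and currents below are made-up illustration values:

```python
# Hypothetical per-group telemetry: (voltage drop across the connector
# section in volts, current through that section in amps).
def dissipation_w(drop_v, current_a):
    """Power dissipated inside the connector section, P = dV * I."""
    return drop_v * current_a

groups = [(0.05, 17.0), (0.05, 17.0), (0.30, 16.0)]
powers = [dissipation_w(v, i) for v, i in groups]
worst = max(range(len(groups)), key=lambda k: powers[k])
print(powers, "-> throttle group", worst)
```

Note that group 2 draws slightly *less* current than the others but still flags as the problem, because its larger voltage drop means more heat is generated in the contact itself; that is exactly why ΔV × I is a better balancing signal than current alone.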
Connectors working fine. Done a bunch of dice on 4090 and even broke a really nice gpu bin Msi suprim with my teammate. Using same connector on strix and it’s a hard kink. Just plug it in. Caught myself a couple times now.
I just want to know whether my connector will melt or not after weeks of usage; I am waiting for a CableMod cable now. Corsair HX1200 PSU and MSI RTX 4090. Should I worry after 3 weeks or not?
With the 3090 Ti vs 4090: if you split the 12V into 3 sections, wouldn't the card then be able to spot a greater voltage drop on 2 pins vs the other 4 pins and throttle back? The cards in stock config only pull 450W on average, so you would stay within spec on the rest even with 2 pins practically disabled.
If you want a good example of arcing, think about a car's ignition system, where the ignition coil pumps out over 40,000 volts to throw a good spark across the gap of the spark plugs.
Theoretically the load-balancing across sets of pins should make the 3090 more susceptible to overloading pins, since a failed pin would route the full cohort target load through its cohorts only, whereas a 4090 would spread the failed pin's load across the entire connector's pin array (per direction).
Also, with that adapter all the pins are connected to a bus bar. So one good contact will take the biggest portion of the current and have the most power to dissipate, because the bus bar evens out the voltage drop across the connections :) An individual cable for each pin might be enough to fix this balancing.
I am wondering if potentially the 3090 Ti adapter was tossed in with the 4090 and it was rated for 150 instead of 300. That would explain Igor's pics. I don't own the cards, so take it with a grain of salt.
Aye I concur, have noticed it's "always" the pwr/12v pins, and it _seems_ to "always" *_start (!)_* at the edge pins, commonly pin 6. 12V being the inside row on the socket/pcb side (pin 1-6)/does make it a bit of a PITA; I don't know if any of the AIB versions have a flipped socket though. BUT! I must insist, even on the Founders Edition, that the appropriate method here would be to take a couple of the dirt cheap, tiny glass bead K-type thermocouples or 100k 3950 NTCs, dip the bead in TIM, then secure the cables somehow so the tension of the stiff sensor legs makes them push against pin 6 and 1 (the outer edge 12v pins) , as close to the backside of the socket housing as possible. IMO that is the closest we're gonna get to a temp measurement "inside the connection", as it's a direct pin/lead contact measurement, and barely 3-4 mm from the pin connection itself.
I think the problem with the solder joints is that if a 12V wire loses connection but its ground stays connected, the adapter still considers the cable 600W-capable and sends more power through the other 3. The power sensing seems to only count how many pins are grounded... Now imagine you have 2 of the 12V side disconnected...
it takes about 30,000 volts per centimeter, or about 75,000 volts per inch, to jump a clear air gap. Once the gap is ionized, the sustaining voltage is less.
To me it looks like when the cable snaps off from the pin, there is still the same current going through the pin, but there is a much smaller path for heat to flow as the wire has been disconnected. Normally the heat will go down the wire and dissipate. So removing that wire connection increases the thermal impedance and it gets significantly hotter.
Includes. I understand that the 105°C rating includes those 30°C, so realistically you get 75°C of headroom above ambient. Give some error margin and you get around 65°C max usable temperature near the connector. Being so close to heat-generating components, I suppose hitting 65°C is pretty realistic. Could it be that, by choosing this connector, they were just overoptimistic, relying too much on simulations and not considering real-world usage?
Well, copper wire is also a good heat conductor. If the connector itself is hot enough to melt plastic, the copper wire will also be visibly hot on the FLIR, since the heat gets conducted up into the wire. This problem could be mitigated by gold plating, which would reduce contact resistance, or even by switching the connector to solid copper. Not sure what metal it is made of, but most regular metals are not great conductors compared to silver, copper, and gold.
Molex makes those; they range in price from $0.14 to $0.21 per piece. Something similar but automotive-grade from TE Connectivity starts at $1.50, and the only time a TE plug/pin failed on me was when it passed 28V at 400A.
How do you feel overall about the amount of current being shoved through those connectors? At 600W that's 50A of current which just seems to be asking for connector problems. Obviously that'd be total chaos with power supplies, but maybe there should be a 24V standard to bring the current back down before we end up with GPUs with car battery terminals on them... If you think about it, what else in your house even requires a 50A capable connector?
Your concern is unfounded. The wires are more than capable of the current when counted as a parallel bundle. I will explain why, since you seem smart. The 4090 adapter shown uses 14AWG wires, 4 of them for power and 4 for ground, feeding into the 12-pin plug. 14 gauge is a very common wire gauge in homes (as is 12) and is regulated to 15 amps in the US, so you'd be familiar with this spec. That is conservatively rated for in-wall home safety; the wire itself is acceptably used at 25-30 amps in other situations, or even more if cooled. So, taking a conservative 15 amps * 4 wires = 60 amps, times 12 volts = 720 watts. More than enough to cover the 600W figure. That said, I can't speak to the pin and the connector, but BZ said 9.5 amps capable per pin * 6 pins = 57 amps = 684 watts. So also enough. It wasn't the electrical engineering that went wrong, it's the mechanical engineering. You bring up a good point, though, about 12V becoming somewhat undesirable when such high amps are needed.
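The arithmetic in this comment can be sanity-checked in a few lines. This is a minimal sketch using the commenter's own figures (15 A per 14AWG wire, 9.5 A per pin), which are quoted ratings, not official spec values:

```python
# Back-of-the-envelope check of the figures in the comment above
# (all ratings are the commenter's numbers, not official specs).

def capacity_watts(amps_per_conductor, conductors, volts=12.0):
    """Total deliverable power for parallel conductors at a given voltage."""
    return amps_per_conductor * conductors * volts

wire_budget = capacity_watts(15.0, 4)   # 14AWG at a conservative 15 A each
pin_budget = capacity_watts(9.5, 6)     # 9.5 A per pin, 6 power pins

print(wire_budget)  # 720.0 W -- above the 600 W target
print(pin_budget)   # 684.0 W -- also above, but with far less margin
```

Both budgets clear 600 W, but the pins leave much thinner headroom than the wires.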
@@mrlithium69 Yeah, I get that the connector can handle it; he's explained it very well in the video and supported it with actual specs. But the 8-pin connector did have a much wider safety margin, which undoubtedly contributed a lot to incidents like this being rare. Hence the question: have we reached a point where GPUs need so much power that 12V isn't worth it anymore? I know these things fairly well, and I wouldn't have thought to even think about the connector if I had a 4090: plug it in, close the case panel and use the card. Realistically, with an 8-pin, even though it's not great for it, you wouldn't think too much about it being squished a little if you're tight on space or length, as long as it's "clicked" in properly and doesn't come out. Lots of people have their motherboard 24-pin do a sharp bend to go into the back of the case and nobody thinks anything of it either. 50A just needs a lot of pins and very good contact, and nothing about this connector or cable screams "high current, proceed with extra care". I'm also thinking about all the videos out there of LTT or GamersNexus receiving PCs from prebuilt companies with connectors not plugged in correctly. This connector effectively needs to be plugged in perfectly and its cable run perfectly to not be an issue. Going to 24V or even 36V would bring the tolerances back to 6/8-pin levels. It's going to be stepped down to 1.something V anyway; if anything, the VRM should run cooler. With ATX12VO coming up, now would be _the_ time to include other voltages in the spec on top of getting rid of 5V.
I would like to see 48V, which is what telecom uses. I think the main problem in this case is that the Nvidia adapter has a bus bar in the connector, so if the connector is under uneven force and one pin makes better contact than the others, that pin takes "all the amps" and burns. OK, not all, but more than the rated current for that pin. With individual cables, the cable resistance balances those currents.
Sup Buildzoid, have you seen the Brazilian Galax team (Teclab) pushing one cable with a >1000W load for a couple dozen minutes in a few livestreams? They even tested different power supplies, with plenty of them going into protection before any harm was shown on the cables, which is kinda impressive. Still, I'm mostly interested in your opinion on those guys who push for WR OCs vehemently defending the build quality of the cables, in response to the enthusiast public encountering the issue and pushing for attention on it.
@@SolarianStrike That is true. Although I understand this could be extremely naive on my part, I still find it a bit odd/unlikely that those guys, with an image to preserve in the hardware space, would shill for Nvidia this heavily if there truly are issues with the product, to the point that "casual" users at under-spec wattages (most GPUs don't seem to push past the 500W mark without meddling or specific models) are suffering cosmetic damage and/or pin damage.
What is most amusing is that despite the Amphenol connector being the one approved by PCI SIG, Nvidia who helped design it, contracted Astron (who doesn't publish datasheets) to build it. That adapter is NOT Amphenol. Astron made the 12-pin last gen, too, but they actually used crimped terminals since they only used 16GA wire, so they didn't have this double split in the terminals. But the need to solder was due to using 14GA and the max size for the Amphenol terminals being 16GA. Nvidia deserves to eat this.
A better mechanical design, like a retention bracket attached directly to the wires with some kind of comb to the adapter, would have prevented the solder joints from cracking. Can't believe Nvidia engineers didn't see that kind of problem coming during mechanical stress tests.
I am an electrical engineer, but I don't have expert knowledge in this field. Something you said doesn't make sense, and I lack the equipment to test whether it has any foundation in fact. You mention that the burning is occurring on the 12-volt side and not on the return side, and your explanation is that the return current moves significantly through the motherboard and only partially through the return conductors of the 12VHPWR. I don't think this makes sense, for several reasons: 1) if the traces on the motherboard can support this sort of current over this sort of distance, then why do we even have special power connectors on the video card? 2) if the motherboard is supporting a significant percentage of the return current, then having just as many return conductors on the PCIe/12VHPWR connector would just be wasteful. 3) I haven't read through the ATX power standards, but I really doubt that overcurrent protection wouldn't trip if that much current was moving through the 24-pin motherboard connector. I would have designed it so that approximately as much current must return through the proper return conductor or it trips, but I don't know if this is how it was done. My belief is that a bending moment is being introduced through the connector with nothing to brace against on the opposing side, opposite the pivot point below the ground pins. This results in improper contact on the 12-volt pins, leading to a significant increase in resistance at that location. The clip could have been used to reduce the bending moment, but for some stupid reason it's on the underside of the card (this is just bad engineering); if you look at a PCIe power connector, the clip is on the top side (nearest the PCB), which acts to reduce the bending moment if you pull the cable downward or let it hang under gravity.
If the 12VHPWR socket was mounted upside down (so the clip was on top), I think this would probably fix the whole issue.
Is it possible to have two Nvidia GPUs in one system with a different driver version for each card? My NVS 510 for monitor output doesn't get along with my RTX 3060: installing the NVS 510 drivers wipes out the RTX 3060 drivers, and likewise installing the RTX drivers clears the NVS drivers.
One of the clues generally being overlooked on this topic, across multiple videos and sites, is that many of these failures happen after 3 weeks, or several months, or 9 months, etc. So think about that for a minute. We're shoving 300 to 400 W across one to three tight pins on a new connector design and getting runaway heat accumulation, but it doesn't happen right away; it fails after a significant amount of time. Why? What could cause this? I submit: corrosion! Namely, oxidation at the exact fine point of pin contact. We know that oxidation increases with heat exposure and air exposure over time, and that corrosion causes even more heat due to resistance, and before you know it you have runaway heat buildup. So how do we prevent this? I am amazed at how little attention this idea gets. As with any electrical contact surface that carries a high load relative to its surface area, we have to protect the existing "OK, but not great" connection with an anti-oxidation contact grease designed for this purpose. Yes, they do make them; they are lithium based, and in this case we want something that encourages conductivity, not discourages it, so we don't want a dielectric "no-ox" grease. Get "dielectric grease" out of your head, because that's what comes to mind for most people; we want the opposite of that. CAIG Labs makes the perfect solution for this, called DeoxIT L260Cp. The "Cp" stands for copper particles: it has copper particles in the grease, and it's lithium based so it will not degrade under heat stress. I'll be applying mine with a needle-type applicator, sparingly, inside the barrels of the connector.
Aren't a lot of PSUs built to compensate for voltage drop on detection? Also, among low-to-mid-tier PSUs, aren't there plenty which do it across all the lines and not only the dropped one?
It makes sense that the load balancing of the 3090 Ti is saving the connectors from burning. The power draw will follow the path of least resistance. For example: if 4 out of 6 of those contacts have any more resistance than normal, the current will flow through the 2 with less resistance and exceed their specification (creating the problem); with power balancing, the board will not let that happen. The split casing of the contacts, plus bending, is probably creating this bad connection on some of them and causing the imbalance.
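The current-sharing effect described in this comment can be sketched as a simple parallel current divider: each pin's share of the total current is proportional to its conductance (1/R). The resistance values below are invented to illustrate the imbalance, not measurements:

```python
# Sketch of current sharing across paralleled pins tied to one bus bar.
# Each pin's share is proportional to its conductance (1/R).
# Resistance values are hypothetical, chosen only to show the imbalance.

def share_current(total_amps, resistances):
    """Split a total current across parallel contact resistances."""
    conductances = [1.0 / r for r in resistances]
    g_total = sum(conductances)
    return [total_amps * g / g_total for g in conductances]

# 50 A total; four contacts degraded to 20 mOhm, two still at 5 mOhm
currents = share_current(50.0, [0.020] * 4 + [0.005] * 2)
print([round(i, 1) for i in currents])
# -> [4.2, 4.2, 4.2, 4.2, 16.7, 16.7]
```

The two "good" pins end up far above a 9.5 A per-pin rating, which is exactly the overload mechanism described above.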
So the point of failure is the narrow pins at the very tip. Makes sense. Napkin math: at 600W over six 12V pins, that's 8.3A per pin. Even if the wires are 14AWG, the pins are still the same thickness, and the contact area is halved when it bends. Making the connector smaller while raising the power draw per pin was playing with fire.
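A quick sketch of this napkin math, plus what a halved contact area does to heating. The contact resistance is a hypothetical figure, and halving the contact area is modeled, as a rough assumption, as doubling the resistance:

```python
# Rough per-pin numbers for the napkin math above. The contact-resistance
# value is illustrative, not measured.

total_w, volts, pins = 600.0, 12.0, 6
i_per_pin = total_w / volts / pins          # ~8.33 A per 12 V pin

r_contact = 0.005                           # ohms, hypothetical good contact
p_good = i_per_pin**2 * r_contact           # heat in one good contact
p_bent = i_per_pin**2 * (2 * r_contact)     # halved contact area ~ doubled R

print(round(i_per_pin, 2))        # 8.33
print(round(p_bent / p_good, 1))  # 2.0 -- heat doubles at the same current
```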
EDIT2: I really do not want to put anymore time and thought into this 12VHPWR adapter issue. I have way too many mobos and RAM to test.
EDIT: my statement that 12V can't cause arcing isn't entirely accurate. While it's technically true that 12 volts can't cause an arc across an air gap, inductive loads (like motors) powered by 12V can cause an arc when suddenly disconnected, because the inductance causes a temporary high-voltage spike. This, however, is not relevant to the discussion at hand, as you aren't going to get an inductive voltage spike when one or several pins in the 12VHPWR lose contact. (You might get an inductive voltage spike if the connector is disconnected entirely.)
Also, while the materials used for the connector are selected not to self-sustain a fire after ignition, it is not impossible for them to ignite something else that can sustain a fire. I would recommend that you don't leave cards powered by the Nvidia 12VHPWR adapter under load while unattended.
TL;DW
1. Plastic is an excellent thermal insulator. Pointing a thermal camera at the connector won't give you a good read of the internal temperature of the connector
2. Try measuring the voltage drop across the connector in various states of bending
3. The Nvidia adapter uses double-split terminals that are less physically robust than other terminal options (probably why all the photos of melted connectors are with the Novideo adapter.)
4. 3x PCI-e 8pins are just better than the 12VHPWR connector
5. The connector can be melting internally without externally visible damage. So just because it looks fine on the outside doesn't mean it's fine on the inside.
BTW this video is even lower effort than the last one on this topic. I don't find unreliable high-end Nvidia GPUs particularly surprising or interesting.
5) This is completely unacceptable, let alone on a $1,600 retail product
What about the one from atx 3.0 psu? Literally same one?
@@user-bonk the PSU manufacturer can use a different terminal style from what Nvidia uses.
@@user-bonk in igor's article about the situation, it specifically says the issue is not the connector...PSU provided cables use the new spec connector just fine. It's NVIDIA's specific adapter build quality that's the issue, not the GPU/card itself.
I've been trying to explain that (1 & 5) about plastic and people trying to read the temps with a thermal camera 😂
Like, bruuuh that's external temp!!... Not surprised Jay was one of those lol
Glad to see people helping an indie company with their issues. Hope this increases adoption of GPUs for Nvidia outside of their niche Users. Their products seems to be so cheap and safe.
XD
this would have been a legit comment 20 years ago
@@axeltaylor9389 A little longer ago than that. NVIDIA was already a dominant player by 2002-2003. By then they bought out STB and 3dfx. Matrox had already faded, and it was down to a two vendor race. Now, back in the NV1 and Riva 128 days, that's when they were an indie company who needed help.
Imagine cheaping out on the adapter for a $1600 TOTL GPU
even the thought that they go and make this dogshit connector instead of just giving the fucking card 4 8 pins or 2 cpu 8 pins blows my mind but i guess they want to cheap out on the pcb even more like the greedy bastards that they are
@@glitter_fart you increase the potential difference until it overcomes the resistance of an airgap and creates an arc. Current has nothing to do with it.
@@glitter_fart Amps have nothing to do with it; 12V DC does nothing. It takes hundreds of volts DC to create arcs through air, even at less than a mm.
@@glitter_fart You need 1000 Volts per mm distance to create a spark in Air
@@glitter_fart The only way you can get it to "arc" is if you are plugging live 12V into a "cold" device (one that has fully discharged its own capacitors) and expecting stuff to run normally.
In personal computers (yes, PCs, in servers it's a different story) you never EVER do that. You will either trip protection or straight up burn connections (near instant) or chips that are connected there.
I can get even 3V to arc at micron distances of air gap but never in assembled PC.
You're the first person who has actually said what seems obvious. If the problem was at the solder joints, why is the melting at the opposite end? It's because the problem is at the actual connection, where they are making (poor) contact, and that's where the melting is happening.
Because if you lose a solder joint, you have the same amount of current going through a smaller number of terminals, leading to more heat generated in the terminals that already have suboptimal contact. More current, more heat.
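This scaling is easy to quantify: resistive heating goes as I², so pushing the same total current through fewer terminals makes each surviving contact disproportionately hotter. A small sketch with an assumed (hypothetical) contact resistance:

```python
# Illustration of the failure mode described above: if solder joints to some
# terminals break, the total current crowds into the remaining ones and the
# heat per terminal grows as I^2. Contact resistance is a made-up constant.

def heat_per_terminal(total_amps, live_terminals, r_contact=0.005):
    i = total_amps / live_terminals
    return i * i * r_contact  # watts dissipated in each surviving contact

all_six = heat_per_terminal(50.0, 6)   # every terminal carrying its share
only_two = heat_per_terminal(50.0, 2)  # four joints broken

print(round(only_two / all_six, 1))  # 9.0 -- 3x the current, 9x the heat
```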
@@Maekiii So losing connection on one of the terminals by breaking the solder joint would cause the rest of them to carry a bigger load. Yes, but how does that explain the melting happening on the supposedly disconnected terminal-pin connection?
Yeah... Man J2C is no electrical engineer that's for sure.
Yeah, he analysed it and Igor further proved it.
Plus the fact that they are using garbage-thin metal, both on the board and in the cable, for the part that makes the actual connection. It's the same mistake that is often made when building a cheap high-powered speaker: it doesn't matter that your cable is thick enough for the power drawn if the connector on the speaker unit is poor, or the small wire leading the signal to the coil is too thin; you're going to get too much heat either way. It's just a question of whether it's critical enough to do any damage.
"I have several calculators open" feels like a very AHOC moment
Buildzoid is missing out on potential voice acting roles in nerd comedy animations on netflix.
@@skorpers Having followed Buildzoid since his Gamers Nexus videos, the conclusion is obvious; he is an irresistible force of nature. Confining him to a script would fracture space-time.
@@wargamingrefugee9065 He could employ the Gunnery SGT Hartman method which is to react genuinely to the situation at hand and the dialog becomes final.
@@skorpers Now that I would watch. Where's Stanley Kubrick when we need him?
@@wargamingrefugee9065 Will we be seeing 2025: The Samsung Odyssey?
And now I return to the channel where my 12VHPWR TH-cam odyssey began. The circle of life is now complete.
I cannot believe these connectors can only be disconnected and connected 30 times. This is nuts. Screw nvidia.
@@mrfarts5176 it’s literally the same for the previous 8 pin connectors
@@DepressedMusicEnjoyer I have never heard of that being a thing on PCIe connectors. What is the rating for the number of connect/disconnect cycles on a PCIe connector? I am highly skeptical that you are correct here; there is flat out no way those are rated for 30 disconnects. I researched this for a while but couldn't find anything to support your statement. Why are you trying to defend a company that doesn't give a flying F if they burn your house down, unless it costs them money?
Thinner metal = higher resistance, improper contact = higher resistance, higher resistance = more heat = plastic melting, it's just that simple. Thanks for clarifying shit, been following on Reddit and TH-cam, finally someone that knows their shit.
When a bus bar connects all the pins, current can shift to other pins, lowering the power dissipation on any one pin. But if you get one unusually good connection because of uneven force, you have a problem, because that pin will take a bigger portion of the current and heat up more than the others. If the Nvidia adapter had an individual cable per pin, we wouldn't have this current-balancing problem at this scale.
This is the best explanation of this problem. I’m surprised other TH-camrs can’t understand this.
the irony here being that this particular youtuber turned out to be wrong. had nothing to do with the quality of the pins. turns out the issue was a couple of clowns not plugging the cable in all the way. gamer's nexus had one of their 4090s running on just 1 live and 1 ground pin with a 600W (maximum allowed by BIOS) power draw, and it was fine because those 2 pins were plugged in correctly.
more than 200,000 4090s and 4080s sold to date. only a dozen or so RMAd cards according to the board vendors. the connector is fine. and i personally love it. makes SFF builds so much easier.
Your speculation, and the way you address it, is exactly what we expect of you. Thanks for the post!
This explains why JayzTwoCents couldn't reproduce the "fire": he was pointing a camera at the whole connector, while the real culprit is the pin inside the connector, which is impossible to isolate and check from outside.
Jay🤐
Also, he didn't bend it while the connector was attached to the GPU; he bent it before it was connected.
Yup, the camera failed to make the connector catch on fire, he should have used a high power laser instead.
This is why I don't pay much attention to Jayz videos. He doesn't know what he's talking about when it comes to technical stuff.
@@fireonawire But to be fair, he also doesn't pretend he does.
With the split pins, a bit of sideways force would cause the pin to open slightly; then you would only have good contact in two places, the front left and back right. You'd also have the top and bottom, but a) those are only partial surfaces, and b) the force is left/right, so contact would be better on those faces.
I'm really interested in this plug, not so much for the actual product (who owns a 4090 lol), but for how things like this are handled and the level of trust and slack Nvidia is given by third-party reviewers. Anything short of a full recall ASAP is incongruent with the standards we held NZXT to.
They just need to replace the adapter, for free, for everyone who bought a 4090 and uses the adapter.
You've got a clear mind in that head of yours, man. Thanks for this "rambling". Big thanks.
More videocards PCB rambling/reviews please ;)
(for reference, for a 12 V voltage to create an arc, _in air_ , you'd need to bring the two connectors 4 _microns_ close to each other. About one tenth the thickness of a human hair)
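That figure follows directly from the approximate dielectric strength of dry air, about 3 MV/m, via gap = V / E_breakdown:

```python
# Why 12 V cannot arc across any practical air gap: divide the voltage by
# the approximate breakdown field of dry air (~3 MV/m).

V = 12.0            # volts
E_breakdown = 3e6   # V/m, approximate dielectric strength of air

gap_m = V / E_breakdown          # widest gap 12 V could jump
print(round(gap_m * 1e6, 2))     # 4.0 -- microns, ~1/10 of a hair's width
```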
It was as I thought. Somehow you get increased contact resistance in the female part of the connector, due to failure of the sleeve. It fails due to deformation of the sleeve. The heat will appear wherever increased contact resistance appears, and propagate from that point.
This connector would probably work fine if each crimp were terminated to a single cable. The fact that they are commoned by a rigid strip of metal in order to adapt the number of connections means that the bending moment placed on the outer pins is quite large. This combined with the split female crimps means you probably end up with the far pins making only partial contact at the tip and the base of the male pin. As you pointed out the thermal resistance of the plastic will be quite high so it won't take much additional power loss to cause overheating.
The bending of the cables is just stupid. It could easily be rectified if someone had actually used their brain when designing cables and how the connector was placed on the board.
And the 3000 series adapter has individual cables for all the terminals. It seems like lower build quality this time.
@@mrdali67 You forgot one of the most glaring issues - STRAIN RELIEF. How TF there isn't any REAL strain relief material securely holding the wires is just absurd. Coming from a motorsports background, strain relief is a big deal because of all of the G forces & vibration, and to see such minimal strain relief on a connector of such fragile construction is just mind blowing.
What Nvidia won't do to save a buck...
@@wassilia1234 The 3000 series cable did use the same dual-split female pins tho, which makes it seem more like a difference in the construction of the cables causing the issue rather than the pins themselves, since they are common between the two adapters, and we didn't have an issue drawing 450W+ over the adapter when connected to an RTX 3090 Ti.
@@RyTrapp0 Ya, it's a huge problem, but you can easily make a connector where you don't have strain on the cables. 1. Put the darn connector on the back of the card and make 2 versions of an "L" connector so it's possible to choose the right one depending on how you're mounting the card. Then you'd have no problem getting good cable management without bending the cables right at the connector. It's not that hard to figure out. Jesus... doesn't require an engineering degree 🙃. Again, it's because they didn't care and just botched a really bad solution that could easily have been avoided.
Great video. I think you did a good job of politely explaining that plastic is an insulator; it's just surprising that you had to.
I really love this connector. It's a beautiful disaster: a bunch of individually reasonable decisions combining to make something terrible.
Sounds like it was created by a government bureaucracy.
You might be right about Nvidia forcing the 12VHPWR on its add-in board partners. Maybe that is part of why EVGA no longer makes GPUs; they didn't want to use the adapter and preferred the 8-pin connector.
They probably didn't quit just because of this, but it could be one of the reasons.
27:52 Starting from here, things go a bit wrong. The recommended terminal from the datasheet is the 10132447-121PLF HCC (High Current Capable) terminal, which actually has a 12A rating per terminal with 16GA wire. The safety margin is nearly the same as a Mini-Fit Jr 8-pin. 12A x 6 pins = 72A, x 12V = 864W.
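For what it's worth, recomputing from that 12 A per-terminal figure (the commenter's datasheet number, with the 12VHPWR's six power pins assumed) gives:

```python
# Terminal-rating arithmetic using the 12 A/terminal HCC figure quoted
# above and the connector's six 12 V power pins.

amps_per_pin, power_pins, volts = 12.0, 6, 12.0
total = amps_per_pin * power_pins * volts

print(total)                       # 864.0 W rated capacity
print(round(total / 600.0, 2))     # 1.44 -- ~44% margin over a 600 W draw
```

So the terminal rating alone still leaves margin over a 600 W draw, assuming the 12 A figure holds.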
Datasheet, if youtube allows the link... cdn.amphenol-cs.com/media/wysiwyg/files/documentation/datasheet/boardwiretoboard/bwb_minitekpwr3.0_hcc.pdf
Thank you for understanding physics.
The terminal was never designed to be side loaded which causes separation which causes increased heat which weakens the housings rigidity which leads to further separation and the cycle continues. ❤️
Doesn't EPS use a similar connector to 8-pin while it is rated for 300w? thanks to intel, we do have cpus that can surpass 300w easily and there hasn't been any reports of widespread issues with that cable, right?
Yeah, the 8-pin EPS doesn't have much of a safety margin on it. The PCIe 8-pin is a bit of an anomaly in that sense.
Have you actually read Igor's Lab article (there is an English version)? He says the terminals get overloaded because of how the load is distributed over them: the outer terminals have to carry way more current than the inner ones, because of the way they soldered the cables onto the plug.
Not exactly. They are made in a way that the outer terminals can start ripping off when bent. It's not the plug, it's the adapter.
38 minutes. Great video length to watch during a lunch break :D
This wasn't really an issue with 3090Ti, so I think it's the new adapter.
It is. Check Igor'sLab on that. The adapter is the problem.
Buildzoid, can I just use cold water to cool it down so it doesn't melt?
in theory you could pump a non conductive fluid like mineral oil through the connector to cool it better. This would probably require modifying the connector a bit.
@@ActuallyHardcoreOverclocking There are PCBs with channels in the power planes for pumping coolant through lol. And there are some servers that use submersion cooling: instead of air in your case it's some sort of non-conductive fluid.
@@ActuallyHardcoreOverclocking thanks but I already used water. love you
Yeah, this 100% seems to be the correct issue. You are really good at deduction and reasoning. Glad I subscribed.
This is the proper rambling, unlike so many "tech" YouTubers out there. Great vids.
@@glitter_fart Why is it incorrect?
IIRC The issue with the solder isn't necessarily that the solder itself overheats, it's that those joints easily break and might tear the thin sheet connecting the 12v lines, causing the current to flow through fewer pins, causing greater thermal load.
03:30 "I do not think this is very likely to self-ignite"
Until the 12V melts the housing and touches 0V or a signal wire.
Not sure if it matters, but the NV adapter is using 4 wires instead of 6, making it 456W, as the 4 middle pins share 1 wire per 2 plates. Interestingly, the pins with a single wire per plate, on the sides of the connector, are the ones that usually melt. Consider that those pins are fed by a cable of 4 wires capable of 8.5A (300+W) each, being channelled into a single pin designed to handle 9.5A.
Maybe if one of the 2 middle ones loses its soldering, or its pin's structural integrity gets compromised, the one on the side of the connector has to take up the extra load (which it can, as it has over 100% safety margin), but then we overload that pin at 3+ times the amps it was designed to handle.
I wonder if Nvidia used the fully split contacts to try to make the connector easier to insert for the end user...
Likely just from their chosen supplier, they used them for the 3000 series adapters also.
This is why 24-pin ATX12V 2.x standard has a 3.3v sense wire going through connector from the supplied 3.3v and back up to PSU.
This allows PSU to keep track of that the cable is plugged in and what kind of voltage drop it sees along the entire cable, including connectors.
If PCI SIG had used this approach for the 12VHPWR connector, it would have been fine, regardless of the number of splits or joins. It would have required compliance on the PSU side, but any load distribution would still show increased loss if there were any bad connections.
So, we need ceramic connector housing for better heat dissipation?
If you look at the pic at 16:02, you will notice that one side of the pins has been flared out. It almost looks like someone took a screwdriver and "pushed" that side out. There is something seriously wrong with the plugs on these cards.
One thing that's worth noting is the NVIDIA connector takes a lot less force to insert than a real Molex Microfit 3.0, I haven't had the chance to feel a third party cable but I'd bet there's a notable difference in insertion force so the use of those split terminals is a smoking gun for me, contact force on the dimples is probably lower than spec leading to high resistance, poor current sharing and thus thermal runaway.
thanks for correcting jay, as always
mans here spitting straight facts, no assumptions! love it!
Many people have already pointed out that arcing happens at 12V just as well; the voltage only affects the distance over which the initial arc can strike, nothing else.
Arc welders operate at 10-40V and can draw quite long arcs. The intermittent connections and sparks you mentioned do involve arcing.
The current return path: most of that will be going back through the same connector, as required by the PCIe specification. The PCIe slot is wired to handle up to 75W and will have a lot of problems if you try to push 200W through it. The specs also are not as you make them out to be. The PCIe specs say which connector must be used and what electrical characteristics it must have, and that means a PCIe 8-pin power connector is rated at 150W / 13.5A maximum for the entire connector. The physical connectors themselves can be rated a lot higher, but they are not required to be; there is no inherent higher safety margin there. A manufacturer could very well use a connector that is only rated at 4.5A/pin and still be within the ATX specs.
(Even worse: the table already gives a hint, but the specific crimp terminal that YOU CHOSE to show is rated for use all the way down to AWG24, where they don't even give a minimum required current-handling capability. In other words, you could use that terminal with AWG24 and have it burn up at 1A already; so much for the supposedly high safety margin.)
Now why are you comparing the specification of just the crimp terminal used in the 8-pin against the entire connector for the 12-pin?
cdn.amphenol-cs.com/media/wysiwyg/files/documentation/datasheet/boardwiretoboard/bwb_minitekpwr3.0_hcc.pdf
Here is the ACTUAL specification of the crimps used for the 12-pin connector. Oh, look at that: they are 12-amp crimp terminals. And they are also available in 8A, 8.5A and 13A variants.
And as we have now seen, the problem is not that the connector is designed badly, but that the cables are not attached properly. The individual pins are designed to hold individually crimped cables, not to be shoddily soldered to a way-too-thin foil. You yourself went on and on about how the plastic on the outside of the connector gets actively cooled and how the metal pins are decent thermal conductors... Now hear me out: what happens if there is a bad connection and high resistance near the solder joints, where the outside plastic is cooled? Oh, the plastic around the joint will get very hot but look fine from the outside, and the heat will be transported into the pins and the cables, you say?
It likely is the combination of both - with the bad soldering the current is already concentrated and more heat produced, and then the bad crimps used getting bent and generating even more heat.
How long of an arc can you pull at 12V? And how close do you have to be to start that arc?
Well done. Measuring voltage drop under load would be my choice for proper testing. This is also the preferred way of testing circuits in automotive diagnostics.
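For anyone wanting to try that voltage-drop test, the bookkeeping is trivial. The numbers below are purely illustrative (a hypothetical 150 mV drop at 50 A), not measurements:

```python
# Sketch of the voltage-drop test suggested above: measure the drop across
# the connector under a known load, then back out contact resistance and
# dissipated power. Input values are illustrative, not measured.

def contact_loss(v_drop, amps):
    r = v_drop / amps   # ohms, total contact resistance across the connector
    p = v_drop * amps   # watts dissipated inside the connector itself
    return r, p

r, p = contact_loss(0.15, 50.0)   # hypothetical 150 mV drop at 50 A
print(round(r * 1000, 1))  # 3.0 -- milliohms
print(p)                   # 7.5 -- watts of heat inside the housing
```

Even a few milliohms at 50 A puts watts of heat inside the plastic housing, which is exactly the hidden hotspot the video describes.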
You are amazing. The knowledge I have gained from you, first 6 pints are on me.
Fundamental design fault that nvidia knew about but chose not to fix
They definitely changed the adaptor. It was 3x 8-pins, and they were wired directly to the pins rather than through the distro solder-block connection.
Yeah, hence the hard 450W limit on the FE card. The irony of Nvidia "playing it safe" by sticking to the PCIe spec and cramming in a 4th 8-pin, when they may have avoided all of this drama by simply using that adapter, is just too good.
Got tired of looking for a pic to see if it used double-split terminals, but I assume so. If the 3090 Ti balanced the load across the pins better, that may be a big factor, but I really want to see tests on a 4090.
@@JJFX- if they were so set on this connector they should have just used two and ran 300w through each with two 8 pins. Sure it'd take up more space but it would be better than 4x 8 pins
@@p.e.r.c.y More space? Do you mean the card connector being 2 merged 8-pins, not the 16-pin connector adapter?
As much I'm for not sticking to the 150W spec, I understand why they wouldn't want to push it to 300W for widespread use. Using 3 like all the 3rd party PSU adapter cables gives plenty of margin for error and more room for overclockers to play with.
I also don't understand why they aren't including the PCIE slot in the power spec for these cards anymore like they did when they were just using 8-pin connectors. They didn't seem to care when the 1080 FE max limit included ~70W over PCIE.
@@JJFX- It is the same double split female pin, if you check the Techpowerup review of the 3070 it shows a pic of the connector, you can clearly see its dual split pin (albeit with half the pins populated because it's a 3070). The reviews of the higher end cards didn't appear to have pictures of the 12-pin end of the adapter.
@@JJFX- There is still a little draw from the slot (15-20W at full load based on various reviews), but the main reason that most of the power comes from the PCIe power cable connection is for ATX 3.0 PSUs and their ability to communicate directly with the GPU about power requirements, the ATX 3.0 PSUs have no way to communicate about the power draw over the slot from the motherboard.
What I don't understand is why the bad connection still gets a large amount of current going through it. Igor's teardown seemed to show all the pins in a row being connected together, which should mean that if one of the pins gets a higher resistance it should start passing less current and the rest should take up the slack. Or is this because the 12V pins on the card side are not all connected to the same power plane?
Just because the current is lower doesn't mean it's low enough to not cause issues in this case
If the terminal gets hot, the heat will travel up the wire a short distance and you'll see it on a thermal camera. Overall I agree. Yes my background is in electronics and yes I own a thermal camera.
Yeah, that's what I was thinking too... how can a disconnected pad cause the melting of pins? The amount of current going through the pins doesn't change, only the current going through the cables, which aren't failing. Even IF both outer pads rip, for many PSUs 300W on an 8-pin is still within spec... unless you plug in a daisy-chain cable with both outer plugs, and then all the current goes through one cable to your PSU... But in that case it's not the pins or the GPU that fails, but the PSU...
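A rough sketch of the parallel-resistance behaviour being debated in this thread, in Python. The contact resistances are hypothetical values, just to show how current redistributes when one pin degrades:

```python
def share_current(total_a, contact_r):
    """Pins on a common bus bar split current in proportion to conductance."""
    g = [1.0 / r for r in contact_r]
    return [total_a * gi / sum(g) for gi in g]

# 6x 12V pins carrying 50 A total (600 W at 12 V); one contact degraded
good, bad = 0.005, 0.050  # ohms, assumed values
currents = share_current(50.0, [good] * 5 + [bad])
print([round(i, 2) for i in currents])
# The degraded pin sheds current, but the five healthy pins now carry
# ~9.8 A each instead of the nominal ~8.3 A, eating into the margin.
```

So a single bad contact doesn't necessarily melt itself; it can quietly push the remaining pins closer to their rating instead.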
Does CableMod's 12VHPWR connector have pins that aren't split on top and bottom like Nvidia's? I couldn't find any good photos online
From what I've seen, it looks like they're using single-split terminals that should hold up much better.
as an electrical engineer this channel feels like a wet dream
He is saying you cannot arc 12 volts. He sounds like someone with a rudimentary understanding of electricity. I have seen voltages under 24 vdc arc like crazy due to inductive loads.
glad to hear it.
@@mrfarts5176 please explain what is the pin in arcing to? cuz from what i know u cant arc from metal to PLASTIC with an air insulation
@@mohibdosani6957 maybe they have an Intel GPU 🥁
@@mohibdosani6957 OK, when a split pin starts to open, intermittent contact will arc. I have seen this so often with field connections on Automation where connector bodies are fixed to a structure when they shouldn't be. This exact issue is one of the highest causes of electrical failures I have seen in nearly 2 decades as an engineer designing electrical systems on automation. The point is when you have current draw and start to open the circuit, any inductance in the load will force current draw increasing voltage and causing arcing.
I am just saying it is inaccurate to say a 12 volt load cannot arc.
Is it possible the solder joints are snapping off on 1 side of the connector forcing more amperage through the remaining connected wires?
but then it would melt at the solder joints not at the terminals. The damage is almost entirely localized to the terminals. The ugly soldering isn't what's getting hot.
And previously GPUs could detect if pci-E power wasn't supplied, I hope that they have implemented a similar system in the 4000 series
@@SuperUltimateLP I think GPUs are required to detect PCIe slot power and, if applicable, supplementary power
I don't know if a lack of PCIe slot power would prevent a GPU from turning on, but if it doesn't, the 75W max that could be supplied from there would have to go through the crappy cable and would definitely cause issues
For example, my 3090 will not draw more than 160W per 8-pin connector, and where possible it won't draw power from the slot
The most I've seen is 60W from the slot, and the average is normally 15W, with the rest via 2x 8-pins
Is there any reason why we don't just see 6-pins used on power boards? You could basically fit 4 of them in the same area as 3x 8-pin, while delivering the same power as long as all three 12V pins are present.
We are going from 200Watts GPUs to 600Watts GPUs.
Where is the engineering to go 4x on the spec for failure points?
Thanks for even more information on this topic, man. The more I hear about this stupid connector and how badly Nvidia screwed up, the more I want to switch to AMD, because it has been confirmed that AMD is NOT going this route with their RX 7xxx series GPUs and they are keeping the 8-pin connectors that we know and love.
I think the double-split terminals aren't helping, but since in most cases the outer connectors melt first, what is probably happening is that bending causes the small part between the solder joint and the pin on the outer pins to break/rip/crack, BUT not all the way through, just enough to reduce the cross-section and thus increase resistance and heat.
Thanks for explaining this in a manner that isn't sensational and just looks at hard facts. As for the choice to go with the 12-pin, I think it really has to do with selling it to OEMs and the like. Even though we could draw 600 watts from 2x 8-pins, as NVIDIA I can't really go to an SI like Dell or HP and say, "Yeah, I am going to sell you a product that draws power out of spec (even though it's perfectly fine)." It's a lot easier to say, "Hey, this plug right here is certified to draw up to 600 watts." For a DIY customer it makes sense to just use 8-pins, but from a business/legal perspective you can't sell something knowingly out of spec and not expect it to come back and bite you. Not saying this faulty adapter won't cause some harm either. I say this as someone who used to be at HP Enterprise; on-paper items matter a lot even when they don't make sense or aren't realistic.
In Igor's article on the subject, it specifically says the issue is not the connector...PSU provided cables use the new spec connector just fine. It's NVIDIA's specific adapter build quality that's the issue, so theoretical if SI builders just used PSUs compliant with the new ATX3.0 standard, they should be fine.
Please, someone tell me why we can't use a 2-pin connector that can handle 600 watts? Or even a 4-pin connector that can handle 600 watts? In the radio-control hobby we use robust connectors and larger-gauge wires, and there are very few problems running 150+ amps at literally over 40V. I don't understand why something like this could not be adapted to work for video cards. 600W at 12V is only 50 amps, and the connector and wires are very manageable. Please help me understand this.
28:23 the trick with safety margins: as far as we're trying to evaluate the margin of the contact itself, the wire gauges shouldn't matter, should they?
People probably bend the cables much more than on the 3090 Ti because the card is so much larger, and in many cases where the largest cards from last generation were fine, the 4090 sits right against the panels.
OMG, thank you thank you thank you.
I thought I was the only one who spotted the 100%/10% safety/hamfistedness margin reduction. That's a brutal level of precision to dump on your users without warning.
Both connectors were rated for the same 30 use cycles but 100% safety margin makes that far less important than 10%. PCI-SIG should toss this crap to the curb. You want 600w, then design a 1200w connector and de-rate it.
Also: 3090ti adaptor = expensive 12x crimped cables. Zero cheaparse soldering in sight.
So we need a PCIe 5.1 power connector with 16+4 pins and a direct wire from the PSU.
So, assuming I have a Seasonic Prime PX (which uses 18 AWG wires) and I also use Seasonic's adapter (12 --> 2x8pin) it means that they are basically doing what AMD and operate at a safety overhead of 2%?
I think the important thing missing from all the explanatory videos is:
why does the housing of the receptacle not prevent this excessive movement when the wires are bent? It should be possible to both allow some movement so the pin can seat well and prevent the receptacle from moving too much.
Didn't the 3090 Ti use the same adapter but without the 4 sense pins? And since the 3090 Ti has the same connector as the 4090, did anyone try to use the adapter on a 3090 Ti and see what happens? Do the sense pins work on a 3090 Ti?
How are people fitting these cards with their connectors into normal ATX cases and closing them?
I run my case open because I didn't like the connectors pressed up against the side panel, and I have a 1080 Ti. I can't think of a normal case, other than one that puts the card on a riser and mounts it vertically, that you could close with this in it without bending the cable significantly; even just trying to close it would pry the terminal spades apart. It's absolutely ridiculous.
Do you think the new Corsair '12VHPWR GPU Power Bridge' will make a difference?
Why don't they just use the big ol' 2-pin DC connectors that RC LiPo batteries use, which are actually designed for high amps and don't care about cable bending? It would take up about the same amount of space on the PCB as that 12-pin.
Great video, everyone needs to see this
I listen to you and you sound like an engineer. It's so awesome that you know that much.
Hi, I've never commented on one of your videos before, but I have to say something about the solder side of the connector.
It looks to me like each of the solder pads on the connector is separated with a split. If the applied solder only connects to a single solder pad, then bending of the 14AWG wires could put more tension on the pin interconnect of the outer pins than on the middle four, which are all soldered together.
Nvidia should've made the solder pad one piece and only had one split in the pin as you mentioned.
Without the internet, I could never have listened to someone talk about a connector for 38 minutes.
Which side again is the ground side? The side with the side-channels, or the other? Seems like the burnt ones are on the same side? Maybe that is a clue?
32:13 Load balancing might further reduce the current on each of the 3 sections beyond how much a bad connection alone would reduce it. So if the card measures a voltage drop on one section, it can reduce the load on it. To achieve that, the balancing parameter should just be the voltage drop multiplied by the current draw, so that the balanced physical value is the power dissipation.
Is this also an issue on the 3x8 pin adapter with 450 watt power?
I have this cable on my RTX 4090
Connectors working fine. Done a bunch of dry-ice runs on the 4090 and even broke a really nice GPU bin, an MSI Suprim, with my teammate. Using the same connector on a Strix and it's a hard kink.
Just plug it in. Caught myself a couple times now.
I just want to know whether my connector will melt or not after weeks of usage, and I'm now waiting on a CableMod cable. Corsair HX1200 PSU and MSI RTX 4090. Should I worry after 3 weeks or not?
With the 3090 Ti vs the 4090: if you split the 12V into 3 sections, wouldn't the card then be able to spot a greater voltage drop on 2 pins vs the other 4 and throttle back? The cards in stock config only pull 450W on average, so you would stay in spec on the rest even with 2 pins practically disabled.
I'm so glad something is backfiring on Jenson and the Leather Jacket Mafia.
If you want a good example of arcing, think about a car's ignition system, where the ignition coil pumps out over 40,000 volts to throw a good spark across the gap of the spark plugs.
Theoretically the load-balancing across sets of pins should make the 3090 more susceptible to overloading pins, since a failed pin would route the full cohort target load through its cohorts only, whereas a 4090 would spread the failed pin's load across the entire connector's pin array (per direction).
Does the 4090 load balance on the individual pins? If so, it could be also help, right?
Also, with that adapter all pins are connected to a bus bar. So one good contact will take a bigger portion of the current and have the most power to dissipate, because the bus bar evens out the voltage drop across the connections :)
An individual cable for each pin may be enough for this balancing.
I am wondering if potentially the 3090 Ti adapter was tossed in with the 4090 and it was rated for 150 instead of 300. That would explain Igor's pics. I don't own the cards, so take it with a grain of salt.
31:43 Wait, didn't the nvidia engineer in GN's video say they actively load balance through all the incoming 4 8pins on the 4090? That's odd.
Aye, I concur. I've noticed it's "always" the pwr/12V pins, and it _seems_ to "always" *_start (!)_* at the edge pins, commonly pin 6. 12V being the inside row on the socket/PCB side (pins 1-6) does make it a bit of a PITA; I don't know if any of the AIB versions have a flipped socket though. BUT! I must insist, even on the Founders Edition, that the appropriate method here would be to take a couple of the dirt-cheap, tiny glass-bead K-type thermocouples or 100k 3950 NTCs, dip the bead in TIM, then secure the cables somehow so the tension of the stiff sensor legs makes them push against pins 6 and 1 (the outer-edge 12V pins), as close to the backside of the socket housing as possible.
IMO that is the closest we're gonna get to a temp measurement "inside the connection", as it's a direct pin/lead contact measurement, and barely 3-4 mm from the pin connection itself.
I don't agree with the part about the connector being generally bad, but yes, Nvidia probably saved money or under-engineered the adapter, which is just bad.
I think the problem with the solder joints is that if a 12V wire doesn't connect but its ground does, the adapter still considers the cable 600W-capable and sends more power through the other 3. The power sensing seems to only count how many pins are grounded...
Now imagine if you have 2 of the 12V-side wires... disconnected...
it takes about 30,000 volts per centimeter, or about 75,000 volts per inch, to jump a clear air gap. Once the gap is ionized, the sustaining voltage is less.
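The 30,000 volts per centimeter rule of thumb above makes the 12 V case easy to sanity-check; a minimal sketch:

```python
# Approximate dielectric strength of dry air at atmospheric pressure
BREAKDOWN_V_PER_CM = 30_000

def max_gap_cm(voltage):
    """Largest clear-air gap the given voltage could break down."""
    return voltage / BREAKDOWN_V_PER_CM

gap_um = max_gap_cm(12) * 1e4  # convert cm to micrometres
print(f"{gap_um:.1f} micrometres")
# At 12 V the gap is ~4 um: smaller than typical surface roughness, which
# is why a steady 12 V supply can't strike an arc across open air on its
# own; it needs an already-ionized path or an inductive voltage spike.
```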
To me it looks like when the cable snaps off from the pin, there is still the same current going through the pin, but there is a much smaller path for heat to flow as the wire has been disconnected. Normally the heat will go down the wire and dissipate. So removing that wire connection increases the thermal impedance and it gets significantly hotter.
Includes. I understand that the 105C rating includes those 30C, so realistically you get 75C for the surrounding area. Give some error margin and you get around 65C max usable temperature near the connector. Being so freaking close to heat-generating components, I suppose hitting 65C is pretty realistic. Could it be that in choosing this connector they were just over-optimistic, relying too much on simulations and not considering real-world usage?
well, copper wire cables are also good heat conductors. If the connector itself is hot enough to melt plastic, the copper wire will also be visibly hot on the flir for sure. Since the heat gets conducted up into the wire.
This problem could easily be solved by gold plating, which would reduce contact resistance, or even by switching the connector to solid copper. Not sure what metal it is made of, but most regular metals are not great conductors compared to silver, copper and gold.
Molex makes those; they range in price from $0.14 to $0.21 per piece. Something similar but automotive-grade from TE Connectivity starts at $1.50; the only time a TE plug/pin failed on me was when it passed 28V and 400A.
Can anyone explain to me why the 2x8 was not used? XD
How do you feel overall about the amount of current being shoved through those connectors? At 600W that's 50A of current which just seems to be asking for connector problems.
Obviously that'd be total chaos with power supplies, but maybe there should be a 24V standard to bring the current back down before we end up with GPUs with car battery terminals on them...
If you think about it, what else in your house even requires a 50A capable connector?
Your concern is unfounded. The wires are more than capable of the current when counted in parallel bundled together. I will explain why since you seem smart.
The 4090 adapter shown is using 14AWG wires, 4 of them for power and 4 for ground, feeding into the 12-pin plug. 14 gauge is a very common wire gauge in homes (also 12), and is regulated to 15 amps in the US; you would be familiar with this spec. That's a conservative rating for in-wall home safety; the wire itself is acceptably used at 25-30 amps in other situations, or even more if cooled. So taking conservatively 15 amps * 4 wires = 60 amps, times 12 volts = 720 watts. More than enough to cover the 600W figure.
That said, the Pin and the Connector, I can't speak to, but BZ said 9.5 Amps capable per pin * 6 pins = 57 Amps. 684 watts. So also enough.
It wasn't the electrical engineering that went wrong, it's the mechanical engineering.
You bring up a good point though about the 12V becoming somewhat undesirable when such high amps are needed.
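The ampacity arithmetic from this thread, written out in Python. The 15 A figure is the conservative US in-wall rating for 14 AWG mentioned above; the 9.5 A per-pin figure is the terminal rating quoted in the video:

```python
wires, wire_rating_a = 4, 15   # 12V conductors, conservative 14 AWG rating
pins, pin_rating_a = 6, 9.5    # 12V pins, per-terminal rating
volts, target_w = 12, 600

wire_capacity_w = wires * wire_rating_a * volts  # 720 W on the wire side
pin_capacity_w = pins * pin_rating_a * volts     # 684 W on the pin side
print(wire_capacity_w, pin_capacity_w)
print(f"pin-side margin over {target_w} W: "
      f"{(pin_capacity_w / target_w - 1) * 100:.0f}%")
```

That roughly 14% pin-side margin is the number to weigh against the much larger margin the old 8-pin spec left on the table, as discussed elsewhere in the thread.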
@@mrlithium69 Yeah I get that the connector can handle it, he's explained it very well in the video and supported it with actual specs. But the 8pin connector did have a much wider safety margin which undoubtedly contributed a lot to incidents like this being rare. Hence why the question, have we reached a point where GPUs need so much power that 12V is worth it anymore.
I know these things fairly well and I wouldn't have had thought to even think about the connector if I had a 4090: plug it in, close case panel and use the card. Realistically with an 8pin even though it's not great for it, you wouldn't think too much about it being squished a little if you're tight on space or length as long as it's "clicked" in properly and doesn't come out. Lots of people have their motherboard 24pin do a sharp bend to go into the back of the case and nobody thinks anything of it either. 50A just needs a lot of pins and very good contact and nothing about this connector or cable screams high current, proceed with extra care.
I'm also thinking about all the videos out there of LTT or GamersNexus receiving PCs from prebuilt companies with connectors not being plugged in correctly. This connector effectively needs to be plugged in perfectly and its cable ran perfectly to not be an issue. Going to 24V or even 36V would bring the tolerances back to 6/8pins. It's going to step it down to 1.something V anyway, if anything the VRM should run cooler.
With ATX12VO coming up, now would be _the_ time to include other voltages in the spec on top of getting rid of 5V.
I would like to see 48V, which is what's used in telecom.
I think the main problem in this case is that the Nvidia adapter has a bus bar in the connector, and if the connector is under uneven force, the one pin making the best contact takes "all the amps" and burns. OK, not all, but over the rated current for that pin. With individual cables, the cable resistance balances those currents.
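This point can be sketched numerically: with a bus bar on both ends, only the contact resistances set the split, so one low-resistance pin hogs current, while per-pin cable resistance in series swamps the contact differences. All resistance values here are hypothetical:

```python
def split_current(total_a, series_r):
    """Current through parallel paths, proportional to each path's conductance."""
    g = [1.0 / r for r in series_r]
    return [total_a * gi / sum(g) for gi in g]

contacts = [0.002] + [0.010] * 5   # one pin seated much better than the rest
cable_r = 0.010                    # assumed per-wire resistance when not bussed

bus_bar = split_current(50.0, contacts)                           # bus bar on both ends
per_cable = split_current(50.0, [c + cable_r for c in contacts])  # individual wires

print(round(max(bus_bar), 1), "A on the best pin with a bus bar")
print(round(max(per_cable), 1), "A on the best pin with individual cables")
```

In this toy model the bus bar lets the best-seated pin carry 25 A, well over the ~9.5 A terminal rating, while individual wires pull it back to 12.5 A.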
Sup Buildzoid, have you seen the Brazilian Galax team (Teclab) pushing over 1000W through one cable for a couple dozen minutes in a few livestreams? They even tested different power supplies, with plenty of them going into protection before any harm was shown to the cables, which is kinda impressive. But still, I'm mostly interested in your opinion on those guys who push for WR OCs vehemently defending the build quality of the cables, in response to the enthusiast public encountering the issue and pushing for attention on it.
Conflict of interest is a thing. Of course the manufacturers and the people associated with them have an incentive to defend the product.
@@SolarianStrike That is true, and although I understand this could be extremely naive on my part, I still find it a bit odd/unlikely that those guys, with an image to preserve in the hardware space, would shill for Nvidia this heavily if there truly are issues with the product, to the point that "casual" users at under-spec wattages (since most GPUs don't seem to push past the 500W mark without meddling/specific models) are suffering cosmetic and/or pin damage.
@@SolarianStrike at least they do live testing and film it
And not just rambling
What is most amusing is that despite the Amphenol connector being the one approved by PCI SIG, Nvidia who helped design it, contracted Astron (who doesn't publish datasheets) to build it. That adapter is NOT Amphenol. Astron made the 12-pin last gen, too, but they actually used crimped terminals since they only used 16GA wire, so they didn't have this double split in the terminals. But the need to solder was due to using 14GA and the max size for the Amphenol terminals being 16GA. Nvidia deserves to eat this.
A better mechanical design, like a retention bracket attached directly from the wires to the adapter with a kind of comb, would have prevented the solder joints from cracking.
Can't believe Nvidia engineers didn't see that kind of problem coming during mechanical stress tests.
Can you do a collab with GN to test this and see how much bend will cause the connector to melt? Do you guys still do collaborations?
they are in the US and I'm in the UK
@@ActuallyHardcoreOverclocking that’s only one letter apart, can’t be that bad to travel one letters worth difference.
@@ActuallyHardcoreOverclocking How many 34401A can you fit on a plane?
I am an electrical engineer, but I don't have expert knowledge in this field. Something you said doesn't make sense and I lack the equipment to test to see if it has any foundation in fact. You mention that the burning is occurring on the 12 volt side and not on the neutral side, your explanation is that the return current is moving significantly through the mother board and diminished through the neutral conductors on the 12VHPWR. I don't think this makes sense for several reasons; 1) if the traces on the motherboard can support this sort of current with this sort of travel distance, then why do we even have special power connectors on the video card. 2) if the motherboard is supporting a significant percentage of the return current, then why have just as many conductors for return current on the PCIe /12VHPWR connector, it would just be wasteful. 3) I haven't read through the ATX power standards, but I really doubt that over current protection wouldn't be tripped if so much current was moving through the 24 pin motherboard connector. I would have designed it so that approximately as much current must be returning through the proper return conductor or it would trip, but I don't know if this is how it was done.
My belief is that it is because a bending moment is being introduced through the connector with nothing to brace on the opposing side, opposite the pivot point below the neutral ground pins. This is resulting in improper contact with the 12 volt pins leading to a significant increase in resistance at that location. The clip could have been used to reduce the bending moment, but for some stupid reason is on the underside of card (this is just bad engineering), if you look at a PCIe power connector, the clip is on the top side (nearest the PCB) and this would act to reduce the bending moment if you pull a cable downward or allow it to hang under gravity. If the 12VHPWR socket was mounted upside down (so the clip was on top), I think this would probably fix the whole issue.
Is it possible to have two Nvidia GPUs in one system with a different driver version for each card? My NVS 510 for monitor output hates my RTX 3060: installing drivers for the NVS 510 wipes out the RTX 3060 drivers, and likewise installing the RTX drivers clears the NVS drivers.
One of the clues being overlooked generally on this topic across multiple videos and sites is that many of these failures happen after 3 weeks, or several months, or 9 months etc. So think about that for a minute. We're shoving 300 to 400 W across one to three tight pins on a new-design connector and getting runaway heat accumulation. But it doesn't happen right away; it fails after a significant amount of time. Why? What could cause this? I submit: corrosion! Namely oxidation at the exact fine point of pin contact. We know that oxidation increases with heat exposure and air exposure over time, and that corrosion causes even more heat due to resistance, and before you know it you have runaway heat buildup. So, how do we prevent this? I am amazed at how little attention this idea gets. As with any electrical contact surface that carries a high load relative to its surface area, we have to protect the existing "OK, but not great" connection with an anti-oxidation contact grease designed for this purpose. Yes, they do make them; they are lithium-based, and in this case we want something that encourages conductivity, not discourages it, so we don't want a dielectric no-ox. Get "dielectric grease" out of your head, because that's what comes to mind for most people; we want the opposite of that. CAIG Labs makes the perfect solution for this, called DeoxIT L260Cp. The "Cp" stands for copper particles: it has copper particles in the grease, and it's lithium-based so it will not degrade under heat stress. I'll be applying mine sparingly with a needle-type applicator inside the barrels of the connector.
Aren't a lot of PSUs built to compensate for voltage drop via sensing? Also, among low-to-mid-tier PSUs, aren't there plenty which do it across all the lines and not only the dropped one?
It makes sense that the load balancing of the 3090 Ti is saving the connectors from burning. The power draw will follow the path of least resistance. E.g.: if 4 out of 6 of those contacts have more resistance than normal, the power will flow through the 2 with less resistance and exceed the specification (creating the problem); with power balancing, the board will not let that happen. Probably the split casing of the connector, combined with bending, creates this bad connection on some of the contacts and causes the imbalance.
Nvidia cheaping out on the connector, combined with the board's balancing, created this problem.
So the point of failure is the narrow pins at the very tip. Makes sense. Napkin math: at 600W across 6 12V pins, that's 8.3A per pin. Even if the wires are 14AWG, the pins are still the same thickness, and the contact area is halved when it bends. Making the connector smaller while raising the power draw per pin was playing with fire.
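The napkin math from this comment, plus what halving the contact area does under a naive resistance model (the contact resistance is an assumed illustrative value, not a measured one):

```python
total_w, volts, pins = 600, 12, 6
amps_per_pin = total_w / volts / pins
print(round(amps_per_pin, 2))  # nominal current per 12V pin

contact_r = 0.005                           # assumed healthy contact resistance, ohms
p_ok = amps_per_pin**2 * contact_r          # P = I^2 * R in a good contact
p_bent = amps_per_pin**2 * (2 * contact_r)  # halved contact area ~ doubled R
print(round(p_ok, 2), round(p_bent, 2))     # watts dissipated in the contact
```

Doubling the resistance doubles the heat concentrated in a contact the size of a grain of rice, and that's before any of the bus-bar redistribution effects discussed earlier in the thread.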