Intel Wants To Help You Buy A Power Supply

  • Published Sep 21, 2024

Comments • 46

  • @tacticalcenter8658
    @tacticalcenter8658 1 year ago +14

    Power supplies didn't start getting good until Johnny Guru, who used to review PSUs, got into the industry.

    • @volppe01
      @volppe01 1 year ago +1

      Completely disagree. Since the beginning of the PC industry there have been good and bad manufacturers. I worked for large enterprises that had thousands of servers running in the 1990s, and power supply failures were fairly uncommon.

  • @EyesOfByes
    @EyesOfByes 1 year ago +10

    Gigabyte missed the memo...

  • @mikeelek9713
    @mikeelek9713 1 year ago +11

    That is quite a test environment. Gigabyte isn't on the "compatible" list, and it would have been disheartening if it was.
    Seriously, this is an excellent public service. There are many power supplies on the market, and if you're spending $1,000 or more on components, you should get a good power supply. Don't cheap out on the power supply.

  • @jierenzheng7670
    @jierenzheng7670 1 year ago +5

    Man this is so cool. Maybe Steve at GN can collab with these guys too

  • @edo9104
    @edo9104 1 year ago +4

    The problem with PSU testing, which already happened with 80 Plus, is that vendors send tuned units for the test (I don't think two units instead of one really makes a difference), then change the components in the retail version, like many did, Gigabyte first among them.

    • @FakeGordonMahUng
      @FakeGordonMahUng 1 year ago +4

      This can happen with reviewed units as well. I would hope known good brand names don't go the route of switching components so they are not the same as tested.

  • @Starscreamious
    @Starscreamious 1 year ago +7

    It's a shame they don't do hotbox testing. Otherwise this is a good service for the community that Intel is providing.....which feels *really* *weird* to say.

    • @gherkinisgreat
      @gherkinisgreat 1 year ago

      To a degree, not really: if consumers are being sold higher-quality products, they'll buy more.

  • @tweakpc
    @tweakpc 1 year ago

    It would be nice if PSU reviewers had the chance to get in contact with Intel, because there are major issues with the ATX 3 specs that make consistent testing quite a pain. For example, the official ATX 3 specs and the "confidential" Intel docs are different.

  • @anameofauser6866
    @anameofauser6866 1 year ago

    Corsair is rocking with PSUs that pass the tests, glad I bought one.

  • @fairycat
    @fairycat 1 year ago

    That looks awesome. Thanks for such an interesting rubric.

  • @doomtomb3
    @doomtomb3 1 year ago

    What about temperatures? I'm sure if a unit overheats or powers down, it will fail. But is there anything in the ATX spec about the temps the unit must maintain? Again, this is at room temperature.

  • @itsdeonlol
    @itsdeonlol 1 year ago

    This is really awesome Gordon!

  • @Vegemeister1
    @Vegemeister1 1 year ago

    "Input voltages 115-230." Not down to 85V for worst case in Japan?

  • @boedilllard5952
    @boedilllard5952 1 year ago

    Who is responsible for the ATX 3.0 connector?

  • @vensroofcat6415
    @vensroofcat6415 1 year ago +2

    ATX12VO only. When? 😏

  • @christopherjackson2157
    @christopherjackson2157 1 year ago

    I wonder how much they charge the vendors to perform these tests.
    I still feel you can get all the info you need as a normal consumer from the warranty. It's the one piece of information the vendor is actually willing to put its own money behind. It will last x number of years of normal use or they will pay you for it.
    This sort of in-depth testing is useful to vendors when they go to buy the PSU from the manufacturer. Setting up your own independent lab to test units is prohibitively expensive. Even giant companies like Gigabyte don't seem to be able to afford it. (Sarcasm) And it's useful for their marketing/sales.

    • @FakeGordonMahUng
      @FakeGordonMahUng 1 year ago

      They don't charge. It's a free service.

    • @Vegemeister1
      @Vegemeister1 1 year ago

      Situations where warranties fail:
      1. The PSU causes problems (random reboots, early failure of other components) that can't be identified and blamed on the PSU without a failure analysis lab.
      2. The PSU is bad in ways that aren't covered under warranty (takes forever to wake from sleep, bad efficiency at idle, easily reset by ultrashort mains voltage droops).
      3. The vendor goes out of business before the warranty expires.
      4. The vendor is making a bet that only a tiny fraction of buyers will use the PSU near full rated output, and even fewer will do so for thousands of hours.

  • @tonnylins
    @tonnylins 1 year ago

    Nice!

  • @nonflyingfinn2173
    @nonflyingfinn2173 1 year ago

    Gordon, the PSU which came with the case and weighs nothing. After you peel the sticker, it reads Q-TEC. 😃

  • @billp37abq
    @billp37abq 1 year ago

    Less power, a better solution?
    Celeron N4020? $100 at Best Buy.
    Ryzen 7 5700U ~$530 at Best Buy?

  • @SamLoki
    @SamLoki 1 year ago

    Intel is the least greedy company out of the big 4

  • @goblinphreak2132
    @goblinphreak2132 1 year ago +1

    Did you ask him about Nvidia's 12VHPWR? Curious.
    EDIT: So you did, and he said "that's on PCI-SIG," because Intel doesn't own PCI-SIG; they are A PART OF PCI-SIG, but they don't outright own it. Also, Nvidia paid off PCI-SIG to adopt 12VHPWR. Intel can test it, but that's because they have to, since it's part of the PCI-SIG spec. Lmao. Nvidia is getting sued, and PCI-SIG is trying to distance itself from the lawsuit by saying "if the connection isn't working, that isn't the spec's fault, that's the brand's fault," aka "Nvidia is at fault, not us." But let's be honest, that kind of "don't look at me" early response tells me Nvidia paid off PCI-SIG to adopt it, and it's biting both of them in the ass.
    Which again is why Intel didn't use 12VHPWR on their new GPUs. YES, you can argue "their GPUs don't use that much power," but if you're going to make a new GPU, you might as well adopt the new spec, unless you know something is fishy in the first place. Neither AMD nor Intel is using 12VHPWR. So it begs the question: how right am I?

    • @agenericaccount3935
      @agenericaccount3935 1 year ago

      Condense that shit down to something coherent

    • @FakeGordonMahUng
      @FakeGordonMahUng 1 year ago +1

      It's a lot more complicated than that. Also, I don't think we have any proof of any "pay offs," and when you get an industry group made up of multiple multi-billion-dollar companies, it's very hard to know what exactly happened in the background. I can tell you I have been told by many people in the industry looking from the outside in that Nvidia wanted to adopt the original 12-pin connector, not the 16-pin 12VHPWR connector, which came as a late addition to it. It's obviously a game of hot potato, but I believe your allegation of 'paid off' is mostly conjecture without any proof.
      And Intel skipping 12VHPWR isn't surprising, because adopting it brings extra cost since 99 percent of the installed base of PSUs has no such connector. If Intel adopted 12VHPWR just to adopt it, that would have meant including an adapter cable, which means more supply chain risk and extra cost. Like AMD, Intel would have had to make that decision quite some time ago as well, so it all adds up.
      Also: You should consider that the main upside to 12VHPWR on a GPU is to reduce connectors. You can deliver 450 to 600 watts through a single connector that uses less physical space than a single PCIe connector. The advantage really increases on high-wattage GPUs such as the 4090, where board makers would otherwise have had to include FOUR 8-pin connectors. Yes, the cards are huge, but the physical space for four 8-pin connectors is a lot. Look at your card and visualize how much room four 8-pin PCIe connectors would occupy (a rough sketch of that connector-count math follows this reply). Bringing this back to Arc cards--there's just no reason to adopt a new connector that adds cost, adds supply chain drag (what if you suddenly can't get the adapters, which often happened during the previous supply chain foul-ups) and offers less of a physical-space benefit on a card that uses a max of two connectors.
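      As a rough illustration of the connector-count point above, here is a minimal Python sketch. It assumes the nominal ratings commonly cited for these connectors (150 W per 8-pin PCIe connector, up to 600 W for a single 12VHPWR, 75 W from the PCIe slot); the example board-power figures are illustrative, not specific cards:

      ```python
      import math

      PCIE_8PIN_W = 150   # nominal rating per 8-pin PCIe power connector
      HPWR_MAX_W = 600    # maximum rating of one 12VHPWR connector
      SLOT_W = 75         # power available from the PCIe slot itself (assumption)

      def eight_pin_connectors_needed(board_power_w: float) -> int:
          """8-pin connectors required once slot power is accounted for."""
          from_cables = max(board_power_w - SLOT_W, 0)
          return math.ceil(from_cables / PCIE_8PIN_W)

      for watts in (225, 320, 450, 600):  # hypothetical board power targets
          n = eight_pin_connectors_needed(watts)
          hpwr = 1 if watts - SLOT_W <= HPWR_MAX_W else 2
          print(f"{watts} W card: {n} x 8-pin, or {hpwr} x 12VHPWR")
      ```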

    • @goblinphreak2132
      @goblinphreak2132 1 year ago

      @@FakeGordonMahUng I agree with you that we should be able to reduce the number of connections to a GPU while still delivering the power we need. Except the 4090 already maxes out the new connection standard, so what, is the 5090 going to need two 12-pins? With the idea that each generation of GPU uses more power, what's the point of the new 12-pin when we're already going to need two of them?
      In reality, PCI-SIG has been under-rating their specification. ATX generally requires AT LEAST 18-gauge wire, and some higher-end PSUs like the Seasonic 1600 W use 16 gauge, which is thicker wire. They claim the 6-pin PCIe connection is only capable of 61 watts and the 8-pin PCIe of only 182 watts, which is factually incorrect. Breaking that down, 61 watts divided by 12 volts is about 5.08 amps, but that's split between THREE positive wires, so dividing by 3 wires gives roughly 1.69 amps per wire. That's nothing in terms of current. In reality we could pump at least 5 amps per wire without any kind of crazy heat output; that would be 15 total amps, which multiplied by 12 volts means the 6-pin is ACTUALLY capable of 180 watts all on its own. And the 8-pin, having two extra grounds to reduce resistance, should be good for about 250 watts. SAFELY. (A quick sketch of this arithmetic follows this comment.)
      In the car world, one thing we KNOW is that the resistance of a wire gets worse with length. The longest run we are dealing with inside a PC case is about 3 feet at worst. I know for a fact that 3 feet of 18-gauge wire can easily pass about 10 amps of current while meeting a 60 °C temp rating. Mind you, the new 600 W 12VHPWR connections are also rated 600 W at 60 °C, so using that same 60 °C guide, the classic 6-pin connection is actually capable of 30 total amps (10 A per wire, 3 wires), for a total of 360 watts.
      WCCFtech did their little "adapter test" during their initial "it's burning the cables" coverage and showed TWO 8-pins hitting 300 W each, at about 30 °C, leading into a single 12VHPWR supplying the full 600 W. That means we could easily, right now, run two 8-pins for our 600 W needs, but because PCI-SIG is run by idiots, they under-rate that potential and pretend the old connection can't handle it.
      Sadly the 12VHPWR is already rated 600 W at 60 °C, and many have shown those exact temps (50-55 °C on open-air test benches) to be the fact. With that in mind, if we took the typical 6-pin connection and changed it to 16 gauge to match the new standard, in theory that 6-pin would then be capable of 300 W according to their own data.
      Now, using common sense, why not take the spec further and use 12-gauge or even 10-gauge wire as the standard? 10-gauge wire in home use, over multiple feet mind you, is capable of 30 amps of current. Apply that same mindset to even the basic 6-pin connection: a 6-pin with 10-gauge wire is 90 amps, because it has three positive wires, and bam, you have 1080 watts on the old-school 6-pin. The 8-pin would edge out slightly more thanks to two extra grounds reducing resistance. And the best part of this idea: backwards compatibility. AND YET, PCI-SIG magically decided to adopt the 12-pin + 4-pin connection for god knows what reason. It does not make logical sense. Could you imagine simply buying a new ATX 3.0 PSU, using their new 6-pin cable, and getting 1080 watts through it? That saves space. And as I already stated, what happens when GPUs start using more than 600 W? Are we going to need TWO 12VHPWR connections? Space savings gone. If we're going to "develop a new connection to plan into the future," why didn't they future-proof it? It almost seems like in TWO generations we're going to need yet another standard. It just doesn't make logical sense.
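      To sanity-check the per-wire arithmetic in the comment above, here is a minimal Python sketch. The per-connector wattages (61 W, 182 W, 600 W) and the wire counts are the figures quoted in the comment and the 12VHPWR rating it cites, not independently verified:

      ```python
      V_RAIL = 12.0  # ATX +12 V rail

      def amps_per_wire(total_watts: float, positive_wires: int) -> float:
          """Current each +12 V conductor carries if the load splits evenly."""
          return total_watts / V_RAIL / positive_wires

      def total_watts(amps_each: float, positive_wires: int) -> float:
          """Total power if every +12 V conductor carries the given current."""
          return amps_each * positive_wires * V_RAIL

      print(amps_per_wire(61, 3))   # ~1.69 A per wire at the quoted 6-pin rating
      print(amps_per_wire(182, 3))  # ~5.06 A per wire at the quoted 8-pin rating
      print(amps_per_wire(600, 6))  # ~8.33 A per wire for a 600 W 12VHPWR load
      print(total_watts(5, 3))      # 180 W from the comment's "5 A per wire" 6-pin figure
      ```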

    • @FakeGordonMahUng
      @FakeGordonMahUng 1 year ago +1

      @@goblinphreak2132 So as I understand it, and have been told by people who follow it a lot closer: as we know, the PCIe connector and recommended gauge of wire are well capable of supporting far more amperage than what they carry, which led to much confusion about wattage needs and claims when the original 12-pin connectors shipped.
      Why? As I understand it (but may be wrong), the original PCIe connector was introduced more than 15 years ago and was led mostly by Nvidia, but obviously ATI/AMD and others adopted it as well. It was a very conservative 150 watts in today's view, but in 2007 it probably seemed like it had plenty of runway.
      So fast forward 15 years: you want to add amperage, and there is additional headroom available on the 8-pin connector.
      (This is my conjecture here.) So why not just go from a max of 150 watts to a max of 300 watts on a single 8-pin, especially if it can handle it? My guess, again, is that you can't introduce the same connector carrying 300 watts into an ecosystem where someone is going to screw something up by plugging in the wrong thing. Do you just make it bigger and bigger? I don't know if you've eyeballed the 16-pin 12VHPWR connector, but it is tiny; one 12VHPWR is likely smaller than one 6-pin connector.
      Yes, I know, people will say, well, why didn't we spec the connector and cable at a higher rating than 150 watts in 2007? "Overbuilding" everything sounds good until you think about how much cost that adds, which means no one wants to do it. If you tried to force wire gauges and connectors on people in 2007 designed to carry you into 2030, you'd likely have a revolt for cost reasons. That's the way it works in an industry where margins are literally pennies and nickels for some vendors. And when you're talking a 25-cent connector and cable times millions of units, it adds up really quickly.
      People need to look at it all dispassionately and remember this is all done for business reasons. Yes, letting the engineers drive everything looks great until you realize they designed something for a market that doesn't exist or may not exist for another 20 years and in the meantime, you have to survive to get there.
      We all complain about not planning ahead with freeways but there's a reason you don't lay down a 12-lane highway in 1952--the cost. It sucks to be stuck in that traffic, but there are really legitimate reasons things are done the way they are. And if we had been around in 1952 to push for 12-lane highways back then, we would have been run out of office or fired on the spot. People don't want to pay for something that they won't use for decades.

  • @toddincabo
    @toddincabo 1 year ago

    👍 MVP

  • @CHIEF_420
    @CHIEF_420 1 year ago +3

    🎧

  • @3dfxvoodoocards6
    @3dfxvoodoocards6 1 year ago

    Like

  • @leoSaunders
    @leoSaunders 1 year ago +1

    wow. good guy intel
    😂😂

  • @hadoukenocx4746
    @hadoukenocx4746 1 year ago +5

    Intel wants to help us? Intel was the one who forced the new connector on the RTX 4090 and on us. No thanks, I don't need help from Intel. I trust Corsair more for PSUs.

    • @bionicsquash4306
      @bionicsquash4306 1 year ago +13

      They did not force Nvidia to use the connector on the 4090, what are you talking about?

    • @nanoflower1
      @nanoflower1 1 year ago +9

      @@bionicsquash4306 Exactly. Nvidia chose to go for a power-hungry design that then could use the new connector. They could have used a less power-hungry design, as AMD appears to have done with RDNA3, or they could have used multiple 8-pin connectors.

    • @zivzulander
      @zivzulander 1 year ago +7

      Did you watch the video, or are you just responding to the video title without context? [rhetorical question] This gets addressed. Even from other sources, it's clear that PCI-SIG as a whole and Nvidia are more responsible for the physical design of the connector, which in rare instances (the user not plugging the connector in fully) can result in melting the 4090's 12VHPWR connector.
      I'm not sure why you'd expect Intel to catch it when their testing is broader and covers a bunch of PSUs, not exhaustive testing of Nvidia cards. Even reviewers and experts *trying* to make their connectors fail couldn't do it until the exact causes were figured out, which involve some level of user error (granted, due to an imperfect seating/locking mechanism) and aren't something widespread.
      No one was "forced" to use 12VHPWR, regardless, which is why the new AMD Radeon 7000 series is sticking with the old 8-pin power connectors. Or do you think Pat Gelsinger kidnapped Jensen's family and told him he has to use the new cable or he'd never see his family again? Is that the logic at play here? 😛

    • @hadoukenocx4746
      @hadoukenocx4746 1 year ago +1

      @@zivzulander There is only so much information you can find on YouTube to comment on,
      but I won't explain here why Intel did force the use of this connector; there is also a technical reason.
      And I will not explain why Intel doesn't use its own connector on its Arc GPUs so far. You guys don't pay me for my time and knowledge; images or not, some of us have a job that involves computer science. I know why they use the connector, and the fault isn't all Nvidia's to blame; it's maybe only 20% Nvidia's fault.
      Nvidia did even say something about the connector.
      You guys should do the homework before typing something here, because you are angry that a company doesn't care about you, and about why you have Intel or AMD.
      Intel or AMD doesn't care which brand you guys favour. I have both Intel and AMD Epyc systems; don't flame me over some brand because I hurt your feelings about a computer part.
      If you want to know why the connector is needed and why it's there, you can go to university and study computer science.
      The part about Pat Gelsinger, well, who knows 😂😁
