Yeah, if I recall, Jay said he bought them at two different times. It's possible there was a BIOS update to the card that caused a change in performance as well. So yes, I wonder what the BIOS differences are between those two cards.
He is referring to the BIOS on the CARD, not the motherboard BIOS, which will be the same between tests. Depending on when the cards were bought or shipped, they could be running different BIOS versions that may or may not affect GPU Boost 3.0.
It is quite common for BIOS updates to silently change the TDP cap, Boost behaviour, even VRAM straps of otherwise identical cards. This can have varying impacts on performance.
Same with CPUs. But the difference in silicon quality on CPUs isn't really felt unless you're overclocking or in a TDP-restricted use case like a laptop.
I would have thought Jay of all people would have understood that. If he just wanted comparable rigs to test to get approximate differences then building different systems is fine, but you will never be able to get reliable, or properly comparable results unless the only part you change is the one you are testing.
I have a similar circumstance with both my Strix GTX 1070 O8G cards. I recently disassembled my PC for a thorough cleaning (dust removal) and noticed I was getting a lot of frame rate drops when playing games or running benchmarks (the cards are obviously in SLI). I decided to switch the GPUs around to see if that would improve performance, and it went back to playing like it did before I cleaned out the system: frame rates were much more stable, and the stuttering I had experienced after originally reassembling the system was gone.

So for those who want to hate on Jay for bringing up a very good point about PC hardware inequality, I back this 100%. You have to remember one thing, especially with GPUs, CPUs, RAM, and other components: you will always be at the mercy of the silicon lottery with the hardware you get for your builds. I even noticed this with something as simple as a cellphone. My wife and I got our phones from the same retailer on the same day, with a serial number difference of less than 10 units, and when I ran benchmarks on both phones, hers performed about 4% better than mine straight out of the box.
One of the benefits of solid audio equipment is that the noises you respond to in multiple videos aren't noticeable to the viewer. Let the equipment work for you and don't stress about the ambient noises. (Sirens, motorcycles, and dump trucks, oh my!) Good work guys, keep it up.
Only if the chips are binned properly. A GPU can still have a (very small) defect and still be sold with that defect active, meaning nothing has been turned off, and it will work almost exactly as well as one without the defect, just with an extremely small difference in performance. Also, the smaller we go with silicon, the more inconsistencies we will have; but thanks to better IPC and smaller transistors we can pack more performance (at least right now) onto that same, even more inconsistent silicon, so it doesn't have a massive effect.
It would be good if you could test the same card with a higher sample size. Say you had 5 GTX 1050's and 5 GTX 1070's: you could see the median performance difference across a wider sample, and whether the variance applies only to lower-tier cards.
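A quick sketch of how the proposed test might be summarised: benchmark several samples of each model and compare medians and spread. The scores below are made-up placeholder numbers, not real results.

```python
from statistics import median, pstdev

# Hypothetical benchmark scores: five samples of each model (invented
# numbers, just to show how the summary would work).
scores = {
    "GTX 1050": [1640, 1699, 1655, 1712, 1668],
    "GTX 1070": [4480, 4495, 4488, 4502, 4491],
}

def spread_pct(runs):
    """Card-to-card spread as a percentage of the median score."""
    return 100 * (max(runs) - min(runs)) / median(runs)

for model, runs in scores.items():
    print(f"{model}: median={median(runs)} "
          f"spread={spread_pct(runs):.1f}% sd={pstdev(runs):.0f}")
```

If budget cards really are binned less strictly, the 1050's spread percentage should come out noticeably larger than the 1070's.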
Thought I would add my own Unigine Heaven scores (with the same settings as in the vid) for my i5 2400 and GTX 1050, to compare to your R3/G4560 if you're going the used route with 8GB DDR3. The temperature and frequencies won't be the same since I'm using the MSI LP dual-fan version in a slimline case with no fans, so it's not an apples-to-apples comparison.

OVERCLOCKED (+145 core, +300 mem): 1796 boost most of the time. Score: 1739, Avg: 69, Min: 17.8, Max: 135.3, 76C

STOCK: 1696 boost most of the time. Score: 1640, Avg: 65.1, Min: 8.6, Max: 126.7, 72C

My boost clocks were a lot lower than your 1050's, maybe because of the higher temperatures and my 240W PSU/silicon lottery, but there you go.
JC Dayrit yeah, but your manually tweaked numbers are still better than the GPU Boost 3.0 numbers. Basically I was right to assume someone manually tweaking their card will achieve better performance than GPU Boost 3.0 on its own.
I just love that commercial where Nick is wearing the OG JayzTwoCents shirt. I remember first watching you years ago and that shirt would come up so much
Or the other way around: getting a chip that could be pushed much higher, but where market demand for the higher chip isn't enough. Reminds me of the old Pentium 4/Core 2/K10 days when even the lowest models could easily be clocked to match the highest chips. The demand for an $800 enthusiast chip is just much lower than for a $150 entry-level option. Nowadays everything is locked down.
Just like it's the ordinary FX-8350, and not the 9590, that holds the world frequency records. But you get that variance on everything; even my Ryzen 1700 can't hit 4GHz (without 1.70v), while others can almost hit 4GHz on the stock cooler. I'm obviously not pushing 1.7v, because I'm not sure that's safe even on my sub-zero cooler. I would be fine pushing 1.9v on LN2, but this chip is not a top-tier chip, so I'm not wasting my time. The more complex the circuit, the higher the chance of variance, and it's that much more likely you'll have that one little transistor out of billions holding you back.
It's not that it's a cheaper card; there are little differences across different units. Overclockers know this well. For example, my unit of the GTX 980 Ti can be overclocked 90MHz higher than average units.
I can speak to my experience on this. I had a build with two Gigabyte G1 Gaming 980 Tis. I had read somewhere that you should put your best overclocker in the top slot (I don't know if it really makes a difference, but it couldn't hurt), so I put each card in separately to test how far I could push it. Same BIOS, same version, and one overclocked to 1520 just fine, but the other gave me an enthusiastic middle finger at 1450 no matter what I did. Thankfully the two cards pushed 1440p at 144Hz for the most part anyway, so I chalked it up to silicon lottery and moved on.
If the overclock is dependent on temperature, could it also be an issue with the way the heatsink and fan have been applied? If you reapply thermal compound, might that affect the results?
No. The good card went further even before the heat sink was saturated, hence it's a power limit. Adding a dedicated GPU power connector might act as an equalizer, but at the power levels supplied by the PCIe slot, one card was simply slightly better. That said, this card with an external power connector would probably still contain better silicon than the other one, and thus go further, assuming neither one was temperature restricted.
Colin Jones I was actually thinking that the entire time. If he tries this we'll know if it just wasn't seated properly or not. Maybe even might just be that the factory TIM wasn't applied very well. Hopefully Jay tries this.
Or maybe a controller bug or something? Like -- the "better" card ran 1C higher, yet clocked higher. And the "worse" card clocked itself lower despite being 1C cooler. This would actually be contrary to your hypothesis, although as Jay said, it bases the OC on temperature ranges, not 1C steps, so who knows. But perhaps the card's built-in overclock is hand-tuned or just variable in general, and even though the "worse" card *could* go a bit higher, it's too late, it's already been stamped, boxed, shipped.
I thought that's how they are sold? Take two GTX 1080's. They are tested at the factory and the ones that perform much higher than normal are sold as "gaming" editions where everything else is sold as normal 1080's.
Jay-- I had a similar, puzzling issue one time w/ a PC I was fixing for someone. Their GPU's PCIe connectors had a badass scratch on one of the connection "teeth". Nobody could explain the performance difference beyond this issue. Once swapped out, testing was noticeably different. At the time, I did not take notice w/ the frequency-- but the rest of the problems were pretty much identical. It WOULD affect voltage too.
I think there can be various performance/quality factors in this field. 1. It's a 1050, not some high-class 1080; quality control can be laxer than on a high-tier graphics card. The more expensive the card, the more quality control it gets. 2. Aftermarket-cooled cards, especially MSI's, might have problems running how they should; MSI is a company known to have QC (quality control) flaws. (This point only applies if MSI really does touch the silicon on the card, and/or the cards are OC editions, in which case a card with slightly worse cooling might perform worse.) 3. Lastly, yes, they can be built a nanometer differently and have different performance outputs.
This series has been revealing and interesting. I wonder if you could do a vid on how chips are made (process, success factors & ratios/percentages), just to see if we can theorize (in layman's terms of course) why such variance is seen. Rabbit holes are fun! Love the vids, Jay!
Dude, I watch a lot of your content and I have not come across a video that urges me to leave negative feedback. You are doing great, man. I do not understand why people get upset like they do. Keep up the amazing work, Jay. Thank you for all the knowledge you are giving us.
Exactly my thoughts. Sure, basic tests will confirm that it can hit - and exceed - default speeds, but I doubt the same effort is put into quality control for something that costs $120 when compared to something that could potentially cost the company $500 (or more) due to a dissatisfied customer.
A quick correction: thermal throttling is when your CPU/GPU hits its temperature limit and needs to cool down to avoid damage. There is also power throttling, which is when the power delivered is not enough to hit the targeted voltage/frequency; it is meant to prevent crashing. As for the performance difference, I think it has to do with the card's power ceiling along with the "silicon lottery". You see, a better-quality ASIC can hit the needed frequency at a lower voltage, and the 1050's firmware tells it to draw a maximum of 50 watts. It appears that due to manufacturing variance, one card can extract a slightly higher frequency before hitting that 50-watt ceiling. A simple question: did you buy the cards at the same time? Because sometimes the manufacturing process gets improved over time, even on the same process node, as the process matures.
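The power-ceiling argument above can be sketched numerically. This is a toy model, not Nvidia's actual boost algorithm: the capacitance constant and the volts-per-MHz slopes are invented, and dynamic power is approximated as P ≈ C·V²·f.

```python
def max_clock_under_cap(volt_per_mhz, power_cap_w, cap_farads=4e-8):
    """Highest clock (MHz) whose modeled power P = C*V^2*f stays under the cap."""
    best = 0
    for mhz in range(1000, 2200, 13):              # GPU Boost moves in ~13 MHz bins
        volts = 0.6 + volt_per_mhz * (mhz - 1000)  # assumed voltage/frequency curve
        power = cap_farads * volts**2 * (mhz * 1e6)
        if power <= power_cap_w:
            best = mhz
    return best

# Same 50 W ceiling, slightly different silicon: the "good" die needs less
# voltage per clock step, so it boosts a few bins higher before hitting 50 W.
good_die = max_clock_under_cap(volt_per_mhz=0.00030, power_cap_w=50)
weak_die = max_clock_under_cap(volt_per_mhz=0.00035, power_cap_w=50)
print(good_die, weak_die)
```

Neither card is throttling in the thermal sense here; both simply run out of power budget, just at different clocks.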
This video is extremely useful. I have a 1050 and have wondered about the differences in frequency on my own GPU, and until today I had never found out why.
WOW Jay, you are an amazing YouTuber!!! You take comments made to call you out and turn them into a positive, unlike other YouTubers like BN who won't take feedback and are unhappy when you don't give them positive feedback. In my eyes, all feedback is good, even the bad, as it shows you another way.
I had always wondered whether you were using the SAME GPU in each system when they weren't being shown in the videos. I would never swap GPUs in a comparison video, for this exact reason. I'm glad you got it sorted out, but now I know to check whether people are using two different GPUs, or the same one tested separately, in comparison builds such as the one that caused all this fuss to begin with. I can understand the need to save time by running the comparison side by side, but then you get inconsistent results from not using the same GPU in a test like this one.
HugsOverDrugs Mine was doing 1860MHz (2050MHz when overclocked, on a cheap 500W PSU), but it broke 6 months later with the OC. So now I don't overclock it anymore...
JapWhite I don't OC it either, I just let it do its thing. It auto-overclocks to 1974... plus mine has a huge cooler, it's the MSI Gaming X. The max it has hit is 63C.
I'm glad you take the time to actually check your results and validate everything. Good job on investigating! I'd be interested in seeing what the average deviation between more cards would be... I'm wondering how much variance there is in overclock potential
NoOne ButMyself I don't actually see a problem here since both seem to account for variables pretty well so you're not gonna hit unreliable performance very fast.
Seems like I got lucky with the 1070 G1... OC of 2501MHz. When overclocked it runs hot and throttles a lot, but I didn't see any air-cooled GTX 1070 from Gigabyte hit these numbers. Edit: software bug; used EVGA's software and got different results.
What are you using to check the MHz? I know I can get my GTX 1080 up to 2126MHz, which is the highest I've seen for a 1080. That's on stock voltage.
But have you actually *tested* the performance difference? Because on quite a few cards you can set the core clock to stupidly high levels and it just ignores it and runs at stock anyway. It's a bug in GPU Boost 3.0. PS: I can pretty much guarantee you that you are not actually reaching 2501MHz. *Either you're fooling yourself or lying.* hwbot.org/benchmark/unigine_superposition_-_1080p_xtreme/rankings?hardwareTypeId=videocard_2716&cores=1#start=0#interval=20
It's not that hard to understand; if you've been into PC gaming and PC building for a few months, you more or less understand everything that anyone has to say.
Tibor Klein, MSI is suddenly the problem now? You're gonna find discrepancies in all cards. These are both doing higher than advertised speeds. There's no issue here, just a difference between the cards, because you'll never have two truly identical cards; same with CPUs.
Something I'd never realised until you mentioned cards running higher than their boost clock: my EVGA GTX 980 SC ACX is running almost 200MHz higher than the stated boost clock :D Quite happy with that.
Lol, the budget video has grown into a mini series. Not a bad thing though; it's sweet that you are showing why, and not just explaining. Good job for sticking with it.
Very good couple of videos. I'm actually surprised at how much comes into account when building a PC; I didn't know there could be a difference between two identical GPUs.
Very interesting. You should run this test on both high- and low-end benches; this is actually some interesting and unique content, which I imagine is hard to come by. I've no doubt someone has seen this issue before, but I've sure never heard of it myself.
I'm curious about the TIM application on the two cards. I'd love to see you tear down both, inspect the stock TIM application on each, then replace the TIM on both and retest. Hope you see this, Jay, and at least comment on the idea. Anyway, thanks for the great video! Keep up the great work!
Hey Jay! Some thoughts: 1. Verify BIOS versions. 2. Test reseating the coolers with fresh thermal paste; there might be variations in paste quality/application over time (since the cards were bought at different times). 3. While you have the coolers off, check whether the cards have the same memory manufacturer; over time Nvidia buys GDDR5 in bulk from different vendors/manufacturers. 4. The GPUs may have been produced at different fabs, like Apple did with the A9 chip between GlobalFoundries and TSMC, which resulted in variations in performance. I couldn't find out where the 1050 was produced. Really enjoy your channel! Keep up the awesome work! /j0x
As a silicon chemist: the ingots that wafers are cut from are not exactly consistent vertically when you compare the wafers cut from them. Higher-tier electronics pay a premium for more consistent, better-quality wafers off the ingot; lower-quality electronics see more variability between wafers. This could be that, or simple engineering fluctuations as the card is made.
No two chips are identical. My son-in-law and I built 2 similar gaming rigs in 2015 using the i5 4690k CPU. We both went with ASUS motherboards; mine was a Z97-AR, his, an ROG Maximus Formula. My CPU OCed (through the AI Suite 3 autotuner) to 4700MHz @ 1.276v; His peaked at 4400MHz, and he was plagued with crashes when he tried to tweak it manually. Mine, with a Cooler Master Hyper 212 EVO, runs at 58C with MSI Kombuster's CPU Burner; His hits 73C with a Corsair Hydro H105. FTR, both were built in Corsair Vengeance C70 cases w/ 8 120mm fans, and both boards were OCed to 1600MHz w/ 16GB RAM. I've also seen a lot of videos where some people get higher clock speeds out of their R9 380X than I can. It happens. AFAIK, they're only guaranteed to work stable at the factory preset speed; being unlocked only gives the user the "potential" to set it higher, but the capabilities are not guaranteed by any manufacturer.
Try cleaning the contact-fingers on the GPU! Since all the power has to come through these narrow pins, any finger-prints and random dust can have an influence. Rubbing alcohol works well.
I was reading some stuff on Tom's hardware that pointed to potential issues in heatsink seating on factory cards. Not likely with the temp zone you are seeing, but possible?
Classic part of the "silicon lottery". Thanks for showing it right, Jay.

What I would LOVE to see as a (somewhat late) follow-up: you mentioned Boost 3.0 - can you influence that on your own? For example, if you re-apply new thermal paste and seat the cooler better (tighter), can you do a comparison, then slap a waterblock on it to see if you can bring the "worse performing" card up to the regular one? And how big will the gap be when you OC the cards to the max? Will the gap widen the further you OC?

And what I would LOVE to see, because I cannot find ANY info on this topic: what benefit does OCing Pascal REALLY bring? I mean, yeah, OC because I can - but when I tried that with my stubborn 1080 G1, I got stuck at around 2050MHz, and when benchmarking I didn't see much difference from "stock" Boost 3.0 - just 2-3 FPS. Which is not worth the effort? Or is there really more to it that I'm not seeing when testing with Superposition, that I'd only see when gaming?

Keep up the awesome work.
I believe it boils down to the actual build of the GPU chip. I used to work in semiconductors. Scratches and particles occur across the wafer, so to alleviate that, engineers build redundant components into the design. The higher-performing GPU has fewer flaws than the lower-performing one. During testing, both chips fell within the acceptable control window, so both were released to be sold.
Good video! I personally think it would be nice to see a video thoroughly testing the difference between stock thermal paste and stuff like liquid metal on GPUs. I know there are other videos out there, but combining the question of lower-tier GPUs with the question of thermal interface quality could be really interesting 🙂
I interned at a semiconductor company making semicon chips, doing production maintenance for 5 months. From my experience, every chip has its own readings within the acceptable values; we call them offset values, for instance a range of ±10mV/mA/mohm. So some chips have readings near the reject value but are still considered good units, meaning not all units are made perfectly identical; they have differences. In this video, one card is simply slightly better than the other. tl;dr: not all semiconductor products are made perfectly identical, so each product/unit gives a slightly different output.
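The offset-window idea above can be sketched in a few lines. The ±10 mV limit matches the comment's example, but the measurement spread and batch size are illustrative assumptions, not a real test spec.

```python
import random

NOMINAL_MV = 1000     # target reading, millivolts (assumed)
TOLERANCE_MV = 10     # ±10 mV acceptance window, per the comment's example

def screen(readings_mv):
    """Split measured parts into shippable units and rejects."""
    good, rejects = [], []
    for r in readings_mv:
        (good if abs(r - NOMINAL_MV) <= TOLERANCE_MV else rejects).append(r)
    return good, rejects

random.seed(1)
batch = [random.gauss(NOMINAL_MV, 6) for _ in range(1000)]
good, rejects = screen(batch)

# Every shipped part passed the same test, yet two "good" units can still
# differ by almost the full 20 mV window.
print(len(good), len(rejects), round(max(good) - min(good), 1))
```

That last number is the point: "identical" parts that all passed screening can still sit at opposite edges of the acceptance window.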
I'd even watch your videos if they'd only be advertisements. These are so entertaining and well done! I know you are bros with Linus, but I have to say it: Your channel is everything LTT set out to be but failed horribly. You don't use bullshit memes to get more views from 10-year-olds. You present in a professional fashion, which makes you the #1 hardware channel for me.
This is something about the silicon used to build the GPU itself. Silicon quality varies even within one wafer. The only way for a manufacturer to find out is to do something like what EVGA did with their Kingpin cards.
It would be interesting to see if reapplying the GPU thermal paste would make any difference on both cards. FYI: reapplying the paste would rule out production-line issues from the company.
I've had a bit of this silicon lottery as well with my 1070. With GPU Boost it's usually in the 1800-1900 area maximum, which is low for a 1070. If you want to do more tests, what I'd like to see is if OEMs bin their GPUs. I.e., EVGA makes... four, five kinds of 1070s/1080s. Is there any pattern to the silicon lottery, i.e. do the more expensive variants clock better?
It actually makes sense that the higher tier cards are more consistent. They likely have a more stringent standard of quality and bin a higher percentage of the chips overall. The result is a generally higher average performance across all commercial products. For the 1050 range, they just slap any functional silicon on a circuit board.
I have two EVGA GTX 1070 Hybrids, and there's a fair difference between them in terms of overclock stability based on my experience. After seeing this, I'm tempted to quantify that difference. Good stuff!
Wow... I never thought THIS could be a variable in the test! Thumbs up for keeping us informed about what's happening and not dropping the topic!
Stephan Schaffner Yeah, there's always possible inconsistencies between the silicon on two identical overclocking parts. In the CPU world it's commonly known as the silicon lottery. These lower end cards are likely to be more inconsistent from card to card, whereas with higher end cards you can get them "binned" which means they test to make sure the ASIC quality meets a standard. See Gigabyte G1 Gaming and EVGA FTW cards
Sometimes lower-end cards have ridiculous OC headroom.
I just realized something. This is like Linus's workshop, but it comes back with actually interesting results.
Thince Plays Linus is an overrated, irrelevant, uneducated idiot with an absurdly irritating voice and gestures. Wonder why he has more than 2 subscribers. His mom and dad.
@@subaveragejoe2 Linus's voice is disturbing to me; maybe females like it somehow, I think.
@@闪烁的微光 Linus the real OG, with that dropping fetish.
If you want to *really* highlight this difference, test with two identical Rx580's. As a dual Rx580 owner I can tell you it's astonishing.
I simply bought them a long time ago. Before everyone went raving mad. Nothing to do with luck, just timing.
That's been done (or at least something similar) I think it was Gamer's Nexus that did a similar test with Vega and found that only when you physically locked the clocks could you get two cards to give similar performance.
I was lucky to score an MSI RX580 Gaming X PLUS a while back. . . that sumbitch can hold a stable 1535mhz oc.
I shouldn't have sold mine, the money was just too good though -_-
CatSay: two RX 470s, one works flawlessly, one black-screens within 15 minutes of gaming... They're cards from friends that I've tested...
It's always cool to see how components performed, but even better to know why. This could be the start of a new series. Great content as always, Jay.
Damn, nice lighting in the background.
Jay wakes up in the middle of the night in a cold sweat. Dread washes over him. "Are the cards different revisions?!" Drives to the office at three in the morning. Austin is just leaving the office, overhears Jay mumbling as he feverishly marches to the door "Rev 2, it better not!"
same part number
Had 3 GTX 770's. The original card had Hynix-based memory, while the other two were Elpida-based. The latter two cards also had different BIOS revisions, lower overclocking potential, and slightly worse performance compared to the original card. The two Elpida-based cards had damn near identical performance. Either way, I think the adage "you get what you pay for" applies here, as manufacturing tolerances tend to be quite a bit tighter on "premium" products.
I do like JayzTwoCents' idea of a GPU manufacturing-tolerance test across various GPUs... Whose budget card is most consistent? And if there is a consistent variance, how much does it shrink as you go up in GPU class?
oh no lol
Hence all the gray hair. :)
I know what it is!!!! It's the screws you are using. They are definitely bottlenecking.
well screw this
Don't be ridiculous. It's more likely something like which outlet in the wall he's using, the color of his shirt, or how full his bladder is.
Peter I was thinking about benchmarking my pc for lols. What color shirt would you suggest and would full bladder be worse than empty bladder? I don't have much choice in wall outlets in my room unfortunately :/
Infrared is my go-to color for shirts. Make sure it is less than 0.4% cotton. A little bit of impurity is ok but might limit your OC levels. The shirt shouldn't be logoed - you don't want to Hz the feelings of the CPU. If you want to OC to a professional level, be sure to wear a button-up collared shirt.
As for bladder, emptier is better, but only if you have a printer connected. Much like how predatory animals smell fear, printers can sense tension and time limits, and will intentionally cause failures as a result. As long as your printer remains connected to the PC (even through wifi), it has the ability to tell you PC that you are not in the best condition for operation. You normally don't get every drop out when you pee, so I take a syringe and inject it into my bladder to suck out the remains. Pro-tip: this is a great way to avoid the pain of kidney stones!
The closer your outlet is to the power plant, the better your results will be. An easy way to get more power is to go to the watt meter outside your house, open it up, and replace the first gear of the digit counter so it's a 4:1 ratio. This will give you 4x the power.
Yeah the screws have like 90 threads while the CPU only has like 8
Jay, I thought that this video, and the series of videos it spun off of, were honestly really excellent. Showing how great budget PCs are and can be nowadays? Awesome. Subtly showing the importance of competition within the marketplace and how it is objectively better for the consumer? Great. Responding to your fans and trying solutions they offered to improve your testing with these systems, instead of just sticking with the results you got? A very good show of respect for your audience.
But this video, *this* video is really the best of the bunch. You truly went above and beyond what could be honestly expected of you, and didn't stop searching until you found the solution to the disparity in performance between the two systems. It really is an amazing mark of integrity and transparency that I appreciate as a long-time viewer. You were honest and straightforward with all of us, and I really appreciate that. You kept trying and making videos when a couple of tweets, a spreadsheet, or a short addendum would have sufficed. I respected you fairly highly before, but thanks to the real science and trial and error you showed us in this latest series of videos, you are honestly the tech YouTuber I hold in the highest regard.
Thank you Jay, keep doing what you're doing.
Now what if you clocked them the same at a stable clock? Does it change at all?
Wonder if there is any variance in per-card IPC or if it's purely GPU Boost making the difference.
XDeadzX it's purely GPU boost making that difference
Tech NOW that's what I was thinking; there isn't much reason for an IPC difference on the same silicon. But I think it's worth testing, same with ASIC quality, it has weird effects.
It's just the OC (boost). Basic silicon lottery stuff. One of the chips can attain a higher OC than the other. Nothing magical or weird about it.
It has nothing to do with IPC. The silicon lottery means that one card simply has a higher stable clock at lower power levels, hence GPU Boost goes further. IPC is never different between two identical chips, as that would imply that the GPU is somehow doing less work per cycle. The only way that can happen is if parts of the circuitry are unavailable. The only time you ever encounter something like that is when you are overclocking and artifacts appear. Even then it's not so much parts of the GPU doing nothing as it is parts of the GPU making incorrect calculations, because the states within the circuitry have become unstable.
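To make the point above concrete, here is a toy sketch (all numbers are invented, not real GTX 1050 figures) showing that with identical IPC, the sustained-clock difference alone produces the performance gap:

```python
# Toy model: two chips with identical IPC but different sustained boost
# clocks. All numbers here are hypothetical, not real GTX 1050 figures.

def throughput(ipc: float, clock_mhz: float) -> float:
    """Work per second = instructions per cycle * cycles per second."""
    return ipc * clock_mhz * 1e6

IPC = 2.0  # same silicon design, so the same IPC on both cards

card_a = throughput(IPC, 1750)  # the silicon-lottery "winner"
card_b = throughput(IPC, 1700)  # identical design, lower stable boost

gap = (card_a - card_b) / card_b * 100
print(f"Performance gap: {gap:.1f}%")  # the clock delta alone explains it
```

With the same IPC, the ~3% gap falls straight out of the 50 MHz clock difference, which is the whole "silicon lottery" argument in one line.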
Did you take into account the BIOS version of the gpu? It could make a difference
AsciiGDL how so?
Yeah, if I recall, Jay said he bought them at two different times. It's possible there was a BIOS update to the card which caused a change in performance as well. So yes, I wonder what the BIOS differences are between those two cards.
Should be the same if you're otherwise using the exact same system.
He is referring to the BIOS on the CARD, not the motherboard BIOS, which will be the same between tests. Depending on when the cards were bought or shipped, they could be running different BIOS versions that may or may not affect GPU Boost 3.0.
It is quite common for BIOS updates to silently change the TDP cap, Boost behaviour, even VRAM straps of otherwise identical cards. This can have varying impacts on performance.
But the silicon lottery means no GPU is identical.
Same with CPUs. But the difference in silicon quality on CPUs isn't really felt unless you're overclocking or in a TDP-restricted usage like on a laptop.
I would have thought Jay of all people would have understood that.
If he just wanted comparable rigs to test to get approximate differences then building different systems is fine, but you will never be able to get reliable, or properly comparable results unless the only part you change is the one you are testing.
Yes, of course the ASIC quality will be different and thus the boost will be different on the cards. The difference will be at most 2-3%, though.
Came here to post this exact comment, beat me to it lol.
I think regardless of what the final performance is, you get a guaranteed speed, and if it's higher, that's just a bonus.
I have a similar circumstance with both my Strix GTX 1070 O8G cards. I recently decided to disassemble my PC to do a thorough cleaning (dust removal) and noticed that I was getting a lot of frame rate drops when I would play games or run benchmarks (the cards are obviously in SLI). I decided to switch the GPUs around to see if that would improve the performance, and it went back to playing like it did before I cleaned out the system, with a lot more stability in frame rates; it also eliminated the stuttering I was experiencing when I had originally reassembled the system. So for those who want to hate on Jay for bringing up a very good point about PC hardware inequality, I will back this 100%. You have to remember one thing, especially when you talk about GPUs, CPUs, RAM and other components: you will always be at the mercy of the silicon lottery when it comes to the hardware you get for your builds. I even noticed this with something as simple as a cellphone. My wife and I got our phones from the same retailer on the same day, with a serial number difference of less than 10 units, and when I ran benchmarks on both phones, hers performed about 4% better than mine straight out of the box.
Is this what they call the silicon lottery?
David *silicone lottery
Lin Huichi silicone goes into tits. silicon goes into chips.
And both are fun
silicon lottery*
Yes
One of the benefits of solid audio equipment is that the noises you respond to in multiple videos aren't noticeable to the viewer. Let the equipment work for you and don't stress about the ambient noises. (Sirens, motorcycles, and dump trucks, oh my!) Good work guys, keep it up.
Simple answer: silicon has inconsistency!
But going smaller and smaller over the years, shouldn't you see a smaller gap in performance?
Only if the chips are binned properly. A GPU can still have a (very small) defect and still be sold with that defect active, meaning nothing with that defect has been turned off; it will still work almost exactly as well as one that doesn't have it, with only an extremely small difference in performance. Also, the smaller we go with silicon, the more inconsistencies we will have, but thanks to better IPC and smaller transistors we can pack more performance onto that same, even more inconsistent silicon, so it doesn't have a massive effect.
Advertised speeds should always be guaranteed, silicon lottery is when you go beyond advertised speeds.
It would be good if you could test the same card with a higher sample size. Say you had five GTX 1050s and five GTX 1070s: you could see the performance difference on a wider median scale, as well as whether it applies only to lower-tier cards.
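A quick sketch of how such a multi-card comparison might be summarized; the scores below are made up for illustration:

```python
import statistics

# Hypothetical Heaven scores for five units of the same card model.
scores = [1640, 1655, 1698, 1710, 1739]

mean = statistics.mean(scores)
median = statistics.median(scores)
spread_pct = (max(scores) - min(scores)) / median * 100

print(f"mean={mean:.0f}  median={median:.0f}  unit spread={spread_pct:.1f}%")
```

Reporting the median alongside the spread is what makes a sample like this useful: it shows both the typical card and how far the lottery winners and losers sit from it.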
VanderJamesHum yesss that would be nice
i have two founders editions 1080 on the way and i have one already.. i plan to benchmark all 3 just to check
Thought I would add my own Unigine Heaven scores (with the same settings as in the vid) for my i5 2400 and GTX 1050, to compare to your R3/G4560 if you're going the used route with 8GB DDR3. The temperatures and frequencies won't be the same since I'm using the MSI LP dual-fan version in a slimline case with no fans, so it's not an apples-to-apples comparison.
OVERCLOCKED (+145 core +300 mem): 1796 BOOST most of the time
Score: 1739
Avg: 69 Min: 17.8 Max: 135.3
76C
STOCK: 1696 BOOST most of the time
Score: 1640
Avg: 65.1 Min: 8.6 Max: 126.7
72C
My boost clocks were a lot lower than your 1050's, maybe because of the higher temperatures and my 240W PSU/silicon lottery, but there you go.
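For what it's worth, the overclock gain in the two Heaven scores quoted above works out as follows:

```python
# Stock vs overclocked Heaven scores quoted in the comment above.
stock_score, oc_score = 1640, 1739

gain_pct = (oc_score - stock_score) / stock_score * 100
print(f"OC gain: {gain_pct:.1f}%")  # roughly 6% from +145 core / +300 mem
```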
JC Dayrit yeah, but your manually tweaked numbers are still better than the GPU Boost 3.0 numbers. Basically, I was right to assume someone manually tweaking their card will achieve better performance than GPU Boost 3.0 on its own.
ArtisChronicles Yeah that's true. The boost goes even higher up to 1835 while gaming.
JC Dayrit obviously your boost is lower. Your PSU is weak.
I just love that commercial where Nick is wearing the OG JayzTwoCents shirt. I remember first watching you years ago and that shirt would come up so much
Seems pretty obvious that they use the lower binned parts for the cheaper cards, so you have a higher chance of getting a "dud"
Or the other way around: getting a chip that is able to be pushed much higher, but market demand for the higher chip isn't enough. Reminds me of the old Pentium 4/Core 2/K10 days when even the lowest models could easily be clocked to match the highest chips. It's just that the demand for an $800 enthusiast chip is much lower than for a $150 entry-level option.
Nowadays everything is locked down.
Just like it's the ordinary FX-8350, and not the 9590, that holds the world frequency records. But you get that variance on everything; even my Ryzen 1700 can't hit 4 GHz (without 1.70 V) while others can almost hit 4 GHz on the stock cooler.
I'm obviously not pushing 1.7 V, because I'm not sure that's safe even on my sub-zero cooler. I would be fine pushing 1.9 V on LN2, but this chip is not a top-tier chip, so I'm not wasting my time. The more complex the circuit, the higher the chance of variance, but you might not notice that variance, because it's that much more likely you will have that one little transistor out of billions that will hold you back.
It's not that it's a cheaper card; there are little differences across different units. Overclockers know this well; for example, my unit of the GTX 980 Ti can be overclocked 90 MHz higher than average units.
Can you "lock down" the clocks or disable the turbo?
I can speak to my experience with this. I had a build with two Gigabyte G1 Gaming 980 Tis. I had read somewhere that you should put your best overclocker in the top slot (I don't know if it really makes a difference, but it couldn't hurt), so I put each card in separately to test how far I could push them. Same BIOS, same version, and one overclocked to 1520 just fine, but the other gave me an enthusiastic middle finger at 1450 no matter what I did. Thankfully the two cards pushed 1440p at 144Hz for the most part anyway, so I just chalked it up to silicon lottery and moved on.
I had a similar experience with this exact graphics card.
I will forever love the Fractal Design advert in the beginning.
Great video as always!
That ad intro though HAHA
Mikhael Gonato hahahha so good
Mikhael Gonato it took forever to skip
So cool.
It's like the 10th time he's used it. Quite old now.
Beurhp no puns intended
I love the ads with Nic. He's a natural.
If the overclock is dependent on temperature, then could it also not be an issue with the way the heatsink and fan have been applied? If you reapply thermal compound, might that affect results?
No. The good card went further even before the heat sink was saturated; hence it's a power limit. Adding a dedicated GPU power connector might act as an equalizer, but at the power levels supplied by the PCIe slot one card was slightly better. That said, this card with an external power connector would probably still contain better silicon than the other one and thus go further, assuming neither one was temperature-restricted.
Colin Jones I was actually thinking that the entire time. If he tries this we'll know if it just wasn't seated properly or not. Maybe even might just be that the factory TIM wasn't applied very well. Hopefully Jay tries this.
Or maybe a controller bug or something? Like -- the "better" card ran 1C higher, yet clocked higher. And the "worse" card clocked itself lower despite being 1C cooler. This would actually be contrary to your hypothesis, although as Jay said, it bases the OC on temperature ranges, not 1C steps, so who knows.
But perhaps the card's built-in overclock is hand-tuned or just variable in general, and even though the "worse" card *could* go a bit higher, it's too late, it's already been stamped, boxed, shipped.
Roger J that depends on where that 1C is. Pascal throttles at specific temperature points and brings the clock down by about 13 MHz.
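That temperature-point behaviour can be sketched as a toy model. Only the ~13 MHz step size comes from the comment above; the threshold temperatures and clock values below are invented for illustration:

```python
# Toy model of temperature-based boost binning: the clock steps down
# ~13 MHz each time a temperature threshold is crossed. The threshold
# temperatures and clock values are hypothetical.

STEP_MHZ = 13
THRESHOLDS_C = [38, 46, 54, 62, 70]  # invented breakpoints
TOP_BOOST_MHZ = 1780                 # invented top boost bin

def boost_clock(temp_c: float) -> int:
    steps_down = sum(1 for t in THRESHOLDS_C if temp_c >= t)
    return TOP_BOOST_MHZ - steps_down * STEP_MHZ

print(boost_clock(35))  # 1780: no thresholds crossed
print(boost_clock(63))  # 1728: four thresholds crossed
print(boost_clock(64))  # 1728: 1 C hotter, same bin, so same clock
```

This is why a 1 C difference between two cards can mean nothing at all, unless that 1 C happens to straddle one of the breakpoints.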
that quick swap was very satisfying
I thought that's how they are sold? Take two GTX 1080's. They are tested at the factory and the ones that perform much higher than normal are sold as "gaming" editions where everything else is sold as normal 1080's.
AyyDynamic At best a few frames per second. The age-old saying goes: if you want more performance, buy a better GPU/CPU and/or platform.
F2H when you return something that works well they charge you 20% don't they?
Jay-- I had a similar, puzzling issue one time w/ a PC I was fixing for someone.
Their GPU's PCIe connectors had a badass scratch on one of the connection "teeth".
Nobody could explain the performance difference beyond this issue.
Once swapped out, testing was noticeably different.
At the time, I did not take notice of the frequency, but the rest of the problems were pretty much identical. It WOULD affect voltage too.
I think there can be various performance/quality factors in this field..
1. It's a 1050 graphics card, not some high-class 1080; quality control can slip a bit more than it can with a high-tier graphics card. So: the more expensive the card, the more quality it has.
2. Aftermarket-cooled cards, especially MSI's, might have problems running how they should. MSI is known as a company with QC (quality control) flaws. (This point applies if MSI really does touch the silicon on the card, and/or the cards are OC editions, in which case a card with slightly worse cooling might perform worse.)
3. Lastly, yes they can be built a nanometer differently and have different performance outputs.
This series has been revealing and interesting. I wonder if you could do a vid on how chips are made (process, success factors & ratios/percentages), just to see if we can theorize (in layman's terms of course) why such variance is seen. Rabbit holes are fun! Love the vids, Jay!
Could be because of the BIOS version of each card. Could also be because of the famous silicon lottery, though I wouldn’t expect that much of a gap.
Dude, I watch a lot of your content and I have not come across a video that urges me to leave negative feedback. You are doing great, man. I do not understand why people get upset like they do. Keep up the amazing work, Jay. Thank you for all the knowledge you are giving us.
Why are we still here, just to suffer?
HeyItsPyxis every day, every night
Parth Srivastava I can feel my legs
HeyItsPyxis My arms, and even my fingers
Parth Srivastava The body I've lost
Hilbert Black The comrades i've lost
I think it's a good idea to keep testing like this. It will help expose the manufacturers who pass off bad parts to the public.
came for the video stayed for the add
Bedi O *ad
Actually informative. It's nice to see this kind of stuff instead of fluff - well done, Jay.
I found this issue Jay... You don't have enough RGB. And no flames on the side. How can the card go fast?
Idk wtf that ad after the intro was about, but i like it. Nice work.
I thought the original question was the cpu value to performance? Can you revisit 1 to 2 with the same gpu and answer that question?
Leonita C yes, we need that conclusion. That Athlon is lower-performing but does have at least two generations of upgrade paths. It makes it interesting.
You did an amazing job, Jay. But really, it's 1:42 in the morning and we are in the same time zone.
The reason it's less noticeable on higher-end cards is due to binning.
Exactly my thoughts. Sure, basic tests will confirm that it can hit - and exceed - default speeds, but I doubt the same effort is put into quality control for something that costs $120 when compared to something that could potentially cost the company $500 (or more) due to a dissatisfied customer.
That Ad spot was so f**King funny!!! I loved it!
Jay have you contacted MSI for their two cents? I would love to hear what they say.
I expect an MSIzTwoCents channel to come up.
A quick correction: thermal throttling is when your CPU/GPU hits its temperature limit and needs to slow down to avoid damage. There is also power throttling, which is when the power delivered is not enough to hit the targeted voltage/frequency. It is meant to prevent crashing.
As for the performance difference, I think it has to do with the card's power ceiling along with the "silicon lottery". You see, a better-quality ASIC can hit the needed frequency at lower voltage, and the 1050 card's firmware tells it to draw a maximum of 50 watts of power. It appears that due to manufacturing variance, one card can extract a slightly higher frequency before hitting the 50-watt power ceiling.
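That idea can be sketched with a crude dynamic-power model (P roughly proportional to V² · f). Only the 50 W cap comes from the comment above; the constant and the per-chip voltages are invented:

```python
# Crude power model: dynamic power scales roughly with V^2 * f.
# The constant K and the per-chip voltages are invented for illustration;
# only the 50 W cap comes from the comment above.

POWER_CAP_W = 50.0
K = 0.0259  # hypothetical proportionality constant (W per V^2 * MHz)

def max_clock_mhz(v_needed: float) -> float:
    """Highest clock that stays under the power cap at a given voltage."""
    return POWER_CAP_W / (K * v_needed**2)

good_chip = max_clock_mhz(1.043)  # better silicon: needs less voltage
weak_chip = max_clock_mhz(1.075)  # weaker silicon: needs more voltage

print(f"good chip tops out near {good_chip:.0f} MHz")
print(f"weak chip tops out near {weak_chip:.0f} MHz")
```

Under the same fixed power budget, the chip that holds a given frequency at lower voltage simply has more headroom, so GPU Boost carries it further.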
A simple question: did you buy the cards at the same time? Because sometimes the manufacturing process gets improved over time, even on the same process node, as the process matures.
Since that card didn't behave well, chuck it down the street. I think it would learn its lesson.
Michael G I'd still take the slower card. It's still faster than advertised speeds!
Michael G lmfao I'm deceased mate. 😂😂
This video is extremely useful. I have a 1050 and wondered about the frequency differences on my own GPU, and until today I had never found out why.
WOW Jay, you are an amazing YouTuber!!! You take comments meant to call you out and turn them into a positive, unlike other YouTubers like BN who won't take feedback and are unhappy when you don't give them positive feedback. In my eyes, all feedback is good, even the bad, as it shows you another way.
Scarletsb0y yeah, right? It's very difficult to withstand hate, much less use it as feedback and change yourself.
I had always wondered whether you were using the SAME GPU in each system when they weren't being shown in the videos. I would never do that in a comparison video, for this exact reason. I'm glad you got it sorted out, but now I know to check whether people are using two different GPUs, or the same one with each test run separately, in comparison builds such as the one that caused all this fuss to begin with.
I can understand the need to save time by doing the comparison side by side, but then you get inconsistent results from not using the same GPU in a test like this one.
Max temperturebendacheef.
Problem number one: M.
Problem number two S.
Problem number three I.
Fix these problems, and the issue should be settled.
I like what you have done with these videos, since the shaky start!
I have the same GTX 1050 (same as in the video) and it.. boosts over 2 GHz!!..
I think I have won the lottery.. XD
Tanmay Rastogi you probably have a good PSU and it gives the card a lot of TDP headroom, I guess. My 1060 with an 850W XFX boosts up to 1974.
Well... proof?
HugsOverDrugs mine runs at 1860 MHz (2050 MHz when overclocked) on a cheap 500W PSU, but it broke six months later with the OC. So now I don't overclock it anymore...
JapWhite I don't OC it either, I just let it do its thing; it auto-overclocks to 1974. Plus mine has a huge cooler, it's the MSI Gaming X. The max it has hit is 63C.
Not really, cause it's still a low-end GPU.
I'm glad you take the time to actually check your results and validate everything. Good job on investigating! I'd be interested in seeing what the average deviation between more cards would be... I'm wondering how much variance there is in overclock potential
And this is why boost is a problem.... unreliable performance. GRANTED, both are over the base rates boost :)
Sadly AMD's new Vega boost has joined the club with Nvidia's boost.
NoOne ButMyself I don't actually see a problem here since both seem to account for variables pretty well so you're not gonna hit unreliable performance very fast.
I imagine a manual OC will have similar issues with fab variance. No two cards are the same.
I love how the numbers literally change as he says them
@2:57
@3:00
@8:02
Seems like I got lucky with the 1070 G1... OC of 2501 Ghz. OC'd, it runs hot and throttles a lot, but I didn't see any other air-cooled GTX 1070 from Gigabyte hit these numbers
Edit: software bug; used EVGA's software and got different results
My MSI GTX 1070 gaming x only does 2100/9600
I think you meant Mhz, not Ghz but damn. Nice OC
What are you using to check the mhz?
I know I can get my GTX 1080 up to 2126 MHz, which is the highest I've seen for a 1080. That's on stock voltage.
Andrew King MSI Afterburner, and yes, I meant MHz
But have you actually *tested* the performance difference? Because on quite a few cards you can set the core clock to stupidly high levels and it just ignores it and runs at stock anyway. It's a bug in GPU Boost 3.0. PS: I can pretty much guarantee you that you are not actually reaching 2501 MHz. *Either you're fooling yourself or lying.* hwbot.org/benchmark/unigine_superposition_-_1080p_xtreme/rankings?hardwareTypeId=videocard_2716&cores=1#start=0#interval=20
Definitely interested in seeing your microcentre overclock theory!
I wish I understood computer talk more.
There are only 10 kinds of people. The ones who do understand, and the ones who do not.
What is confusing? Maybe I can help.
It's not that hard to understand; if you've been into PC gaming and PC building for a few months, you more or less understand everything that anyone has to say.
Mp57navy 10 kinds of people and you only mention 2? Lol nice
WhitishlyBlack not everything, but just enough to not really care too much about a whole lotta other things.
Literally spat coffee out my nose the first time I saw that ad and I still love it every time hahah
Conclusion: Don't buy an MSI budget graphics card.
Just dig to the back of the shelf and get the "fresher" one.
There are no shelves in online stores.
Tibor Klein MSI is suddenly the problem now? You're gonna find discrepancies in all cards. These are both doing higher than advertised speeds; there's no issue here. There's just a difference between the cards because you'll never have two truly identical cards, same with CPUs.
conclusion: don't buy an MSI anything
dangeredwolf how? MSI doesn't make the silicon; blame Nvidia, not them. The cards are the same model.
HOLY SHIT. YOU MADE THE VIDEO OF THE QUESTION I ASKED MYSELF A COUPLE DAYS AGO.
Something I've never realised until you just mentioned cards running higher than their boost clock: my EVGA GTX 980 SC ACX is running almost 200 MHz higher than the stated boost clock :D Quite happy with that.
That ad integration was hilarious.
Lol budget video goes on to a mini series. Not a bad thing though, sweet that you are showing why and not just explaining. Good job for sticking with it.
Ok... sold... That ad was hilarious.
Very good couple of videos, and I'm actually surprised at how much comes into account when building a PC. I didn't know that there could be a difference between two identical GPUs.
This is absolutely fascinating. I would never have thought that would have been the case.
Yes, please do a higher end card comparison Jay, this is really interesting. Thanks
I gotta say, love ur commercial for the cooler!! Priceless. love you guys and your content!
Very interesting; you should do the test bench with high and low end. This is actually some interesting and unique content, which I imagine is hard to come by. I've no doubt someone has seen this issue before, but I've sure never heard of it myself.
I'm curious about the TIM application between the two. I'd love to see you tear down both cards, inspect the stock TIM application on each, and then replace the TIM on both and retest. Hope you see this, Jay, and at least comment on this idea.
Anyway, thanks for the great video! Keep up the great work!
Wow, I never thought of that. Does that mean you have to be lucky in order to get a "better performing" card?
Hey Jay! Some thoughts:
1; Verify BIOS versions?!
2; Test reseating the coolers with new thermal paste; there might be variations in paste quality/application over time (since they were bought at different times)
3; While you have the cooler off, check whether they have the same memory manufacturer; over time Nvidia buys memory (GDDR5) in bulk from different vendors/manufacturers.
4; The GPUs may be produced at different fabs, like Apple did with the A9 chip between GlobalFoundries and TSMC, which resulted in variations in performance. I couldn't find out where the 1050 is produced.
Really enjoy your channel! Keep up the awesome work! /j0x
Digging this series of videos, Jay; maybe come up with a series on this very issue with other PC components.
As a silicon chemist: the silicon ingots that wafers are cut from are not exactly consistent vertically when you compare the wafers cut from them. Higher-tier electronics pay a premium for more consistent, better-quality wafers off the ingot; lower-quality electronics have more variability between wafers. It could be that, or simple engineering fluctuations as the card is made.
No two chips are identical. My son-in-law and I built 2 similar gaming rigs in 2015 using the i5 4690k CPU. We both went with ASUS motherboards; mine was a Z97-AR, his, an ROG Maximus Formula. My CPU OCed (through the AI Suite 3 autotuner) to 4700MHz @ 1.276v; His peaked at 4400MHz, and he was plagued with crashes when he tried to tweak it manually. Mine, with a Cooler Master Hyper 212 EVO, runs at 58C with MSI Kombuster's CPU Burner; His hits 73C with a Corsair Hydro H105. FTR, both were built in Corsair Vengeance C70 cases w/ 8 120mm fans, and both boards were OCed to 1600MHz w/ 16GB RAM. I've also seen a lot of videos where some people get higher clock speeds out of their R9 380X than I can. It happens. AFAIK, they're only guaranteed to work stable at the factory preset speed; being unlocked only gives the user the "potential" to set it higher, but the capabilities are not guaranteed by any manufacturer.
Thank you Jay! Staying tuned.
Very cool vid on the differences on the 'silicon lottery' even for vid cards!
Who comes up with the ideas for the video spots from Fractal? I LOVE them SO MUCH!!! Keep doing what you do, I like it!
Try cleaning the contact-fingers on the GPU!
Since all the power has to come through those narrow pins, any fingerprints and random dust can have an influence.
Rubbing alcohol works well.
Damn that studio is looking good now!
I was reading some stuff on Tom's hardware that pointed to potential issues in heatsink seating on factory cards. Not likely with the temp zone you are seeing, but possible?
+gogo8092201 I'm leaning towards possible memory chip manufacturer changes over the life of the card
Yeah, the silicon lottery is alive and well. Thanks for the in-depth look, Jay.
Classic part of "Silicon lottery".
Thanks for showing it right Jay.
What I would LOVE to see as a (somewhat late) follow-up:
You mentioned Boost 3.0 - can you influence that on your own? For example, if you just re-apply new thermal paste and seat the cooler better (tighter), can you do a comparison with that, and then slap a waterblock on it to see if you can get the "worse performing" card up to the regular one?
And how big will the gap be when you OC the cards to max? Will the gap widen the further you OC the card?
And what I would LOVE to see - because I cannot find ANY info on this topic:
What benefit does OCing Pascal REALLY bring? I mean, yeah, OC because I can, but when I tried that with my stubborn 1080 G1 I got stuck at around 2050 MHz, and when benchmarking I didn't see much difference from "stock" Boost 3.0, just 2-3 FPS. Which is not worth the effort? Or is there really more to it that I'm not seeing when testing with Superposition, that I could only see when gaming?
Keep up the awesome work.
I believe it boils down to the actual build of the GPU chip. I used to work in semiconductors. Scratches and particles occur across the wafer, so to alleviate that, engineers build redundant components into the design. The higher-performing GPU has fewer flaws than the lower-performing one. During testing, both chips fell within the acceptable control window, thus both were released to be sold.
Silicon lottery, man; there's no escaping it!
Good video!
I personally think it would be nice to see a video thoroughly testing the difference between stock thermal paste and stuff like liquid metal on GPUs.
I know there are other videos out there, but mixing the question of lower-tier GPUs with the question of thermal interface quality could be really interesting 🙂
I love a good troubleshooting story, Jay!
Troubleshooting is what I do . . .
It would be interesting to see this test run on say maybe ten cards...
I interned at a semiconductor company that makes chips, in production maintenance, for 5 months. From my experience, every chip has its own value readings within the acceptable values; we call them offset values, for instance a range of ±10 mV/mA/mΩ. So some chips have readings near the reject value but are still considered good units. That means not all units are made perfectly identical; they have differences, which in this video shows up as one card being slightly better than the other.
tl;dr: not all semiconductor products are made perfectly identical, so each product/unit gives a slightly different output.
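The "acceptable window" idea described above can be sketched as a simple pass/fail check. The nominal value below is invented; the ±10 mV band echoes the comment:

```python
# Units ship as "good" as long as a measured parameter sits inside a
# tolerance band, so two passing units can still differ noticeably.
# The nominal value is invented; the +/-10 mV band echoes the comment.

NOMINAL_MV = 650.0
TOLERANCE_MV = 10.0

def passes_test(measured_mv: float) -> bool:
    return abs(measured_mv - NOMINAL_MV) <= TOLERANCE_MV

for reading in [641.2, 649.8, 659.9, 661.5]:
    print(reading, "PASS" if passes_test(reading) else "REJECT")
# 641.2 and 659.9 both ship as "good" units, despite an 18.7 mV gap.
```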
I'd even watch your videos if they were only advertisements. These are so entertaining and well done! I know you are bros with Linus, but I have to say it: your channel is everything LTT set out to be but failed horribly at. You don't use bullshit memes to get more views from 10-year-olds. You present in a professional fashion, which makes you the #1 hardware channel for me.
This is something about the silicon used to build the GPU itself. The silicon quality varies even within one wafer. The only way for a manufacturer to find that out is to do something like what EVGA did with their Kingpin cards.
It would be interesting to see if reapplying the GPU thermal paste would make any difference on either card. FYI, reapplying the paste would rule out production-line issues from the company.
I've had a bit of this silicon lottery as well with my 1070. With GPU Boost it's usually in the 1800-1900 area maximum, which is low for a 1070.
If you want to do more tests, what I'd like to see is whether OEMs bin their GPUs. I.e., EVGA makes four or five kinds of 1070s/1080s. Is there any pattern to the silicon lottery, i.e. do the more expensive variants clock better?
I would love to see some videos testing more cards like this on the test bench like you suggested in the video.
Great vid Jay, this makes sense as not all silicon is created equally. You play a very interesting lottery when you buy a gpu.
I like how you do things based on feedback from the community
I've had the same exact thing happen with MSI 660s: one was 60 to 80 MHz faster than the other, causing major SLI issues. Great video, very informative :)
Throttling is when the GPU drops below its base clock. Anything above it is GPU boost in action.
It actually makes sense that the higher tier cards are more consistent. They likely have a more stringent standard of quality and bin a higher percentage of the chips overall. The result is a generally higher average performance across all commercial products. For the 1050 range, they just slap any functional silicon on a circuit board.
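A toy simulation of that binning idea (the distribution and cut-off are invented): taking only the top chips for a premium SKU leaves a tighter population than selling everything functional as a budget SKU:

```python
import random
import statistics

# Invented numbers: each die gets a max stable clock drawn from a normal
# distribution; the premium SKU bins chips above a cut-off, while the
# budget SKU takes everything else that works.

random.seed(42)
max_stable_mhz = [random.gauss(1750, 40) for _ in range(10_000)]

CUTOFF = 1780
premium = [c for c in max_stable_mhz if c >= CUTOFF]
budget = [c for c in max_stable_mhz if c < CUTOFF]

print(f"premium: n={len(premium)}, stdev={statistics.pstdev(premium):.1f} MHz")
print(f"budget:  n={len(budget)}, stdev={statistics.pstdev(budget):.1f} MHz")
# The binned premium population is tighter: less card-to-card variance.
```

Under these assumptions, the binned bin's spread is markedly smaller, which matches the observation that higher-end cards behave more consistently from unit to unit.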
I have two EVGA GTX 1070 Hybrids, and there's a fair difference between them in terms of overclock stability based on my experience. After seeing this, I'm tempted to quantify that difference. Good stuff!
I love how well you listen to the comment section. You're the best I have seen at that of any big YouTuber. Nice job, man.