In addition to my gaming hobby I'm a 3D modeler/renderer, and this summer I bought, at a bargain price, a build with an i9 9900K and an RTX 3090. While I know this is really dated hardware, I can't complain about anything: I can play everything, and in some cases I use Lossless Scaling to generate frames, but in general the 24GB of VRAM lets me push post-processing and resources a lot further. In Blender and 3ds Max (my favorite 3D suites) the 24GB makes me feel like I'm breathing again after a long immersion in rendering... I'm coming from a 6GB 2060. I honestly think you don't need to buy the newest card; you should buy what works for your needs and purposes. If I needed a card only for gaming, I think I would have bought an RTX 4070 or a 3080.
I've got the 3080 12GB, I've had it for about a year now. I got it at a pretty good price and I couldn't be happier with it; I've never had access to such power before and it's mind-breaking.
The significant difference in some games that don't even max out the 10GB of VRAM is due to the increase in bandwidth (320-bit vs 384-bit bus). The difference will only grow from here; in 3 years we'll easily see a 15-20% difference in lots of games at 1440p.
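For context, a back-of-the-envelope sketch of where that bandwidth gap comes from, assuming the stock 19 Gbps GDDR6X both 3080 variants shipped with (the function name and exact figures here are illustrative, not measured):

```python
# Bandwidth (GB/s) ~= bus width in bits / 8 * effective data rate in Gbps
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

ten_gb = bandwidth_gb_s(320, 19.0)     # RTX 3080 10GB: ~760 GB/s
twelve_gb = bandwidth_gb_s(384, 19.0)  # RTX 3080 12GB: ~912 GB/s
print(f"{ten_gb:.0f} GB/s vs {twelve_gb:.0f} GB/s ({twelve_gb / ten_gb - 1:.0%} more)")
```

So the 12GB card carries roughly 20% more raw memory bandwidth, which can show up even in games that never fill the 10GB buffer.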
Bought an MSI Ventus 3X 3080 10GB in early January 2021 for $1100. I mined Eth on it for a month, making around $500, and sold it the next month for $2200. Crazy times. Ended up getting three 5700 XTs with the money, giving me 165% of the hashrate and around 50W less power consumption than the 3080. I'm considering the 3080 12GB as a possible upgrade to my main gaming PC in a few months. I'm currently using an RX 6600 Silicon Lottery Jackpot Edition. I either sold off all of my mining GPUs or used them in PC flips. I really don't care about having top-notch performance. I've done the whole early adopter/premium gaming PC thing before and I prefer to just build budget rigs these days. I've been a PC gamer for 35 years, so even low settings in modern AAA games look fantastic to me. I'm generally fine with the GPU I have, but I game at 1440p and some more VRAM would be useful. As for these results, all of these games can run fine on an 8GB GPU at 1440p. But I don't expect it will be long before 8GB isn't enough for 1440p.
So, I’ve been struggling with this for a while. When I started out, my schtick was that I tested old GPUs on cheap modern PC hardware. The original “reasonably priced gaming PC” got upgraded to the “moderately priced gaming PC” when I started reviewing better GPUs, and now I’m at the point where I’m kinda ready to test brand new GPUs, but I also don’t wanna let down the people who are used to me having a humble test rig. To answer your question, the 7500F holds up really well for the price. I’ll be reviewing the 7800X3D soon, and I’ll have the 7500F figures updated for comparison.
My god, I remember owning a used 3090 FE for about a year, and seeing the power spike up to 400 watts in-game was crazy. Nvidia really did let these cards loose on the Samsung node. A few reasons why I replaced mine with a 4080S FE: the GDDR6X on the backplate side of the FE 3090 was cooking itself to death even with the card undervolted and below 65C (mem temps were regularly reaching 88-95C), and the coil whine was so unbearable that it stopped me from using my studio monitors for gaming because it was leaking into the speakers even through a DAC. The 40 series cooler design blows my old FE away with how cool and quiet it is, while sipping less power (200 watts on average at 1440p maxed vs 400 watts in BG3). The fans are quieter because the memory isn't hitting 100C in extended sessions, and the best part: no coil whine!
Mine wasn't the best undervolter, but my V/F curve at the time for my 3090 FE was 1785MHz/0.800mV and it ran flawlessly until I got my hands on a 4080S FE. OC doesn't scale well with these cards unless you start pushing into 450W+ territory, so keeping clocks locked at around 1800-1860 was the sweet spot between temps, power, and not hitting the power limit. That cuts out the most important thing that plagues the 3080/3090 series as a whole unless you have a new vbios flashed: hitting the power limit. Hitting the power limit means the card slows down to get back under it, and it's felt in-game as a stutter because the card downclocks, i.e. the 3080 at 320W and the 3090 at 350W with the power slider at 100% in Afterburner.
I distinctly remember the boosting algorithm of the 30 series: once the card hits 52C, if there is power headroom and cooling headroom, it will add another 15MHz to the boost clock as long as it doesn't hit 62-65C. So if you set a V/F curve at 1800MHz at 0.800mV and the card hits 52C, it will go to 1815MHz if the card has power headroom.
I've been starting to stress that 10GB on the 3080 isn't enough for ultra 1440p anymore, although I generally don't turn up the RT much. The one game I play that uses it heavily, Cyberpunk, leans on it pretty hard, but you can dial it back and get a solid 80fps at 1440p with reduced ray tracing, or hit up DLSS. Much as I've got money burning a hole in my pocket, I honestly don't see an urgent need to upgrade.
I bought a 6800 XT instead of a 3070 back then because I needed more VRAM. Today I'm running into games that are using 16GB at 1440p on high settings (some CoD MWIII maps, for example, have so many textures that they fill it). Nvidia doesn't want cards that still have the GPU power to keep up to actually be able to. Lacking VRAM is a killer; no one wants to play with stutters at key moments.
I have one of these, bought at the end of the crypto boom when a UK retailer seemed to finally have them at sensible prices again. It still plays everything in my Steam library just fine, and considering I'm just finishing a replay of Saints Row 3, it will probably do so for a while.
I remember when Jensen pulled these out of his oven and I laughed hysterically when I saw that the RTX 3080 only had 10GB of VRAM. Then Red October came and I grabbed an RX 6800 XT when I saw the performance and the 16GB of VRAM.
You see, back in the day, during the first mining craze, cards like the GTX 1060 Ti and RX 580 were near $800CAD. Then something inexplicable happened... Out of nowhere, Newegg suddenly had a bunch of brand-new Sapphire R9 Furies. These cards had been released two years prior, and I was admittedly shocked because they were less than $400CAD. I was hesitant because I couldn't remember how the R9 Fury performed, so I started reading reviews and discovered that the R9 Fury, despite being two years old, was faster than the RX 580. I quickly learned that the R9 Fury was a monster when it was released, faster than the GTX 980. The card had so much GPU horsepower at the time that it could literally play anything at 1440p Ultra at 70+ FPS. Unfortunately, ATi's experiment with HBM gave the R9 Fury an Achilles' heel, the same Achilles' heel that Ampere cards have.
nVIDIA made the choice to use more expensive GDDR6X VRAM, which meant they had to put less of it on their GeForce cards to be even somewhat competitive with Radeon. nVIDIA also knew that most gamers aren't smart enough (or are just too lazy) to actually research their purchases and tend to buy nVIDIA by default. Admittedly, nVIDIA was 100% correct in that assessment, so they didn't worry too much about it. Just like the aforementioned R9 Fury, having fewer GB of more expensive, higher-speed VRAM instead of more GB of slower, more economical VRAM was proven to be a mistake on the R9 Fury and will prove the same on Ampere cards.
Some people like to talk about how "superior" GDDR6X is compared to GDDR6, but it just hasn't been shown to make any real difference. If you want to talk about superior VRAM, HBM was in a league of its own with a colossal 4096-bit bus width. Compare that to the 384-bit bus width found on the RTX 4090 and RX 7900 XTX cards of today. I am willing to bet that if you were to take a pair of RX 580s and somehow graft the 16GB of GDDR5 that those two cards have onto something like an RTX 3070 Ti, those 16GB of GDDR5 would outperform the 8GB of GDDR6X in modern titles and give the RTX 3070 Ti a new lease on life.
Sure, the R9 Fury's HBM was impressive, especially when it could run Unigine Superposition at 4K Optimised despite a warning that it didn't have enough VRAM to run the test correctly. Unigine clearly hadn't considered that 4096MB of VRAM on a 4096-bit bus could do things that 4GB had no business being able to do. But despite this, HBM isn't magic and could MAYBE behave like 6GB of GDDR5 because of its incredible speed; this means that 8GB of GDDR5 was better than 4GB of HBM for gaming.
The myth that a lot of GeForce owners fell for (and they do seem to fall for a lot of myths) is that GDDR6X is somehow going to make your GeForce card superior to a Radeon that "only has the inferior GDDR6". I'm pretty sure the truth is more like this: AMD probably bought some GDDR6X from Micron and sent it to ATi in Markham to play with. After considerable testing, ATi would have discovered that the difference in performance and efficiency between GDDR6 and GDDR6X was minimal at best and not worth the extra cost. ATi knows its market, and Radeon owners aren't dazzled by frills; we want maximum performance-per-dollar (which, really, ANY user should want).
Micron is the exclusive manufacturer of GDDR6X (and probably GDDR7X), while standard GDDR6 is made by Micron, Samsung and SK Hynix. VRAM is a commodity, and the more competition you have in the marketplace, the better the price will be. Since Micron has no competition for X-rated VRAM, its price remains high. Since GeForce owners have no issue getting fleeced for useless frills, nVIDIA, also knowing their market like ATi does, chose to take more profit from the use of GDDR6X, and who can blame them? The proof is in the pudding, however, as the use of GDDR6X has not translated into any real performance advantages for GeForce cards over their Radeon rivals. Let's take a look at the rankings, shall we?
1st Place - GeForce RTX 4090 with GDDR6X
2nd Place - Radeon RX 7900 XTX with GDDR6
3rd Place - GeForce RTX 4080 Super with GDDR6X
4th Place - Radeon RX 7900 XT with GDDR6
5th Place - GeForce RTX 4070 Ti Super with GDDR6X
6th Place - Radeon RX 7900 GRE with GDDR6
7th Place - GeForce RTX 4070 Super with GDDR6X
8th Place - Radeon RX 7800 XT with GDDR6
9th Place - Radeon RX 7700 XT with GDDR6
10th Place - GeForce RTX 4060 Ti with GDDR6
We can see from this chart that it has been an almost perfect competition stack, going back and forth from place to place, with both red and green holding five of the top ten. It's also interesting to note that while nVIDIA has the most performant card in the top ten with the RTX 4090, they also have the least performant card in the top ten with the RTX 4060 Ti. It's also interesting that the RTX 4070 Super is faster than the RX 7800 XT. This is because the RX 7800 XT is faster than the original RTX 4070, and while the RTX 4070 Super uses GDDR6X VRAM, so too did the RTX 4070. All of this just goes to show you that having fewer GB of faster X-rated VRAM doesn't translate into any real performance advantage, but having less of it can (and will) become a serious hindrance to what your card will be able to achieve in the future.
People like to talk about bottlenecks and this is no different. My R9 Fury was held back by its lack of VRAM, and its incredible GPU horsepower (for the time) was relegated to high-FPS 1080p gaming far too soon. I bought it because it was half the price of the RX 580 during the first mining craze (because it wasn't efficient enough to mine with), so I could forgive myself for taking a 4GB card in 2017. After all, in 2015, when the R9 Fury came out, 6GB was considered high-end, similar to how 16GB is looked at today. However, I feel sorry for the dumb schmucks who bought the RTX 3070 Ti only to discover shortly after that they had purchased an overpriced high-FPS 1080p card, while those who bought the 3070 Ti's rival, the RX 6800, are still happily gaming away at 1440p.
@@mitsuhh That's false. According to TechPowerUp (which knows more than both of us put together), the XTX is slightly faster overall. This means that the XTX is faster than the 4080 in most things. I don't know where you get your information but it sounds like you've been reading LoserBenchmark (UserBenchmark).
@@redline589 I meant for the same price, like the 3070 ti also should have more than 8gb, but we know it’s nvidia’s way to make their cards obsolete faster
Considering it was the flagship that was compared performance-wise to the 2080Ti, it's ironic it had 1gb less and especially ironic that the 3080 10g runs some games worse than the 3060 12gb.
I bought an RTX 3080 Strix brand new for 620 USD just 2 months back, thinking it was decent since I didn't have many choices in the 40 series and they were so pricey. Yes, a once highly regarded card now does somewhat OK at 1440p and decent at 1080p. I think I'm OK with it honestly, except for the VRAM.
i love my 3080 12 GB, and with my 5600X i haven’t really had any issues outside of the PL update to CBP2077 being a lot harder on my CPU. i feel like i’ll be keeping this GPU for a few more years, but will prolly upgrade to a 5700X3D for GTA VI
Price. 10GB models were significantly cheaper than the newer 4070/70 Ti at the end of the crypto boom. Getting a used one in good condition at the beginning of 2023 like I did, for $350-400, was an absolute steal. The only things you need to deal with are the higher power draw (but 8-pin PCIe means it works with an older PSU) and the lack of DLSS 3 FG (which is redundant if the game supports FSR 3.1). The 3080 is definitely up there with the 1080 Ti as one of the GOAT GPUs.
I count myself lucky that I bought a 3080 FE direct from NVIDIA at the 650 quid MSRP at the height of the madness. The card is still a beast today, albeit only at 1440p. I'm guessing the same amount of money now isn't going to provide any sort of massive upgrade. I should add I think it mined half its cost back in crypto.
I went from a 1080 Ti to a 3080 10GB; it was odd to go down in memory. I've only had the memory become an issue with Hogwarts Legacy at 1440p: it was too much and I had to scale down to 1080p. I'll be waiting for the 5000 or 6000 series. The 3080 is great, but not my favorite card.
This video is very timely. I just picked up two 3080 cards (10GB and 12GB) for $340 each. Going to pair them with a set of 5800X CPUs I got for $150. They're going to replace my computers with a 1700X and 1080 Ti from 2017. I play mostly old games at 1080p, so no regrets here :)
Would a 1080/Ti be powerful enough to emulate the PS4/XB1 generation of games? I'm looking to build an emulation machine as my first PC. I have a PS5 and honestly nothing that has come out or is coming out has interested me.
Got my 3080 10GB at launch and upgraded my screen just after; didn't go 4K because the 3080 was never a true 4K GPU. Still got it paired with a 5800X3D, doing well, but I will replace it next year...
I paid $2300 Cdn for a new rtx 3080 10gb when crypto was hot. I only replaced it when it would run out of vram and the fps would drop like a rock on some player made arcade maps in Far Cry 5 at 4k. I then got a 3090 and the fps no longer tanked on those maps.
Try to get an 8GB 3070 Ti as an "is 8GB any good" data point. That way we'd know at what point 10GB helps A LOT - running out of VRAM simply feels different than the plain performance difference between models would show. As for "too big for its own good" - I like the Founders card sizes (even if you have to pad mod to get the G6X temps under control, along with a 0.8V undervolt on the core).
I think the EVGA 3080 10GB has a massive power limit and cooler, whereas the MSI Ventus is a base model with a lower power limit. I think if you compared EVGA to EVGA or Ventus to Ventus, the gap between the 12GB and 10GB would be bigger.
I am still running an RTX 3060 Ti 8GB, playing at 1440p, and I haven't played any game that really maxes it out (haven't touched RE4 yet, who knows). Most of the games I play tend to be either slightly older (still catching up on some big games now that they are discounted) or indie games, so I don't see myself needing a newer GPU for a couple more years tbh. Only thing..... I need a new shiny, so maybe there is a new GPU on the horizon just for that xD
Thank you for this. As an early adopter of the 3080 10GB, I had been looking for this for a while. I'm still glad I didn't choose the 3070 back then, but I still think the 3080 deserved more VRAM.
I've been saying VRAM isn't some massive concern and more often than not the core will be the limiting factor. I'm not saying you shouldn't get more for your money, but I'd rather have a GPU that's cheaper, uses less power, and is easier to cool than one with more VRAM than the core can really do anything with. Eventually this will change over time when the next generation of consoles comes out, which will likely have more VRAM available, so games will be made to utilize more. But for now, VRAM really isn't much of an issue.
20GB on my XFX 7900 XT Black Edition.... I'm good for a long time, coming from the GOAT GTX 1080 Ti 11GB. FYI, the 5000 series will stiff you on everything; they have no competition. I will say the move from my 1080 Ti to a 7900 XT was easy. I am impressed with AMD's drivers and features. You don't need FSR with Anti-Lag and Fluid Motion Frames. I turn that on with 250+ mods, ReShade, 4K texture packs, and ray tracing in Cyberpunk 2077 at 4K optimized high settings. I easily get 60+ fps with no lows below that, no ghosting, screen tears, or stuttering.
If you can, please test the RTX 3070 vs RTX 4060 Ti in 4K with and without DLSS in games where they can play at 30+ FPS or even close to 60. I would like to know how the new one maintains performance in older games like RDR2. TEST IT WITH OPTIMIZED SETTINGS, like Hardware Unboxed's or Digital Foundry's settings.
Framegen can also be a bit of a memory hog and probably would've allowed the 12GB model to stretch its legs more over the 10GB card with all the bells and whistles going.
I wanted a 12GB 3080 a couple of years ago, upgrading from a Vega 64, but it was way too expensive and I had to settle for a 2080 Ti instead and clock the balls out of it.
Looking back on generations prior to the 3080, it really ought to have had 16GB. The 680 had 2, the 780 had 3, the 980 had 4 and the 1080 had 8. You can also make the argument that the 2080 and the 4080 should have had 12GB and 20GB respectively. Edit: I am just looking at the trend of VRAM increases not the relative performance to VRAM or other factors as such.
The GTX 780 was really part of the same generation as the 680 though. Nvidia made so many strides with Kepler that they didn't feel the need to release the big-boy GPU right away. Also, it doesn't help that their flagship Kepler GPU was not ready until the end of 2012.
Team green could enable DLSS 3.x/frame gen on the 3000 series, but they won't even do that for gamers, because people would notice how little performance improved from the 3000 to the 4000 series.
I was playing No Man's Sky in VR yesterday and it used 14GB of VRAM on both my 7900 GRE system and my 4090 system. And that game is relatively lightweight compared to others. 12GB just ain't getting the job done in 2024, but 12 will always be better than 10🤷‍♂️
This was the second (then third) most powerful GPU of the past generation. Now it has the same or less memory than the current doped-up midrange 60-series models. Nvidia being Nvidia.
Also, I feel like something is off about these benchmarks. It makes no sense that the 10GB card is doing better when the 12GB should perform between a 10GB and a Ti.
I went with a 6800 XT because I like to mod games (especially Skyrim; it will use all 16GB) and I often use 14GB+ in quite a few other games maxed out at 1440p. I noticed AMD likes to allocate a lot more VRAM compared to Nvidia.
There are three reasons why we don't see a bigger impact:
1. We never left the cross-gen era, so 8GB total RAM is still the baseline.
2. It has to run on an Xbox Series S with its ****ed up 10GB.
3. Nvidia has 85% or so market share, so we never got a bunch of new, demanding games. I mean... Starfield is like... 15-year-old tech... AT BEST... Ratchet and Clank, The Last of Us, Alan Wake 2 and Cyberpunk are all PS4 games, with some new settings for whatever hardware/memory is available.
But we already saw limitations: 1% percentile aka min FPS are sometimes all over the place, textures not ready or in too low a resolution, or higher average FPS on a slower GPU (which is a sign it's not rendering the same image on both GPUs). To say it's not even worth a 3% higher price because average FPS are similar is complete nonsense.
Also, Nvidia has better on-the-fly memory compression, which they've been advertising since the Kepler series. I mean, starting with the RX 5000 series, AMD also jumped onto the same bandwagon.
People like to say AMD fine wine; now it's AMD fine milk. In recent titles RDNA2 has been underperforming by a tier or even two, even without hardware RT. Space Marine 2, God of War Ragnarok, Until Dawn, Silent Hill 2, etc...
@@ooringe378 Everybody who doesn't stick their head in Jensen's a$$ knows that you have to wait for driver updates to compare. But I guess "the more you buy, the more you slave" burned some braincells.
@@Uthleber AMD released said drivers, with exclusive optimization highlights for some of the said games. How many driver updates will it take? I run a 6900 XT as my main, BTW. If I criticize AMD, it doesn't mean I'm up Jensen's a-hole; exercise those braincells.
As a user of an 8GB 1070, I find 12GB on a much faster card an insult.
But it's still a much faster card 🤷🏻
The 10 series cards in general were all goated; no wonder everything after feels like it fell off.
nvidia is sadly very greedy with VRAM, they use it as a crutch to push buyers into higher tier cards if they want more vram.
If they went the AMD route and just gave people 16gb on mid-range cards already I think it would really benefit nvidia's reputation.
I mean, you got a good card. The current-gen and last-gen lower end and midrange were woefully short.
@@Xeonzs Nvidia doesn't give a single fuck about their reputation if it makes them money.
Silly youtuber. The RTX 30 series doesn't exist. The crypto bros made sure of that.
Cmon, good value in 2024 used. Got 3090. Love it. Fallout 4 with 600mods or so 😂
@@pozytywniezakrecony151 By sheer stroke of luck I scored _two_ Radeon VII cards for $200. They needed some TLC in the form of dusting and replacing fans, true (~$40 for two sets, just to be certain). I was more surprised that they still had their original warranty stickers intact. Though not as much as by the still-impressive idle temps.
Those same crypto bros still want $300 for a 3060. And unfortunately those listings crowd the marketplace and eBay algorithms like all those baity, gaspy videos clogged the YouTube algorithm for much of 2021 and '22.
@@pozytywniezakrecony151 That example is a CPU-heavy task, being 600 mods. The GPU is just chilling in such an old game.
@@forog1 Hmm, staying in the base it's 83% GPU and 28% i9 usage. Lots of mods with 4K textures etc.
Got my 3070 Founders at 300€ in May 2023! It's even a revision 2 and it had seen no use; second hand but basically new :)
absolutely. With my 4070ti which has 12GB, I regularly get it up to 11GB+ at 1440p with full RT applied. With VRAM you won't often notice average fps changes but you will notice stutters as it swaps between shared memory and GPU memory. Or with texture swaps in certain games (like Control).
At 1440p/Ultra/RT/FG you do need 12GB. That is why 4070/Ti has 12GB. Nvidia is smart and they gave players enough VRAM for their games, but not more than enough to use these so-called "midrange" GPUs for some other VRAM-heavy tasks (AI modelling, etc.).
@@stangamer1151 idk how u and 2 other people thought that nvidia did this because they are smart
@@tysopiccaso8711 They just know their customers. If you want more than just gaming - get 4090. This is a very smart selling strategy.
the 3080 12gb was always a faster GPU regardless of the vram.
You don't say.... it's almost like they have different specs, imagine that.
Marginally. Like core wise it's like a 3% difference.
i think the only reason it is running marginally worse in some of these tests must be due to gpu or mem temps
3080 ftw3 obviously is a higher end oc'd model
ventus is one of the worst models usually
The 3080 12GB is a 3080 Ti with further cores disabled but keeps the rest of its features, including the 384-bit bus width.
Now do one about the RTX 2060 6GB and RTX 2060 12gb 😳
12GB one is a hidden gem. 90-95% performance of RTX 3060 for 2/3 the price (and it's quite a bit younger than normal 2060 which makes the VRAM chips dying issue virtually non-existent)
@@GewelReal The extra VRAM also means it can use frame generation. Older cards with extra VRAM are still capable as long as you don't mind DLSS + frame generation.
Seconded, this is needed
Also the RTX 2060 12Gb had more cores than the regular 2060.
@@TheKazragore yeah, it's a 2060Super with a bootleg memory bus.
3080 10GB owner here, and the card is still delivering for me at 1440p. I also split my play time between older games from before 2020 and newer games up to 2024. The first tier of games I can run maxed out at high refresh. For the newer ones I may have to kick on DLSS, but I'm totally fine with that. Still a great experience. I'll see next year if there's anything worth upgrading to, but I typically look for 100% more performance to justify upgrading, and anything at that level will cost a kidney.
Agreed. If the 8800XT looks like a $600 GPU then it might interest me. But then again, that's gonna be 5070 money. Maybe wait for the 5070 Ti 18gb?
@@AndyViant if it gives me 100% uplift over the 3080 and at least 16gb of vram for $600 I'd get it day 1 but that's pretty much 4090 performance. If AMD can launch a 4090 performance level 800 tier card for significantly cheaper than the 5080 then they'll crush the upper mid-tier. Let's see.
@@Paulie8K if AMD could release a 4090 tier card they wouldn't be charging $600. Maybe $1600 but not $600
The 3080-20GB launch being canceled was disappointing :(
And the fact that the Chinese yet again modded the 3080 to have 20GB sure tells you something.
I find it funny that NVIDIA purposely designs them with just enough VRAM, to the point that when they panic, the design only allows for double the amount: so 10GB (not enough) goes to 20GB (plenty), 8GB to 16GB, 6GB to 12GB.
@@Javier64691 why do you find it funny? Its depressing :(
@@Javier64691 TBF, that's kinda how the memory chips work, increasing size in powers of 2 until the introduction of non-binary memory.
You can always get someone who knows how to solder to upgrade it, by removing the 10x 1GB memory modules and soldering on 10x 2GB memory modules to get 20GB of VRAM.
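That doubling constraint falls straight out of the arithmetic: each GDDR6X package has a 32-bit interface, so the bus width fixes the chip count, and the chip density available at the time (1GB or 2GB per package) fixes the total. A minimal sketch of that math (function and variable names are just illustrative):

```python
# Capacity options = (bus width / 32) chips, each holding 1GB or 2GB,
# since every GDDR6X package exposes a 32-bit interface.
def capacity_options(bus_width_bits, chip_sizes_gb=(1, 2)):
    chips = bus_width_bits // 32
    return {size: chips * size for size in chip_sizes_gb}

print(capacity_options(320))  # 3080 10GB board -> {1: 10, 2: 20}: only 10GB or 20GB possible
print(capacity_options(384))  # 3080 12GB / 3080 Ti board -> {1: 12, 2: 24}
print(capacity_options(256))  # 3070-class board -> {1: 8, 2: 16}
```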
Top end cards 80 series should have had a minimum of 16GB from the 4x series onwards.
@@Kenoooobiiiii Meh. It hurt Intel, but what it screwed over is the consumer. A lot of games are still single-threaded, or threaded across maybe a couple of cores 😅
I think because the 20 series had no increase in VRAM capacity, Nvidia's VRAM capacity is behind by a generation. If they had boosted the 2080 and 2080 Ti to 10GB and 12GB, and then the 3070 and 3080 to 12GB and 16GB, we would have been fine.
@@mttrashcan-bg1ro :) Yes, that's when the rot set in. The 10 series was perhaps the last update I was actually excited about. Since then in terms of performance uplift and RAM each generation has been more expensive and more underwhelming than the last.
@@Kenoooobiiiii Intel only sold quad cores for that long because they had zero competitors, and AMD made a fake 8-core CPU in 2011 until Ryzen came out. If anything, AMD is the one that actually made a change in the market.
Even if equal in performance any of the 12gb versions found used on ebay should at least have less wear and tear from mining since they were released later when it became less profitable.
LHR (Low Hash Rate) versions of the 10GB card exist, meaning they're not conducive to mining. Although you could unlock it, it was finicky.
Wear and tear come from poor conditions during usage and a lack of maintenance. Such electronics don't really have a finite "resource" in the usual sense given proper care. So it heavily depends on how well the previous owner wanted the card to look/perform when they finally decided to get rid of it.
They are doing the same thing with the 5070, it will launch with 12GB and then an 18GB version will be released after the idiots…I mean customers spend 700 on a 12 gig card yet again
Question.. what game right now uses over 12GB of VRAM at 1080p-1440p? I have never seen anything over 10GB without DLSS. Hell, I still have an 8GB 3060 Ti and I do struggle maxing out some games, but for the most part, since I only run 1440p, I really wonder what game will use more than 12 if I'm doing semi-OK with 8.
@@ivann7214 Do you play only eSports? Do you know the RE Engine and some Unreal Engine 5 titles?
@@ivann7214 If you mod any game. Squad, Far Cry 6 with HD textures and RT, Avatar. The list is pretty small, but we are getting there.
@@ivann7214 Civ 6 can use over 14GB or even fill 16GB.
@@ivann7214 Ratchet & Clank does, for example. Playing it at 1440p, native res, high settings, it sits at 11.3GB of VRAM for me, though I use an RX 7700 XT and AMD tends to utilize more VRAM than Nvidia for some reason, probably the way the card handles the image rendering, because it is different than for Nvidia.
I have an RTX 3060 12GB, and I'd quite happily swap it for the 10GB version of the 3080. VRAM isn't the be-all and end-all of what a graphics card can do; it's just one of many factors.
More and more people are falling back into the mindset that a game isn't worth playing if you can't run it at, or very near, maximum settings. On the flip side of that, more game companies are making games for PC gamers that mostly don't exist, targeting the highest end and relying on upscaling and frame generation to fill in the gaps.
The GTX 970 lasted me near a decade of use, I expect the 3060 will do the same. Don't be afraid of lowering the settings.
people could learn a lot from using a lower-end rig for a period of time. i used a 12100f + rx 6600 after owning a 5800x3d + 3080 for a good while, and you start to realise at some point that games are totally playable when they're not totally maxed out!
"More and more people are falling back into the the mindset of a game isn't worth playing if you can't run it at, or very near, maximum settings" If we were talking about budget cards, that opinion would hold weight. We're not. These are 80 class cards. Not low end.
@@Kenoooobiiiii a game looking good at max and the scaling ending there < a game looking good at high/medium and having the scaling go higher for future hardware
@@Kenoooobiiiii The 3080 is now four years old and has been replaced as the latest and greatest, not only by the 12GB version but by a whole new generation of cards. Despite coming out two years later, that 12GB version is still four-year-old tech.
Is it then a surprise that they can't run games at 4k High? Perhaps it's time to lower the settings a bit.
Right. I had a 2060S for 5 years and I was able to play 4K low on recent titles and still wasn't too upset. I now have a 5700X3D/4080S; yes it has 16GB, but I'm not stressing lol. I can see this going another 7-10 years. At this point I'm maxing everything out at 4K, and if I gotta turn things down in the future, oh well - you can't always have the fastest, especially if you aren't spending the cash. When I saw my 3600/2060S showing its age, I just tried my best to get the funds to upgrade, simple as that.
As a guy who uses the greatest price-to-performance card of all time, the fifty-seven hundred XT, I feel attacked.
Why u not type "RX 5700XT" 🤣?
@@eda2000-r8h I too run a fifty seven hundred ex-tee
I too use an -Are Ex Fifty seven Hundred Ex-Tee.😂😂😂
I love my Arr Ex Five Seven Zero Zero Ex Tee Graphics Processing Unit from Advanced Micro Devices Incorporated
@@quinncamargo My home theater personal computer is equipped with an Ay Em Dee Ryzen Five Thirty-six hundred ex and an Ay Em Dee Ahr Ex fifty-seven hundred ex tee. But it has only sixteen gigabytes of dee-dee-ahr-four random access memory.
How do you see the 10GB version outperform the 12GB version and not go: "hmm.... That's actually impossible, since the other card is better in every single metric."
The 10gb EVGA card actually has higher base and boost clocks than the RTX 3080 12gb despite having fewer cores. It varies from game to game but some engines prefer faster clock speeds to more cores.
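To put rough numbers on that clocks-vs-cores trade-off, here is a hedged sketch of theoretical FP32 throughput (Ampere executes two FP32 ops per core per clock). The boost clocks below are assumptions for an FTW3-class factory OC versus a reference-clocked 12GB card, not measured in-game speeds:

```python
# Theoretical FP32 throughput (TFLOPS) = CUDA cores * 2 ops/clock * boost clock (GHz) / 1000
def fp32_tflops(cuda_cores, boost_clock_ghz):
    return cuda_cores * 2 * boost_clock_ghz / 1000

ftw3_10gb = fp32_tflops(8704, 1.80)    # assumed ~1800 MHz factory-OC boost
ventus_12gb = fp32_tflops(8960, 1.71)  # assumed ~1710 MHz reference-spec boost
print(f"FTW3 10GB: {ftw3_10gb:.1f} TFLOPS, Ventus 12GB: {ventus_12gb:.1f} TFLOPS")
# ~31.3 vs ~30.6 -- on paper the factory OC can offset the extra cores;
# real-world clocks still depend on power limit and temperature.
```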
His problem is using the 7500F CPU. It can't keep up. That simple. That's why proper bench testing of GPUs is done with the 7800X3D.
I am thinking it's possible that it's down to the cooler on the 12GB card, or even the contact between the GPU and the cooler. I had that issue with my new Zotac 4070 Ti Super 16GB; the hot spot was high. Took the cooler off, applied Thermal Grizzly, and dropped the temps 12C; not only that, but the fan speed was also lower. So cooler running and quieter, win-win in my books. Still, it was a brand new card; I shouldn't really have to do that.
@@markbrettnell3503 Try a 12th, 13th or 14th gen 700K or 900K. The stutter is from chiplets. Monolithic Intels are superior.
@@South_0f_Heaven_ sources?
The only company that has shown us an affordable 16GB video card has been Intel; I've seen the new A770 for $269 on Newegg.
B770 is gonna annihilate it in a couple of weeks
I'm perfectly happy with my 6800 XT compared to anything from Nvidia. I got it at an insane price, and frankly I don't want to support Nvidia at this point.
AYYY fellow 6800xt enjoyer 🤝
I'm using a 6950 XT and unless I turn on Nvidia specific Ray tracing it seems to be damn near as good
In PCVR, 16GB is the only way to go, especially when you are trying to hit native output on high-res headsets. I have a Reverb G2, which basically requires 6000x3000 to get proper 1:1 output... Even when games use dynamic or fixed resolution scaling I often hit 13.5GB or above. And it's so nice to not have any stutters in VR. It's a shame high-performing GPUs like these have 10 and 12GB.
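For anyone wondering where a number like 6000x3000 comes from, here is a minimal sketch of the usual VR render-target arithmetic, assuming the Reverb G2's 2160x2160-per-eye panels and the roughly 1.4x per-axis supersampling runtimes typically request to compensate for lens distortion (the exact factor varies by headset and runtime, so treat it as an assumption):

```python
# Per-eye render target = panel resolution * per-axis supersampling factor;
# the combined target is two eyes rendered side by side.
panel_w, panel_h = 2160, 2160  # Reverb G2 per-eye panel resolution
supersample = 1.4              # assumed per-axis factor for lens-distortion compensation
eye_w, eye_h = round(panel_w * supersample), round(panel_h * supersample)
print(f"per eye: {eye_w}x{eye_h}, combined: {2 * eye_w}x{eye_h}")
# per eye: 3024x3024, combined: 6048x3024 -- right around the 6000x3000 mentioned above
```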
Was about to say that. PCVR ATM is really hard to get into. I got an A770 LE just for the VRAM/cost years ago, thinking it was temporary.
It's 2 years later and there's STILL barely any >12gb cards under $600 or so, especially Nvidia cards (NVENC VR).
Waiting on RX8000.
@@koolkidgamr6260 AMD is a good alternative. I can play Cyberpunk in VR with a 6800 XT on my Quest 2, with head tracking and all, and I'd recommend the 7800 XT for the improved ray tracing, or wait for the 8000 series. Nvidia is stagnating rn because of the lack of competition over the last 2 generations.
Another awesome use case for 16gb is playing on a TV. My partner and I played through RE4 remake at 4K with maxed out settings (with RT) and FSR 2 quality, and it was a really great experience. 4K gaming is doable even on “1440p cards” if you’re smart with your settings and use some light upscaling
@@bearpuns5910 RE4 runs perfectly fine on my 4070 Ti at 4K/maxed out (incl. 8GB textures and RT) + DLSS Quality. No stutters, no crashes, no noticeable texture pop-in.
You do not need 16GB for this game.
And with DLSS RE4 looks so much better than at native or with FSR. I would choose DLSS over additional VRAM AMD offers any day.
Both were available when I got my 3080. I never thought the 2GB made the difference, but the bus width was more attractive... $100 difference between a Zotac 10GB and an MSI 12GB... seemed like an easy choice in Oct 2022.
Depends on what you mean by redeemed. I paid $630 for my RTX 3080 FE in January 2021 (I had a Best Buy discount), and it's served me very well for almost 4 years now. There's almost no chance I would have gotten a 12GB model anywhere near that price by the time it came out, and I haven't been limited by the 10GB since I mainly play Elden Ring. That said, there's almost no denying that you'd see a tangible benefit in many games from the extra 2GB using high resolutions and more demanding settings.
I snagged my ftw3 3080 12 gig right after the mining boom went bust and I've been using it to play at 4k and run an LLM on my machine. Finally got a game, Space Marine 2, where I had to take a few settings off ultra to get a stable 60 fps, otherwise it was a solid 48. It's a beast of a card that I plan to have for a few more years, and overclocking is still on the table to boot. I can easily get it to match 3080ti performance.
I'm happy with my 10GB version.
But it literally could use 16GB like the 6800 XT, and yes, I've seen that much used. Rust from 2013 uses more than 12GB, and not even on ultra.
@@puffyips use or allocate ? Check carefully.
@@DragonOfTheMortalKombat This. Most people freak out about allocated memory because it can max out the buffer. But it's really the utilization that matters.
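If you want to sanity-check this on an Nvidia card, one rough option is polling nvidia-smi while you play. Note that what it reports is memory the driver has committed, which is still closer to "allocated" than to the truly active working set, so treat it as an upper bound rather than a utilization figure. A minimal sketch (assumes nvidia-smi is on your PATH and reads only the first GPU):

```python
import subprocess
import time

# Log driver-reported VRAM usage once per second while you play (Ctrl+C to stop).
while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    used, total = (int(v) for v in out.splitlines()[0].split(","))
    print(f"{used} MiB / {total} MiB committed")
    time.sleep(1)
```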
@@puffyips This is what happens when you watch numbers that you don't understand instead of playing the game. Your brain starts to rot and you hallucinate a story. Many such cases!
@@snoweh1 Yes, many gamers spend way too much time looking at performance numbers and far too little just enjoying games. That said, my 16GB 6800XT never causes textures to randomly disappear or characters to look like something out of Minecraft. I just enjoy the game maxed out and don't ever worry about such things.
It's an Ngreedia giving us scraps moment. They wanted you to buy the 3090 if you wanted more VRAM. Don't expect anything less than such tactics. YES some of us DO need that VRAM: non-gaming workloads.
There are always gives and takes with GPUs. AMD skimps on hardware also: they skimped on dedicated RT and AI cores, basic hardware that an RTX 2060 and an Arc A310 have.
Of the past 12 games that TechPowerUp has done performance reviews on, only one uses over 11GB at 4K ultra (it uses 12GB at 4K ultra). Six of them have RT always on. Eight run noticeably better on Nvidia GPUs than their usually-equal AMD counterparts, like the 4070S vs the GRE; in two they tie, and two slightly favor AMD. All had DLSS Quality looking better than "native" TAA.
Both companies skimp on something. Just hope what your card skimped on isn't what modern games are being developed around.
I use a 5k monitor and could definitely use another 4gb of vram. (3080 12gb owner) I bought it over the 10gb version back in the day because I figured I’d need the extra vram and I’d already like more. There are games like ratchet and clank that run fine at 5k with medium textures but if I turn it up to high I max out the vram.
@@Sp3cialk304 What bs. Nvidia is the biggest scam for gaming and better stfu. STOCK 7900gre is enough to beat 4070Shit and OC'ed gre is nearly as fast as 7900xt which easily beats 4070tiShit.
@@Sp3cialk304 The fact that AMD is this late in the game without a hardware-accelerated AI upscaler like DLSS is a little sad. Games are becoming more reliant on upscaling in general, so not having DLSS hurts.
I actually think it’s smart that AMD doesn’t focus as much on RT. Path-tracing is a nice-to-have, but it’s not required to get a good experience. AMD GPUs are fine enough for an rt effect or two
I love my 3080 12Gb, was a huge upgrade over my old GTX 1080
I'm using an RTX 3080. It's not as bad as it sounds. All you need to do is switch to FSR + frame generation. Sure, it's not as sharp as DLSS, but it helps boost the fps by almost double, especially in 4K. Thanks though, Iceberg, for the detailed review.
When I used to work for CEX I upgraded from my 8GB RTX 3070 to a 12GB 3080, just because the former was struggling at 4K ultra due to its limited VRAM...
Funnily enough, I am now finding certain titles where I have to drop to 1440p at ultra due to 12GB not being enough LOL
Performance wise though it definitely does hold up
If you're gaming on a 4K display it would probably look better to game at 1080p instead of 1440p because of integer scaling: 1080p divides evenly into 4K (exactly 2x per axis, 4x the pixels), whereas 1440p needs a non-integer 1.5x scale.
@@randysalsman6992 No way. 1440p looks clearly better
Thanks to my 3090 I'm not worried about VRAM.........
same here on my 7900 XTX
I should have picked one up, the 40 series are all too big to fit in my case (ITX life)
Yeah, but even with 24GB of VRAM the 3090 only performs like 10% faster than the 3080 with 10GB.
TBH, what I'm getting from this is that newer dev companies are being sloppier with optimization and using excess assets for minimal fidelity gains while hiding behind "building for the future" promises. While games like Black Myth: Wukong and supposedly Star Wars Outlaws are heralded as intense visual experiences, I find it difficult to argue why they are more demanding to run than a game like Cyberpunk 2077, which still has amazing visuals and beautiful ray tracing in an open-world environment, and that's a game that came out almost 4 years ago. On a 3080 10GB, Cyberpunk can run at max settings at 1080p close to 100 FPS without ray tracing, and still around 60 FPS with custom ray tracing settings, depending on the rest of your hardware.
While I would give Wukong a pass, as it was developed by a smaller company from China that's more accustomed to mobile games, Star Wars Outlaws came from one of the biggest game development companies in the industry (despite its shortcomings). After a little digging around, it really just seems like dev companies are spreading their teams too thin across story, mechanics, and setting development, not investing enough in quality and optimization control, and trying to shove the responsibility off onto hardware improvements.
With Epic releasing Unreal 5.5 for devs with much better shader optimizations, I'm hoping the industry starts moving toward third-party game engines rather than forcing devs to do virtually everything from the ground up. I am a bit worried that Epic has basically all the power in the premium game engine space with these recent developments, but if Valve and Ubisoft can shift gears I wouldn't be surprised if they came up from the dust as well.
The 3080 should have had 20GB. The order should have been: RTX 3050 8GB, RTX 3060 12GB, RTX 3070 16GB, RTX 3080 20GB, and RTX 3090 24GB.
I went from a GTX 1080 to an RX 6800XT just because it was a more affordable way of getting 16GB VRAM - I want something a little more future proof. I bought it a year ago for £450 and it is still holding up really well for 1440p gaming.
I just got my hands on a Vega 64, upgrading from an R9 Nano. 8 gigs is definitely still enough to play games made by devs that actually care about players
You made the cards CPU limited, you'd see more of a difference with a high end CPU
The fact is that the majority of people still play at 1080p. The jump to 4K is a huge markup in hardware. As for ray tracing, if you actually play the game, it's really not noticeable. Everyone still-frames 4K with ray tracing and says "Ooooh, look how pretty," but if you actually play, it really makes little to no difference. The push for 4K and ray tracing is a corporate agenda to make us spend more, when in reality our hardware is still plenty enjoyable for the games we play.
I feel like more companies should "unlaunch" bad products like Concord.
Best use cases for these types of cards: gaming at 1440p, AI, 3D rendering for small, personal, or fragmented projects, testing, and future-proofing for high-end media.
Bitmining and other misc.
In addition to my gaming hobby I'm a 3D modeler/renderer, and this summer I bought, at a bargain price, a build with an i9 9900K and an RTX 3090. While I know this is really dated hardware, I can't complain about anything: I can play everything. In some cases I use Lossless Scaling to generate frames, but in general the 24GB of VRAM lets me push post-processing and resources a lot higher. In Blender and 3ds Max (my favorite 3D suites) the 24GB of VRAM makes me feel like I'm breathing again after a long immersion in rendering... I'm coming from a 6GB 2060. I honestly think you don't need to buy the newest card; you should buy what works for your needs and purposes. If I needed a card only for gaming, I think I would have bought an RTX 4070 or a 3080.
This. Buy a tool that serves in such a way that you never have to compromise your needs to accommodate the limitations of your tool.
I've got the 3080 12GB and I've had it for about a year now. I got it at a pretty good price and I couldn't be happier with it; I've never had access to this much power before and it's mind-blowing.
AC is developed on Ubisoft Anvil if I'm not mistaken. Snowdrop is used by Massive.
The significant difference in some games that don't even max out the 10GB of VRAM is due to the increase in bandwidth (320-bit vs 384-bit bus). The difference will only increase from here; in 3 years we'll easily see a 15-20% difference in lots of games at 1440p.
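For anyone curious where that gap comes from, here's a quick back-of-the-envelope calc, assuming both 3080 variants use 19 Gbps GDDR6X (which matches the reference specs as I understand them):

```python
# Rough peak-bandwidth comparison behind the comment above.
# Assumption: both cards run 19 Gbps GDDR6X; only the bus width differs.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s = (bus width in bytes) * per-pin data rate."""
    return (bus_width_bits / 8) * data_rate_gbps

bw_10gb = bandwidth_gb_s(320, 19.0)   # ~760 GB/s on the 3080 10GB
bw_12gb = bandwidth_gb_s(384, 19.0)   # ~912 GB/s on the 3080 12GB
print(f"{bw_10gb:.0f} GB/s vs {bw_12gb:.0f} GB/s "
      f"(~{(bw_12gb / bw_10gb - 1) * 100:.0f}% more on the 12GB card)")
```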
The GPU performance uplift has to compensate for your CPU's single-core performance.
AC shadows uses Anvil not Snowdrop.
Bought a MSI Ventus 3x 3080 10GB in early January, 2021, for $1100. I mined Eth on it for a month, making around $500, and sold it the next month for $2200. Crazy times. Ended up getting 3 5700XTs with the money, giving me 165% of the hashrate and around 50W less power consumption than the 3080.
I'm considering the 3080 12GB as a possible upgrade to my main gaming PC in a few months. I'm currently using an RX6600 Silicon Lottery Jackpot Edition. I either sold off or used in a PC flip, all of my mining GPUs. I really don't care about having top notch performance. I've done the whole early adopter/premium gaming PC thing before and I prefer to just build budget rigs these days. I've been a PC gamer for 35 years so even low settings in modern AAA games look fantastic to me.
Generally fine with the GPU I have but I game at 1440p and some more VRAM would be useful.
As for these results, all of these games can run fine on an 8GB GPU at 1440p. But I don't expect it will be long before 8GB isn't enough for 1440p.
Any reason you don’t use a 7800x3d or anything of that class? How does the 7500f hold up for cpu bottle necks?
7800x3d is 4x the price of the 7500f for only 25% more performance. 7500f is similar to 5800x3d
So, I’ve been struggling with this for a while. When I started out, my schtick was that I tested old GPUs on cheap modern PC hardware. The original “reasonably priced gaming PC” got upgraded to the “moderately priced gaming PC” when I started reviewing better GPUs, and now I’m at the point where I’m kinda ready to test brand new GPUs, but I also don’t wanna let down the people who are used to me having a humble test rig.
To answer your question, the 7500F holds up really well for the price. I’ll be reviewing the 7800X3D soon, and I’ll have the 7500F figures updated for comparison.
@@MrIvonJr lmao, the 7500f is NOWHERE near the 5800x3d
@@GrumpyWolfTech In gaming, yes it is. I'd recommend looking into benchmarks; you'd be surprised how much of a jump they made from AM4 to AM5.
The 7500F is pretty close to the 5800X3D. Unless you have a 4080 or better, the 7500F is fine. The AM5 price/performance king.
my god I remember owning a used 3090 FE for about a year and seeing the power spike up to 400 watts in-game was crazy. Nvidia really did let these cards loose on the samsung node.
A few reasons why I replaced mine with a 4080S FE: the GDDR6X on the backplate side of the FE 3090 was cooking itself to death even with the card undervolted and below 65C (mem temps were regularly reaching 88-95C), and the coil whine was so unbearable that it stopped me from using my studio monitors for gaming, because it was leaking into the speakers even through a DAC.
The 40 series cooler design blows my old FE away with how cool and quiet it is, while sipping less power (200 watts on average in 1440p maxed vs 400 watts in BG3). Fans were quieter because the memory wasn't hitting 100C in extended sessions, and the best part, no coil whine!
Mine wasn't the best undervolter, but the V/F curve on my 3090 FE at the time was 1785MHz at 800mV and it ran flawlessly until I got my hands on a 4080S FE. OC doesn't scale well with these cards unless you start pushing into 450W+ territory, so keeping clocks locked at around 1800-1860MHz was the sweet spot between temps, power, and not hitting the power limit. That cuts out the thing that plagues the 3080/3090 series as a whole unless you flash a newer vBIOS: hitting the power limit. Hitting the power limit means the card slows down to get back under it, and you feel that in-game as a stutter because the card downclocks, i.e. the 3080 at 320W and the 3090 at 350W with the power slider at 100% in Afterburner.
I distinctly remember that the boost algorithm on the 30 series works like this: once the card hits 52C, if there is power headroom and cooling headroom, it will add another 15MHz to the boost clock as long as it doesn't hit 62-65C. So if you set a V/F curve point of 1800MHz at 800mV and the card hits 52C, it will go to 1815MHz if the card has power headroom.
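Purely as an illustration, here's a toy sketch of the behaviour that comment describes; the 52C trigger, the +15MHz step, and the 62-65C ceiling come from the comment above, not from any Nvidia documentation:

```python
# Toy sketch of the boost behaviour described in the comment above. The thresholds and
# the +15MHz step are the commenter's recollection, not official Nvidia documentation.
def effective_clock(curve_point_mhz: int, temp_c: float, power_headroom: bool) -> int:
    """Clock the card might settle at, given a locked V/F curve point."""
    if power_headroom and 52 <= temp_c < 62:
        return curve_point_mhz + 15   # one extra boost bin, per the description above
    return curve_point_mhz            # otherwise it just holds the curve point

print(effective_clock(1800, 55, True))   # 1815 -- the example given in the comment
print(effective_clock(1800, 70, True))   # 1800 -- too hot, no extra bin
```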
Still rocking a TUF OC 10GB card with a 4K120 monitor.
I've been starting to stress that 10gb on the 3080 isn't enough for ultra 1440p anymore.
Although I generally don't turn up the RT much. The one game I play that uses it heavily, Cyberpunk, leans on it pretty hard, but you can dial it back and get a solid 80fps at 1440p with reduced ray tracing, or turn on DLSS.
Much as I've got money burning a hole in my pocket I honestly don't see an urgent need to upgrade.
If you mod textures at all in games that extra 2gb is gold. Textures are the biggest visual difference in most games, and I refuse to lower them.
Been using the 3060 12GB for about a year now and it still slaps in AAA games at 4K, even though it's technically a 1080p card. I love it.
I bought a 6800 XT instead of a 3070 back then because I needed more VRAM. Today I'm running into games that use 16GB at 1440p on high settings (some COD MWIII maps, for example, have so many textures that they fill it).
Nvidia doesn't want cards that have the GPU power to keep going to actually be able to keep going.
lacking VRAM is a killer, no one wants to play with stutters at key moments
I have one of these, bought at the end of the crypto boom when a UK retailer finally seemed to have them at sensible prices again. It still plays everything in my Steam library just fine, and considering I'm just finishing a replay of Saints Row 3, it probably will for a while.
At 14:36 he says he didn't notice any difference. He failed to notice the artifacts shaped like a child on the 10 GB card.
I remember when Jensen pulled these out of his oven and I laughed hysterically when I saw that the RTX 3080 only had 10GB of VRAM. Then Red October came and I grabbed an RX 6800 XT when I saw the performance and the 16GB of VRAM. You see, back in the day, during the first mining craze, cards like the GTX 1060 and RX 580 were near $800 CAD.
Then something inexplicable happened... Out of nowhere, Newegg suddenly had a bunch of brand-new Sapphire R9 Furies. These cards had been released two years prior and I was admittedly shocked because they were less than $400 CAD. I was hesitant because I couldn't remember how the R9 Fury performed, so I started reading reviews and discovered that the R9 Fury, despite being two years old, was faster than the RX 580.
I quickly discovered that the R9 Fury was a monster when it was released, faster than the GTX 980. The card had so much GPU horsepower at the time that it could literally play anything at 1440p Ultra at 70+FPS. Unfortunately, ATi's experiment with HBM gave the R9 Fury an Achilles' heel, the same Achilles' heel that Ampere cards have.
nVIDIA made the choice to use more expensive GDDR6X VRAM which meant that they had to give less of it on their GeForce cards to be even somewhat competitive with Radeon. nVIDIA also knew that most gamers aren't smart enough (or just too lazy) to actually research their purchases and just tend to buy nVIDIA by default. Admittedly, nVIDIA was 100% correct in their assessment so they didn't worry too much about it.
Just like the aforementioned R9 Fury, having fewer GB of more expensive higher-speed VRAM instead of more GB of more economical VRAM that is slower was proven to be a mistake on the R9 Fury and will prove the same on Ampere cards. Some people like to talk about how "superior" GDDR6X is compared to GDDR6 but it just hasn't shown to make any real difference. If you want to talk about superior VRAM, HBM was in a league of its own with a colossal 4096-bit bus width. Compare that to the 384-bit bus width found on the RTX 4090 and RX 7900 XTX cards of today.
I am willing to bet that if you were to take a pair of RX 580s and somehow graft the 16GB of GDDR5 that those two cards have onto something like an RTX 3070 Ti, those 16GB of GDDR5 would out-perform the 8GB of GDDR6X in modern titles and give the RTX 3070 Ti a new lease on life. Sure, the R9 Fury's HBM was impressive, especially when it could run Unigine Superposition at 4K Optimised despite a warning that it didn't have enough VRAM to run the test correctly. Unigine clearly hadn't considered that 4096MB of VRAM on a 4096-bit bus could do things that 4GB had no business being able to do, but despite this, HBM isn't magic and could MAYBE behave like 6GB of GDDR5 because of its incredible speed. This means that 8GB of GDDR5 was better than 4GB of HBM for gaming.
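To put some rough numbers on the bus-width talk above (specs quoted from memory, so treat them as approximate), peak bandwidth is just the bus width in bytes times the per-pin data rate, which is why a narrow-but-fast bus can still out-run a monster-wide one:

```python
# Quick sanity check: peak bandwidth depends on data rate as much as bus width.
# Card specs below are quoted from memory and should be treated as approximate.
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return (bus_bits / 8) * gbps_per_pin

cards = {
    "R9 Fury (HBM, 4096-bit @ ~1 Gbps)":      bandwidth_gb_s(4096, 1.0),   # ~512 GB/s
    "RX 580 (GDDR5, 256-bit @ 8 Gbps)":       bandwidth_gb_s(256, 8.0),    # ~256 GB/s
    "RTX 4090 (GDDR6X, 384-bit @ 21 Gbps)":   bandwidth_gb_s(384, 21.0),   # ~1008 GB/s
    "RX 7900 XTX (GDDR6, 384-bit @ 20 Gbps)": bandwidth_gb_s(384, 20.0),   # ~960 GB/s
}
for name, bw in cards.items():
    print(f"{name}: {bw:.0f} GB/s")
```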
This myth that a lot of GeForce owners fell for (and they do seem to fall for a lot of myths) is that GDDR6X is somehow going to make your GeForce cards superior to a Radeon that "only has the inferior GDDR6". I'm pretty sure that the truth is more like AMD probably bought some GDDR6X from Micron and sent it to ATi in Markham to play with. After considerable testing, ATi would discover that the difference in performance and efficiency between GDDR6 and GDDR6X was minimal at best and not worth the extra cost. ATi knows its market and Radeon owners aren't dazzled by frills, we want maximum performance-per-dollar (which, really, ANY user should want).
Micron is the exclusive manufacturer of GDDR6X (and probably GDDR7X) while standard GDDR6 is made by Micron, Samsung and SK Hynix. VRAM is a commodity and the more competition you have in the marketplace, the better the price will be. Since Micron has no competition for X-rated VRAM, their price remains high. Since GeForce owners have no issue getting fleeced for useless frills, nVIDIA, also knowing their market like ATi does, chose to get more profit from the use of GDDR6X and who can blame them?
The proof is in the pudding however as the use of GDDR6X has not translated into any real performance advantages for GeForce cards over their Radeon rivals. Let's take a look at the rankings, shall we?:
1st Place - GeForce RTX 4090 with GDDR6X
2nd Place - Radeon RX 7900 XTX with GDDR6
3rd Place - GeForce RTX 4080 Super with GDDR6X
4th Place - Radeon RX 7900 XT with GDDR6
5th Place - GeForce RTX 4070 Ti Super with GDDR6X
6th Place - Radeon RX 7900 GRE with GDDR6
7th Place - GeForce RTX 4070 Super with GDDR6X
8th Place - Radeon RX 7800 XT with GDDR6
9th Place - Radeon RX 7700 XT with GDDR6
10th Place - GeForce RTX 4060 Ti with GDDR6
We can see from this chart that it has been an almost perfect competition stack going back and forth from place to place with both red and green having five of the top-ten. It's also interesting to note that while nVIDIA does have the most performant card in the top-10 with the RTX 4090, they also have the least performant card in the top-ten with the RTX 4060 Ti. It's also interesting to note that the RTX 4070 Super is faster than the RX 7800 XT. This is because the RX 7800 XT is faster than the original RTX 4070 and while the RTX 4070 Super uses GDDR6X VRAM, so too did the RTX 4070.
All of this just goes to show you that having fewer GB of faster X-rated VRAM doesn't translate into any real performance advantage but having less of it can (and will) become a serious hindrance to what your card will be able to achieve in the future. People like to talk about bottlenecks and this is no different. My R9 Fury was held back by its lack of VRAM and its incredible GPU horsepower (for the time) was relegated to high-FPS 1080p gaming far too soon. I bought it because it was half the price of the RX 580 during the first mining craze (because it wasn't efficient enough to mine with) and so I could forgive myself for taking a 4GB card in 2017. After all, in 2015, when the R9 Fury came out, 6GB was considered to be high-end, similar to how 16GB is looked at today.
However, I feel sorry for the dumb schmucks who bought the RTX 3070 Ti only to discover shortly after that they had only purchased an overpriced high-FPS 1080p card while those who bought the 3070 Ti's rival, the RX 6800, are still happily gaming away at 1440p.
Brah 😲
4080 is faster than the 7900xtx at most things
@@mitsuhh That's false. According to TechPowerUp (which knows more than both of us put together), the XTX is slightly faster overall. This means that the XTX is faster than the 4080 in most things. I don't know where you get your information but it sounds like you've been reading LoserBenchmark (UserBenchmark).
the rtx 3080 12gb should have been the only 3080
Why? It's just added cost for no benefit.
@@redline589 I meant for the same price, like the 3070 ti also should have more than 8gb, but we know it’s nvidia’s way to make their cards obsolete faster
Considering it was the flagship that was compared performance-wise to the 2080Ti, it's ironic it had 1gb less and especially ironic that the 3080 10g runs some games worse than the 3060 12gb.
I bought an RTX 3080 Strix brand new for $620 just two months back, thinking it was decent since I didn't have many choices with the 40 series being so pricey. Yes, a once highly regarded card now does somewhat OK at 1440p and decent at 1080p. I think I'm OK with it honestly, except for the VRAM.
I never counted it as part of the 30 series when I have the 3080Ti with 12GB
I love my 3080 12GB, and with my 5600X I haven't really had any issues outside of the Phantom Liberty update to Cyberpunk 2077 being a lot harder on my CPU. I feel like I'll be keeping this GPU for a few more years, but I'll probably upgrade to a 5700X3D for GTA VI.
Please do rtx 2080 TI vs rtx 3070 to see if the extra vram helps
Price. 10GB models were significantly cheaper than the newer 4070/70 Ti at the end of the crypto boom. Getting a used one in good condition at the beginning of 2023 like I did for $350-400 was an absolute steal. The only things you need to deal with are the higher power draw (though the 8-pin PCIe connectors mean it works with an older PSU) and the lack of DLSS 3 FG (which is redundant if the game supports FSR 3.1). The 3080 is definitely up there with the 1080 Ti as one of the GOAT GPUs.
You always drop BANGER videos, bro!
I count myself lucky to have bought a 3080 FE direct from NVIDIA at the 650 quid MSRP at the height of the madness. The card is still a beast today, albeit only at 1440p. I'm guessing the same amount of money now isn't going to provide any sort of massive upgrade. I should add I think it mined half its cost back in crypto.
I went from a 1080 Ti to a 3080 10GB; it was odd to go down in memory.
I've only had the memory become an issue with Hogwarts 1440p. It was too much and I had to scale to 1080.
I'll be waiting for 5000 or 6000, the 3080 is great, but not my favorite card.
This video is very timely. I just picked up two 3080 cards (10GB and 12GB) for $340 each. Going to pair them with a set of 5800X CPUs I got for $150. They're replacing my computers with a 1700X and 1080 Ti from 2017. I play mostly old games at 1080p, so no regrets here :)
Would a 1080/Ti be powerful enough to emulate the PS4/XB1 generation of games? I'm looking to build an emulation machine as my first PC. I have a PS5 and honestly nothing that has come out or is coming out has interested me.
Got my 3080 10GB at launch and upgraded my screen just after. I didn't go 4K because the 3080 was never a true 4K GPU. Still got it paired with a 5800X3D; it's doing well, but I'll replace it next year...
I paid $2300 Cdn for a new rtx 3080 10gb when crypto was hot. I only replaced it when it would run out of vram and the fps would drop like a rock on some player made arcade maps in Far Cry 5 at 4k. I then got a 3090 and the fps no longer tanked on those maps.
Try to get an 8GB 3070 Ti as an "is 8GB any good" data point.
That way we'd know at what point 10GB helps A LOT; running out of VRAM simply feels different than the plain performance difference between models would show. As for "too big for its own good", I like the Founders card sizes (even if you have to pad mod to get the G6X temps under control, along with a 0.8V undervolt on the core).
I think the EVGA 3080 10GB has a massive power limit and cooler, whereas the MSI Ventus is a base model with a lower power limit. If you compared EVGA to EVGA or Ventus to Ventus, I think the gap between the 12GB and 10GB would be bigger.
Glad AMD is not so greedy and actually gives us a sub-$500 16GB VRAM GPU (7800 XT).
Love your vids! Been watching for months now and just wanted to say they're comfort food to me as well as super interesting. Kudos.
I love my 3080 12gb Gaming Z Trio, with adequate settings and DLSS chugs through everything at 1440p and 4K. Gonna keep it till it dies.
I am still running an RTX 3060 Ti 8GB, playing at 1440p, and I haven't played any game that really maxes it out (haven't touched RE4 yet, who knows). Most of the games I play tend to be either slightly older (still catching up on some big games now that they're discounted) or indie games, so I don't see myself needing a newer GPU for a couple more years, tbh. Only thing..... I need a new shiny, so maybe there is a new GPU on the horizon just for that xD
3 year old top tier hardware getting obsolete so quickly is insane..
Thank you for this. As an early adopter of the 3080 10GB, I'd been looking for this for a while. Still glad I didn't choose the 3070 back then, but I still think the 3080 deserved more VRAM.
I've been saying VRAM isn't some massive concern and more often than not the core will be the limiting factor. I'm not saying you shouldn't get more for your money, but I'd rather have a GPU that's cheaper, uses less power, and is easier to cool than one with more VRAM than the core can really do anything with. Eventually this will change over time when the next generation of consoles comes out, likely with more VRAM available, so games will be made to utilize more. But for now, VRAM really isn't much of an issue.
20GB on my XFX 7900 XT Black Edition... I'm good for a long time, coming from the GOAT GTX 1080 Ti 11GB. FYI, the 5000 series will stiff you on everything where they have no competition. I will say the move from my 1080 Ti to a 7900 XT was easy; I'm impressed with AMD's drivers and features. You don't even need FSR with Anti-Lag and Fluid Motion Frames. I turn that on with 250+ mods, ReShade, 4K texture packs, and ray tracing in Cyberpunk 2077 at 4K optimized high settings, and I easily get 60+ fps with no lows below that, and no ghosting, screen tearing, or stuttering.
I have the EVGA FTW3 12GB version of the 3080, then I bought the 4090... but the 3080 is still in my collection. Fond memories.
If you can, please test the RTX 3070 vs RTX 4060 Ti at 4K with and without DLSS in games where they can manage 30+ FPS or even close to 60. I would like to know how the newer one maintains performance in older games like RDR2.
TEST IT WITH OPTIMIZED SETTINGS, like Hardware Unboxed or Digital Foundry settings.
Framegen can also be a bit of a memory hog and probably would've allowed the 12GB model to stretch its legs more over the 10GB card with all the bells and whistles going.
You need a better cpu for these tests. Even something like a 7700x would help a lot.
Bullshit, nothing under a 4080 gets bottlenecked.
I recently picked up a mining 3080 ti on the used market for the equivalent of ~320£ or ~420usd. It runs very hot at 85°C but sooo worth it
I got the 3080TI 12G as at the time it was cheaper than trying to get a 3080 10G 🤔 Crazy 🤣
I wanted a 12GB 3080 a couple of years ago upgrading from a Vega 64 , but it was way too expensive and I had to settle for a 2080ti instead and clock the balls out of it.
What was that engine called again? ...the "Framedrop Engine"
Looking back on generations prior to the 3080, it really ought to have had 16GB. The 680 had 2, the 780 had 3, the 980 had 4 and the 1080 had 8. You can also make the argument that the 2080 and the 4080 should have had 12GB and 20GB respectively. Edit: I am just looking at the trend of VRAM increases not the relative performance to VRAM or other factors as such.
The GTX 780 was really part of the same generation as the 680 though. Nvidia made so many strides with Kepler that they didn't feel the need to release the big-boy GPU right away. It also doesn't help that their flagship Kepler GPU wasn't ready until the end of 2012.
Team green could enable DLSS 3.x frame gen on the 3000 series, but they won't even do that, because gamers would notice how little the performance improved from the 3000 to the 4000 series.
Hogwarts Legacy breaks the 10GB at 1440p.
turn the textures down
@@tysopiccaso8711 I think a $3 trillion company can afford a few more GB of VRAM; it costs them pennies.
@@tysopiccaso8711 just stfu and give those people the vram amount they are paying for
Nvidia is gimping older cards intentionally.
Fsr3 on a 1080ti works amazing. Thanks Amd 👍
So glad I bought a 6900 XT for only $100 more than a 6800 XT, and like $200-$300 cheaper than what the 10GB 3080s were going for during the mining craze.
I was playing No Man's Sky in VR yesterday and it used 14GB of VRAM on both my 7900 GRE system and 4090 system. That game is relatively lightweight comparatively. 12GB just ain't getting the job done in 2024, but 12 will always be better than 10🤷♂️
The two variants of the 4060 Ti with 8GB and 16GB show similar results, but taken to the extreme.
I saw your icon change slightly when the Forza Horizon section started.
I use my 3080 12GB for a living room PC that just plays games at 4K. Still going strong, but thank Huang for DLSS.
This was the second (then third) most powerful GPU of the past generation. Now it has less than or the same memory as the current, doped-up midrange 60-series models. Nvidia being Nvidia.
Also, I feel like something is off about these benchmarks. It makes no sense that the 10GB card is doing better when the 12GB should perform between a 10GB and a Ti.
yeah another vid from techberg
I went with a 6800 XT because I like to mod games (especially skyrim it will use all 16gb) and I often use 14GB+ in quite a few other games maxed out at 1440p. I noticed AMD likes to allocate a lot more VRAM compared to Nvidia.
There are three reasons why we don't see a bigger impact:
1) We never left the cross-gen era, so 8GB of total RAM is still the baseline.
2) Everything has to run on an Xbox Series S with its ****ed-up 10GB.
3) Nvidia has 85% or so market share.
So we never got a bunch of new, demanding games. I mean... Starfield is like... 15-year-old tech... AT BEST... Ratchet and Clank, Last of Us, Alan Wake 2 and Cyberpunk are all PS4 games, with some new settings for whatever hardware/memory is available.
But we already saw limitations. The 1% percentile aka min FPS is sometimes all over the place: textures not ready or loaded in too low a resolution, or higher average FPS on a slower GPU (which is a sign that the two GPUs aren't rendering the same image). To say it's not even worth a 3% higher price because average FPS are similar is complete nonsense.
The memory bandwidth on this card is massive though, right? So theoretically it's a smaller amount of memory that's fast asf.
Also, Nvidia has better on-the-fly memory compression, which they've been advertising since the Kepler series.
I mean, starting with the RX 5000 series, AMD also jumped onto the same bandwagon.
seems like if you want a decent frame rate with RT on decent hardware without DLSS/FSR just go to 1080p lol
sometimes I wonder if it's worth modding this or a 4070/4070 Super and Ti to 24GB
Nah pointless lol
@@justmatt2655 just hope we don't get more games with bad VRAM management and texture loading like Star Wars Outlaws
@@Nintenboy01 that's down to game Devs. Having more vram on your cards will only pass on extra cost to you, the consumer. Be careful what you wish for
I wonder how the 12gb 3080 would compare to the RX 6800xt in modern titles, they used to go back and forth in non-RT situations
People like to say AMD fine wine; now it's AMD fine milk. In recent titles RDNA2 has been underperforming by a tier or even two, even without hardware RT: Space Marine 2, God of War Ragnarok, Until Dawn, Silent Hill 2, etc...
@@ooringe378 Everybody who doesn't stick their head up Jensen's a$$ knows that you have to wait for driver updates before comparing. But I guess the "the more you buy, the more you slave" crowd burned some brain cells.
@@Uthleber AMD already released said drivers, with optimization highlights for some of those exact games. How many driver updates will it take? I run a 6900 XT as my main card, BTW. Criticizing AMD doesn't mean I'm up Jensen's a-hole; exercise those brain cells.
Good Video, but I don’t agree with the conclusion. It would be crazy not to go for the 12gb version.