What are the chances that Switch 2 has new enough tensor cores to utilize Neural Rendering? It would be very interesting to see Nintendo get their hands on a brand new emerging tech before any of the other console makers
The 5080 will be trash above $999. Right now I can run max settings with DLSS (no frame gen) in Indiana Jones and get around 50fps, IF the game weren't dropping to 5fps because it's breaking the 16GB buffer of the 4070 Ti Super. If that GPU is fast enough to be VRAM-limited at reasonable framerates, the 5080 will be 100+ fps and still dropping to 5fps due to maxing out VRAM. 16GB is not enough in 2025. Guaranteed.
@@Kinslayu You're assuming they HAVE done something to lower VRAM cost with all the new technologies that will be introduced. Show me an example, ever, in the history of gaming, where VRAM requirements have ever gone down...
@@cleric670 They said 16GB is not enough in 2025, guaranteed; they're the ones making the assumption. I never said Nvidia is doing anything definitively. I'd also argue that when comparing GPUs, raw hardware stats have become less relevant in recent years, with the 2000/3000/4000 series all introducing new rendering methods (RTX, DLSS, frame gen). I'm simply saying it's a little too early to claim not enough VRAM "guaranteed". In 6 days we can make a much better guess after they reveal the new tech.
The backlash against raytracing isn’t a real thing. If you went by the internet you would think AMD was the market leader for GPUs. The silent majority are showing you what they care about with their wallets, not their keyboards.
Should I order a 4070 12GB for 499€ (the normal cheapest price in Germany right now is 549€) to upgrade from a 3060, or wait and risk cards getting more expensive? 😅
@ I think the Super is not worth the extra money, as it doesn't have 16GB yet. What a shame… I thought about it, and since I don't like how badly optimized current games are, and considering that even with a 4070 I can only play path tracing smoothly with stuff like frame gen, the additional money is not really worth it to me. If the market is ruined next year, I can still be happy I got a decent system, and if great GPUs come out for cheap, I can upgrade later. Meh.
The problem is that the more Nvidia adds with AI, the more shortcuts developers use. DLSS is almost a must now, since game devs don't optimize games without it...
Why do people like you make these idiotic comments? You act like 4K gaming has always been super accessible, but now that DLSS is out, it's "bad optimization and no 4K". 4K ultra settings have been a massive stretch ever since 4K monitors came out. The 980? Struggled. 1080 Ti? Struggled. 2080 Ti? Struggled. So did the 3090/4090. If you don't like upscaling, play at 1080p or 1440p native. You'll have an objectively worse image than someone using DLSS on a 4K screen.
The thing with newer graphics cards is that I'll want to upgrade if there's a good game out. But I don't care about literally anything from the last 8 years.
You mentioned XAI207 will be available on Binance soon? I just got some on the presale but would love to see it there that means the token booms in price.
@@SuperToughnut Why would it be? AI cards are where they make their money. These cards are pocket change for Nvidia. This could maybe even be the last generation of gaming cards they bother with. They might as well focus on what makes them money and let Intel and AMD fight over the home gaming market. It sucks though...
Nice video. Not a criticism, I enjoy your insights. I tend to think prices could go higher if XAI207 rockets. But understand the logic for sandbagging estimates. My opinion is XAI207 breaks 1, perhaps reaches 10 ATH, if conditions are right. But broader forces are at play now. We’re moving into really unknown territory. And these entities are shrewd. I think there’s massive manipulation ahead. If XAI207 survives that, well, we’re likely in for a good pump.
I regret buying my 4070 Ti, as it wasn't long after that they brought out the 4070 Ti Super with 16GB VRAM. Now I wonder if the same will happen again, but the prices are getting crazy.
By getting the flagship (I can write it off as a business expense since I'm a VR dev) I can lower my taxable income by a greater amount and also later sell it for good money, which lowers the cost of the next flagship. If I bought a mid-tier card, its price would plummet fast, as there are more offers on the aftermarket, so I would need to lower the price and even risk not being able to sell it at all. But it's also true that if you don't need a powerful card, and/or cannot write it off as a business expense, or simply don't have the money, just don't get a flagship. You can even buy last-gen cards; the 30 series is still powerful.
Exactly, maintaining the flagship every gen is not that expensive if you sell at the right time. You need a backup card to tide you over until you replace it, though. Look at current 4090 used sale prices: pretty much at or above MSRP. Sell now, buy the 5090 in a month for a few hundred bucks more, and enjoy the best you can get for another 2 years. About €300-400 every 2 years to have the best flagship card is pretty damn cheap once you get on the ladder with the first one. Might be a bit much for a lot of gamers, but if you work with your PC and need GPU power then it's easily justifiable.
@mojojojo6292 I don't even bother selling before the new one comes. I get the new one and immediately put the old one online for sale. It sells in a day or two. I got 1K back from my 3090 and will probably get 1.5K from my 4090 Aorus Master.
My guess: it's going to be hardware-level DLSS compression/decompression and upscaling for higher-fidelity textures that take up less VRAM. Nvidia is trying to lower the VRAM requirements for games in order to justify not selling people enough VRAM for machine learning tasks on their gaming-oriented GPUs. Also, to make up for diminishing returns from shrinking nodes and tapped-out die sizes, so that traditional rendering pipelines are freed up to increase performance without being able to increase transistor counts the way they have traditionally. And also to increase geometric detail from lower-poly models to, again, free up traditional rendering pipelines in the face of those diminishing returns from node shrinkage - this should work wonders with Nanite.
Jensen: "Neural Rendering saves so much RAM you can have a 2GB RTX 5060 for £800."
It probably uses even more VRAM, so you actually can't use it on a 5060 🥴
Damn, exactly what I wanted to write... with 1GB of VRAM for €1000.
The 9070 is obsolete garbage that can't even handle path tracing, for whatever the AMD fanboys are willing to pay hahaha
Shit, don't say that, he'll be on that faster than his wife lmao 😅
So funny because it’s true.
"The more you neural render; the more you save VRAM." - JenseNvidia, probably
But only in specific games that implement it. And it'll only save 10%. And only on the 50 series. It'll be another "4x performance with frame gen" claim, etc. Basically lies.
@@christophermullins7163 "Terms and conditions may apply."
"la vram sigue siendo todo" lisa su como en el siglo pasado
And take about 3x longer to compute per the original paper
the matrix math for temporal is stored in vram... ACTUALLY
With neural rendering you no longer need a GPU with lots of VRAM; with frame gen and DLSS upscaling you no longer need 500W feeding 10240+ CUDA cores. This is why we are launching the GeForce PTX 5090 with 256 CUDA cores, 1GB and a 64-bit bus for $2,499. Happy new year, gamers!
You forgot that with Neural Rendering you will need to stay connected for our 15-minute ads and have a working internet connection while playing any game on your computer or mobile device. If you disconnect or skip ads, we will charge you an amount you will never forget and cap your game at 25 FPS, as in the good old days of TV, because that's where people who don't follow the Nvidia Neural policy belong.
I've been neural rendering my whole life already.
Just a reminder that the 80 class is shaping up to be the most cut-down 80 in Nvidia history: 50% of the full 102 die's cores.
The 80s used to be built on the 100-class dies. Now they are selling us scraps for triple the cost and asking us to be grateful for the insult.
It's really a 5070 under the 5080 tag.
They tried to scam us like this with the 4080 when it launched but the people didn't let it fly.
Don't forget that!
Don't let Nvidia scam us this time either!
x80s have varied between full-fledged upper-midrange dies and cut-down high-end dies. The 680, 980, and 1080 were all upper-midrange dies, for instance. But Nvidia is lowering the '2nd best die' to a way lower spec. If GB202 is the top end, then GB203 is basically what Nvidia would have called GB204 in pre-40-series generations. They've essentially raised the naming by TWO levels per tier. The 4080 is more like what the 3070 was; with the 4070 Ti in between, that is literally TWO tiers of naming and pricing being raised. From $500 to $1200 in one generation. Insanity. And they're just making it even worse now. And most people who complain about it will still buy eventually anyway. smh
@@maynardburger I'm happy to see I'm not the only one who remembers.
The 80 Tis used to be 93%+ of the full 102 die.
Shameful.
Yes, because a bunch of dumb companies will buy 5090s for AI. They stopped producing the 4090 so they can sell the 5080 at the same price point and make so much more profit. All business. I won't be buying a GPU for about 4 years.
@@CrashBashL Incorrect: it is a 5060 with a Titan price tag.
Let me remind you the GTX 760 had half the transistors of the Titan of that generation.
The GTX 1060 had two thirds of the transistors of the top of the line GTX 1080 at the time of release.
The RTX 4060 has less than 25% of the RTX 4090.
11:00 Nvidia's master plan to push people into the 2yr GPU upgrade cycle instead of 4yrs
Yes, Nvidia will likely force the tech into the RT/PT pipeline and/or image scaling and rendering somehow making sure mid-level cards such as the 5070 run circles around 4090 cards in neural rendering. Planned obsolescence at its best.
They realised that gamers are just CRACKHEADS, addicted BEETCHES that will buy whatever they launch anyway.
AFTER they launched 12GB cards for miners and 6 to 8GB cards for gamers, they realised that.
It is what it is. I hope they sell the 5060 with 6GB and a 64-bit bus, because people will still think it's a status symbol to buy Nvidia.
Joke's on them, I'm not upgrading my 4090 until I can get acceptable performance (120fps
Well, jokes on Nvidia, I'm poor as fuck...
@@Micromation DLSS is imperfect, but it's hardly garbage. You squeeze out so many more frames while retaining pretty impressive visual fidelity. Now FSR... That's garbage.
Daniel Owen had a good video breaking down the article Nvidia had on this
Love his videos. He does such a great job of breaking tech down for someone who isn't well versed in all the verbiage that some of these other channels use.
Except he was most likely wrong, neural compression probably isn't what neural rendering is referring to.
Journalism hates $600 consoles, but never criticizes the price of graphics cards. Digital Foundry should be different.
There is more information on NVIDIA's website, such as neural materials or neural texture compression.
I'm guessing it's this
The keynotes from Siggraph over the years are also very interesting. Yep the neural rendering thing is not a new term. NVIDIA research has been using it for years.
Can't wait for the next logical step in game rendering. DLSS was merely a stepping stone. What comes next is going to be absolutely bonkers.
@@pctechloonie One of the possible goals for this would be the use of AI picture generators to directly render the final image somehow.
@@vitordelima the NeRF stuff applying as a filter on top of some rasterized bare bones implementation or even replacing it completely could be very interesting.
Will be exciting to see where the technology is in 3-5 years.
I'm disappointed that professionals such as yourselves didn't watch NVIDIA's SIGGRAPH 2024 presentation about real-time neural rendering regarding material shaders (hierarchical textures). You should definitely read the article, which is available online.
Or anything about AI-based Global Illumination.
Very disappointed as well.
All DF has to do is search "NVIDIA neural rendering" online and they'll get a ton of research papers shoved at them.
Amen
They’re stuck in the past since they don’t talk about AI in the correct format.
They're not up to speed on AI
Can’t wait for the comment “Can’t see any difference” when they compare 4090 with 5090 or 4080 with 5080
That's because most people don't ever get to see anything with their own eyes. Their whole life experience is based on YouTube videos, which aren't 1:1 with what you would actually see. I think people that have owned enough hardware have surely had those situations where it's like, wait, why does it look different on YouTube? Why does YouTube hide the over-sharpening effect of the PS5 on AC Valhalla? How can I compare a 4090 to a PS5 for visuals and see something entirely different from what YouTube is showing me? Are there incentives or an agenda to show parity? Are settings being held back to make one look closer? Is there a marketing reason why aliasing isn't showing up in a YouTube video? Who knows the full scope of effects that YouTube video compression has. I know over-sharpening makes still images look very good, and sometimes it is hard to see in motion if not in person with your own eyes. But then you play it yourself and it looks pretty ugly and noticeable.
DF has warned about this many times, even with the PS5 vs PS5 Pro: if you see it with your own eyes, the difference really is significant vs what you will see in a YouTube video.
I have both a PS5 and a PS5 Pro and I can't see any difference in the compressed YouTube videos, but I see it at home on my OLED TV. And most important of all, I feel the 90fps 4K is great on Stellar Blade but impossible to show in a YouTube video.
@@Mcnooblet Yeah, it's ridiculous how everyone keeps saying that the PS5 Pro versions of games look no different from the base PS5 versions, when in reality a ~40-50% increase between GPUs (RX 6700 vs RX 6800) is a very noticeable performance uplift.
For most people I don't think the problem is that they don't see a difference, it's that they don't see enough of a difference for a four-digit price tag to not be an eye roll.
If you don't buy the card/don't know anyone IRL who owns the card, of COURSE you're only going to see the YouTube videos, and if the YouTube videos don't make a convincing sales pitch, then you're probably not going to take a gamble on a card with a four-digit price tag.
AI hardware should focus on NPC behaviour/dialogue and interactions for more immersion, and not just "slightly better grafix"...
You guys do know that Nvidia has released papers and demos of neural rendering as well as neural texture compression?
(1-2 years ago...)
The latter is interesting as well: 4x the pixels while allocating the same amount of VRAM as other compression methods, at the cost of slower rendering. But if placed well within the render cycle, it may not increase render time.
And the neural rendering stuff seems big to me. It appears that we get rid of triangles, but Nvidia's demos suggest it might be used for materials.
Well you are 100 percent more academic than everyone else in the comment section thus far. Thank you for making such strong points.
God of War Ragnarok uses ML to infer texture detail on PS5.
@@John4343sh thanks :)
@@faustianblur1798 does this mean AMD is doing something similar
YESSS FINALLY SOMEONE THAT ALSO READS THAT ONE ARTICLE! yeah they did a test of it making it sharper and better while using less
Well, the main purpose of the feature is to try to sabotage backward compatibility for the sake of forcing sales of new product.
I'm staying on my X570 platform with an RTX 4090. Something about the RTX 5090 scares me, honestly. Plus it is going to be 2800 cash. Nah, I'm good.
Just means PC ports of console games will be worse than they already are. Games will be designed for whatever the next gen of consoles has to offer, but we'll still be getting games targeting the PS5.
@@justhomas83 Pays $1600+ for GPU and says, "I'm not paying $2800 cash; I'm fine." lol
My take is that even if they aren't talking about Ray Reconstruction and the other DLSS features ATM for "Neural Rendering", most routes shouldn't be so expensive as to be exclusive to RTX 5000.
Even with Ray Reconstruction, the Tensor Cores on the RTX 3000 cards still struggle to get saturated, especially on the higher end.
They need to have separate models for different-tier cards to properly saturate the Tensor Cores.
Proof? How do you manage to monitor the Tensor Cores, just asking?
Aren't Tensor Cores usually matched 4 to 1 with SMs, and RT cores 1 to 1? I think at one time Tensor Cores used to be 8 to 1 per SM until they were improved, so more isn't always better. Also, across the different generations of Tensor Cores, I think FP8 support was added for the 40 series. No clue how all of this plays out or why they added certain things per generation of Tensor Core. Maybe it has nothing to do with gaming, maybe it will eventually? Maybe it is specifically for AI applications. I'm curious to see what they do with all of it.
@SALTINBANK There are some tools you can use to measure % usage of the Tensor Cores, and even when they light up for a specific part of the pipeline, the % used during those light-up situations barely cracks 10%, even on some midrange cards at worst.
@@Alovon Yeah, unless Nvidia supports the software side we're out of luck. Hence why I like AMD more conceptually: they build to be open source, so others can improve on their implementations (e.g. PSSR).
I've seen a video, probably from Daniel Owen, saying that this new Neural Rendering thing runs concomitantly with Path Tracing (or was it Ray Tracing? Idk) calculations so it doesn't actually hinder in any way the time it takes to process. And it was done using a 4090. So, in theory, we should get it, unless NVidia gets even greedier.
It's supposed to help
Make path tracing run faster
My guess is that neural rendering is a hardware version of what Lumen does or something to approximate the space around a single ray or maybe something that helps a tracer find light paths faster.
Have a Great Year Richard!!! And the guys too. Good Luck this year.
Okay, so what would all this mean for generative AI? More specifically, I am very interested in AI video generation. I know there are plenty of online platforms like Kling, Luma, Hailuo etc. that offer those services and that they're pretty good at it, but I am adamantly against any sort of monthly subscription.
Perhaps an AI inferred renderer instead of traditional rasterization? There was an Nvidia paper that I don't recall the name of, but it had an offline scene that was rendered 1,300% faster with an AI technique. A more recent paper was on neural materials although I think the bullet point is referring to the former AI rendering technique.
They will use AI to upscale low resolution mip-maps to save on VRAM!
There is already a texture compressor based on AI.
@@vitordelima yes but have they been used in games? If you know of any game implementations please let me know.
@@pctechloonie Of course it wasn't.
Dell should team up w/ Nvidia to finally make the Alienware UFO.
Seems like you guys haven't seen Nvidia's video on neural rendering from a while back. It has to do with a hybrid of ray tracing and path tracing in real time. Check it out. Mystery solved. I'm sure Nvidia will reveal more details at CES.
so how much is a 4080 worth on the used market currently?
I am not finding any 4080's used in my country on the marketplace.
Neural might refer to NeRF (neural radiance fields) and that tech...?
I wonder if you can buy those VRAM chips and put them on the 40 series?
I would like to see neural AI polygon generation where polygons are generated in real time based just on the skeleton rig. Everything from the human modelling, cloth/equipment, AI character animation, facial animation, to textures and texture scaling. It would also apply to environment generation based on simple spline shapes, following criteria defined by the game developers. To simplify: there are no 3D models/textures/animations in the game files, just criteria and instruction sets.
It's too slow currently but adaptive tessellation is the closest you can get to it nowadays.
Depends if we're talking about neural rendering as umbrella term, or a specific rendering technique (NeRF).
I believe the next step for DLSS would be optimizing geometry through mesh shaders. You could have something like Nanite, as in dynamic progressive LODs based on camera distance, but without the huge overhead Nanite causes.
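A toy illustration of that "progressive LODs based on camera distance" idea; the distance thresholds, LOD count and the doubling rule here are made-up numbers for the sketch, not anything Nvidia or Epic has described:

```python
# Pick a discrete LOD level from camera distance: each doubling of distance
# drops one level of detail (0 = full detail). All numbers are illustrative.
import math

def pick_lod(distance_m, base_distance_m=10.0, num_lods=6):
    if distance_m <= base_distance_m:
        return 0
    level = int(math.log2(distance_m / base_distance_m)) + 1
    return min(level, num_lods - 1)

for d in (5, 15, 40, 200, 5000):
    print(f"{d:>5} m -> LOD {pick_lod(d)}")
```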
Get ready to pay huge prices; leaks show the 5090 at $2600, with the 5080 being $1350.
It has been 6 years with ray tracing.
That is the distance between Quake (96) and GTA Vice City (or Mafia).
Give it 6 more years and we have GTA IV (or Crysis).
In 20 years no one will argue that ray tracing did not matter. And there will still be good 2D games.
Ray tracing has been the holy grail for as long as I can remember. Physics-based lighting can give you realistic results, but at the price of a lot of computation.
But you will still be able to make games in whatever art style you want. 2D games, 3D rasterization games, and 3D ray-tracing games can all be good- or bad-looking games.
It's already come along massively since then. From early RT-reflections-only in games like Battlefield, and Quake running with full path tracing, to modern demanding games like Wukong, Cyberpunk, Alan Wake and Indy having full path tracing options handling lighting, shadows and reflections with playable framerates. Give it another 6 years and it will be in every AA and AAA game and playable across the entire stack of modern GPUs. RT was never going to happen overnight. It was a pipe dream for the distant future only 8 years ago. Nvidia has made it a reality now.
@mojojojo6292 I do agree.
But to be fair, there are not even a handful of GPUs and maybe just a handful of games that makes ray tracing worth it.
The 5060 will not be able to run games with full RT, and people that buy it this summer will not be able to use RT in a lot of modern games 2025 and forward.
Thus I don't think everyone will be enjoying high quality RT even in 6 years.
But I think a lot of new games will be RT only from now on and especially in 6 years. Because it is just easier to only do RT and not both RT and raster.
But if you have a 4060/5060 card the games might actually look a little worse than if the game was made using raster. Just because those cards simply do not have enough power and VRAM to make RT look good enough.
Right now good quality RT is for the privileged and I don't see that change much in the near future. 🤷♂
The current hardware implementation of ray tracing is shit and almost any other method (based on software or hardware) would be better.
I also buy/sell every time a new generation comes out, for the same reasons as Rich's friend: you sell before the price of the old generation drops too much. This makes sense for phones and cars, too. And of course it means you're always there with the latest and greatest features.
The next big step in nvidia tech is going to be them charging a monthly fee to unlock hardware features. Mark my words.
I think neural rendering is just going to be a new way of generating 3D models out of images.
Excellent guess, checking back here after CES 😂
This. Nvidia has papers published about it. I'm no expert, but it appears to be mostly a compression technique.
I am disappointed that DF didn't seem to be aware of the paper and able to guess what this feature is (not saying with certainty, but this clip seems to be wild speculation based on the combination of words).
From what I understand, it will simulate ray tracing, 2x faster, and will use less VRAM.
@ could you elaborate on simulate? I know it’s just a leak so no worries
It has nothing to do with 3D models. It's a new way of modelling BRDFs, so supposedly it upgrades the appearance of materials. It will be integrated into the game's rendering pipeline directly. Tbh I'm not too hyped, because from the looks of it it's still very much a work in progress and it's expensive. On top of that, it's meant to be used with ray tracing, which alone hasn't been too successful in the real-time rendering domain so far.
I think the point is that you can have much higher quality materials while the speed of rendering is the same as or faster than with standard methods. But I might be wrong because I haven't read the paper, just saw some rundown videos explaining it.
Presumably using ML to replace shader code. So instead of writing a complex BRDF equation for various materials and lighting, it's encoded as a neural network created from photogrammetry or other reference models.
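To make that idea concrete, here's a minimal sketch of what "a neural network instead of a BRDF equation" could look like at evaluation time. The network size, input packing and the random stand-in weights are all assumptions; a real neural material would be trained offline against reference reflectance data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pre-trained" weights for a tiny 2-layer MLP. Input: view
# direction, light direction and surface normal packed into 9 floats;
# output: RGB reflectance. Random weights here, just to show the shape of the idea.
W1, b1 = rng.normal(size=(9, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 3)), np.zeros(3)

def neural_brdf(view_dir, light_dir, normal):
    """Evaluate learned reflectance instead of an analytic BRDF formula."""
    x = np.concatenate([view_dir, light_dir, normal])  # 9 input floats
    h = np.maximum(W1.T @ x + b1, 0.0)                 # ReLU hidden layer
    return np.maximum(W2.T @ h + b2, 0.0)              # non-negative RGB

# One shading sample (directions purely illustrative).
rgb = neural_brdf(np.array([0.0, 0.0, 1.0]),
                  np.array([0.3, 0.3, 0.9]),
                  np.array([0.0, 0.0, 1.0]))
print(rgb)
```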
This sounds like something that would work really well or not at all
@@xviii5780 It's already on NVIDIA's website.
These cards will be used to push their GeForce Now subscription and cloud gaming. Probably helps make cloud streaming more visually clear.
I think there was an article a few years ago about a texture compression technique that needs half the data but gives better, sharper texture output. I don't know what the real name is, but I remember reading Nvidia was testing it.
Neural Texture Compression. Oh, and it's 16x the pixels at the same or lower data footprint. You can find that and every other neural rendering tech by searching for "Neural Rendering NVIDIA"; they have an entire section for it.
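A quick back-of-the-envelope sketch of what "more texels in the same footprint" means in memory terms; the bits-per-texel figures and the 16x ratio used here are illustrative assumptions, not measured numbers:

```python
# Memory footprint of a 4096x4096 texture under different bits-per-texel budgets.
def texture_mib(width, height, bits_per_texel):
    return width * height * bits_per_texel / 8 / 2**20

res = 4096
print(f"RGBA8, 32 bits/texel:                      {texture_mib(res, res, 32):6.1f} MiB")
print(f"BC7 block compression, 8 bits/texel:       {texture_mib(res, res, 8):6.1f} MiB")
# If a neural codec really packed ~16x the texels into a BC-sized budget,
# that would work out to roughly 0.5 bits per texel:
print(f"Hypothetical neural codec, 0.5 bits/texel: {texture_mib(res, res, 0.5):6.1f} MiB")
```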
It's something that will put a €2000 price tag on the RTX 5090 but will never be found in games...
It's going to be texture compression of some sort. It alleviates gamers' complaints about low VRAM (even if it's placebo) while still forcing AI buyers to buy 10x more expensive options to get more RAM.
Oh, so it is the texture compression math they were talking about a year or two ago!! Wonder if they'll combine it with photogrammetry.
you can render a game with basically flat models with no texture, just color coded, and then use AI to convert that into a stylized image.
That would look dogshit
I can give you a few guarantees that the RTX 50 series will be yet another price-gouging flop like the 40 series. I can go buy a 3070 for $279 right now and beat the 4060, which costs more. I can go buy a 6900 XT, which will decimate a 4070 Super, for half the price. What will the 5060 be? A 5% uplift over a 4060 while costing 50 bucks more?
Used GPU's cost less than new? Shocker
RTX 5000 may be the last RTX cards.
The 6000 series might no longer be called RTX. Maybe NTX, or AITX.
Neural Rendering = The tech available only on RTX 5000 cards.
Another Nvidia play would be to limit the FG on the latest DLSS to the new cards only.
They got away with that regarding the RTX 3000 cards not getting FG but RTX 4000 did.
You're forgetting the 20 series (which I only just upgraded from); we didn't even get Resizable BAR! Nvidia does this EVERY time: if they can add extra hardware to lock features away from older cards, they will.
@@joncarter3761 I remember that the RTX 3000 didn't have resizable bar for the initial release and had to be updated in firmware.
It would be really funny if they did that when a year ago they were showing off neural rendering on a 4090, so it's clearly capable of using it.
Neural rendering is just a fancy word Nvidia uses when they are fixing to bend everyone over who wants a 50 series card... no Vaseline either. So if you're going to buy a 50 series card, just know that you're the reason Nvidia is charging so much, and you're literally paying Nvidia to bend you over and take it.
Intel has Battlemage now. Really great bang for your buck. Go get banged. 😉
Life is short and I can afford it. What's wrong?
I agree with your viewer. It also makes more sense to buy an expensive card as close to its release as you can.
NVIDIA : VRAM issues gone
"Vram issues gone"
But only in certain games so not really.
@@christophermullins7163 ERRATUM: with neural compression, for future games it will be...
Plus, if you are not a noob, you can tweak the game engine (in all games) to not max out the VRAM buffer (the basics of gaming).
But people don't bother: too complicated, because they don't want to learn the basics...
Not sure about that. Uncompressed/high quality textures will still live in VRAM.
DLSS is in more games than not, so it would probably be a good idea to assume that neural rendering will be there too. Furthermore, the people that have VRAM issues are normally the people that didn't purchase a graphics card with sufficient VRAM to run their application. This is the fault of the consumer and not Nvidia; the consumer is the one who selected the card with insufficient VRAM to do the job. Most consumers are very cheap and expect a lot more from their graphics card than they actually paid for. @@christophermullins7163
@SALTINBANK It is crazy how most people have no idea how the graphics landscape is going to completely change. AI is going to revolutionize graphics and the processing of them. I think I am just not going to engage with these folks until we have neural rendering in the mainstream. They lack foresight and more importantly insight into the current technological paradigms.
The more you spend the more you save. Only gamers will get that joke.
DLSS takes a few pixels and makes them more; AI rendering takes a few polygons and makes them more. This would be impossible for a traditional pipeline, as it doesn't know what things are. Have AI provide context and now it can upscale geometry correctly. The other thing that becomes possible: the developer creates a super-high-res asset, a toolchain creates an AI model plus a minimum-polygon model, and the game engine only works with the minimum-poly model, but its AI model is loaded like a shader on the GPU and the asset is then rendered as the super-high-polygon version. Very much like vertex shaders on steroids.
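A rough sketch of the pipeline that comment imagines: the engine ships only a low-poly mesh, and a small learned model predicts extra geometric detail at render time. The feature vector, weights and single-triangle asset here are hypothetical stand-ins, not anything Nvidia has described:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.01, size=(6, 3))   # hypothetical "geometry upscaler" weights

def upscale_edge(v0, v1):
    """Insert a midpoint on an edge and let the model nudge it off the straight line."""
    mid = (v0 + v1) / 2.0
    features = np.concatenate([v0, v1])    # toy feature vector for the new vertex
    return mid + features @ W              # learned displacement (stand-in)

# Low-poly asset shipped with the game: a single triangle.
tri = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]

# At render time, each edge gains a predicted vertex, turning 1 triangle into 4.
new_verts = [upscale_edge(tri[i], tri[(i + 1) % 3]) for i in range(3)]
print(len(tri) + len(new_verts), "vertices after one upscaling pass")
```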
With half the CUDA cores of the RTX 5090, the RTX 5080 is just a renamed RTX 5070; the real RTX 5080 got "unlaunched" again. Nvidia learned their lesson.
There is no list of requirements a gpu has to have to be 80 class. It’s just a name on the box. If 5090 didn’t exist and 5080 was the biggest gpu nvidia sold, would that make it a real 80 class?
@@johnc8327 It's about the distribution of parameters. It should go like this: the mid range is the best value, because the R&D of the high end needs to be recouped FROM HIGH-END SALES. What they are doing is putting all of that on everyone and taking in 80% profit from families who just want to play games without issues. Fine, it will just push people to consoles; the last 4 years have brought more and more broken launches, bad performance and COMP STUTTERS. I saw people calling the PS5 Pro a scam; if that is a scam, this is a death sentence.
Oh, NVIDIA has done this before. Remember the 680? Compare it against the 780 Ti. The similarities are striking.
There's no replacement for displacement.
NVIDIA is playing 4d chess, while all the idiots on reddit are getting hung up on VRAM quantity. "Bigger number better. Smaller number bad."
I believe it will be frame gen on steroids - Huang said in 2024 that they plan to render 1 frame and generate 5 from it (kinda). But the real reason is that they want to add some new accelerators, or different architectures, so all previous generations become obsolete at once.
Upgrade your GPU every generation if you want to save money. The longer you hold on to an old card, the less value it has.
This works for the 60 and some 70 series. The 80 and 90 series lose value a lot.
@@sogetsu60 Wrong, the xx90 holds value better than anything else right up until launch of the next xx90. If you sell before then you lose little to no value. Look at current used 4090 prices with the 5090 just a month away. You can maintain the flagship every gen for about 200-300 lost tops depending on when you sell and whether there is a price increase.
OK Jensen.
@sogetsu60 it works for every series. If you upgrade your cards ASAP you barely lose any money. If you hold on to a card, you lose the same amount or more as if you upgraded every 2 years. So why hold on to a weaker card.
@@sogetsu60 A used 4080 Super will easily sell for close to MSRP right now; a used 4090 will sell for over MSRP. A used AMD card will sell for less than half of MSRP.
Has rasterization really reached its limits when the engines and the people that use them have become increasingly reliant on shortcuts and on those engines? Just look at all the baked-in, long-standing issues in UE5 that are not getting fixed. The artistic vision and achievements from before ray tracing (RDR2, Witcher 3, even CP2077 without the Nvidia endorsement) were done by people who cared and knew their craft and tools. Now there are crutches (DLSS, frame gen, etc.) to lean on, and those developers (Nvidia, AMD) are looked to to improve them instead of the people making games.
Ultra realistic AI enhanced images in photo mode?
10:58 That guy might be onto something, tbh.
Let's say you buy a graphics card for 500, keep it for 4 years, sell it for 200 and buy another one for 500: total spent 800.
But if you buy for 500, sell it in 2 years for 300, buy another for 500 and sell it again in 2 years for 300 to buy another for 500: total spent 900, but you get to use new hardware every gen.
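The same arithmetic written out as a quick sketch, using the commenter's hypothetical prices and resale values:

```python
# Net outlay over 4 years under the two upgrade strategies described above.
def total_spend(purchases, resales):
    return sum(purchases) - sum(resales)

# Keep each card 4 years: two purchases, one resale.
hold_4yr = total_spend(purchases=[500, 500], resales=[200])            # -> 800

# Upgrade every 2 years: three purchases, two resales over the same span.
hold_2yr = total_spend(purchases=[500, 500, 500], resales=[300, 300])  # -> 900

print(hold_4yr, hold_2yr)  # 800 vs 900: ~100 extra buys a new card every gen
```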
"What is Neural Rendering ?"
Simple : the new "black-box" that will of course be bundled with the new minor iteration of UE5 (and forced by default), and will transform even a 4090 into a paperweight within 6 months
HEY GUYS, LOOK WE MADE THE MATRIX CORES BETTER SO WE CAN FILL THAT VRAM UP QUICKER!
The likely reason all the neural and AI additions didn't make sense is that gamers are not the target customers. The supposed game enhancements are a cover. Massive parallel processing with trainable AI models integrated into the hardware are most useful for two groups: industry and state actors. I would bet that one of the major funding sources is the NSA for use in Echelon or whatever the follow-on system is called. Also, folks need to look carefully at what is actually in the drivers for these cards.
We will hit the point where AI can generate real-time games on demand, and/or be used to generate different game worlds, characters, etc. given a set of commands/rules. A game could be a unique experience for every player every time.
Yes, it's probably going to be an exclusive technology for the 5000 series GPUs, because they need a selling point over the 4000 series, given how powerful the 4000 series already is.
Furthermore, it's an AI future: a more efficient way to render graphics than raw raster performance.
My point: the Intel Ultra 9 series of CPUs possess a piece of silicon called a neural processing unit.
That neural processing unit in the Intel Ultra 9 CPU, working in conjunction with neural rendering on a 5000 series GPU, has the potential to destroy AMD's 9800X3D in neural rendering performance. I really can't believe nobody has said anything about that in the comments, and this better be tested. I hope the author of this comment sees what I'm saying here.
@@TheTripleAGamerTheFrame That's not how software works unless Nvidia and Intel collab on the software stack, meaning the only reason to test that is if they announce something. Otherwise I don't think it matters.
@aviagrawal5903 You're right about how software works, definitely, for sure.
But guess what: the theory stated here isn't software, it's hardware, actual physical silicon. This is not a software rendering solution; DLSS has never been that way.
DLSS uses physical silicon: tensor cores.
Frame generation uses an optical flow accelerator, also physical silicon.
The only way neural processing can be achieved is with a physical silicon chip called a neural processing unit.
So maybe look to the real horizon and do your homework before you try to cut down a logical theory.
Your comment makes you look unsmart.
@@TheTripleAGamerTheFrame …. Windows cannot run on M1+ Apple hardware. Think about why….
Have you ever developed an internal API to allow two pieces of hardware to communicate? That's what is required. Hence the software….
One last note: do you think the physical placement of tensor cores on a silicon chip is what makes a game's UI show a "DLSS" option, or maybe it's down to the software? For instance, why doesn't every game have DLSS implemented by default? Because it has to be implemented (software!)
What are the chances that Switch 2 has new enough tensor cores to utilize Neural Rendering? It would be very interesting to see Nintendo get their hands on a brand new emerging tech before any of the other console makers
The 5080 will be trash above $999. Right now I can run max settings with DLSS (no frame gen) in Indiana Jones and get around 50fps, IF the game weren't dropping to 5fps because it breaks the 16GB buffer of the 4070 Ti Super. If that GPU is fast enough to be VRAM-limited at reasonable framerates, the 5080 will be 100+ fps and dropping to 5fps from maxing out its VRAM. 16GB is not enough in 2025. Guaranteed.
It wasn't enough for me at 4K two years ago, so I agree.
You're assuming that nvidia has done absolutely nothing to lower VRAM cost with all the new technologies that will be introduced
@@Kinslayu You're assuming they HAVE done something to lower VRAM cost with all the new technologies that will be introduced. Show me an example, ever, in the history of gaming, where VRAM requirements have ever gone down...
@@cleric670 guess there's a first for everything lol
@@cleric670 They said 16GB is not enough in 2025, guaranteed; they're the ones making the assumption. I never said Nvidia is doing anything definitively. I'd also argue that when comparing GPUs, raw hardware stats have become less relevant in recent years, with the 2000/3000/4000 series all introducing new rendering methods (RTX, DLSS, frame gen). I'm simply saying it's a little too early to claim "not enough VRAM, guaranteed". In 6 days we can make a much better guess, after they reveal the new tech.
The backlash against raytracing isn’t a real thing. If you went by the internet you would think AMD was the market leader for GPUs. The silent majority are showing you what they care about with their wallets, not their keyboards.
Should I order a 4070 12GB for 499€ (the normal cheapest price in Germany right now is 549€) to upgrade from a 3060, or wait and risk cards getting more expensive? 😅
That's what I want to do as well, but a 4070 Super from my 3060!
Lol, my poor 3060 is pushed to its limits at 4K/1440p "optimized" settings lol
@ I think the Super is not worth the extra money, as it doesn't have 16GB. What a shame… I thought about it, and since I don't like how badly optimized current games are, and considering that even with a 4070 I can only play with path tracing smoothly using stuff like frame gen, the additional money is not really worth it to me.
If the market is ruined next year, I can still be happy I got a decent system, and in case great GPUs come out for cheap, I can upgrade later. Meh.
@@jayzn1931 I'm personally only aiming for 60 FPS on most titles, so I hope that when I do upgrade, the 4070 Super achieves that more easily for me.
NVIDIA: a masterclass in how to overpromise and underdeliver at the highest premium price possible.
We need NPC AI built on hardware-accelerated AI.
We don't even know if they are talking about real time rendering.
The problem is that the more Nvidia adds with AI, the more shortcuts developers take. DLSS is almost a must now, since game devs don't optimize games without it...
Why do people like you make these idiotic comments? You act like 4K gaming has always been super accessible, but now that DLSS is out, it's "bad optimization and no 4K".
4K ultra settings have been a massive stretch ever since 4K monitors came out. The 980? Struggled. 1080 Ti? Struggled. 2080 Ti? Struggled. So did the 3090/4090.
If you don't like upscaling, play at 1080p or 1440p native. You'll have an objectively worse image than someone using DLSS on a 4K screen.
Yeah, make those tensor cores worth buying, man. lol The immersion-breaking stuff like hair rendering should be tackled first.
I love the Daniel Owen viewers signaling about how they were actually keeping up to date on this tech.
The thing with newer graphics cards is that I'll want to upgrade if there's a good game out. But I don't care about literally anything from the last 8 years.
5060 specs: expensive e-waste.
Well, this describes basically the entire RTX 60 series.
I see no reason yet to switch from my 3080 to a 5000 series card. All games up till now are running fine. 1440p resolution.
Actually this will be the beginning of Nvidia's downfall.
I hope so
@@SuperToughnut Why would it be? AI cards are where they make their money. These cards are pocket change for Nvidia. This could maybe even be the last generation of gaming cards they bother with. They might as well focus on what makes them money and let Intel and AMD fight over the home gaming market. It sucks though...
Where's the DF review of FS2024????
One of the biggest releases this year and no GPU or CPU coverage???
It could just mean "increasing prices" in Nvidia's thinking.
I regret buying my 4070 Ti, as it wasn't long after that they brought out the 4070 Ti Super with 16GB VRAM. Now I wonder if the same will happen again, but the prices are getting crazy.
8:28 If it only runs on new cards, that's an incentive for users to switch to GeForce Now. Seems logical from a business standpoint.
By getting the flagship (I can write it off as a business expense since I'm a VR dev) I can lower my taxable income by a greater amount and also later sell it for good money, which lowers the cost of the next flagship. If I bought a mid-tier card its price would plummet fast, as there are more offers on the aftermarket, so I would need to lower the price and even risk not being able to sell it at all.
But it's also true that if you don't need a powerful card, can't write it off as a business expense, or simply don't have the money, just don't get a flagship. You can even buy last-gen cards. The 30 series is still powerful.
Hence the more you buy the more you save lol
Exactly, maintaining the flagship every gen is not that expensive if you sell at the right time. You need a backup card to tide you over until you replace it, though. Look at current 4090 used sale prices: pretty much at or above MSRP. Sell now, buy the 5090 in a month for a few hundred bucks more, and enjoy the best you can get for another 2 years. About €300-400 every 2 years to have the flagship card is pretty damn cheap once you get on the ladder with the first one. Might be a bit much for a lot of gamers, but if you work with your PC and need GPU power then it's easily justifiable.
@@SevenBlades Jensen the profit prophet?
@mojojojo6292 u lowkey have a point
@mojojojo6292 I don't even bother selling before the new one comes. I get the new one and immediately put the old one online for sale. It sells in a day or two. I got 1K back from my 3090 and will probably get 1.5K from my 4090 Aorus Master.
Can't wait to buy the RTX 5070. I think it will pair well with my 5800X3D
RTX 5070 is 12GB VRAM 🤣 12GB VRAM is shit.
@@mmanz123 I wanted to buy the RTX 4070 but I can't find it in stock anywhere, so I have to get this one now.
Neural rendering is rendering an imaginary brain that makes you think the Nvidia cards are a good deal.
Looks like that brain already exists, considering the 85% market share
@@funbrute31 So popularity=quality? enjoy the price gouging.
@@deadrift886 Enjoy your VRAM like it's the last century, AMD fanboy.
@@deadrift886 Nvidia software/hardware stack >>> AMD . Any doubts?
@@deadrift886 Nvidia - We have DLSS, DLDSR, Frame gen, Neural Rendering
AMD - VRAM VRAM VRAM VRAM VRAM 🐒
Answer: an extra few hundred quid, squire!
My guess: it's going to be hardware-level, DLSS-style compression/decompression and upscaling for higher-fidelity textures that take up less VRAM. Nvidia is trying to lower VRAM requirements for games in order to justify selling people not enough VRAM for machine learning tasks on their gaming-oriented GPUs. Also, to make up for diminishing returns from shrinking nodes and tapped-out die sizes, traditional rendering pipelines get freed up so performance can increase without the transistor-count growth that has carried it until now. Also, increasing geometric detail from lower-poly models again frees up the traditional rendering pipeline for more performance in the face of those same diminishing returns; this should work wonders with Nanite.
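As a rough illustration of that texture-compression guess (entirely hypothetical and untrained; it only demonstrates the VRAM-versus-compute trade-off, not any actual NVIDIA format or API):

```python
import numpy as np

TILE = 16          # one 16x16 RGBA texture tile
LATENT_DIM = 24    # compact per-tile code kept in VRAM instead of raw texels

rng = np.random.default_rng(1)
# Tiny untrained decoder; in a real scheme the weights would be learned and
# shared across many tiles, so their footprint amortizes away.
decoder = {
    "w1": rng.normal(0, 0.1, (LATENT_DIM, 128)),
    "w2": rng.normal(0, 0.1, (128, TILE * TILE * 4)),
}

def decode_tile(latent, decoder):
    """Reconstruct a TILE x TILE x RGBA block from its latent code."""
    hidden = np.maximum(latent @ decoder["w1"], 0.0)   # ReLU
    return (hidden @ decoder["w2"]).reshape(TILE, TILE, 4)

latent = rng.normal(0.0, 1.0, LATENT_DIM)
tile = decode_tile(latent, decoder)

raw_bytes = TILE * TILE * 4        # 8-bit RGBA stored directly
latent_bytes = LATENT_DIM * 4      # float32 code
print(f"raw tile: {raw_bytes} B, latent: {latent_bytes} B, "
      f"~{raw_bytes / latent_bytes:.1f}x smaller (before decoder cost)")
```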