Click my link www.piavpn.com/Coreteks to get an 83% discount on Private Internet Access! That's just $2.03 a month, plus 4 extra months completely free!
The part where you said "there will be a portion of the hardware dedicated to turning a cube into a model" and that it would be "faster than just rendering a model" is laughably false. This will never happen; we've been trying desperately to get away from fixed functions and toward a more generic pipeline as much as possible for the last 15 years. This is why mesh shaders were created: they give way more programmable control and allow us to utilise the GPU as fully as possible. GPUs now are just compute machines, and the more you can parallelize the work, like with mesh shaders, the better. If you have a fixed-function AI pipeline, it's only going to create bottlenecks, and it'll never work like you think it will. It just won't know anything about the engine at all; where is it going to get the texture from? Who's going to write and compile the shaders? It just won't happen. Generative AI is already creeping in, sure, but it happens at the start of the production pipeline, not after you dispatch work to the GPU.
Close to my take. Everything discussed in the video may be apt for the production end of game graphics, not for game engines rendering the produced graphics in real time on consumer hardware. Polygons are efficient and as detailed a framework for fast 3D rendering as the hardware allows--a mature, universally adopted paradigm. It would be madness to try to reinvent that wheel now.
I don't think you should use Starfield as a reference for the current state of in-game graphics; even in its own sub-genre it looks very bad compared to existing games, its game worlds are barren and it lacks enemy diversity. Obviously Rockstar would make better use of such tech, same for many, many other talented teams that you could have used as a reference point instead of those who ship broken games.
I don't share the enthusiasm for generative "AI" at this point in time. G-splats are unusable for games (they are static, non-interactive, things like reflections aren't possible yet, etc.), since they only shift the work from creating geometry (by whatever means) to filming real-life scenes and digitising those. Might be a time saver, but certainly not a cost saver. AI toolchains these days also lack one very important thing: coherence. It's very early days, and while prototyping might be faster using generative machine learning methods, getting optimised, artistically coherent and visually pleasing results still requires lots of manual work. It's the "no-code" fad of the 2000s projected onto the CG world. Finally, the most important issue is that photorealistic graphics and games don't go together well anyway. Immersion is quickly broken by objects and agents clipping into each other, repetitive and unconvincing "AI", broken physics, and, well, the requirements of gamification itself. Any type of game interface will look out of place immediately if put into a truly photorealistic setting. Making things like player inventory, maps, or just interactive items "realistic" would basically kill any fun gameplay (have fun going into a virtual room and trying to find out which items are actually interactive without immersion-breaking visual hints). In reality, it's physics, game AI, and game design itself that often didn't keep up with graphics.
Who the hell are we building those high-speed network connections, 5G, etc. for, when in the near future everything will be transmitted as text and generated locally? Imagine watching an HD movie over a 33,600 bps modem line: the whole scene description transmitted as a series of prompts to a local NN. Starlink will go kaput.
It seems like the new wave of AI tools for real-time rendering is somewhat antithetical to all the buzzwords Nvidia loves to use when marketing the power of their GPUs. It's good for the end user, but it raises the question of what the actual point of expensive hardware will be in a couple of years, when AI upscaling gets so good that we can just circumvent the brute-forcing of modern tech. I just want gaming to be accessible.
It won't happen any time soon; generative AI is wrong 95% of the time and inconsistent frame-to-frame (everything you see online is the textbook definition of cherry-picking). Hardware isn't really the limiting factor here; it's mostly that studios are unwilling to put in the required time and skill to make higher-fidelity games, since the current ones sell just as well in a fraction of the time.
Terrifying and interesting. It really opens up some massive open-world concepts. I do hope this leads to more VRAM becoming commonplace. I wonder if they could use that neural rendering with, say, a DOS game and upscale everything as a real-time remaster. That'd be cool.
Honestly, when it comes to DOS games, I'm not really interested in remasters, but in entirely new ones that'll run on old hardware. I'm talking system requirements of a Pentium with double-digit megabytes of RAM or below. Don't want to be too aggressive with the upper limit.
If Nvidia, with its tensor cores in the RTX GPU series, ends up out of the game, where will AMD's GPU department end up? They make everything Nvidia does but brute-force it via software. They will either profit big from that approach or end up like ATI and 3dfx, bought out due to lack of solvency.
I pretty much never choose which games to play by graphics. Most of them are good enough. I want advancements in stuff like NPC behaviour and AI, or dynamic content the player can influence, or better physics engines. If a game looks 3% more realistic due to some tech advancement, I just don't really care if all the areas I mentioned are still the same as a decade ago. I don't care if NPCs look super realistic if they are dumb as hell, can't be interacted with in any meaningful way, and don't react to player actions.
My biggest wish for OLED displays is one that is 27-32", 1440p (or 4K if possible for the price), not curved, 144 Hz, can get very DIM in addition to bright so it doesn't hurt my eyes when gaming late at night, and has a GLOSSY DISPLAY. Matte textures on an OLED make no sense and wash out the colors, defeating the purpose of the OLED display in the first place. They also need to prioritize both its lowest dimness and its highest brightness, because it makes no sense to have a cutting-edge display that hurts my eyes because it's too bright. I know Dough (aka Eve) has products like this, but they suck as a company and I refuse to support them. I'll happily hand over my money to Alienware, MSI, Gigabyte, LG, Samsung, or whoever else can make a monitor at that spec. I'll pay up to $700 for it if need be.
Fancy graphics are boring; where is the heart and style? The games from the PS2 era and below had to try and had less to work with, which made them interesting and provided space for the mind to fill in the details. Less is more, and style was around in heaps when we had PS1 polygons and Neo Geo pixels.
Profit motive. How are game companies gonna make a healthy return if they don't build a game suited to run on the lowest-tier computer? I bet if you dug deep enough, you'd find the graphical fidelity of a generation of games coincides with the console hardware specs of the generation in which they were released.
Remember the company that once broke 3dfx? It only takes one good idea to disrupt a market, and it seems today there are many paths that could lead to that disruption. Nvidia might have a lead in AI technology, AMD has a lead in chiplet design. It's only safe to say that whatever technology helps devs make better games is going to prosper.
I love DLSS, but I HATE when devs use it as an excuse to make performance crap. Like, I love Control: a 2019 game that still looks better than most 2023 games, and it runs amazingly well even with all RT features enabled. I average about 80 FPS at 1440p native, and ~120+ with DLSS Quality. Why don't modern games look better or run as well? I am still on an 8700K at 5.1 GHz, and Control has me GPU bottlenecked.
There has been a clear trend lately of games running poorly natively. I would rather go back to RDR2 graphics if it meant that games would actually run at proper framerates natively.
There's no such thing as a "proper" or "fake" frame. Rasterization is full of tricks and approximations that try to emulate realistic properties of light in screen space. It almost NEVER renders shadows, reflections, ambient occlusion, etc. at full resolution. It's all fakery. RDR2 was a swan song for rasterized graphics; they applied almost every single trick to achieve that. With ray tracing, or even better path tracing, it would look miles better.
In a thermonuclear computer the DIE is placed in an absolute vacuum of solid metal.
IDK man, they'd still have to be many times more efficient to offset the cost. The world just needs more fabs regardless of how you look at it. More players means more wafer orders.
And don't forget the "main" aspect: a 4090 is fluid at 2K, a 4070 Ti at 1080p. The only people playing at 4K max are on 3090 SLI, and who's doing that? Super informative video, minus my mouth!
I think the next gold rush in video games is real-time, universal physics. Not just broad physics "nodes", but every piece of clothing, every blade of grass, every footstep taken being able to realistically interact with the world around it. Real-time water as well, and realistic destruction. These all exist in some advanced form, but once technology advances enough that they become ubiquitous, games will really feel immersive.
Three generations with no real improvement, just escalating cost and power requirements, is disheartening. Ray tracing seems to be a bust: a nice idea if they could have pulled it off, but at this rate it'll be another ten years before it's usable. AI could add a great deal to gaming in quite a few ways, but we still need real improvements in graphics rendering.
The models look buggy. How can they be modified or optimized to remove bugs if developers don't have access to the actual editors? A better implementation would be AI creation of 3ds Max models that are completely modifiable. Also, AI can't do real time; GPUs are needed for that.
Nvidia is far ahead on the AI front. They already figured out that AI is the way to overcome raw graphics limitations. An interesting thing they're pushing for is Universal Scene Description (USD), which they probably want so they can improve the AI generative features of their graphics.
Sooner or later GPUs won't need to be this big or heavy, because AI will take over everything. It won't be long before AI takes over the GPU and CPU, meaning the AI revolution is not gonna stop. Once AI has a foothold, we are screwed if it learns and turns its back on us lol
Not a fan of DLSS? Ahhhh… that's quite a general statement now, as it's literally tied to every bit of processing tech they implement. I'm a big fan. The benefits outweigh the downsides for sure. So without DLSS we get, let's say, 30 fps, but 90 with it (all the DLSS tech with FG), all day every day. Getting to the butter realm (north of 80 fps) is where I wanna live in-game, and I do what I gotta do to get there. For the 3080 in Cyberpunk 2077 it's no RT, medium settings at 4K with DLSS. And it's RT on with mid-tier DLSS on the 4090… it's for sure the way to go.
What about Ark: Survival Ascended? Very much next-gen graphics on Epic settings. But I agree AI-generated game graphics are the future. Looking at how fast Stable Diffusion is now on a 4090, we have a long way to go before we can generate 120 fps at 4K in games :)
I wish someone who understands all the relevant issues (manufacturing, R&D, shipping, marketing, labor, materials, etc.) could explain in clear, simple and honest terms why a graphics card costs more than an entire computer (or a used car). Compare today to 2010. The suspects I see are YT influencers, luxury marketing, and suckers, lots of suckers. Disagree? Think much?
Yeah, imagine a GPU that doesn't produce pixels at all, but instead a matrix-style "pseudo-graphics" representation that the AI would then transform into pixels. The number of computations would be completely unlinked from the output resolution; only stability would improve. What would the development tools then look like?
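A toy sketch of that idea (my own illustration in Python/PyTorch, with made-up module and tensor names, nothing from the video): the frame description lives in a fixed-size latent grid, and a small decoder turns it into pixels at whatever output resolution is requested, so the latent-side compute is unlinked from the final resolution.

```python
# Hypothetical sketch: a fixed-size "pseudo-graphics" latent decoded to arbitrary resolutions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentToPixels(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.to_rgb = nn.Conv2d(channels, 3, kernel_size=1)

    def forward(self, latent: torch.Tensor, out_hw: tuple[int, int]) -> torch.Tensor:
        # Upsample the fixed-size latent to the requested resolution, then decode to RGB.
        x = F.interpolate(latent, size=out_hw, mode="bilinear", align_corners=False)
        x = F.relu(self.refine(x))
        return torch.sigmoid(self.to_rgb(x))

latent = torch.randn(1, 16, 64, 64)   # one 64x64 "frame description", independent of output size
decoder = LatentToPixels()
with torch.no_grad():
    for hw in [(720, 1280), (1440, 2560)]:
        frame = decoder(latent, hw)
        print(hw, frame.shape)        # same latent, two different output resolutions
```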
There's a 10-cent/month difference between the affiliate-link plan and their Halloween 3-year plan... Potayto, potahto!
Make an episode about Qualcomm Oryon.
As an animator working in the industry, I'm absolutely sure polygons are not going anywhere.
Once midrange hardware gets good enough to do path tracing, you'll start seeing ultra-realistic games pop up.
The only thing that will take time, I think, is high-quality realtime fluid simulation in games. The rest I believe is possible right now.
How long will that take? Another 5 to 6 years?
@@Siranoxz Could be 10-20 years, depending on how big the push is and who wants to absorb the cost. I've been doing 3D for big AAA games for 8+ years, and converting everything to new tech is either not doable or extremely costly. If you don't want to pay $1000+ for games to absorb the cost of this shift, companies will have to stick with what they have. The technologies in the video are not really usable right now. I myself try, in my spare time, to come up with AI tools that could help, but it's extremely early for that and there isn't enough good data to help with, for example, UV mapping or optimizing geometry, since past a certain point it's just pure visual balancing of what still looks good.
So another 10 years? Since it would need to be on consoles too
@@WayStedYou That's the minimum, I think. Look at the cycles of consoles and GPU upgrades: game performance targets are set a bit under what most people have, and I don't think most gamers would upgrade to new tech. These cycles are slow; even ray tracing is not fully integrated across the industry.
Agreed, this seems like those guys who promised "infinite detail" with "virtual atoms" instead of polygons. At the time their tech was impressive, but people pointed out problems, like that it wouldn't work for animation, among other things. We are more than a decade past their announcements, and polygons still reign as king.
Some of those techniques produce good results, but they produce static environments. People want more interactivity in their games, not less just so it looks pretty. I think real-time graphics will follow the same path as CGI rendered on render farms: using AI as a tool to cheat your way across the computational gap between real-time and offline rendering, like Nvidia is doing with path tracing.
Graphics are good as they are. How about making better AI, better characters, better scripts and STORIES? Fewer mindless shooters. Imagine that! 🙂
That's up to developers, but still, if we are talking about visuals then yes, AI and neural generative solutions could be the future, and that also includes scripts and storytelling by AI with humans collaborating.
Honestly, I'm over seeing stereotypical comments like this..... like, this dude's whole channel is really about PC hardware & next-gen graphics.
@@Siranoxz not so much about developers doing most of that in the future
Oh the irony.
Games are about money and bleeding every last penny out of customers now, no longer about an experience, an adventure or a truly immersive world and story.
Gone are the days of wonders like MGS, Deus Ex, or the earlier Command & Conquer and GTA titles. The best they can do is recycle or remake them, and even then the huge flaws of the rotten parent companies now in charge of such franchises bleed into every aspect of what they've tried to do, when their best effort amounts to little more than a remake.
The gaming industry has gone the way of movies and TV: it is ruined by shite in the form of political narratives and run by people who are, for the most part, wholly disconnected from their fanbase and even the franchise itself, with literally zero passion or ingenuity when it comes to forging new ideas or tastefully building upon that which already exists.
It's safer just enjoying the gems of yesteryear, and as a bonus this takes the pressure off our hardware, knowing we won't have to climb an ever-steeper mountain trying to keep up with ultra-flashy graphics that are ultimately let down by the poor gameplay that accompanies them.
I think the faith in gsplats for games is unrealistic. The capturing and display steps, sure (virtual locations could be great), but games have interactive lighting and objects, and I'm not sure gsplats are built to simulate an interactive world. Lighting aside, physics sims in games are usually based on surfaces, so polys will still probably be needed for that anyway.
I remember at some point someone interviewed Jensen and he mentioned the future of games is neural rendering so... Nvidia is well aware of this and is probably multiple steps ahead of everyone else in that field.
Because Jensen said something ~ trust me bro
@@Big_Yin This video assumes Nvidia is being caught off guard by all this. The point of my comment is that there's evidence to the contrary.
I misspoke though: it wasn't Jensen. I heard this in a Digital Foundry interview with a few of the Nvidia engineers behind DLSS 3.5. They were asked what they thought the future of graphics would be, and they answered that it's a long way off, but that neural rendering was clearly the direction we'd be heading longer term, and suggested Nvidia was already doing internal work in that space.
Coreteks greatly underestimates NVIDIA. It's the core of this channel's existence. Of course he's wrong in almost every "prediction".
Graphics have been astounding for quite some time now, and stylized games already have all the horsepower they need. If it gets even better, that's fine, but I feel that the chase for better graphics is covering up a different problem: stagnation in game design. I'd much rather see graphical development stall in favor of better, more intricate design. Sure, we have the more daring/experimental stuff from indie developers... but how about triple-A? Triple-A usually plays it safe behind lowest-common-denominator design choices covered up with brilliant-looking graphics.
Triple-A gets stagnant because of the size of the projects; a high-profile flop can tank the company. Also, large corporate structures are inefficient.
My money is on indie currently; they're delivering.
One thing I'm sure of is that engines like Unreal that can make fantastic-looking stuff will be pointless if almost nobody can afford the overpriced GPU required to run them.
Looking at XTX, 4090 and 4080 sales, people are buying.
Well, think again, because Unreal isn't popular only because it can do photorealistic graphics. It's popular because it's a top-of-the-line set of tools that makes everything easier. Unreal can do easy-to-run games and stylized games, too. Octopath Traveler and Stray are proof of that.
You guys do talk nonsense. Consoles can run Unreal Engine fine.
@@SylvanFeanturi you missed the point by about 400 lightyears
@@philosoaper So what was the point?
Nvidia quickly dropped quads for triangle acceleration, so if those matrix point clouds ever became fruitful you could be sure Nvidia engineers would be the first to accelerate them. They didn't jump too quickly on voxel acceleration, and voxels haven't gone anywhere since.
I think Nvidia made the right choice with its hybrid GPU since the launch of the RTX generation, and point-cloud objects will simply become part of a hybrid renderer.
I’m all for graphics getting even better and more advanced but not at the cost of performance. UE5 games have been…underwhelming in terms of performance if I’m being polite. Downright fucking terrible if I’m not. I wish devs would focus on performance over graphical fidelity. Every UE5 game I’ve played has been a stuttering mess with frame times all over the place.
Here’s to hoping performance becomes the primary focus of devs. Unfortunately performance doesn’t sell games. Pretty graphics certainly do. If this new way of producing graphics and games can actually happen we might see some very impressive games in the future. A new era of gaming that doesn’t require insanely powerful GPUs that also need upscaling technology to even run games at an acceptable FPS sounds like a future I want to live in.
Great video as usual. Very interesting topic.
One of the many causes of poor performance is that Unreal Engine 5 still has the legacy UE4 behavior for threading across higher-core-count CPUs. Apparently they are going to address that in UE 5.4... so at the moment devs need to put in the effort to address traversal stutter and shader compilation of their own volition, which honestly 70 percent of devs aren't going to do.
Hopefully, once we progress past all the legacy components, that will result in better performance. If they can pull that off with UE 5.4, I'm all for it. Like you said, devs aren't currently getting the job done. Shader compilation is definitely one of the areas that needs more focus.
@@Deliveredmean42 I thought UE5 had innate stutter-free benefits? I've played 3 UE5 games: Remnant 2, Layers of Fear 2, and THE FINALS, and none have stuttering. They might be hardware heavy hitters, but the UE4 stutter doesn't seem to be there to me.
"The previous release, UE 5.1, had introduced an experimental PSO precaching system to improve hitching in DirectX12 games. In UE 5.2, the performance and stability have been increased, and the system now supports skipping drawing objects altogether if the relative PSOs aren't ready yet. While the goal is to have them ready, there is no guarantee they will be. With the new support for skipping, the stuttering shouldn't happen if the PSO hasn't been compiled."
-Quote from WCCFTech
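For reference, a minimal conceptual sketch (my own toy in Python, not Epic's actual code) of the skip-if-not-ready behavior that quote describes: instead of stalling the frame while a pipeline state object compiles, the renderer simply skips drawing that object until its PSO is ready.

```python
# Hypothetical illustration of PSO-aware draw skipping; names are made up.
from dataclasses import dataclass, field

@dataclass
class PSO:
    ready: bool = False              # set True once async compilation finishes

@dataclass
class Drawable:
    name: str
    pso: PSO = field(default_factory=PSO)

def render_frame(drawables: list[Drawable]) -> list[str]:
    drawn = []
    for obj in drawables:
        if not obj.pso.ready:
            # Old behavior: compile on demand -> hitch. New behavior: skip this frame.
            continue
        drawn.append(obj.name)       # stand-in for issuing the real draw call
    return drawn

scene = [Drawable("rock"), Drawable("tree")]
scene[0].pso.ready = True            # pretend the rock's PSO finished precaching
print(render_frame(scene))           # ['rock'] -- the tree appears once its PSO is ready
```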
Unreal Engine 5 has made improvements, even in its current 5.3 version, yes, but it's still not perfect. As you mention, it uses an experimental PSO system, meaning it doesn't always work. For example, the new RoboCop game ended up patching in a manual pre-shader-compilation load when you start the game, as it began to stutter on occasion once people started playing the demo.
Also, there is another type of stutter called traversal stutter: depending on the game, it will hitch when you reach a new area. Most if not all UE5 games have this issue as well, some more severely than others. Digital Foundry mentioned this in their latest video on the matter as well. @@The_Man_In_Red
The only Unreal Engine 5 game that I played is The First Descendant open beta from Nexon. They figured out how to use UE5. On top of the fun gameplay, the overall graphics and global illumination looked amazing. At 1080p high, the game ran fine on an RTX 2080 Ti/RX 6700 XT and was barely playable on a Vega 56, so this is an optimization problem for developers. We can't excuse the publishers/developers who made Remnant II and Immortals of Aveum, because they will keep doing this over and over until they go bankrupt.
GPUs have become more complicated than CPUs, so the big established players enjoy a natural "stranglehold" over the market. Even an old dog like Intel has a difficult time competing against NV and AMD, let alone any of the lesser IHVs that might have some ambition in graphics/parallel computing. But building dedicated ML and inference hardware is a lot simpler than an over-engineered GPU with its numerous domain-specific logic blocks and endless list of API compatibility requirements. If graphics tech evolves towards AI rendering, Nvidia specifically could be undoing its own future as market leader: they themselves are pushing hard towards the neural rendering utopia while inadvertently lowering the entry bar for newcomers.
Maybe this is just me being an old fuddy-duddy, but I just don't care at all about better graphics in AAA games. Graphics got more than good enough with the Xbox One and PS4; late PS3 games were more than good enough. For the most part, I have no interest in playing violence simulators, I want to play games. It's not the violence that's the problem for me, it's the realism at the expense of fun. And I know I'm not alone: Nintendo sells hundreds of millions of systems that are on par with an Xbox 360. Instead of improving fidelity, they should use this new horsepower to make developers' and artists' lives easier and more productive.
I understand the AI inference part but I didn't quite get how the rendering part could be skipped
Graphics don't seem to be stagnating at all. Metro Exodus, Cyberpunk 2077, Alan Wake II: all games with revolutionary graphics!
Unreal Engine 5.0 was released almost 3 years ago. The majority of companies still use UE4. It does not progress that fast.
Also, even now we get unoptimized and broken AAA games on release with no patches for 6 months; imagine adding AI content generation on the fly on top of that. We'll sooner have hoverboards and Unlimited Detail graphics (I wonder what those guys are up to now?).
I suck at videos, but I think the topic of ARM is huge. The era of only two options with x86 has held back computers.
Having like 7+ companies making ARM chips for personal computing will be great for advancement and pricing.
Even better would be a common desktop platform defined by ARM, so that you could drop in CPUs from different vendors in the same socket. That would remind me of the Socket 7 days and be even more interesting for the consumer.
Classic sockets, especially in the Socket 7 and Super Socket 7 days, were great for testing and swapping different CPUs from different manufacturers. Truly a golden age for choice.
I'm skeptical. We're currently in the "blockchain for everything!" era of generative content. Stable Diffusion is at best good for generating concept images, and even that only because it's based on what is basically a scam (clever ToS manipulation to appropriate the work of artists without paying anything). The current oversaturation of the market won't make it any better, either. 99% of current generative content (mostly in text form) is atrocious and used as a cheap source of content for spam and horrible clickbait articles. The better stuff is mostly used for public manipulation. Some people will get really rich because of it. Most people will hate it. Exactly the same scheme as with NFTs. What's worse, actual regulation of generative content is non-existent. Who owns the IP rights to models generated by masterpiece-x? No one knows (or just nobody), which makes it unusable for any released game: companies won't relinquish their IP to some random service, and it's not compatible with any open-source license that currently exists.
Thank you for the common sense!
Where is Euclideon? They were about to revolutionize the revolution of the graphics revolution with their revolutionary point cloud revolutionary tech.
Wow, I just mentioned that shit in a comment but I couldn't remember the name of the company! I still remember their weird Aztec temple demo and how they could render individual grains of dirt
I can't stand all of this "Generative AI" nonsense.
How many times are the images repeated in this video?
We need a hardware-based AI processing unit, an AIPU, for NPC/AI behavior and interaction processing: either a discrete processor or a subset of the graphics card. It would probably be far better for this to be integrated onto the motherboard, imo.
This would then handle all aspects of AI interaction with the world, including speech recognition and response, pathfinding and behavior, and it would be used to run things like LLMs and other AI models that are not directly game-related. Then we end up with a more distributed processing system with maybe three or four primary processing units: CPU, GPU, AIPU and PPU (physics).
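Purely as a toy illustration of that distributed-processing idea (hypothetical unit names and routing table, no such hardware or API exists today), the dispatch side might look something like this:

```python
# Hypothetical sketch: route engine tasks to CPU/GPU/AIPU/PPU queues by workload type.
from enum import Enum, auto
from collections import defaultdict

class Unit(Enum):
    CPU = auto()    # game logic, orchestration
    GPU = auto()    # rendering
    AIPU = auto()   # NPC behavior, speech, LLM inference
    PPU = auto()    # physics simulation

ROUTING = {
    "render_frame": Unit.GPU,
    "npc_dialogue": Unit.AIPU,
    "pathfinding": Unit.AIPU,
    "cloth_sim": Unit.PPU,
    "game_logic": Unit.CPU,
}

def submit(task: str, queues: dict[Unit, list[str]]) -> None:
    # Fall back to the CPU for anything without a dedicated accelerator.
    queues[ROUTING.get(task, Unit.CPU)].append(task)

queues: dict[Unit, list[str]] = defaultdict(list)
for t in ["game_logic", "npc_dialogue", "render_frame", "cloth_sim", "pathfinding"]:
    submit(t, queues)
for unit, tasks in queues.items():
    print(unit.name, tasks)
```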
So what's the situation with Stable Diffusion and AMD right now? About half a year ago when I checked, the only AMD GPU that outperformed my RTX 2070 Super was the one that cost $1100. I think it was the 7900 XTX; everything else was much worse.
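Not an answer to the question, but for anyone who wants to measure it themselves, here's a hedged sketch using Hugging Face diffusers. It assumes a working PyTorch install (CUDA on Nvidia, ROCm on AMD, where the GPU is still exposed as "cuda") and uses the usual SD 1.5 checkpoint ID; swap in whichever model you have locally.

```python
# Rough single-image benchmark; timings are only comparable across GPUs, not rigorous.
import time
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photorealistic forest at dawn"
_ = pipe(prompt, num_inference_steps=25)          # warm-up run (loads kernels, fills caches)

start = time.time()
image = pipe(prompt, num_inference_steps=25).images[0]
print(f"25 steps in {time.time() - start:.1f}s")  # compare this number between cards
image.save("bench.png")
```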
Look out of the window, the realism is unreal.
You say nothing has progressed since the 1080 Ti, but then also say ray tracing is bad. Yeah, I hope Apple puts ray tracing hardware into their new Macs, like they did with the iPhone 15 Pro. That surprised me.
LOL, absolutely nothing in this video will come for another 15 years
A 2 minute ad is an improvement over the last video, thanks for listening to feedback.
Nothing will replace the GPU anytime soon. With a neural accelerator you won't be able to play older games, and I don't think everyone will be using the new technology. So a GPU that can do both AI rendering and traditional rendering will always be needed.
The problem is that you can't run most older games unless you crack them, and many are delisted; you just have to move on. It won't be long before ARM takes over, and then games are gonna have to go back to the drawing board and be made for the next gen of PCs. It's gonna happen soon.
What you've said about neural rendering was stated in one of the Digital Foundry interviews, where an NVIDIA employee, Brian (forgot the last name), said the same thing.
I wanted to ask you, though - why don't you like DLSS? 🙂
Longer prompts can just add noise that the model will ignore because it doesn't know what to do with it, or, in its confusion, it will create a monstrosity.
The problem is that all of NVIDIA's stuff is designed to be anti-competitive. If you are going to use their products, they force you into their ecosystem and do not allow you to use the competition's products.
this dude has absolutely no idea what he's talking about
The funny thing is that this bullshit has been touted as the next big revolution in 3D graphics since at least 2008. I still remember that one Australian company that claimed their point-cloud engine would change gaming forever, and they vanished as quickly as they made the news, despite having a working demo. It was weird as fuck.
I am still on my GTX 1080 Ti, and imo it was the best GPU that ever came out. I would love to upgrade to a 6800 XT or better, but here in Canada they're still anywhere from $700 to $900 or more. WTF? It makes no sense.
AI is great at many things, but I don't see it taking over as a rendering solution. It's better suited as an assistant for several tasks, like you talked about. The second the industry switches to optical, light-based compute, the limitations will unlock and we'll have achieved photorealism, which is the cutoff for progress. After that we can only augment reality as we perceive it. There's nothing else to achieve beyond that.
sorry to rain on your parade but:
- Gaussian splatting has lighting baked in and would be hell to de-light (it's hard enough in photogrammetry already), so scenes are VERY static and reflections are baked, meaning they break if you change anything.
- Neural rendering looks very nice, but with infinite variations come infinite inconsistencies. I want the places I revisit to look the same when I come back, and that could be challenging.
- RDR2 has no ray tracing. The advances we've had in that area really do make a huge difference in lighting and reflections, and they have improved each generation.
might take a while, silicon has hit a wall...
We went from 100 MB to 100 GB SD cards in a decade, during the time we went from 2D to 3D.
Hey man, wanted to tell you that your content is not being fully promoted by YouTube; I had to search for your channel to check your videos.
Isn't the generated-on-the-fly content still using a graphics engine to display it? Polygons and 3D spaces, physics...
I watched your video today. Funny that yesterday this thought came to me: there is actually no need for complicated engines to edit and create worlds. An engine only has to support skeleton-building of the worlds, storing metadata, an audio engine and networking, plus the ability to plug in neural networks that make it photorealistic: no Unreal Engine required. You need a neural network that knows what different assets should look like and can create variations of them, and filters to add all that lighting, color depth and balance to make it photorealistic, or whatever style of graphics you want, like stylisation.
That's why there is nothing wrong with starting to write code for a new game engine that you can fully control, cutting all the unnecessary features found in UE/Unity and focusing on generative worlds and neural network technologies. At least I believe the stagnation in games exists because of a lack of innovation; the media industry was driven by innovation for a pretty long time, and there is no reason to get stuck now watching overhauled versions of games that existed before, which look pretty much the same and still lack realism because of the complexity of processing it or (what's more important) creating it.
I'm praying for Intel to come in like a freight train and make AMD and Nvidia regret all their choices so far. My hopes are low, but there is some hope 😭
lol
A hope and a prayer?
@@Sam_Saraguy with a side of copium
@@rocker10039 I like my copium like my mashed potatoes, piled high.
Play Alan Wake 2 with path tracing and tell me graphics are stagnating 😂
5:40 - The potential here is truly tremendous, not for disruption but for shitposting. This is a recipe for an overwhelming amount of crap behind which you won't be able to find anything remotely bearable.
10:30 Anybody remember 3dfx? Nvidia might be seeing the proverbial asteroid coming over the horizon, as it were. AMD, on the GPU side, maybe, but they seem to have been annealing themselves with lots of seed techs over the last 10+ years, on top of their healthy CPU business. Intel aren't what I'd call stuck in their ways, but they're not the new-tech powerhouse they once were.
There will be a little tech startup out there that can use its agility to swing into graphics AI. Actually, there are a hundred. All will try, 99 will fail, and 1 will release something new and exciting, the ARM of new-AI. They will either be a public/private thing out of an American university or out of Cambridge. Or China. If it's a Chinese team, they'll leave China to get access to modern fabs. Probably to America. Or Taiwan, and it will start WW3. 😂🎉
At first it's going to be things like smoke and fire, I think, and assets. Eventually things will be generated entirely while you play. The only issue I can think of is storage: how do you go back to where you were, say, a week after leaving the game? If areas are generated, you'll never be able to visit the same place twice (unless it's stored somewhere). They do kind of have a solution to this already, live in games that use a rule set for where stuff should be and just reskin and repeat areas, so only a few values need to be stored (see the sketch below). Not ideal, though, as you'll notice the same "base" in the same spot across from a ship or something.
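A minimal sketch of that seed-plus-rules idea (my own illustration, not from the video; the names and numbers are hypothetical): derive each chunk's content deterministically from a world seed and the chunk coordinates, so revisiting a place a week later reproduces it exactly without storing the geometry.

    import hashlib
    import random

    WORLD_SEED = 1234567  # hypothetical per-save value; the only thing that must be stored

    def chunk_seed(world_seed: int, cx: int, cy: int) -> int:
        """Derive a deterministic seed for chunk (cx, cy)."""
        digest = hashlib.sha256(f"{world_seed}:{cx}:{cy}".encode()).digest()
        return int.from_bytes(digest[:8], "little")

    def generate_chunk(cx: int, cy: int) -> list[tuple[str, float, float]]:
        """Place a few props pseudo-randomly but reproducibly inside the chunk."""
        rng = random.Random(chunk_seed(WORLD_SEED, cx, cy))
        props = ["rock", "tree", "crate", "ruin"]
        return [(rng.choice(props), rng.uniform(0, 64), rng.uniform(0, 64))
                for _ in range(rng.randint(3, 8))]

    # Generating chunk (10, -3) today and next week yields the same layout,
    # so only the world seed and any player-made changes need to be saved.
    assert generate_chunk(10, -3) == generate_chunk(10, -3)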
The "neural textures" was a hyped potential feature of the current Xbox via it's int4/8 acceleration.
It was especially exciting as it was put there to ease the ram restrictions found on the platform, as well as DX12U compatible PC GPUs.
Sadly, nobody has yet bothered to do anything with it. 😢
And I can only assume it's a dev man-hours limitation, since the RAM crunch is by far the loudest complaint about the platform, especially the S, as opposed to GPU grunt.
Demo code exists for all the features; neural textures aren't exactly part of the DX12U API, but async compute demo code for them exists.
Not to mention the big-ticket features, though none of those are in the 'AI' realm, just your standard efficiency features like SFS etc.
Hell, at least we finally have a game that actually uses mesh shaders! 😂 Well done remedy, keeping the dream alive one bi-decade at a time.
I hear mesh shaders is actually true big brain hard to do though, so really, well done.
I might be typing out of the left side of my right cheek, but don't APUs only allow deployment of machine learning models, not training?
Make an episode about Qualcomm Oryon, please.
This talk of custom ASICs reminds me of how the PS3 had no games early on because its CPU hardware was too weird.
We are already so close to photo realistic with polygons + ray tracing, going for neural rendering sounds like a huge step in the wrong direction, at least in terms of quality. I really hope this isn't where the industry is heading, or we may have to pay even more, swap out perfectly fine hardware _and_ get worse graphics... what a bummer
3D Gaussian splatting can be applied to a 3D environment too, and I think it's the most plausible way forward for gaming and simulation. Designers could model a highly complex scene, with poly counts as high as or higher than Unreal's Nanite, and then use 3D Gaussian splatting to draw that environment. This would let the game run on more hardware, but with varying levels of detail, since the Gaussian representation would be tuned to the capability of the local hardware. The advantage for the studios creating the content is that they would no longer have to bother with billboard trees, LODs, etc. Simply create environments with millions of polygons and let the end user adjust a 3D Gaussian *detail slider* to whatever their system can manage, roughly like the sketch below. That way the game only gets *better* over time: as more powerful GPUs are released, you simply increase the splatting value.
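A hedged sketch of how that detail slider could work; this is my own illustration (the scoring heuristic and names are assumptions, not an established splatting pipeline): rank splats by a rough contribution score and draw only as many as the local hardware can afford.

    import numpy as np

    def select_splats(opacity: np.ndarray, scale: np.ndarray, detail: float) -> np.ndarray:
        """Return indices of the splats to draw this frame.

        opacity: (N,) alpha of each Gaussian
        scale:   (N,) average radius of each Gaussian
        detail:  0..1 user slider; 1.0 draws everything
        """
        score = opacity * scale**2                 # crude proxy for on-screen contribution
        budget = max(1, int(len(score) * detail))
        return np.argsort(score)[::-1][:budget]    # keep the most visible splats

    # Example: a weaker GPU might run at detail=0.3, a future one at detail=1.0,
    # with no change to the authored scene itself.
    rng = np.random.default_rng(0)
    keep = select_splats(rng.random(100_000), rng.random(100_000), detail=0.3)
    print(len(keep))  # ~30000 splats drawn this frame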
Shallow worlds made by shallow devs using prompts, with scripts generated by AI - welcome to a future world of mediocrity.
Regarding that last bit: here's hoping for a 36"~40" 4K high-refreshrate QD-OLED with at least HDR TrueBlack 500. Preferably without needing to sell a kidney.
Successor to the AW3423DWF is coming soon, will continue to be HDR 1000 and high refresh rate with higher pixel density (not that it's particularly needed). Doubt the price will be any lower than €1200 I paid for the current gen.
Why did Linus Drop PIA as a Sponsor? Are they good again now?
Funny you use Starfield which is fundamentally 15 years out of date at least.
GPUs are too expensive, and also not energy efficient. New technologies that are available to everyone and use less power are welcome.
The part where you said "there will be a portion of the hardware dedicated to turning a cube into a model" and that it would be "faster than just rendering a model" is laughably false.
This will never happen; we've been trying desperately to get away from fixed functions and toward a more generic pipeline for the last 15 years. That's why mesh shaders were created: they give way more programmable control and let us utilise the GPU as fully as possible. GPUs now are just compute machines, and the more you can parallelize the work, as with mesh shaders, the better. If you have a fixed-function AI pipeline, it's only going to create bottlenecks, and it'll never work like you think it will. It just won't know anything about the engine at all: where is it going to get the texture from? Who's going to write and compile the shaders? It just won't happen. Generative AI is already creeping in, sure, but it happens at the start of the production pipeline, not after you dispatch work to the GPU.
Close to my take. Everything discussed in the video may be apt for the production end of game graphics, not for game engines rendering the produced graphics in real time on consumer hardware. Polygons are efficient and as detailed a framework for fast 3D rendering as the hardware allows--a mature, universally adopted paradigm. It would be madness to try to reinvent that wheel now.
I don't think you should use Starfield as a reference for the current state of in-game graphics; even within its own sub-genre it looks very bad compared to existing games, its worlds are barren, and it lacks enemy diversity.
Obviously Rockstar's studios would make better use of such tech, as would many other talented teams you could have used as a reference point instead of ones that ship broken games.
Tbh the visuals are fine atm; the real problem is the other aspects of the game, which feel like a pile of tech demos rather than a game.
I don't share the enthusiasm for generative "AI" at this point in time. G-splats are unusable for games (they are static and non-interactive, things like dynamic reflections aren't possible yet, etc.), since it's only a shift from creating geometry (by whatever means) to filming real-life scenes and digitising them. Might be a time saver, but certainly not a cost saver.
AI toolchains these days also lack one very important thing: coherence. It's very early days, and while prototyping might be faster using generative machine-learning methods, getting optimised, artistically coherent, and visually pleasing results still requires lots of manual work. It's the "no-code" fad of the 2000s projected onto the CG world.
Finally, the most important issue is that photorealistic graphics and games don't go together well anyway. Immersion is quickly broken by objects and agents clipping into each other, repetitive and unconvincing "AI", broken physics, and, well, the requirements of gamification itself. Any type of game interface will immediately look out of place if put into a truly photorealistic setting. Making things like player inventory, maps, or interactive items "realistic" would basically kill any fun gameplay (have fun walking into a virtual room and trying to work out which items are actually interactive without immersion-breaking visual hints). In reality, it's physics, game AI, and game design itself that often didn't keep up with graphics.
Who the hell are we building these high-speed network connections, 5G, etc. for, when in the near future everything will be transmitted as text and generated locally? Imagine watching an HD movie over a 33,600 bps modem line: the whole movie scene description is transmitted as a series of prompts to a local neural network. Starlink will go kaput.
It seems like the new wave of AI tools for real-time rendering is somewhat antithetical to all the buzzwords Nvidia loves to use when marketing the power of its GPUs. It's good for the end user, but it raises the question of what the point of expensive hardware will be in a couple of years, when AI upscaling gets so good that we can just circumvent the brute-forcing of modern tech. I just want gaming to be accessible.
It won't happen any time soon, all generative AI is wrong 95% of the time and inconsistent frame-to-frame (everything you see online is the textbook definition of cherrypicking). Hardware isn't really the limiting factor here, it's mostly that the studios are unwilling to put in the required time and skill to make higher fidelity games since these ones sell just as well in a fraction of the time.
Gaussian splatting eats up VRAM.
Terrifying and interesting. It really opens up some massive open-world concepts that would be fascinating. I do hope this leads to more VRAM becoming commonplace. I wonder if they could use that neural rendering with, say, a DOS game and upscale everything as a real-time remaster. That'd be cool.
Honestly, when it comes to DOS games, I'm not really interested in remasters but in entirely new ones that'll run on old hardware. I'm talking system requirements of a Pentium with double-digit megabytes of RAM or below. Don't want to be too aggressive with the upper limit.
If even Nvidia ends up out of the game, with the tensor cores it has in the RTX GPU series, where will AMD's GPU department end up? They make everything Nvidia does but brute-force it via software. They will either profit big from that approach or end up like ATI and 3dfx, bought out due to lack of solvency.
I pretty much never choose which games to play by graphics. Most of them are good enough.
I want advancements in stuff like NPC behaviour and their AI, or dynamic content which player can influence, or better physics engines.
If a game looks 3% more realistic due to some tech advancements I just don't really care, if all the areas I mentioned are still the same as a decade ago.
I don't care if NPCs look super realistic if they are dumb as hell and can't be interacted with in any meaningful ways, and they don't have reactions to player actions.
My biggest wish for OLED displays is one that is 27-32", 1440p (or 4K if possible for the price), not curved, 144 Hz, can get very DIM in addition to bright so it doesn't hurt my eyes when gaming late at night, and with a GLOSSY panel. Matte coatings on an OLED make no sense and wash out the colors, defeating the purpose of the OLED in the first place. Manufacturers also need to prioritize both the lowest dimness and the highest brightness, because it makes no sense to have a cutting-edge display that hurts my eyes because it's too bright. I know Dough (aka Eve) has products like this, but they suck as a company and I refuse to support them. I'll happily hand over my money to Alienware, MSI, Gigabyte, LG, Samsung, or whoever else can make a monitor at that spec. I'll pay up to $700 for it if need be.
Fancy graphics are boring, where is the heart and style?
The games from the PS2 era and earlier had to try harder with less to work with; that made them interesting and left space for the mind to fill in the details. Less is more, and style was around in heaps when we had PS1 polygons and Neo Geo pixels.
Maybe a neural net processor could simulate an x86 or ARM processor to run a variety of operating systems and apps.
Profit motive. How are game companies gonna make a healthy return if they don't build a game suited to run on the lowest tier computer?
I bet if you dug deep enough, the graphical fidelity in a generation of games coincides with console hardware specs of the generation in which they were released.
I'm still in the ad, but let me guess: full neural rendering?
Ai is just being used to replace workers because paying people for their work takes too much money for corporate suits’ extra bonuses. 😒
Remember the company that once broke 3dfx? It only takes one good idea to disrupt a market, and it seems there are many paths today that could lead to that disruption. Nvidia might have a lead in AI technology; AMD has a lead in chiplet design. It's only safe to say that whatever technology helps devs make better games is going to prosper.
Neural textures? Ah, the latest hack in the saga of ever-expanding textures eating up all the VRAM and refusing to use procedural content!
I love DLSS, but I HATE when devs use it as an excuse to make performance crap.
Like, I love Control - a 2019 game that still looks better than most 2023 games, and it runs amazingly well even with all RT features enabled. I average about 80 FPS at 1440p native, and ~120+ with DLSS Quality. Why don't modern games look better or run as well? I'm still on an 8700K at 5.1 GHz, and Control has me GPU-bottlenecked.
Contemplating a tree for 5 minutes seems like exactly the kind of thing Coreteks would do. 😆
There has been a clear trend of games running poorly natively lately. I would rather go back to RDR2 graphics if it meant that games would actually run at proper frames natively.
There's no such thing as a "proper" or a "fake" frame. Rasterization is full of tricks and approximations that try to emulate realistic properties of light in screen space. It almost NEVER renders shadows, reflections, ambient occlusion, etc. at full resolution; it's all fakery. RDR2 was a swan song for rasterized graphics: they applied almost every trick in the book to achieve that. With ray tracing, or better yet path tracing, it would look miles better.
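As a toy illustration of the kind of trick being described (mine, not anything from RDR2's actual renderer): an expensive screen-space pass evaluated at a quarter of the pixels and then upsampled, which is roughly how games cheapen AO, reflections, and similar effects.

    import numpy as np

    def expensive_effect(img: np.ndarray) -> np.ndarray:
        """Stand-in for SSAO/reflections: a costly per-pixel pass (here a 3x3 box blur)."""
        padded = np.pad(img, 1, mode="edge")
        return sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                   for dy in range(3) for dx in range(3)) / 9.0

    def half_res_effect(img: np.ndarray) -> np.ndarray:
        small = img[::2, ::2]                      # evaluate the effect on a quarter of the pixels
        result = expensive_effect(small)
        return np.repeat(np.repeat(result, 2, axis=0), 2, axis=1)[:img.shape[0], :img.shape[1]]

    frame = np.random.default_rng(1).random((1080, 1920))
    ao = half_res_effect(frame)   # ~4x cheaper than full resolution, then blended into the frame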
In a thermonuclear computer the DIE is placed in an absolute vacuum of solid metal.
IDK man, they'd have to still be many times more efficient to offset the cost. The world just needs more fabs regardless how you look at it. More players means more wafer orders.
Since I rather play one or two games for 10 years, I don’t care about high tech graphics anymore. Guess I’ve grown up 😐
"A.I" was stated more times than I count on my hands, therefore, WIN!
😉
And don't forget the "main" aspect: a 4090 is fluid at 2K, a 4070 Ti at 1080p. The only people playing at 4K maxed out are running 3090 SLI, and who's doing that?
super informative video, minus my mouth!
This is all very speculative.
I think the next gold rush in video games is real-time, universal physics. Not just broad physics "nodes", but every piece of clothing, every blade of grass, every footstep taken will realistically interact with the world around it. Real-time water as well, and realistic destruction. These all exist in some advanced form already, but once the technology advances to the point that they become ubiquitous, games will really feel immersive.
Can you imagine how many uninspired, generic-looking, badly optimized and soulless games are gonna be shoveled our way?
Procedural character animation will definitely be the future
I plan to jump all over the upcoming 32" 4k 240hz OLED displays. At least one for my main rig.
3 generations with no real improvement, just escalating cost and power requirements is disheartening.
Ray Tracing seems to be a bust.
Nice idea if they could have pulled it off but at this rate it'll be another ten years before it's usable?
AI could add a great deal to gaming and in quite a few ways but we still need real improvements in graphics rendering.
Very Informative :)
Wait, are you Portuguese???
Tenstorrent is going to dominate
"stagnating" 🤔🧐 i don't disagree, it can be better. Luckily I skipped AC mirage 😅
The models look buggy; how can they be modified or optimized to remove bugs if developers don't have access to actual editors? A better implementation would be AI creation of 3ds Max models that are completely modifiable. Also, AI can't do real time; GPUs are needed for that.
Nvidia is far ahead on the AI front. They already figured out that AI is the way to overcome raw graphics limitations. One interesting thing they're pushing is Universal Scene Description (USD), probably because they want to build AI generative features for their graphics on top of it.
Sooner or later GPUs won't need to be this big or heavy, because AI will take over everything. It won't be long before AI takes over the GPU and CPU, meaning the AI revolution is not going to stop. Once AI has a foothold, we're screwed if it learns and turns its back on us lol
Not a fan of DLSS? Ahhh… that's quite a general statement now, as it's literally tied to every bit of processing tech they implement. I'm a big fan; the benefits outweigh the downsides for sure. Without DLSS we get, say, 30 fps, but 90 with the full DLSS stack (including frame generation), all day every day. Getting to the butter realm (north of 80 fps) is where I want to live in a game, and I do what I have to do to get there. For the 3080 in 2077 that means no RT, medium settings at 4K with DLSS; on the 4090 it's RT on with mid-tier DLSS. It's for sure the way to go.
I thought this vid was going to be better, or at least interesting.
The guy just talks about algorithms/data/"AI".
What about Ark: Survival Ascended? Very much next-gen graphics on Epic settings. That said, I agree AI-generated game graphics are the future. Looking at how fast Stable Diffusion is now on a 4090, we still have a long way to go before we can generate 120 fps at 4K in games :)
Finally, someone who knows wtf is going on with gaming.
I wish someone who understands all the relevant issues (manufacturing, R&D, shipping, marketing, labor, materials, etc.) could explain in clear, simple, honest terms why a graphics card costs more than an entire computer (or a used car). Compare today to 2010. The suspects I see are YT influencers, luxury marketing, and suckers, lots of suckers. Disagree? Think much?
The generated 3D models look like trash, even for a proof of concept.
It's time for NURBS.
Yeah, imagine a GPU that doesn't produce pixels at all, but instead a matrix-style "pseudo graphics" buffer that the AI then transforms into pixels. The number of computations would be completely unlinked from the output resolution; only stability would improve. What would the development tools look like then?
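A purely speculative numpy sketch of that idea (untrained random weights, hypothetical names): a fixed-size latent "pseudo graphics" buffer is decoded per pixel into whatever output resolution you ask for, so the cost of producing the latent doesn't scale with resolution.

    import numpy as np

    rng = np.random.default_rng(42)
    LATENT = rng.random((64, 64, 8))          # fixed-cost "pseudo graphics" buffer
    W1 = rng.standard_normal((8, 16))         # toy, untrained per-pixel decoder
    W2 = rng.standard_normal((16, 3))

    def decode(latent: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
        ys = np.linspace(0, latent.shape[0] - 1, out_h).astype(int)
        xs = np.linspace(0, latent.shape[1] - 1, out_w).astype(int)
        feats = latent[ys][:, xs]                       # nearest-neighbour feature lookup
        hidden = np.maximum(feats @ W1, 0)              # ReLU MLP applied per pixel
        return 1 / (1 + np.exp(-(hidden @ W2)))         # RGB in [0, 1]

    frame_lo = decode(LATENT, 360, 640)    # same latent cost...
    frame_hi = decode(LATENT, 720, 1280)   # ...different output resolutions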