I seriously recommend people start watching all videos at 2x. It was a life-changer for me when I first learned about it; I saved so much time and was able to watch so much more. Normal talking pace feels so slow without it.
PSSR has had issues in some games, but these videos ignore the ones it does very well in. That out of the way, I hope this means AMD has more fight in them, since I'd rather not have team green be the only GPUs left in the game, for my wallet's sake alone.
I think there is a fundamental failure here to recognize the increased reliance on Unreal Engine by teams with smaller budgets, which means those teams have less experience tuning performance toward their own vision versus what the engine gives them by default.
Congrats on 215k subscribers! Random question: doing the math, how long do you think it would take to hit 1 million at the current rate? 😅 I wish you all the best and am very happy for you and the work you're doing for the community.
So basically, graphics are cooked because the only way for them to sell new hardware was to deliberately cripple rasterized graphics and then rely on AI techniques to fix it? Humanity sure has mastered the art of creating fake problems in order to sell solutions. Like what are we even talking about anymore? It's always AI-generated this, machine learning that. Am I supposed to be impressed that the computer is generating textures on its own? From an artistic perspective, that seems really devoid of meaning. Whenever AI is used by a company, at best it's blatant laziness that should be treated as slop, and at worst it's them looking for ways to fire the talent while the business people get promoted, which is incredibly deplorable and damaging to gaming as a whole. It's one thing if they figured out some way to have AI speed up the optimization process without sacrificing visuals somehow, but using AI to generate more realistic textures is not something that I find interesting because it gets rid of the artistic process. I don't know, maybe this was inevitable, but if the future of gaming is fake frames and AI-generated graphics, I think I'm gonna stick with my 4070 for a very, very long time. Which I guess is a good thing, saves me lots of money.
The problem isn't tech advancement; maybe there are diminishing returns, but it's about budgeting. Cerny even said it in the Digital Foundry interview: PC hardware always brute-forces graphics, but in consoles it's about budget. Similar to cell phones, where the high end gets everything tech-wise while mid to low makes sacrifices, consoles land on mid-tier tech when new and low-tier by mid-life. Consoles will never be on par with PCs again after the 32-bit era.
I disagree. DLSS and FG are the only reasons why I play at 4K/120 on my TV using a 4070 Ti Super and a 650 W PSU. If I wanted to do so rasterised I would have needed a 4080 and a PSU upgrade, both of which cost more and would have generated more heat and noise inside my case.
You can't make that much sense in a single comment in a YT comment section. People who get paid to develop such tech cannot wrap their heads around this.
I wish Nintendo were with AMD; since their console is portable, they could get something really good in a small package instead of going Nvidia. And Nvidia with Xbox and PS5, as those want more performance, DLSS, ray tracing and that kind of thing, lol
Nintendo did get something really, really good for the purpose they wanted, and if leaks are to be believed, the next Switch is also looking really good, with a GPU faster than AMD's 890M and 12 gigs of LPDDR5-7500. As for the other two, Nvidia doesn't offer a comparable SoC design that delivers what Microsoft and Sony expect while maintaining backwards compatibility, as they lack an x86 license.
More than 1.5x play speed loses its edge if you are not a native speaker. I'll point to Spiffing Brit: he uses a lazy 0.25x speed for cheesing YouTube "features," exploiting the user-interaction watch clock that feeds the algorithm. What else did you expect?
@@DBTHEPLUG Multitasking is one way of increasing efficiency. However, you cannot focus on two things with 100% attention, so it isn't always beneficial to do more than one task at a time. Watching at 2x and getting through two hours' worth of videos in one hour is most efficient and opens up time for other things or for learning more.
@@DBTHEPLUG There were some studies about that; usually multitasking means that you do multiple things poorly, or the time to switch between tasks and gather your thoughts (aka switching cost) outweighs any benefits of multitasking. Humans are poor multitaskers; it's far better to arrange your tasks in a way that allows you to complete them efficiently one by one.
The reading comprehension of your viewers is insane. None of these are console tech; these are Radeon tech. You guys should be happy, since Radeon R&D is getting help. They have significantly fewer employees and resources compared to Nvidia. This is good news, since they will have more budget and resources to spare. Good thing PlayStation is stepping up its hardware, forcing Radeon to innovate or at least improve where it lacks. RDNA 4 will just be a sneak peek at what's to come. I am an Nvidia user, but hey, I am also a tech enthusiast.
A lot of viewers are unfortunately for the most part just tribal about PCMR. Being tribal in general immediately makes them focus on specific points here and there without thinking about the big picture, and that's very sad. The most interesting part of that presentation, even if you don't own a console or dare I say hate consoles, is that it gives glimpses of future directions for AMD and Radeon, but also for games in general. PCMR people like to forget that, like it or not, consoles are still here and will still be here for a while. Yes, they are not for everyone, but there is a demand for a simple box that sits in the living room and JUST WORKS. And this means that game development has to take both PCs and consoles into account. The fact that AMD and Sony are uniting on ML-based models because they can't fight alone, and that even Cerny admits rasterization has almost reached a dead end and RT/AI is the future, shows glimpses of what the future of gamedev may be made of.
Will any of this RDNA 2.5+ tech for the PS5 pro have any software side benefits for RDNA 3 cards? Or will this only impact RDNA 4 and beyond GPUs with specific hardware modules?
Unfortunately a bit hard to watch. I also play videos at double speed, and then the Sony guy at 4x is way too hard to understand. Trying to switch dynamically between 2x and normal speeds so both are actually 2x was too much work after the first 10 mins... sorry :(
Considering RDNA4 is the last of its lineage, and AMD is supposed to be shifting to UDNA for all GPUs after that, I figure this is Sony's play at getting some personal wish-list stuff into the mix before the first successor to the RDNA4 core gets taped out. Custom silicon may still be custom. But even Sony doesn't ship in numbers strong enough to warrant AMD going very far off their own path. Now, the thing that's bugged me for years is how AMD hasn't enjoyed better gaming uptake on the PC side as a direct result of their dominant presence in the gaming console market. However, with Sony having bought up so many game developers, that may finally change a bit. Problem is, that seems a pretty long way away, unless the PS6 is closer to ready than we realized. And to that end, maybe this presentation from Mark was more of a preamble for RDNA4's expected debut at CES. There just isn't enough information to make a confident prediction. The best I can say is that it makes sense for Sony to take interest in the expanded options and performance potential of UDNA over RDNA. And with as weird and AI-tastic as things are getting, they'll need this to keep up.
Looking at the reasoning behind the collaboration by Sony and AMD, it makes sense. Nvidia has a commanding lead over their competitors in terms of AI technologies and has enough funds to spend on R&D - AMD doesn't have that luxury. If this collab means further ML-based upscaling for consoles and PCs, it's likely going to pay off in the long term.
If Sony is right about the future of the industry going the machine learning way, and AMD instead spends their silicon on ray tracing, they're going to be even more behind Nvidia, with all the neural stuff.
Why can't we just separate ray tracing onto a separate card? Those who want ray tracing could just purchase another card solely with a ray tracing chip. Then we could plug two cards into the desktop… one GPU and one ray tracing card…
If you haven't addressed that need for speed, there are a number of free browser extensions in the Chrome web store, usable on any Chromium browser like Brave, Vivaldi, etc. I use "YouTube Playback Speed Control", and I can speed up videos to 16x lol. YT chokes on anything faster than 8x depending on your connection and YT servers.
That's cool and all, but it almost feels completely pointless to buy a console these days. Anything AMD does with console hardware will have already been done better on PC years earlier. And Sony is putting all their games on PC with significant upgrades, just as Microsoft has already been doing. It's so hard to justify buying into these closed, highly curated ecosystems that charge you to use your own internet... all while having less backwards compatibility, only one storefront with more expensive games, limited control options, no mod support, and far worse policies (like refund support). Even if AMD and Sony figure out some great AI upscaling technique to use on consoles, how far ahead will Nvidia already be with DLSS 4.0 and beyond? If anything, this tells me PSSR is a dud.
Further proof that AMD and Sony are co-developing FSR4. I think PSSR is FSR4 Beta/Lite with AMD adding to it for PC. I just hope my 7900xt works with it...
4x speed beat me; 3.5x seems OK. Sony is in an interesting place: most games are made console-first, and the PS5 is the primary console. Sony benefits when all devs make better games; being open helps them make more money. The AMD problem has always been that hardware features get used on console but can be left out of PC ports if they're missing on Nvidia. Async compute was used on console, but the GTX 10xx line missed the feature, so devs had to make games work without async compute, which can hurt AMD performance.
Nvidia does not make better graphics cards. It makes cards that are "better" at overblown reflections, fake resolution, and fake frames but which have less VRAM at a higher cost per frame. If that's how you define "better," you be you.
@rangersmith4652 And yet it makes the experience much better. All frames are created through hardware and software techniques; you fail to see that this is more of the same.
@@rangersmith4652 They do make better graphics cards. There's a very real reason why the 7900XTX fails to compete with the 4090 like it was originally intended to do.
@PCandTech-tr8xt Yeah I get your point of view but honestly I said that so I wouldn't sound crazy or random. That last part puts it into perspective that it's a joke about some niche and specific thing. I know there's a lot of young people here so maybe after that it could possibly compel them to search micro machines. It's an old commercial 😂
So the PS5 Pro is a hardware/software test platform. But I'm cautiously optimistic AMD will finally leverage their console knowledge for PC. They have full platform access: CPU, GPU, chipset. I've always wondered if they could optimize past others that only access single parts of the chain.
ML graphics, in the case of current PSSR, are a double-edged sword. It can look great in places but be totally lacking in others, in the very same game. I wish people wouldn't look for artistic shortcuts that inevitably lead to results no one can improve on because it is out of their hands. Raytracing, no matter how stunning it looks, is only a fraction of what you see during fast gameplay, but costs too many resources, so JUST SCRAP this idea until it is possible with raw power in every situation, instead of relying on estimations and the guesswork of any sort of AI.
Can anyone who is working on lightweight neural networks tell me why we don't have neural upscaling ASICs with a dedicated cache yet? Is it because we aren't sure of the best way to get good quality yet and no one wants to lock hardware in on it, or is it simply not doable?
AMD may have fewer resources to develop new technologies compared to Nvidia, but why did they start talking about neural technologies and ray tracing only in 2023-2024? Obviously, this is a strategic miscalculation. And this was greatly helped by popular video bloggers who foamed at the mouth to prove that RT is an unnecessary proprietary toy... How can you be immersed in the industry and be so short-sighted?? Fools...
This isn't surprising. (I am not an expert on anything.) Going all in on raytracing for the PlayStation 6 was a pretty obvious move for Sony, even just with the current state of the art, before you consider the possibility of making further advances. PlayStation largely isn't held back by the things that have caused PC hardware and software vendors to go slow with raytracing: no installed base of existing graphics cards to support, less need to support old games, no PC gamers huffing that raytracing sucks. That means Sony can provide game studios with a low-compromises raytracing machine and encourage them to stop dragging along the boat anchor of trying to support rasterisation and RT in the same game. (There are still portability risks to publishers in pushing _too_ far ahead of what can be replicated on PC or the next Xbox, but those will be fairly manageable by early 2027, and of course true PS exclusives won't have to worry about that.) And the big risk to Sony isn't from going all out for a big generational leap from the PS5; it's from not doing so. Another generational improvement that feels incremental and meh is likely to hurt PlayStation: raytracing is the best and really the only chance to deliver something like the staggering improvements of older console generations.
I watch YT at 1.25; any faster than that and too many people come across as unnatural in how they speak (there are enough people who speak quickly enough that normies at 1.25 still sound normal). I'm so fed up with all this AI bullshit with fake frames and fake resolution, just for the game devs to keep getting away with dogshit optimisation, using the upscalers and fake frames as a crutch for their shitty games. We've not made any progress since the mid-2010s and it's really depressing; we should be getting way better performance out of the hardware than we currently do.
Sony have a lot of experience in the TV sector with AI upscaling; they have been using LG's AI processor with custom firmware on most OLED screens since 2018. It was LG and Samsung who first introduced this type of video upscaling into the TV market, and this is just a carry-over into the gaming sector. Sony are also renowned for that firmware giving the best results on the same OLED panel as the equivalent TV from LG; the only thing they don't do is protect their TVs from burn-in properly like LG do, so it's swings and roundabouts. However, it's 2024, why even need AI upscaling in the first place? It should all be native. You have games from over a decade ago on 360/PS3 looking better in native than this horrible FSR upscaling that is currently being used. Some games on the Series S are running at 650p, which is just madness, even though it was advertised as a 1440p machine; that's just hyperbole. Even a 4090 can't hit 1440p 60fps in some titles due to lazy devs taking the easy route and using DLSS as a fallback, if they even add it, like BGS did with Starfield.
Given how poorly PSSR is performing currently, why on earth would anybody be excited? Intel's XeSS and even Microsoft's DirectML super resolution both performed much better in their first generations.
I watch your videos at 1.75x speed. It was annoying to have to switch speeds back and forth so I decided to watch the Sony video first. At 1.5x. The guy talks plenty fast.
AMD was never going to catch up to Nvidia by themselves. They just don't have the size and R&D budget to match. Partnerships like this are the best way to fix that. And Sony gets new tech with less R&D investment needed for their products.
They actually do now, but these plans were laid 3-5 years ago. AMD didn't know that Intel would have a short blip with 12th gen and then go back to sucking balls. But AMD split gaming and datacenter GPUs off from one another because one wasn't making money and the other was. AMD absolutely has the grunt now, but they don't see gaming GPUs as ever becoming a "real" cash cow. Nvidia, btw, thinks the same: their margins on gaming GPUs suck compared to what they make on workstation and data center GPUs. Which makes sense. They're gigantic dies, and gamers expect to get them cheap. And since Dr. Su is a fiscally responsible lady, she insists that the Radeon division must pull its own weight financially, so trying to partner with as many as possible to gang up on Nvidia makes a lot of sense.
@@andersjjensen this makes complete sense
They can't catch up with DLSS. It has had a few years more to train, and Nvidia supposedly has much bigger capabilities to do so.
PSSR clearly shows that.
The only way it could happen is through incompetence or malice from Nvidia. Or diminishing returns in training.
Expecting anything other than "AMD still not as good / AMD disappoints" clickbait as the end result seems foolish to me.
@@damianabregba7476 Yeah, Nvidia would have to pull an Intel and stagnate, and AMD would need a generational leap or two
@@damianabregba7476 I think there are diminishing returns in training though
It's obvious now why RDNA 3 was neglected. The timing and schedule of PS hardware and the co-development of RT saved AMD's underfunded Radeon division's schedule, letting them catch up on the RT feature set, denoisers, and the math/engineering that came from the vector math work, etc., and release it with RDNA4.
Radeon's best gens have been those that released alongside new consoles.
I have to say that I expect little. AMD has a track record of releasing unfinished but promising technologies and not following up well, such as FSR and Anti-Lag 2. If Sony is involved, maybe chances are higher, but Nvidia is a trillion-plus dollar company. And Sony just released the PlayStation 5 Pro with a less than stellar performance improvement. "Expect little and be gladly surprised if they surpass it" is my motto.
The performance gains with the PS5 Pro are significant. It's what you make of it that matters. Game developers still put all their effort into the base console, as they clearly should.
@@stefanschuchardt5734 An 800 to 900 dollar console that delivers a 20%+ average performance gain and an extremely controversial raytracing improvement, for 2x the price. A pretty bad value proposition for consumers.
Is RDNA4 AMD's Snyder's Director's Cuts?
Lisa Su used to work for Sony. She was the lead designer/ engineer of the PS3's main chip, as I understand it, so AMD's connections with Sony effectively go back well before the development of the PS4.
Wait really? It's something I didn't know and very interesting!
Dr Su worked for IBM, not Sony. Sony, Toshiba and IBM co-developed the Cell Processor that was used in the PS3.
Cell processor was from IBM, where Su worked.
IBM @@ayac.4998
budget Elton John
Lmaoo
Elton John at home
hahaha
Yoooo is that why he looks so familiar?? Lmao
I thought it was Dana Carvey
It's crazy that you can genuinely listen to videos at 2x speed. I just can't. Can't really understand what they're saying nor do I even want to listen to that in 2x. It sounds.. uncomfortable.
I can't do it, either. Maybe people who do tend to have subtitles on (as Daniel does here) to help them follow along. That kind of prevents you from multitasking in other tabs/apps while you listen, though, so I'm not sure how much time you actually end up saving. Also, if you're rewinding or pausing to read the subtitles because it's going a little too fast, that also eats away at the time savings.
My girlfriend thinks the same. She can't understand what's being said. She can however spell out whole sentences and I'll lose whatever she's saying after like 3 letters lol.
I can listen to Daniel at like 3x lol
I can easily understand what's being said, but it's still uncomfortable. Especially when most of my YouTube watching is listening in the background while doing some other task. Not possible to do that at 2x speed when it requires your complete attention.
I watch videos at 2x speed when I'm not planning on sitting for long or it doesn't require my full attention.
I just hope the future is not poor optimization, noise, temporal smudges, flicker and games that somehow have less fidelity with 10x the computing power.
Yeah with them emphasizing ML/RT I'm not too hopeful about all this.
As long as that sells more GPUs, you can bet that's exactly what you'll get.
Watching Mark Cerny through you at 4x and still understanding him makes me worried..
How do you understand that? I can barely keep up at 2x
That's just a lie lol
@Davinmk Any avid gamer who has taken a course or two in ML/AI should be able to understand this.
I can't really understand Cerny at 4x speed. It's actually really funny. It sounds like he's speaking a foreign language, and then being translated.
@Davinmk They have ADHD.
When Ray Tracing is explained like that, I can see why it's so hard to run. The process looks so inefficient and convoluted I'm surprised it works in real time.
But I guess that is the MOST efficient way to do it right now, otherwise someone would've already found a better way.
I imagine the next step would be to embed more properties into the assets themselves to cut some time on checks and all that.
But that would mean that the software isn't doing everything automatically and we don't do that round here.
It's not the most efficient way. UE5 is anything but efficient. It's the way Nvidia enforced raytracing on all games.
*Coughs in Threat Interactive*
Honestly, it is never going to be fully efficient due to the divergent nature of RT (the full kind of RT, not just sun shadows). To combat the inefficiency, instead of fully resolving an image or RT effect, comes the denoiser. The denoiser alone is not enough for real-time applications, so next comes accumulation, which uses data from multiple frames to help resolve the image of the current frame. What you end up with is a result that is smooth but a bit laggy. A game with full path tracing will have noticeable lag in lighting and reflection resolve (they need time to resolve fully) and also becomes less detailed, especially in movement.
Unless someone comes up with a fundamentally different way of doing RT, I don't think this problem can be fully fixed, but it should be reducible with faster hardware and probably more AI to fill in the gaps.
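To make "accumulation" concrete, here is a minimal sketch of the usual exponential-moving-average blend. Everything here is illustrative (the Vec3 type, the function names, and the alpha value are assumptions, not any engine's actual API), but it is the core idea behind temporal accumulators:

```cpp
// Temporal accumulation sketch: blend this frame's noisy ray-traced
// sample into a persistent per-pixel history buffer.
struct Vec3 { float r, g, b; };

Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Small alpha -> smooth but laggy lighting (the tradeoff described
// above); large alpha -> responsive but noisy.
Vec3 accumulate(const Vec3& history, const Vec3& noisySample,
                float alpha = 0.1f) {
    return lerp(history, noisySample, alpha);
}
```

With alpha = 0.1, roughly the last ten frames dominate each pixel, which is exactly why lighting and reflections take visible time to "resolve" after the scene changes.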
@brando3342 But that guy talks about baked lighting, not Lumen/raytraced GI
The most efficient way is to implement ray tracing in the game itself and not require an Nvidia-only GPU to do good RT. That is why AMD open-sourced FSR. You can see that modded-in FSR is often better implemented than what the devs themselves ship, LOL
missed opportunity not calling it HOLLOW PURPLE
Domain expansion:..... *copyright infringement*
@@panakajack1525 lmaoooooo
Dodged a bullet there
And ending like the user of that technique?
@@Proaz15 don't hurt them too bad
This is RDNA 4, or at least a version of RDNA4's raytracing, soon to be UDNA.
Great news, INTC and AMD are making decent strides. We as the consumer need to hope the monopoly gets weakened.
Hoping for the monopoly to be weakened is pointless. We have to actively weaken it. We do that by not buying Nvidia.
@@rangersmith4652 In some use cases that's just impossible; there weren't any reasonable alternatives.
Only if the price remains as competitive as Intel's. Undercutting Nvidia by 50 dollars will not work.
@@mikelay5360 Yeah, they just wait for Nvidia to launch/price and then follow. If the 5070 is $699, the 8800 XT will be $599/$649, so you might as well buy Nvidia, which is what people do
@@rangersmith4652 Correct me if I'm mistaken, but I believe most of their consumers are corporate entities and not gamers. We don't really matter to them anymore. However, if Intel (INTC) and AMD can create viable products and features for the corporate space and offer real competition in that market, it may encourage Nvidia to make a greater effort with gamers. We're only approximately 17 percent of their revenue now.
Well, the strategies used in the PS5 Pro & PS6 are going to be very different with a constrained SoC, compared to a full-fat PC architecture.
It's very clear that Sony does not want to pay for the extra silicon acting as a separate frame buffer on SoC. This is achievable with a 128MB Vcache acting as a frame buffer but that will make the SoC extremely expensive and add $300-400 to the base price of the console.
Using the Register buffers is quite smart but comes with other performance considerations because switching context in the program requires emptying the register buffers and bringing back the register data for the previous instruction.
Anyway, AMD can steal some ideas from PSSR and Ray Tracing optimisation but the hardware on the PC side will be general, NPU on the CPU for LLM for NPC interaction, RT & Vector processing on the GPU to handle all the graphical AI stuff. GPU using GDDR6/7 also has much greater bandwidth compared to the DDR5 used in consoles.
What this means for PC is, in a less memory-constrained environment, AMD can leverage the VRAM to a much higher degree and optimise the driver code to use less RAM to perform the same tasks like upscaling, frame gen & ray reconstruction. This will be very helpful for low/mid-range GPUs with 12-16GB of VRAM when FSR4 comes out.
Another thing about AMD AI: ROCm is open source. Sony may be forced to use an open license for the hardware because the underlying driver for RDNA2 / RT / vector processing is open source. Only the custom silicon can be closed source, so Sony probably has to yield a lot of control to AMD. This is great news for everyone involved with graphics, because console developers are well known for squeezing every drop of performance out of the silicon, compared to the more relaxed developers in the PC space who have the luxury of tons of memory.
Now, to be clear, I am NOT knocking PC devs. PC games pioneered rich open-world games because PC devs concentrate on populating huge worlds with content, with fewer worries about optimising the game to use less RAM and VRAM. That's why PC games and console games are so different.
Regarding the openness of the CNN stuff, it is going to be proprietary to AMD due to the driver. I am not aware that ROCm can run on Nvidia or Intel hardware. However, the mathematical concepts and logical workflow of BVH (or using register buffers as a render buffer, as discussed in this video) are open, and any GPU maker or game dev can opt to use those concepts in their graphics pipeline.
Translation: I'm smart.
The end.
I was really concerned we'd never get to see these presentations from Cerny due to all the idiots throwing a tantrum about not seeing the plastic casing in his previous presentation. I really enjoyed it and will probably watch it again.
Ok
Just realized that the part about free use for the learnings etc. sounds like what UDNA would be like. I think that essentially confirms UDNA, probably in place of RDNA5.
I really hope so, since chiplets shared with the data center are the only way for AMD to be able to take on Nvidia.
One thing: the improvement is not that the BVH now holds twice the nodes per level, it is that the ray intersector can do double the intersections, thus making it possible to build a wider and shorter BVH. 4->8 parallel ray-box intersections and 1->2 parallel ray-triangle intersections per cycle, that's the key.
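To see why doubling the per-cycle intersections lets the tree get wider and shorter, here's a hedged sketch (the types, memory layout and function below are my own assumptions for illustration, not AMD's actual hardware): one ray is tested against all eight child boxes of a node at once, so each level can hold twice the children of a 4-wide node and the tree ends up shallower.

```cpp
#include <algorithm>

// Ray with precomputed inverse direction (idx = 1/dx, etc.) for slab tests.
struct Ray { float ox, oy, oz, idx, idy, idz, tmax; };

// 8-wide BVH node: bounds of 8 children stored SoA, so all eight
// slab tests can run side by side (SIMD or fixed-function lanes).
struct WideNode {
    float minx[8], miny[8], minz[8];
    float maxx[8], maxy[8], maxz[8];
};

// Returns a bitmask of which child boxes the ray hits. Hardware that
// does 8 of these per cycle instead of 4 lets each level hold twice
// the children, so each ray takes fewer traversal steps to a leaf.
unsigned intersect8(const Ray& r, const WideNode& n) {
    unsigned mask = 0;
    for (int i = 0; i < 8; ++i) {  // conceptually parallel lanes
        float tx0 = (n.minx[i] - r.ox) * r.idx;
        float tx1 = (n.maxx[i] - r.ox) * r.idx;
        float ty0 = (n.miny[i] - r.oy) * r.idy;
        float ty1 = (n.maxy[i] - r.oy) * r.idy;
        float tz0 = (n.minz[i] - r.oz) * r.idz;
        float tz1 = (n.maxz[i] - r.oz) * r.idz;
        float tnear = std::max({std::min(tx0, tx1), std::min(ty0, ty1),
                                std::min(tz0, tz1), 0.0f});
        float tfar  = std::min({std::max(tx0, tx1), std::max(ty0, ty1),
                                std::max(tz0, tz1), r.tmax});
        if (tnear <= tfar) mask |= 1u << i;
    }
    return mask;
}
```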
This says more about how AMD is doing in this space than it does about PlayStation. Cerny is pushing the boundaries of console gaming
Cerny is a hack that lies constantly lmao of course you’re praising a false idol
These projects don't matter when the games are terribly optimized by developers
Then don't buy unoptimized slop
@DragonOfTheMortalKombat a decent amount of new games fall into the category of "unoptimized slop". If new games are going to run badly anyway there's less reason to upgrade.
Also, AMD should remember the cloud server chip it made for Microsoft Azure. It has the "fully fused network"; the issue is cost. The PS5 Pro doesn't need 340GB of on-CPU-package HBM3, so it can probably cost much less than the chip in the Azure cloud. So they should already know an answer or a few answers; the real questions are how low-priced AMD is willing to sell it, and what their cloud server clients will say to them putting HBM3 on a console before an everyone-can-buy-it server chip.
Why do I say HBM3 and not GDDR7? Yes, GDDR7 is fast, but it has the problem of not having a wide enough bus for the NPU, and on graphics cards that use it, for the tensor processing units. Nvidia had to get creative to get AI upscaling to work on a consumer card. In short, it takes an image that can fit into 1MB to 10MB and streams it to the tensor processors to be upscaled to a much bigger image at the end of post-processing.
Cerny looks like the tech-version of Dana Carvey. I don't know about you, but I have never seen them in the same room before.
SNL? 🤭
Cerny (Černý) stands for BLACK in Czech, where it comes from
Do you know about the "Video Speed Controller" add-on? I always watched YouTube at 2x as well, but sometimes even that is too slow. With the add-on you can speed it up to 16x. I have it bound to speed up / slow down on two of my mouse buttons. Makes watching content so much more time-efficient. Also, the add-on works on pretty much any video on the internet, not just YouTube.
I will sometimes watch YouTube videos with unnaturally slow dialogue at 1.25x or even 1.5x in extreme cases, but Daniel, that 2x speed was insane 😅 I can't believe you watch everything like that lol.
I've been watching almost everything at 2x for years, even though English is not my native language and I watch mostly English videos.
Not gonna lie, it's hard not getting excited about this lol
This could very well become a transformative step not only for console gaming but also for the overall gaming space and many many devs across basically all levels.
Knowing that Mark Cerny is fully focused on working on stuff like this is great to see, especially since they already have a pretty clear vision and path for how to move forward.
But I agree, the PS6 will be VERY interesting to see; if they can stick the landing with that machine in combination with Amethyst, it might well become an incredibly powerful piece of hardware that could REALLY push the boundaries of console gaming once again.
**Sony:** "We sold you a $700 PS5 Pro with 'machine learning capabilities', because who doesn't love a console that can learn from you while you game? And yes, it's all part of our master plan to experiment on you, gather data, and, let's be honest, squeeze every last penny out of your pocket. 🎮💸" And get our sheep ready for the PS6
"If you're watching me at 2x, watch him at 4x" is the best part.
@23:35 So all we need is 128MB of cache on die? Well, AMD already has a 96MB solution, and that means we're not far off, boys
Still doesn't quite matter; Nvidia is the one hindering performance behind their overbudgeted GPU paywall. There's absolutely no reason why games should run worse on GPUs that are way better than their Nvidia counterparts. But it is Nvidia """sponsoring""" those titles after all, so these advancements don't matter outside of consoles as long as Nvidia keeps "suggesting" that the developers they partner with give the competition the short end of the stick when it comes to optimizations and GPU usage.
It's worse than that. Nvidia sends engineers, for free, to "help" with engine modifications that benefit their hardware.
90% market share is very persuasive.
@@mikelay5360 Yeah, but aren't like 45/50% of those users per the Steam survey still using the 1080 Ti/1650/1660? It isn't like they are taking any advantage of RT, DLSS or FG
@@praisetheoak NVIDIA has been at it since those days. It didn't start with the 4000 series. Also the 4090 is more popular than any current gen AMD cards on that survey, if you choose to believe it. I would say that game devs collect their own data during or after installation, They know where the masses are.
@@praisetheoak 1080 Ti, 1650 (both variants) and 1660 (all 3 variants) account for 9.38% of all Steam users as of November according to my quick napkin math. Nowhere near 50% lol. Another bit of quick math had non-RT capable GPU owners at slightly below 30%, and a lot of those are people who either have integrated graphics or have clearly given up on playing new games years ago (GTX 970, RX 550...)
2:22 I actually watch in 2x. Impossible to understand in 4x 😂😂
same haha
@@neoey2x Makes me anxious AF. Most YouTube videos I watch, I have in the background to relax while driving or whatever. Not worth it to me. I run 1.25 or 1.5 often, depending on the speakers.
I think the main problem with RDNA 2 and RDNA 3 is that the ray and AI (for RDNA 3) accelerators are tied to the number of CUs on each GPU, which limits what Radeon can do in ray tracing and upscaling scenarios (AI FSR is not released yet, but RDNA 3 will definitely also have it, given that they already have the cores, just not the task).
If you look at how the PS5 Pro got to 300 TOPS of INT8, plus what it takes to actually feed them, you probably need RDNA4 for AI FSR.
RDNA3 won't have AI upscaling in their HW. Maybe it'll be software based, but definitely not HW like RDNA4.
Cool that Dana Carvey works at AMD now
So, reading between the lines, I see that Sony wants a unified architecture to make porting their games to PC easier... which is good news, I don't like exclusivity.
22:30 The PS5 does not have separate memory for the system and graphics.
PSSR in PS5 PRO looks so bad. FSR already looks better, I don’t know what they did.
Sony wants to do something that benefit the whole gaming industry? like blocking 180 countries from playing their exclusive PSN games? LMAO
Oh get over it, trying too hard lol.
@@TakaChan569 well hopefully Sony will get over the fact that many people like me will never support them again.
@@n9ne cool, from the sounds of it you never were in the first place and are just looking for internet points...but hey you do you.
@@TakaChan569 what has me not liking certain business practices to do with virtue signaling? or whatever you're trying to say..
Bro, get off your high horse
When I was doing my computer science degree in the late 90s, ray tracing was something so computationally expensive it was only used in images produced by PhDs in their research into ray tracing. It was something the profs might occasionally mention in conversation outside of lectures, but it wasn’t mentioned in any lecture, even in my computer graphics class. I honestly never thought we’d get to a point where there could be ray tracing in a video game. But I’m not an optimist about technology.
So… given the expected price for the 5090 (based on the leaks), and the middling stats for the 5080 and below (considering the 4000 series), I’m beginning to wonder if consoles will be the future of high-end gaming.
If I buy a new console every 6 to 8 years, that console could contain THE cutting edge video card inside it, and be more affordable than THE cutting edge video card in a PC. There would be manufacturing efficiencies gained through selling more cards, and doing so over 6-8 years, instead of the current 2-year lifespan of video cards. There would even be programming efficiencies gained through a 6-8 year lifespan of programming for that card.
I applaud AMD and Sony for working together on quasi-open standards to improve gaming for both PCs and consoles. But, you know, if the 6090 launches at $3000, and the PS7 is less powerful than a PC, and 4K gaming is still only available via upscaling, maybe the gaming hardware people need to consider a new approach, and the game designers need to reconsider ray tracing.
I’m not entirely sure this was the right video for this comment… it’s just something I’ve been mulling over for the last couple weeks…
Thanks Sony for supporting AMD :)
Once again here to request a look at the history of generational GPU price:performance improvement over the years 🙏
This would definitely be nice to look at. Especially comparing msrp prices
Sounds like AMD needs Sony more than Sony needs AMD
We're lucky to have PlayStation to motivate AMD to focus and improve their ATI GPU division at least a little bit.
For some reason they go all out on Ryzen only. Unlike Intel, Nvidia is not sleeping, at least for now. The only way they can have a Ryzen moment with Radeon is if they just... get better, or if Nvidia repeats the same mistakes as Intel, maybe even encountering brain drain
The problem with the idea of quantum leaps being available in AI is that AI capability is necessarily tied to hardware advancement. As someone who researches AI, I don't think there are "quantum leaps" available for consumers. There are "quantum leaps" available in cost-cutting. There are "quantum leaps" available for investors. There'll be meaningful improvement for consumers, but I wouldn't hold your breath on "quantum leaps".
I get what you mean on the desktop segment because it is very general, but you need to think differently when it comes to AMD, and especially Sony/PS, imo. They can change stuff for consoles that only really works well, or especially well, on consoles.
37:47 I think he means AMD's UDNA, which the PS6 will use.
Oh great, more garbage AI instead of actually good, powerful hardware 😒
This is my main issue with the coming generations. It seems that AI has contaminated every product tier. People do not understand that AI capabilities come at the cost of GPU die space, which translates into smaller traditional computational gains and higher prices due to process complexity. Cheaper solutions can handle typical GI, shadows, and reflections with minimal cost to traditional computational power, and provide similar results for the vast majority of gamers. This is a solution in search of a problem, a problem that could be solved with simple smoke-and-mirrors tricks. But nooooooo, let's try to emulate real light behavior on a frame-by-frame basis on a single consumer unit, for the typical gamer who will be annoyed that it costs him two thirds of his performance while making a blur of everything.
This is marketing. Keeping the fire lit between generations. It's not about names or targets. It's about tiny algorithms and chipsets in the end. Which just aren't there yet.
To avoid confusing anyone, make it clear that when you talk of system memory (22:20) you mean the GDDR6(7) on the GPU. It's clearly labeled as GDDR6 in the picture, so it should be clear, but usually when someone talks of system memory they mean the main memory of the computer, not the VRAM of the GPU. Adding more cache memory would make the GPU bigger and more expensive. It should be possible to add Infinity Cache, which has 17.2 TB/s of bandwidth, on a separate chiplet, but not with the RDNA4 RX 8000 GPUs, which will have a monolithic design and are not meant for the high-end market. I guess there will be a tradeoff between cost and cache size. Double the RT speed should be possible with RDNA4, and I hope the new GPUs will be in the 400-700 € range, but I guess that will depend on how fast the GPUs are in general; at the lower end, Intel's Battlemage may help keep prices down.
The "system memory" Cerny in this video is referring to PS5 Pro's single pool of 16GB GDDR6 memory, not PC's main memory or the dGPU's VRAM.
@@AnalogFoundry Cerny? I meant Daniel Owen; the timestamp is there, 22:20.
Besides, the PS5 Pro is basically an APU: it uses the 16GB GDDR6 for graphics and has 2GB of DDR5 for operational purposes.
@@MK-xc9to - you mean Daniel Owen? I'm aware of the PS5 Pro specs.
Yes, I meant Daniel Owen, and because this channel is a PC gaming channel it can be slightly confusing when he talks of "system" memory and means the GDDR6 VRAM instead. The video is only superficially about the PS5 Pro; Daniel suggests that AMD and Sony could learn from each other, because the next console will very likely have an AMD APU as well. The PlayStation is AMD hardware with Sony's software.
2x YouTube supremacy. 4x mode when???
lol no
I have a 5x speed option
Love your channel, keep it up man.
HAHAHA! You being the mouse pointer with your finger pointing is hilarious and brilliant! Love your channel and your insight into this sort of stuff; you break it down so that an older guy like me can understand. Thank you bro! 👍
OMG, I normally love your videos, but the super-high-pitched, sped-up Mark Cerny was HORRIBLE to listen to. Had to stop.
Please don't do that again!
I seriously recommend people start watching all videos at 2x. It was a life-changer for me when I first learned of it; I saved so much time and was able to watch so much more. A normal talking pace feels so slow without it.
PSSR has had issues in some games, but these videos ignore the ones it does very well in. That out of the way, I hope this means AMD has more fight in them, since I'd rather team green not be the only GPUs left in the game, for my wallet's sake alone.
I think there is a fundamental failure here to recognize that there is an increased focus on Unreal Engine among teams with smaller budgets, which means those teams are going to have less experience tuning performance for their vision within the engine.
Congrats on 215k subscribers! Random question: doing the math, how long do you think it would take you to hit 1 million at the current rate? 😅
I wish you all the best and am very happy for you and the work you're doing for the community.
So basically, graphics are cooked because the only way for them to sell new hardware was to deliberately cripple rasterized graphics and then rely on AI techniques to fix it? Humanity sure has mastered the art of creating fake problems in order to sell solutions.
Like what are we even talking about anymore? It's always AI-generated this, machine learning that. Am I supposed to be impressed that the computer is generating textures on its own? From an artistic perspective, that seems really devoid of meaning.
Whenever AI is used by a company, at best it's blatant laziness that should be treated as slop, and at worst it's them looking for ways to fire the talent while the business people get promoted, which is incredibly deplorable and damaging to gaming as a whole. It's one thing if they figured out some way to have AI speed up the optimization process without sacrificing visuals somehow, but using AI to generate more realistic textures is not something that I find interesting because it gets rid of the artistic process.
I don't know, maybe this was inevitable, but if the future of gaming is fake frames and AI-generated graphics, I think I'm gonna stick with my 4070 for a very, very long time. Which I guess is a good thing, saves me lots of money.
"Creating fake problems in order to sell solutions"? If you are talking about COVID and climate change, then you are spot on.
The problem isn't tech advancement; maybe there are diminishing returns, but it's about budgeting. Cerny even said it in the Digital Foundry interview: PC hardware always brute-forces graphics, but on consoles it's about budget. Similar to cell phones, the high end gets everything tech-wise while mid to low makes sacrifices; consoles land on mid-range tech when new and low-range by mid-life. Consoles will never be on par with PCs again after the 32-bit era.
I disagree.
DLSS and FG are the only reasons I can play at 4K/120 on my TV using a 4070 Ti Super and a 650W PSU.
If I wanted to do that rasterized, I would have needed a 4080 and a PSU upgrade, both of which cost more and would have generated more heat and noise inside my case.
You can't make that much sense in a single comment in a YouTube comment section.
People who get paid to develop such tech cannot wrap their heads around this.
Mark Cerny on 2x speed is basically Mordin Solus from Mass Effect.
I wish Nintendo were with AMD; since their console is portable, they could get something really good in a small package instead of from Nvidia.
And Nvidia with Xbox and PS5, as they want more performance, DLSS, ray tracing and those kinds of things, lol.
Nintendo did get something really, really good for the purpose they wanted, and if the leaks are to be believed the next Switch is also looking really good, with a GPU faster than AMD's 890M and 12GB of LPDDR5-7500. As for the other two, Nvidia doesn't offer a comparable SoC design that delivers what Microsoft and Sony expect while maintaining backwards compatibility, as they lack an x86 license.
More than 1.5x play speed loses its edge if you are not a native speaker. I'll side with the Spiffing Brit: he uses a lazy 0.25x speed for cheesing YouTube "features."
Exploiting the clock of user interaction which feeds the algorithm, what else did you expect?
The fact that you watch youtube in 2x speed leads me to think you may be an actual psychopath lmao
ain't nobody got the time for 1x
Multitask @@aNINETIEZkid
@@DBTHEPLUG multitasking is one way of increasing efficiency. However, you cannot focus on 2 things with 100% attention so it isn't always beneficial to do more than 1 task at a time.
Watching at 2x and getting through 2 hours' worth of videos in 1 hour is most efficient, and it opens up time for other things, or for learning more.
@@DBTHEPLUG There have been studies about that: usually multitasking means you do multiple things poorly, or the time to switch between tasks and gather your thoughts (aka switching cost) outweighs any benefits of multitasking. Humans are poor multitaskers; it's far better to arrange your tasks in a way that lets you complete them efficiently one by one.
I watch at 2.3x you slo mo 😤
The reading comprehension of your viewers is insane. These are not console technologies; these are Radeon technologies. You guys should be happy, since Radeon R&D is getting help. They have significantly fewer employees and resources compared to Nvidia, so this is good news: they will have more budget and resources to spare. It's a good thing PlayStation is stepping up for its hardware, forcing Radeon to innovate, or at least improve where it lacks. RDNA 4 will just be a sneak peek at what's to come. I am an Nvidia user, but hey, I am also a tech enthusiast.
Whose fault is that? AMD dropped the ball with wannabe prices. The gaming market used to be 50/50 back when AMD bought ATI.
A lot of viewers are unfortunately, for the most part, just tribal about PCMR.
Being tribal in general immediately makes them focus on specific points here and there without thinking about the big picture, and that's very sad.
The most interesting part of that presentation, even if you don't own a console or, I dare say, hate consoles, is that it gives glimpses of future directions for AMD and Radeon, but also for games in general.
PCMR people like to forget that, whether they like it or not, consoles are still here and will be for a while. Yes, they are not for everyone, but there is demand for a simple box that sits in the living room and JUST WORKS.
And this means that game development has to take both PCs and consoles into account.
The fact that AMD and Sony are uniting on ML-based models because they can't fight alone, and that even Cerny admits rasterization has almost reached a dead end and RT/AI is the future, gives a glimpse of what the future of gamedev may be made of.
Bro, I usually watch at 2x speed, and you watching Cerny at 2x speed while I'm watching you at 2x is kind of cartoonish. 😂
man I just wanna play games at 60fps with 2016-2019 era fidelity
Dana Carvey is that you😁
Will any of this RDNA 2.5+ tech for the PS5 Pro have any software-side benefits for RDNA 3 cards? Or will it only impact RDNA 4 and later GPUs with specific hardware modules?
Unfortunately a bit hard to watch. I also play videos at double speed, and then the Sony guy at 4x is way too hard to understand. Trying to switch dynamically between 2x and normal speed so that both are actually 2x was too much work after the first 10 minutes... sorry :(
Can you imagine Nvidia upscaling old Nintendo games on Switch 2 with ML?
Considering RDNA4 is the last of its lineage, and AMD is supposed to be shifting to UDNA for all GPUs after that, I figure this is Sony's play at getting some personal wish-list stuff into the mix before the first successor to the RDNA4 core gets taped out. Custom silicon may still be custom, but even Sony doesn't ship in numbers strong enough to warrant AMD going very far off their own path. Now, the thing that's bugged me for years is how AMD hasn't enjoyed better gaming uptake on the PC side as a direct result of their dominant presence in the gaming console market. Hooowever, with Sony having bought up so many game developers, that may finally change a bit. Problem is, that seems a pretty long way away, unless the PS6 is closer to ready than we realized. And, to that end, maybe this presentation from Mark was more of a preamble to RDNA4's expected debut at CES. There just isn't enough information to make a confident prediction. The best I can say is that it makes sense for Sony to take an interest in the expanded options and performance potential of UDNA over RDNA. And with as weird and AI-tastic as things are getting, they'll need this to keep up.
Man that paused closed caption had me fucked up
Looking at the reasoning behind the collaboration by Sony and AMD, it makes sense.
Nvidia has a commanding lead over their competitors in terms of AI technologies and has enough funds to spend on R&D - AMD doesn't have that luxury.
If this collab means further ML-based upscaling for consoles and PCs, it's likely going to pay off in the long term.
If Sony is right about the industry going the machine-learning way, and AMD instead spends their silicon on ray tracing, they're going to fall even further behind Nvidia and all its neural stuff.
Why can't we just separate the ray tracing technology onto another card? Those who want ray tracing could just purchase a second card with a dedicated ray tracing chip, so we'd plug two cards into the desktop: one GPU and one ray tracing card.
I've always wanted to watch YouTube at above 2x speed!
forealz. Some people talk so slow. 🤣🤣
[Cough]Gamers Nexus[/Cough]
If you haven't addressed that need for speed, there are a number of free browser extensions in the Chrome Web Store, usable on any Chromium browser like Brave, Vivaldi, etc. I use "YouTube Playback Speed Control", and I can speed videos up to 16x lol. YT chokes on anything faster than 8x, depending on your connection and YT's servers.
I have a modded YouTube and it has a 5x speed option 💀
That's cool & all, but it almost feels completely pointless to buy a console these days. Anything AMD does with console hardware will have already been done better on PC years earlier.
And Sony is putting all their games on PC with significant upgrades, just as Microsoft has already been doing. It's so hard to justify buying into these closed, highly curated ecosystems that charge you to use your own internet...
All while having less backwards compatibility, only one storefront with more expensive games, limited control options, no mod support, and far worse policies (like refund support).
Even if AMD & Sony figure out some great AI upscaling technique to use on consoles, how far ahead will Nvidia already be with DLSS 4.0 & beyond? If anything, this tells me PSSR is a dud.
How am I supposed to be impressed with all this when we're still talking about games running worse on the PS5 Pro?
Further proof that AMD and Sony are co-developing FSR4. I think PSSR is FSR4 Beta/Lite with AMD adding to it for PC. I just hope my 7900xt works with it...
We just got a glimpse of what Mr. Owen sounds like in class. Makes sense ?
4X speed beat me, 3.5X speed seems ok.
Sony is in an interesting position: most games are made console-first, and the PS5 is the primary console. Sony benefits when all devs make better games; openness helps them make more money.
AMD's problem has always been that hardware features get used on console but can be left out of PC ports if they're missing on Nvidia. Async compute was used on console, but the GTX 10xx line lacked the feature, so devs had to make games work without async compute, which can hurt AMD performance.
This is good news, the fact AMD is so far behind DLSS is one of the biggest reasons Nvidia makes better cards right now
Nvidia does not make better graphics cards. It makes cards that are "better" at overblown reflections, fake resolution, and fake frames but which have less VRAM at a higher cost per frame. If that's how you define "better," you be you.
@rangersmith4652 and yet it makes the experience much better.
All frames are created through hardware and software techniques; you fail to see that this is more of the same.
@@ehenningsen I'm convinced anyone who defends upscaling tech uses a TN panel. Upscaling looks like garbage on an OLED.
@@rangersmith4652 They do make better graphics cards. There's a very real reason why the 7900XTX fails to compete with the 4090 like it was originally intended to do.
@@JohnSmith-ro8hk People who blow their budget on a GPU usually don't have much left for a decent display lmao.
Daniel checking his stream computer every other second, like he's obsessed with it. I'd say he's had problems with it one too many times 😂
2x speed makes it sound like *Micro* Machine learning.....If you know you know 🏎️
Bro, not everyone likes the whole 'I’m special because I know this' act. Just share the joke without the extra ego boost
@PCandTech-tr8xt Yeah I get your point of view but honestly I said that so I wouldn't sound crazy or random. That last part puts it into perspective that it's a joke about some niche and specific thing.
I know there are a lot of young people here, so maybe after that it could compel them to search for Micro Machines. It's an old commercial 😂
@@SmoothSportsGaming got it, thx
@@SmoothSportsGamingThe tiny little toy cars??? I still have hundreds of those things lmao
So the PS5 Pro is a hardware/software test platform. But I'm cautiously optimistic AMD will finally leverage their console knowledge for PC. They have full platform access: CPU, GPU, chipset. I've always wondered if they could out-optimize others that only have access to single parts of the chain.
Xbox Series S and X can be switched to Windows. It is possible, just not for the normal consumer.
We won’t see this from Sony for at least two years
I feel like we’re getting a real peek at your teaching style here haha
I watch you at 2x, so the Amethyst announcement killed me.
Optimized hardware running under optimized Unreal Engine 5 is a sad story.
ML graphics, in the case of the current PSSR, are a double-edged sword.
They can look great in places but be totally lacking in others, in the very same game.
I wish people wouldn't look for artistic shortcuts that inevitably lead to results no one can improve on, because it is out of their hands.
Ray tracing, no matter how stunning it looks, is only a fraction of what you see during fast gameplay, but it costs too many resources, so JUST SCRAP this idea until it is possible with raw power in every situation, instead of relying on the estimations and guesswork of any sort of AI.
So they will optimize the game for the developers; that's cool. We need that for UE5 games on PC as well.
People tell me that's not Dana Carvey, and they are wrong.
Mark Cerny and his team are really smart.
Can anyone who works on lightweight neural networks tell me why we don't have neural-upscaling ASICs with a dedicated cache yet? Is it because we aren't sure of the best way to get good quality yet and no one wants to lock hardware in on it, or is it simply not doable?
Garth?
Underrated.
1.5 gang :D
AMD may have fewer resources to develop new technologies compared to Nvidia, but why did they only start talking about neural technologies and ray tracing in 2023-2024? Obviously, this was a strategic miscalculation. And it was greatly helped along by popular video bloggers who foamed at the mouth to prove that RT is an unnecessary proprietary toy... How can you be immersed in the industry and be so short-sighted?? Fools...
I was hoping we'd see some Choppin' Broccoli.
This isn't surprising. (I am not an expert on anything.) Going all in on raytracing for the PlayStation 6 was a pretty obvious move for Sony, even just with the current state of the art, before you consider the possibility of making further advances. PlayStation largely isn't held back by the things that have caused PC hardware and software vendors to go slow with raytracing: no installed base of existing graphics cards to support; less need to support old games; no PC gamers huffing that raytracing sucks. That means Sony can provide game studios with a low-compromises raytracing machine and encourage them to stop dragging along the boat anchor of trying to support rasterisation and RT in the same game. (There are still portability risks to publishers in pushing _too_ far ahead of what can be replicated on PC or the next Xbox, but those will be fairly manageable by early 2027, and of course true PS exclusives won't have to worry about that.) And the big risk to Sony isn't from going all out for a big generational leap from the PS5; it's from not doing so. Another generational improvement that feels incremental and meh is likely to hurt PlayStation: raytracing is the best, and really the only, chance to deliver something like the staggering improvements of older console generations.
Ha! And when I said PSSR was a joint collab with AMD due to the hardware, people laughed. Well, who's laughing now? 😂
As a non-native speaker of English, I'm having a very hard time understanding 2x speed.
I watch YT at 1.25; any faster than that and too many people come across as unnatural in how they speak (there are enough people who speak quickly enough that normies at 1.25 still sound normal).
I'm so fed up with all this AI bullshit, with fake frames and fake resolution, just for the game devs to keep getting away with dogshit optimization, using the upscalers and fake frames as a crutch for their shitty games.
We've made no real progress since the mid-2010s and it's really depressing; we should be getting way better performance out of the hardware than we currently do.
now I think the PS6 is going to put out fully AI generated images
Sony has a lot of experience in the TV sector with AI upscaling; they have been using LG's AI processor with custom firmware on most OLED screens since 2018, and it was LG and Samsung who first introduced this type of video upscaling into the TV market, so this is just a carry-over into the gaming sector. Sony is also renowned for firmware that gets the best results out of the same OLED panel as the equivalent TV from LG; the only thing they don't do is protect their TVs from burn-in properly like LG does, so it's swings and roundabouts.
However, it's 2024; why even need AI upscaling in the first place? It should all be native. You have games from over a decade ago on 360/PS3 looking better in native than this horrible FSR upscaling that is currently being used. Some games on the Series S are running at 650p; it's just madness, even though it was advertised as a 1440p machine. That was just hyperbole.
Even a 4090 can't hit 1440p 60fps in some titles, due to lazy devs taking the easy route and using DLSS as a fallback, if they even add it in, like BGS did with Starfield.
I don't like all this new AI upscaler, frame gen, Lumen, Nanite nonsense. Optimization is dead because of all of these.
Given how poorly PSSR is currently performing, why on earth would anybody be excited? Intel's XeSS and even Microsoft's DirectML super resolution both performed much better in their first generations.
I watch your videos at 1.75x speed.
It was annoying to have to switch speeds back and forth, so I decided to watch the Sony video first, at 1.5x. The guy talks plenty fast.