He said "what a time to be alive" in a defeated tone, almost like you have to say it but don't want to. Remember that guy? EyYYY? 😭
the camera footage is actually quite undersaturated compared to what you would see if you were actually there. Perhaps the cameras that captured the data set just weren't very good. Too many people associate undersaturation with "realism"
It looks realistic from the perspective of a fairly low-quality camera; it doesn't look realistic compared to how a human sees everything. Now if only the data set was recorded with better cameras with fine-tuned colors...
Very impressive!! A bit later on, swapping specific things will become easy. Say, display California as a post-apocalyptic world where coniferous trees replace the tropical and leafy trees, and buildings are in ruins. That will simplify the needed modeling by miles.
The most important thing is that in ten years or so, games will probably actually look like this, because by then hardware will be advanced enough to do it at 60 frames.
@@arroganceofficial2089 that depends on whether it's going to be affordable to most people or cost over a thousand dollars! Just because it's technically possible if you have a $3,000 computer doesn't mean it's going to be the norm for games to look like that. And unless something impressive happens in the next 3 to 5 years, it's probably going to be a decade before the general public has access to hardware good enough and cheap enough to actually use this type of software enhancement at a reasonable frame rate.
@@The_hot_blue_fire_guy bro the thing already runs on modern computers in real time, give the code say 2 years to get more efficient and it's gonna be a breeze to run. The question is whether it's gonna be useful to developers or not
@Arber _D GTA IV looks quite washed out compared to GTA V because it has little saturation. So GTA IV looks like the enhanced, non-saturated version :)
Personally, I would do this as you said: a ~25% representation of the setting, letting the AI fix it in post. This would open up a ton of possibilities for smaller teams, or save time doing things alone. Things wouldn't need to be as detailed ahead of time. Perhaps focus more on the character models for close-ups, where the AI wouldn't work as well. As an experiment, I would want to see what it does with forest/open-world environments, as well as some mesh deforming for testing physics objects (car crashes, building collapse, etc)
Aha! Finally a paper I'd already seen and studied before our dear Dr. made his video! Still worth holding on to those papers when watching it again, though.
Wow, this is really impressive. I wish they tested on some really low graphics settings for the game to push the limits of the AI though, because it's hard to gauge its capabilities when the input is already pretty close to realistic. Nonetheless, it seems like it could be a promising new tech to help push photorealism to a new height once we are able to run this in real time alongside games on ordinary consumer hardware.
I was wondering this too, and I suspect I know the reason: the AI must be leaning heavily on the shadows for its light direction assumptions, because even on high settings GTA('s engine) isn't good at specular, but the shadows are crisp. On low settings, the shadows are effectively nonexistent, so the AI will assume it's overcast and really fuck with the look of the scene. Additionally, when a car's material in the game wasn't rendering reflections, the AI appeared to add scratches. So if you turned off reflections, you'd probably end up with a road full of broke down beaters lol
This technique will be popular with the retro gaming community as a quick and effective way to reskin old game mechanics to a more modern photorealistic look. Going to be a lot of fun methinks.
That would be great, and I believe it's possible. Maybe a simpler version first. Imagine all the videos :D The most obvious training data for me is from an older to a newer title, e.g. Mass Effect 1 to Mass Effect 3
Games, animations, hell even city planning could all be accomplished with this sort of image technology. Could Ready Player One style VR be on the horizon next?
A VRMMO like that is exciting. We may expect it as early as the end of this decade (Zenith MMO is currently the best VRMMO out there, even if it's nowhere near RP1; we are inching toward that future)
@@antonrodionov7216 This algorithm doesn’t need displacements. Notice how it implemented grass geometry that replaced that phony texture. And the grass geometry isn’t really geometry, it’s a post processing effect. So no textures, geometry, etc... it also emulates more realistic looking reflections/refractions. This is the future
There's no need for it to run in real time; it should just update the assets once and be done with it. Him saying it runs in real time makes it seem like a scam, because someone obviously doesn't know what they are doing.
@@GhostEmblem The reason developers use downgraded textures is because of resource constraints. Updating the assets would increase the level of detail and make the game run slower. Plus this post-processing handles lighting so you need it to run in real-time.
@@mariusfacktor3597 Actually, using higher-res textures and downgrading is way more common and poses far fewer problems. Rendering the high-level assets will be exactly the same if the AI does it. It is literally impossible for rendering premade assets to be slower than rendering on-the-fly AI-generated graphics; that's the reason assets are used in the first place, otherwise no one would bother using so much space to store the assets when they could just have the entire environment rendered from scratch faster. I am a software developer, but this is stuff that basic logic can determine. I have no idea how this got anyone to vote you up, just a bunch of hype men who blindly believe anything.
@@GhostEmblem As a fellow software developer I'm confused by what you are saying. The AI can't "update" the assets. The whole idea of this AI is that it takes the image rendered by the game engine and processes that image. It has no idea of any textures / models / assets / ... It only knows the input images. So when the _game grass_ changes to _realistic grass_, it's not just the grass texture updating; it's completely interpreting the image and remapping the grass part of the image based on the grass it was trained on. That's why it ONLY works when used as real-time post-processing.
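For anyone curious what that per-frame loop looks like concretely, here is a minimal sketch in PyTorch. `EnhancementNet` is a hypothetical stand-in, not the paper's actual network (which is far larger and also consumes G-buffers); the point is only that the model consumes and produces images, so it must run every frame.

```python
# Minimal sketch of per-frame neural post-processing, assuming a
# hypothetical pre-trained image-to-image network.
import torch
import torch.nn as nn

class EnhancementNet(nn.Module):
    """Toy image-to-image network: 3-channel frame in, 3-channel frame out."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame):
        return self.body(frame)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = EnhancementNet().eval().to(device)

@torch.no_grad()
def present(rendered_frame: torch.Tensor) -> torch.Tensor:
    # rendered_frame: (1, 3, H, W), straight from the game's renderer.
    # The network never sees meshes or textures, only this image, which is
    # exactly why it has to run every frame rather than "once per asset".
    return model(rendered_frame.to(device))
```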
I feel like at some point we won't really advance in algorithms anymore (e.g. for lighting), but will instead put an AI on top that beats any known algorithm easily.
This is really cool. I wonder what it would be like to have realistic maps in games, made accurate by using Google Maps, AI learning and such. That way one could theoretically visit one's own house in the game! Imagine driving down your own neighborhood, to work, to all your hangout spots, etc., in a game! I live in NYC so this would be phenomenal. It would probably require a massive amount of space, but I think it could be possible!
It would be great, but there are ethical concerns regarding the representation of information in maps and games that would use information from Google Maps and other data sources. For example, the privacy of personal data and property information could be compromised if they are represented in a game.
I'm sorry, but I don't think these photos are realistic. The only place where it changes the picture is where the new image is cloudy. It seems different and "realistic" mainly because of the changed weather, but it's just as realistic as the game. No surprise here.
@@MrFEARFLASH YouTube compresses videos, so the video doesn't show it well, but if you look at the Intel ISL website, the images are strikingly photorealistic.
This is a very cool idea, but the question really is speed of processing; if it can keep up doing those calculations, then very cool. Also, especially being from a computer animation background, one thing I learned is to simply take out the saturation and use more light greys and very light blues on most objects; that helps give it a more realistic look. But lighting is everything in animation, so I would put a slightly yellow sun in the scene (I love amber lighting) and see how that works. Just using clever lighting can help hugely in rendering and light calculations. Anyway, I can't WAIT for two papers from now!
The real problem is GTA already looks incredibly realistic. Not many games can achieve such realism, so how the AI is going to translate to them is the real question here.
@Progenitor77 No, philosophers in general don't believe that. Nick Bostrom has theorized it in his simulation hypothesis paper, and the conclusions he comes to are extremely thought-provoking.
This is incredible. You mention it works via post-processing and I'm curious when it renders the lush grass instead of the textured plane is this projected over the original game footage as essentially AI altered video footage? Would lack of persistence be a limitation, like if you were to go past the same location twice, might it still look similar but generate slightly different results?
@@krathya is Candice the CEO? also i heard that Joe the SuperIntelligent Computer has the new Rydon Technology... it is very powerful when used with a Penny Trading Power Supply!!!
It's very interesting to note the difference between the Mapillary dataset's render and the Cityscapes render, specifically looking at palm trees. It appears, to me at least, that because the Cityscapes dataset has no image data from parts of the world with palm trees, it tends to erase them or fuzz them out significantly more than they would be in real life, whereas the Mapillary data has plenty of data from California and the equator and does a great job at accurately tuning them. I'm also very impressed with the clarity of the images and videos, as well as the retention; no important details that I could find were erased or changed significantly by the algorithm. This really is an awesome paper, and I honestly hope they release the renderer as a mod; if it's true that it can be added as post processing, I'd love to play with it. Oh, and it's also really funny that some of the images in the paper just look like they came straight outta GTA IV. Guess it's a testament to how well Rockstar did at keeping NYC realistically grimy and low-saturated 😂😂
Imagine how many old games could be refreshed with this tool: old Need for Speed games, GTA 3, Vice City and many more, because you can feel free to feed the algorithm any images
I am a developer for an upcoming game engine, and I would love to implement this as a feature. The game engine is modular; I have already developed about 10 modules for it. I would love to create a module for this kind of feature; we already have realtime NPC interaction and communication, and it's a module. The game engine uses Taskflow for multi-threading and Diligent Graphics for pre-processing. We also created an EOS module (Epic Online Services) and a Quixel Mixer module. But a post-processing capability like this would be awesome.
i'm from germany and i can tell you that the blue/grey, dark, undersaturated, depressive look is spot on.
+1
GTA V Sauerland Edition
That's why Germans are white. lol
I lived in California once upon a time, and I can tell you that while it isn't as oversaturated as GTA V, it's not even remotely that gray and overcast lmao :P
@@Random_Iceberg 100% true.
Interesting how the algorithm seems to really want to add the blue EU flag to the licence plates.
The algorithm was trained with a data set from German cities; that must be why it does that.
yeah lol
Yeah, you can see at about 5:40
It also changes street signs to euro colours.
When you watch the video the authors made on it, they mention that one of the improvements over previous methods is that it doesn't hallucinate objects such as the Mercedes star on the hood/bonnet of the car. I suppose there are still a few kinks to be worked out though
In other news: Rockstar delays any news of GTA VI once again, offering new "realistic rendering" pack upgrade.
Can't wait for the GTA V re-re-remaster for the PS6!
@@ilearncode7365 🤣
@@ilearncode7365 I feel like they don't want my money while I wait for the PS5 version to come out, just like I did for the PS4 version and the PS3 version after the PC version. I really hope there's a £70 bundle this time too!
I hate myself and I would happily kill any of the rockstar/Gta dev team just to get £5 back. I'll do it for free. Dare me.😭
@@ilearncode7365 yea because PlayStation doesn't have backwards compatibility
Me thinking: did they have to use GTA V as an example? It's about to get delayed again because of this video
This technology would be absolutely perfect for a horror game.
I'm excited about the future of horror games.
What kind of real-life looking eldritch horror will bring the future? I can't even imagine all the possibilities.
Imagine this on a VR horror game.
Maybe it could traumatize people for real actually...
@@rickydo6572 true
imagine tech being so advanced that you can't tell anymore whether you're seeing the real world right now or a game. Not only in terms of graphics, but also by manipulating the nervous system.
This really reminds me of Sword Art Online, but much scarier. The people in the game wouldn't even realize they are still in the game. holy shit
@@Umarudon Na... the Algorithm looks at Lady Dimitrescu, thinks "Nah, women aren't that thicc" and turns her into 1970s Twiggy xD
Trouble is with its knowledge base. It doesn't have anything alien/monster to compare it to. If it were simple human- and Earth-based horror, then yes, it would work well.
Year 2030: This new gaming neural processing card is expensive.
I can totally see that. People probably said the same thing about “graphics acceleration cards”! :)
I actually doubt that in 10 years many people will still play on their own hardware. Streaming will be more convenient and powerful.
More likely, a video game of the entire world will be created from Google Earth and modelled into a game in real time. Cars moving in real life and people walking will be simulated
They will be hard to come by due to the cryptocurrency mining as well!!
Nah mate, the scalpers have already bought them and selling them for Neural Coin mining.
My guess is that it won't be long until the first AAA games will have a "neural rendering" alpha feature that you can switch on and off to try to play in a more realistic scenario in real time :)
Agree
Well, they have been using DLSS (Deep Learning Super Sampling) already for quite some time, it was only a question of time until a DL post-processing step could also add details. This really looks like the future of game rendering. Amazing.
@@GeroldH What if, in one or two decades, most of what we see in videogames is actually imagined by a bunch of neural networks? It would be an interesting experiment, at least: make a game that renders a minuscule image each frame and then puts it through a series of neural networks to generate the image the player actually sees.
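A toy sketch of that "minuscule render, neural blow-up" idea, assuming a chain of hypothetical learned 2x upsamplers (untrained here, purely illustrative):

```python
# Tiny render -> chain of learned 2x upsamplers -> full-size frame.
import torch
import torch.nn as nn

class Upsampler2x(nn.Module):
    """Illustrative learned 2x upsampler stage."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# An 80x45 render becomes 1280x720 after four 2x stages.
stages = nn.Sequential(*[Upsampler2x() for _ in range(4)])
tiny_frame = torch.rand(1, 3, 45, 80)   # stand-in for the engine's tiny output
final_frame = stages(tiny_frame)        # shape: (1, 3, 720, 1280)
```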
It works for AAA games like GTA V, but what about medieval games where you have to render people with armour and helmets on? The AI has never seen such humans, so they would have to create some extra dataset. I currently doubt this technology will see much use in the gaming world outside of those niche indie games that use it as a gimmick, but I hope to be wrong of course.
I don't know about that; you would have to design the game for this feature, because this already takes care of lighting, texturing, shadows, reflections and somewhat depth. I don't see a game entirely based on this, but I can definitely see it as a partial render tool for things like grass, hair, fences, anything that would need a lot of polygons to look real but is often a background element.
imagine how much modeling time we could save for the artist?
*Take Two heard:* imagine how many artist we could fire?
Most accurate comment about triple A games.
LMAO that is what I was thinking. The games will still be priced high because fewer employees =/= lower price. Rather, fewer employees = more income for the CEO.
@@neurofiedyamato8763 But at some point Triple A games will be able to be made by an indie dev team, which should decrease some of the problems in the Triple A space
Alternatively, only 1000 hours of crunch...
@@neurofiedyamato8763 Less design workforce -> cheaper to make games -> more money per current projects -> more projects to compete or companies lowering prices and taking over the market unless the others follow.
Capitalism usually works unless there is a monopoly, and the only monopoly is steam, not the studios.
I think the desaturated look of the AI could also be partly because the video dataset was filmed with dashcams and similar, which film things less saturated than we perceive them.
underrated comment!
Nicely spotted. I was wondering why the desaturation was so intense, it does look very much like dashcam footage.
true
@@thethievingmonkey inb4 the next generation of games all look like either dashcam footage or google streetview images.
@@JoranGroothengel We went from making realistic games brown and desaturated to making realistic games blue and desaturated
I spent the whole video repeating to myself: “what a time to be alive”, this is the most inspiring thing I’ve seen this month, thank you so much for sharing it this well !!!
This technology will find its way into the film industry as well. Imagine being able to make a whole realistic scene with only 3D graphics ... that's absolutely incredible.
Imagine how much money that would save
That's how they do it... since the invention of cgi
@@icaras12x87 i think what the other person was trying to say is that cgi can look absolutely realistic, with no flaws and with half the budget
I am afraid that digital does not look as good as film, and until they can develop AI systems that can make images look like film, it will not be considered a contest.
I was waiting for this one; ever since I saw that demo a couple of weeks back, I knew our fellow scholar would love it!
Doesn’t the guy in one of the demo videos even make a reference to “hold on to your papers”?
@@stirrcrazy2704 perhaps, I watched a couple of those and even one of their old robotics videos, but I haven't seen all of it. Two Minute Papers is famous among researchers, no doubt
I had the exact same thought when I saw that video!
I was surprised when I clicked on the video and realized it wasn't a Two Minute Papers video
"enhancing photorealism enhancement"
"real reality is really real"
Real eyes realize real lies :D
nono it makes sense, it's improving (enhancing) photorealism enhancement from its current state
@@machbauer132 simp p:
o'rly?
Reality is really really real
The undersaturated results look like that greenish filter in "The Matrix" applied on the inside-the-simulation scenes.
that's just how germany looks in real life
@@SkaveRat You mean it's how Germany looks in the Matrix.
@@DanielLCarrier That's what he said :)
I think the dataset was taken from dashcams, which are usually behind greenish tinted glass, and that's why the dataset has that color tint.
@JerryKooper Official That effect will stay if you switch to third person too, though. Looking with a camera behind your car with this effect would look awkward in my opinion, especially if you leave the car and play it like a normal GTA. But Rockstar is capable enough to collect their own San Francisco dataset if they decide to do something like that in the future, so that wouldn't be much of a problem for them. (For this research they could also ideally parse a real-life-looking dataset from Google Street View, but it wouldn't be segmented like this dataset.)
Man this guy nailed it. When I saw this in tech news outlets, MY JAW DROPPED. Some people were complaining that it was bland and didn't feel real at all. It was clear a lot of people didn't understand what they saw. All the footage really needed was some color grading and we're living in science fiction. Loved this video explanation.
this for realtime gameplay + VR will literally turn games into reality in the future
This is really good because it means we could render the game in a much simpler way, only providing the kind of information that the post-processor needs, speeding it up significantly.
Semantic label map game engines are the future!
Unless that increases the cost of postprocessing to the point where it's not performant enough for real-time rendering.
@@StopBuggingMeGoogleIHateYou real-time is 2 fps here lol, although they admit it is not well-optimized. Still, it's really exciting stuff.
"Inference with
our approach in its current unoptimized implementation takes half a second on a Geforce RTX 3090 GPU. Since Gbuffers that are used as input are produced natively on the GPU, our method could be integrated more deeply into game engines, increasing efficiency and possibly further advancing the level of realism."
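To make the quoted G-buffer point concrete, here is a rough sketch of how auxiliary buffers can ride along as extra input channels; the buffer names and channel counts are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: concatenating G-buffers with the rendered frame as network input.
import torch
import torch.nn as nn

frame   = torch.rand(1, 3, 720, 1280)   # rendered RGB image
albedo  = torch.rand(1, 3, 720, 1280)   # G-buffer: base color (assumed)
normals = torch.rand(1, 3, 720, 1280)   # G-buffer: surface normals (assumed)
depth   = torch.rand(1, 1, 720, 1280)   # G-buffer: depth (assumed)

# The enhancement network sees 10 channels instead of 3, so it knows
# *why* a pixel looks the way it does, not just its final color.
x = torch.cat([frame, albedo, normals, depth], dim=1)

net = nn.Conv2d(in_channels=10, out_channels=3, kernel_size=3, padding=1)
enhanced = net(x)   # back to a 3-channel image
```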
@@christophkogler6220 It's also possible that future GPUs could cannibalize the 3D rendering power to put more compute/AI power on the chip.
Framerate would be extremely consistent as well because, if I understand correctly, AI are generally constant time regardless of input. So if the rendering is simple enough to not bottleneck, it might be possible to design such that you could have almost perfectly consistent high framerates.
@@LanceThumping Classic rasterization cores don't do well on AI tasks because of their architecture, so it would not be enough to invest more GPU power in the post-process; you'd actually need more tensor cores.
Also, no, AI could vary wildly in rendering time depending on resolution and the complexity of both object and material information; it's not the same to stylize an empty parking lot as a crowded shopping mall
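Both sides of this exchange are easy to sanity-check: a convolutional network's per-frame time is roughly constant for a fixed input size regardless of scene content, but it does grow with resolution. A rough CPU-only timing sketch with a toy network (numbers are indicative only):

```python
# Timing a fixed network at two resolutions: ~4x the pixels -> ~4x the time,
# but the same resolution costs the same whether the scene is empty or busy.
import time
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
).eval()

@torch.no_grad()
def time_forward(h, w, runs=10):
    x = torch.rand(1, 3, h, w)
    start = time.perf_counter()
    for _ in range(runs):
        net(x)
    return (time.perf_counter() - start) / runs

print(f"360p: {time_forward(360, 640) * 1000:.1f} ms")
print(f"720p: {time_forward(720, 1280) * 1000:.1f} ms")
```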
This looks very promising. All we need to do is bump up that render quality and get better training data than just dashcam footage. The cityscape looks so green probably because of the slightly green tint that is added to most windshields. When I think of photorealistic, I actually think of what the human eye would see. Can't wait to see how this improves!
Don’t even need better training data. Everything is modeled already- if we use high-poly renders of objects as a training set they could probably achieve the same thing. Meaning this isn’t limited to realistic graphics. AI generated pseudo-2D or heavily stylized art would still work with this method.
@@nathandam6415 It's endless possibilties. And not only that, but this will significantly reduce the gap between what indie devs can do vs what AAA companies can do. Instead of needing hundreds of employees to churn out absurd amounts of high quality photoreal assets, you simply need some more basic assets and a good training data set and you're golden, any indie dev can make a hyper-real game, or really any styled game they want.
I thought Raytracing was the future of gaming graphics, but now I realise it's actually one level deeper than that, the tech behind realtime RT, deep learning and AI, THAT is the real future of gaming. Raytracing is still caveman thinking, like how old shows and books predicted phones, but failed to predict the extent to which phones would become hyper advanced multipurpose tools.
@@TyrannoFan Data sets are *very* expensive, though
yeah, and dashcams are often low quality, i wonder if that contributes to this looking 480p or if that is just a limitation of the ai.
@@AlphaGarg couldn't you just use images from Google Street View, theoretically?
LITERALLY a "Game" Changer!
@@bibblybobbly9951 ?
@@bibblybobbly9951 it turns it into real world
LOL. Good one
@@bibblybobbly9951 IV > V on the real
Dan: a game changer!
Unreal engine 5: am I a joke to you?
😂
This is nuts...truly next gen. Every time you think technology has reached its peak there's a new crazy innovation.
Also it felt more German than Germany itself lol. Also made me realise real life places have their own real life colour schemes.
California, or the whole US actually, is pretty saturated. Lots of vibrant colors in nature. Europe has more of a desaturated look. I guess it's because the US is huge and most of the land is still forest, though.
I would use this as an indie game developer to keep up with AAA studios and focus more on the gameplay.
Seems like a nice environment to develop a self-driving car algorithm
;)
Just have to be sure to recreate the grand variety of lighting conditions you can encounter in real-life
Yep 😃
Sadly, GTA's NPCs are pretty bad; they won't act like real humans, they don't cross to the other side of the street, or handle other situations a car would encounter in real life
Interesting side-effect of the process is that many of the traffic lines painted on the asphalt become white, probably because the entirety of the training footage has strictly white ones
I wonder how much having a dataset of cities from all over the world would improve this method
@@OrangeC7 More likely approach for game devs in future, I think, is tailoring that training data to the setting. You may even include multiple models in one game, if it takes place in multiple locations.
Add in the "U-turns prohibited" sign changing from black to blue around 6 minutes, haha.
It also tries putting the EU flag on license plates
@@iemozzomei and there's the other sign where it also slightly changes the shape of the arrow. Like damn AI quit being so nitpicky lol
Reshade: I make games more realistic with raytracing!
This AI: I'm about to end this man's whole career
@@Brekfestus nah, reshade looks like crap because it has no idea what is reflective or smooth or rough or light emitting, it just pretends it does
Lol you could say that about this AI as well…
@@MrSlimeOfSlime oh ok
The issue with it adding trees to the mountain is understated. It would be weird to walk to the mountain and have the trees disappear because they aren't there in the game engine.
But there was no ray tracing in this video or method at all?
The most mind-blowing thing about this is that it's rendered in real-time! The whole video I was thinking "Oh well but you can't use it in a game when it takes so long to create the image" but apparently no, it's real time.
I hope the next big thing after raytracing will be AI enhanced visuals!
One of the key limitations video game developers must consider is the hardware capabilities of their client base. If it runs in real time only on dual high-end video cards, it's still not gonna change anything. Unfortunately, hardware limitations haven't been discussed at all
@@vovalos These days, yes. Back in the 90s, that wasn't such an issue. The game won't run on your machine? Get an upgrade then. :P
I can't wait to see this in any game that I am playing! This seems like RTX on steroids. Your idea about freeing up game design by requiring minimal base graphics and letting the AI system do the graphics is amazing!
Since only driving footage was used, I'm interested to see how it would handle the player getting the camera up close to objects like trees and NPC's, and seeing how much detail comes through.
Yes, and walk around or drive other vehicles...
Good point. Perhaps it's more suitable for driving games like Forza Horizon than GTAs.
@@yxhuang3239 Or perhaps they just need a different dataset for a different purpose like that. I'm not sure if Google Maps would work (or if they would be legally allowed to do that), but something similar might be able to handle walking views instead of driving
@@OrangeC7 Indeed. But what I'm thinking is that there's a lot more than walking and driving in this game (shooting, explosions, building interiors, and all the cut-scenes from different camera angles and with detailed facial animation, etc.) The dataset would have to be comprehensive, but then the training becomes extremely difficult with such complex data.
Most likely it wouldn't fare very well.
The output looks kinda soft. Would love to see this put into DLSS 2.0 to regain some detail, as well as maybe try colour correcting the training dataset to look more like what humans see rather than what cameras see.
Maybe the training data used low resolution video? If they bump up the resolution I think the results would be even more dramatic.
Of course! This is only the first paper, after all!
Edit: the first paper with very great results, of course.
They could recreate it not from the original footage but from a modified one; that would be even better.
Most likely also due to the German datasets. Would need training data from actual California.
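For the colour-correction idea floated in this thread, a small sketch of what pre-processing the dashcam training frames might look like, using torchvision. The saturation and gamma values are arbitrary illustration values, and the file names are hypothetical:

```python
# Sketch: nudging undersaturated dashcam frames toward "what the eye sees"
# before using them as training data.
from PIL import Image
import torchvision.transforms.functional as TF

def de_dashcam(path: str) -> Image.Image:
    img = Image.open(path).convert("RGB")
    img = TF.adjust_saturation(img, 1.25)  # counter dashcam undersaturation
    img = TF.adjust_gamma(img, 0.95)       # lift the slightly dim exposure
    return img

# Hypothetical usage:
# corrected = de_dashcam("cityscapes_frame_0001.png")
# corrected.save("corrected_0001.png")
```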
I'm not sure this would work for every game out there, though. Would you really want a photorealistic slash and gore? What we have in games right now still is cartoonish enough not to cross that line.
Between this, ray tracing, and unreal engine 5's lumen and nanite, I am excited for the future graphical fidelity of video games.
It really feels like we are at the brink of a new generational leap doesn't it?
Yeah, just have to wait 10 months to pay $2000 for a low-mid tier GPU. Great time to be a gamer... :/
@@sonicbroom8522 ?
Ray tracing makes a minimal difference; it's 1000x more realistic, but 'realistically' you're not really going to notice the difference. I'm far more interested in what DLSS can do, especially in VR where we need all the performance we can get. Or at least, if you are going to mention ray tracing, include DLSS in that sentence. Ray tracing on its own is just a way to play at 10fps, or 40fps with all settings on low and 720p....
@@sonicbroom8522 it's going to be a lot longer than 10 months. Until they have some new manufacturing plants up and running, there is going to be no change in supply. All the tech out there needs its chips made in the same places... and it will take years before the planning and construction etc. are done, then several more years after that to get the process nodes working, and by then the node will be outdated and they will need to start again... I mean, look at how much trouble Intel has been having, stuck on 14nm or whatever it was for like 5+ years now. (That's a different situation though, I suppose.)
That's amazing, I hope it doesn't end up just being a good idea and actually changes the way we create video games in the future.
Just think for 2 seconds; it could be very, very bad. If everyone uses this, all games will look the same.
@@Neyna4ever oof
@@Neyna4ever Nah fam, if you want to make a cel-shaded game or pixel art game you can still do that. If you want to do stylized photorealism then you can use the method in the paper. This is just a tool for photorealism, the same way Aseprite is a tool for making pixel art. Someone could probably make a machine-learning algorithm to take a photorealistic picture/animation and turn it into pixel art as well, which would be kinda nice because pixel art is a pain to do.
But yes, technically you're right: if everyone used it, all games would look the same. But not everyone has the same artistic vision for different games. Which means realistically it'll never be everyone using it and people can still be creative.
02:42
"Now, as you see, most of the generated photorealistic images are dimmer, and less saturated than
the video game graphics. Why is that? This is because computer game engines often create a more stylized world where the saturation, haze, and bloom effects are often more pronounced."
Or maybe it's because the AI was trained on German imagery where skies tend to be more overcast than in California. So it takes the unmistakably Californian views from GTA with their dawn/dusk pinkness and brightness, and transforms them into a cloudy German midday. Which is arguably more real, but only because Germany is arguably more real than California. Nothing to do with imagined failings in the game engine.
This is a lot like the one where Abraham Lincoln's iconic wrinkled skin was smoothed out, by AI scientists who couldn't imagine that anyone might really have skin that wrinkled because today we have mostly eliminated such conditions through advances in medicine. They supposed they were fixing a fault in the photographic technique, even though that made no sense. The data scientists can't imagine a world that isn't totally homogenized and everywhere looks identical to the view outside their front door, so they use AI to totally homogenize the world and make it look exactly like the view from their front door, regardless of what it might actually look like.
"Nothing to do with imagined failings in the game engine.". yes it has, but calling them "failings" is maybe strong word. This is new technology that it would have been hard to expect already being developed and implemented back when GTA V was released. And it still isnt perfect. But you did see the video, right? It looks absolutely stunningly real, and that is not just because Germany on a cloudy day looks more real than California, it's because the tech is working, it just so happens to be that their datasets are from cloudy Germany, not sunny California.
Someone needs to make this actually available to try in GTA in real time.
Imagine this being a reshade shader or something
@@MonsieurSwag there is a ray tracing shader being developed for ReShade; at the moment it's in development, so you'll have to pay on Patreon
@@gordonfreeman9641 no, it's not the same thing
@@gordonfreeman9641 no. It's not real rtx
@@ANW1173 I am aware that it isn't able to use the tensor cores of a GPU but it essentially overlays path traced lighting over the original image
yo this is so crazy,
usually the papers you cover are new to me, but this is the first time I read the paper before the video.
Yeah these papers tend to spread a good deal through the usual channels.. Pretty sick stuff and still just the beginning of what's going to happen to all creative endeavors.
@randomguy8196 exactly I was surprised too
@randomguy8196 watched that too, the video got recommended to me and i found the paper from there, pretty interesting stuff.
@randomguy8196 I did too, I honestly thought it was a paper Károly had already covered until I saw this video lol
The Matrix is nearing closer with every single upload!
Either that or SAO, or ready player one.
Or maybe all three!
@@OrangeC7 don’t forget Spy Kids 3 lol
Neuralink + VRChat is essentially the Matrix already
@@Interpause Don't tell me what to do Jimmy!
I love watching videos like this, then looking at games that were called "photo-realistic" in the 90s for just containing polygons.
And we were happy for those back then. When I was kid, we played with stuff before video games even existed. :)
In the future, people will find it funny that we find photo realism exciting or even VR headsets. I expect direct sensory stimulation of the neural system, Total Recall effect and then we'll all turn out like the blobs in Wall-E.
I wonder where is this technology at this point, 1 year later.
This would be good for old games.
Wowww.. imagine photorealistic Monkey Island, C&C, Larry, Doom II.
I would buy them all again!!! Please someone make it a reality
@@tomich20 Maybe in the future we will have standard AI filters to do this sort of thing, Steam could have a list of filters to apply to games.
@@garthnareng4898 Or it could be done through the graphics card's software, and you could turn it on for specific games
@@OrangeC7 If Nvidia isn't already working on exactly that, they soon will be.
@@OrangeC7 Yeah, I think that would be more likley actually.
Bro! imagine being able to simply add a Post-Processing effect into Unity that transforms a low-poly style game into a photorealistic one that actually looks like realism 🤯🤯🤯
turns post processing on*
2 fps.
Yeah, 0.1 FPS seems playable.
So in short, make game devs even lazier
@@kmx5992 No. More like: make realistic graphics more accessible to small creators who don't have a huge team of devs 👌
For example: guess why the low-poly look was a trend for a while. Definitely not just because of aesthetic/stylistic choices. Mainly because it's accessible.
@@YVZSTUDIOS so in short, make game soydevs even lazier 😎
to be fair, California must be more saturated than Germany
Also the German footage is from behind a possibly tinted windshield.
@@multiplayerlove this lol.
I guess this is the same bug that decorated the arid mountains with trees. The authors missed this, but you brought it up. Great.
Can confirm, this is just the trübe reality we live in here in germany
but tbf, cloudy more muted/shadowless data is probably better to process than sunny/harsh lighting conditions. may be intentional
Different areas have different post-processing effects, the developers decided to make the Californian servers have a really vivid effect. Some people think it was a mistake and the devs meant for it to be less intense, I'm just wondering why the devs couldn't just put it in user preferences instead of having it change depending on which area you're in.
Bro, you said it all: What a great time to be alive!
About the uses of such a momentous achievement (and its unfolding), I truly believe that by the end of the decade we will have at our home PCs a small-budget Hollywood studio, with virtual sets, actors, props, you name it. Using it to make a movie or a game or anything in-between will be a matter of creativity and hard work.
The fact that this runs in real time is both terrifying and unbelievably exciting
You said "Unreal!"
No no, that's Rage engine.
What does the AI do when you look at a dry hill from afar and it generates trees but then you come close to said "trees"?
My guess is the trees will start to fade away, since it can't generate brand-new data; it only reinterprets existing data. It can interpret a hill with many trees from afar, but not a non-existent tree from up close.
Ideally all the assets that make up the landscape would also be 'labeled': trees, rocks, mud, etc... but it's made tricky by the fact that even the original game doesn't use distinct assets for objects in the distance (the trees might just be part of a texture painted directly on the mountain, etc).
You would likely see some significant changes as you approach, as the far away texture doesn't have any constraints specifying where to draw the trees, etc. Though I'm sure a clever approach could solve these issues too... and actually modern games still have significant discrepancies in the objects you see in the distance and what you see up close (resolution, texture and even placement)
@@StuartDesign this is what I thought too
@@StuartDesign You're mostly right, yes (although Rockstar has never painted trees into terrain textures).
Far-away objects are just smooth terrain with cut-out or extremely basic geometry placed on top of it. As you get close, details are faded in until the new, more detailed model is fully opaque, and then the lower-quality one is unloaded; this happens several times until you see the final max-quality object.
With this clear, the AI doesn't need to correct its own interpretation of the object or terrain materials: when they change in game, they'll also change in the stylized image.
Ideally the AI would choose an interpretation in such a way that you won't be able to tell if it chooses wrong. If you can't decipher what's actually there because it's too small, it doesn't matter what's really there. Only when you're able to actually see that it's trees, there should be trees placed by the AI, but at this point, the AI should be able to recognize that it's trees.
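As far as I understand, the paper already does something along these lines: the network sees engine G-buffers (geometry, material and semantic maps) alongside the rendered frame, so it knows where "tree" pixels are supposed to be. Here's a toy sketch of that conditioning idea in Python/PyTorch; this is not the paper's actual architecture, and the class count is made up:

```python
# Toy sketch (NOT the paper's architecture): an image-to-image network
# conditioned on a per-pixel semantic label map from the engine, so the
# enhancer only "draws trees" where the engine says there are trees.
import torch
import torch.nn as nn

NUM_CLASSES = 20  # hypothetical number of semantic classes (road, tree, ...)

class ConditionedEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        # input: 3 RGB channels plus one one-hot channel per semantic class
        self.net = nn.Sequential(
            nn.Conv2d(3 + NUM_CLASSES, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, frame, labels):
        # frame:  (B, 3, H, W) rendered image in [0, 1]
        # labels: (B, H, W) integer class id per pixel, from the engine
        onehot = torch.nn.functional.one_hot(labels, NUM_CLASSES)
        onehot = onehot.permute(0, 3, 1, 2).float()
        x = torch.cat([frame, onehot], dim=1)
        # predict a residual so the output stays anchored to the render
        return (frame + self.net(x)).clamp(0, 1)

frame = torch.rand(1, 3, 256, 256)
labels = torch.randint(0, NUM_CLASSES, (1, 256, 256))
out = ConditionedEnhancer()(frame, labels)
print(out.shape)  # torch.Size([1, 3, 256, 256])
```

Predicting a residual on top of the rendered frame is one common way to keep the output anchored to what the engine actually drew, which is exactly what you'd want for the far-away-trees problem.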
It is incredible how fast software is evolving. I love it.
Hardware needs to catch up
No, it literally does not. This is a post-processing step that is very fast. Neural networks in general need significantly weaker hardware than conventional methods for the same job.
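If anyone wants to sanity-check how cheap a conv-based post-process can be, here's a minimal PyTorch timing sketch. The three-layer stand-in network and the 720p frame size are my assumptions; the real network is much bigger:

```python
# Rough per-frame latency check for a small post-processing network.
# Numbers are illustrative only, nothing to do with the paper's model.
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
net = nn.Sequential(  # tiny stand-in for a trained enhancement network
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
).to(device).eval()

frame = torch.rand(1, 3, 720, 1280, device=device)  # one 720p frame
with torch.no_grad():
    for _ in range(10):            # warm-up runs
        net(frame)
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(100):
        net(frame)
    if device == "cuda":
        torch.cuda.synchronize()
    ms = (time.perf_counter() - t0) / 100 * 1000
print(f"{ms:.1f} ms per frame -> {1000 / ms:.0f} FPS budget")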
AI
Wow. Amazing. It is about time that one of the big car games adds this feature and allows for real-life car training at home to prepare for getting a driver's license. There is such a huge market for this. And visually this is an exceptionally outstanding improvement over what has been there before. THIS is the next step after DLSS, obviously. Imagine rendering bare, low-quality graphics and an AI creating beautiful stuff from it.
Every time Two Minute Papers uploads a new video:
_Ah.. here my reality goes once again._
I really feel like "what a time to be alive!"
Said in a defeated tone, almost like you have to say it but don't want to: "what a time to be alive." Remember that guy? EyYYY?
Imagine if they trained this AI on the streets of LA that GTA resembles
the camera footage is actually quite undersaturated compared to what you would see if you were actually there. Perhaps the cameras that captured the data set just weren't very good. Too many people associate undersaturation with "realism"
It looks realistic from the perspective of a fairly low-quality camera; it doesn't look realistic compared to how a human sees everything. Now if only the dataset were recorded with better cameras with fine-tuned colors...
It collected its data from Germany, and they don't have California weather in German cities. I think that's why.
It would be cool to see it running on GTA4, which was a much greyer and desaturated game
@@DenterNu California isn't the only place where it can be sunny though, if that's what you mean.
Definitely true. Looking outside here, real life is even more colorful than the game
Very impressive!!
A bit later on, swapping specific things will become easy.
Say, display California as a post-apocalyptic world where coniferous trees replace the tropical and leafy trees, and buildings are in ruins.
That would simplify the needed modeling by miles.
Thank you for editing this so perfectly, as to truly encompass your ideas and content. Very immersive my dear Dr.
The most important thing is that in 10 years or so, games will probably actually look like this, because by then hardware will be advanced enough to actually do it at 60 frames per second.
way less than 10 years
@@arroganceofficial2089 That depends on whether it's going to be affordable to most people or cost over a thousand dollars! Just because it's technically possible if you have a $3,000 computer doesn't mean it's going to be the norm for games to look like that. And unless something impressive happens in the next 3 to 5 years, it's probably going to be a decade before the general public has access to hardware good enough and cheap enough to actually use this type of software enhancement at a reasonable frame rate.
@@The_hot_blue_fire_guy Bro, the thing already runs on modern computers in real time. Give the code, say, 2 years to get more efficient and it's gonna be a breeze to run. The question is whether it's gonna be useful to developers or not
An algorithm to make GTA V look like GTA IV
LMAO
@Arber _D GTA IV looks quite washed out compared to GTA V because it has little saturation. So GTA IV looks like the enhanced, non-saturated version :)
Me in the future: "Am i playing or dreaming?"
How do you know you aren’t dreaming now?
@@NextLevelCode covid nightmare
@@amosdotl6892 “This is the darkest timeline” - Abed
What if I were unable to wake from that dream?
Both
Personally, I would do this as you said, ~25% representation of the setting and letting the AI fix it in post. This would open up a ton of possibilities for smaller teams or saving time doing things alone. Things wouldn't need to be as detailed ahead of time. Perhaps focus more on the character models to do close up where the AI wouldn't work as well. As an experiment, I would want to see what it does with forest/open world environments, as well as some mesh deforming for testing physics objects (car crashes, building collapse, etc)
Aha! Finally a paper I'd already seen and studied before our dear Dr. made his video! Still worth holding on to those papers when watching it again, though.
Wow, this is really impressive. I wish they had tested on some really low graphics settings to push the limits of the AI, because it's hard to gauge its capabilities when the input is already pretty close to realistic. Nonetheless, it seems like a promising new tech to help push photorealism to a new height once we are able to run this in real time alongside games on ordinary consumer hardware.
Realistically, they did. By the time something like this is ready for market adoption, our high-end cards now are going to be low-end.
I was wondering this too, and I suspect I know the reason: the AI must be leaning heavily on the shadows for its light direction assumptions, because even on high settings GTA('s engine) isn't good at specular, but the shadows are crisp. On low settings, the shadows are effectively nonexistent, so the AI will assume it's overcast and really fuck with the look of the scene. Additionally, when a car's material in the game wasn't rendering reflections, the AI appeared to add scratches. So if you turned off reflections, you'd probably end up with a road full of broke down beaters lol
nice post-processing effect bro. eventually whole games will be rendered like this!
Like the one we're currently playing ;)
the truth
@@DeusExNihilo wait what do you m-
OH HELL NAH
@@DeusExNihilo wait no
People play games to escape from reality not replay it
This technique will be popular with the retro gaming community as a quick and effective way to reskin old game mechanics to a more modern photorealistic look. Going to be a lot of fun methinks.
@int It's not as such, but getting under a second for frame times is a hell of a start.
You know what they say... 'Two more papers down the road.'
That will be great, and I believe it's possible. Maybe a simpler version first. Imagine all the videos :D
The most obvious training data for me is from an older to a newer title, e.g. Mass Effect 1 to Mass Effect 3
Now I'm curious to see what happens if you put real life footage through the algorithm
I'm here for the person who's actually gonna do it
@@naftaliten7989 I also purchased a ticket
Games, animations, hell even city planning could all be accomplished with this sort of image technology. Could Ready Player One style VR be on the horizon next?
A VRMMO like that is exciting
We may expect it as early as the end of this decade
(Zenith MMO is currently the best VRMMO out there (even if it's nowhere near RP1); we are inching toward that future)
This amazing technology could make almost any game look photo realistic, and even change the art style.
now imagine this on a vr set🤯🤯
Something tells me that it won't produce details displacement for textures, so they'll be completely flat in vr. But I'd love to know that I'm wrong.
@@antonrodionov7216 This algorithm doesn’t need displacements. Notice how it implemented grass geometry that replaced that phony texture. And the grass geometry isn’t really geometry, it’s a post processing effect. So no textures, geometry, etc... it also emulates more realistic looking reflections/refractions. This is the future
@@tstone9151 one problem is that the added detail will have to be consistent across both of eyes but this seems like a solvable problem
i was all like "is this it?" until he said it runs real time. okay theeeen we have something
There's no need for it to run in real time; it should just update the assets once and be done with it. Him saying it runs in real time makes it seem like a scam, because someone obviously doesn't know what they are doing.
@@GhostEmblem The reason developers use downgraded textures is because of resource constraints. Updating the assets would increase the level of detail and make the game run slower. Plus this post-processing handles lighting so you need it to run in real-time.
@@mariusfacktor3597
Actually, using higher-res textures and downgrading them is way more common and poses far fewer problems.
Rendering the high-level assets will be exactly the same if the AI does it. It is literally impossible for rendering premade assets to be slower than rendering AI-generated graphics on the fly; that's the reason assets are used in the first place, otherwise no one would bother using so much space to store assets when they could just have the entire environment rendered from scratch faster.
I am a software developer, but this is stuff that basic logic can determine. I have no idea how you got anyone to vote you up, just a bunch of hype men who blindly believe anything.
@@GhostEmblem As a fellow software developer I'm confused by what you are saying. The AI can't "update" the assets. The whole idea of this AI is that it takes the image rendered by the game engine and processes that image. It has no idea of any textures / models / assets / ... It only knows the input images. So when the _game grass_ changes to _realistic grass_, it's not just the grass texture updating; it's completely interpreting the image and remapping the grass part of the image based on the grass it was trained on. That's why it ONLY works as a real-time post-process.
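To make that concrete, a toy version of the loop looks like this; the frame grab and present steps are placeholders, and the single conv layer just stands in for a trained network:

```python
# Toy per-frame post-processing loop. Frame capture / presentation are
# stubbed out; only the shape of the loop matters here.
import numpy as np
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, 3, padding=1).eval()  # stand-in for a trained net

def grab_rendered_frame() -> np.ndarray:
    # placeholder: the real thing would hook the engine's framebuffer
    return np.random.randint(0, 256, (360, 640, 3), dtype=np.uint8)

def present(frame: np.ndarray) -> None:
    # placeholder: display / swap-chain handoff goes here
    print("presented frame", frame.shape)

for _ in range(3):  # a few iterations instead of an endless render loop
    rgb = grab_rendered_frame()
    x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255
    with torch.no_grad():
        y = model(x).clamp(0, 1)   # the net re-interprets THIS frame only
    out = (y.squeeze(0).permute(1, 2, 0) * 255).byte().numpy()
    present(out)                   # no assets are modified, nothing persists
```

Every frame goes through the network fresh; no texture or asset on disk is ever touched, which is the whole point of the disagreement above.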
@@pb9405 They said it's playable in real time, so it's not just a video, which is what you seem to be implying.
6:46 Holy cow! It's already almost a Photorealistic Post-Processing Injector. That would be incredible to see.
It's a great improvement of photorealism techniques.
Imagine if the input was high quality camera footage instead of dash cam quality footage.
I feel like at some point we won't really advance hand-written algorithms anymore (e.g. for lighting), but will instead put an AI on top that beats any known algorithm easily.
Hohoho! There is only one question left to ask! That is “CAN IT RUN CRYSIS?”!!!
Forget crysis, can it run DOOM
This is really cool. I wonder how it would be to have realistic maps in games, accurate by using google maps AI learning and such. That way one could theoretically visit one's own house in the game! Imagine driving down your own neighborhood, to work, go to all your hangout spots, etc., in a game! I live in NYC so this would be phenomenal. It would probably require a massive amount of space, but I think it could be possible!
It would be great, but there are ethical concerns regarding the representation of information in maps and games that would use information from Google Maps and other data sources. For example, the privacy of personal data and property information could be compromised if they are represented in a game.
This looks more like how real-life footage from a cheap camera would look.
I'm holding the paper so hard now. Amazing, what a time to be alive!
He said squeeze, man. SQUEEZE
@@jamesbizs but thats gonna crumple the paper :(
This is insane
I'm sorry, but I don't think these pictures are realistic. The only place where it changes the picture is where the new image is cloudy. It seems different and more realistic mostly because of the changed weather, but otherwise it's just as realistic as the game. No surprise here.
@@MrFEARFLASH YouTube compresses videos, so the video doesn't show it well, but if you look at the Intel ISL website, the images are strikingly photorealistic.
this is actually crazy, imagine a VR experience with this..
Great! Seems we are one step closer to realize that everything goes full circle eventually.
You have to be kidding me. This is usable right now?! I'm super impressed.
This channel has already given me a few WTF moments, and it can't stop giving!
I agree. It gave me a few surprises as well. Never thought that technology was evolving this fast.
Wow! Incredible! I hope Digital Foundry covers this too.
I like how you can see they played it with keyboard and not controller :D
PLEASE use this on Flight Simulator 2020
needs dataset tho
So we can have every location in the world look like Germany.
This is a very cool idea, but the question really is speed of processing: if it can keep up doing those calculations, then very cool. Also, coming from a computer animation background, one thing I learned is to simply take out the saturation and use more light greys and very light blues on most objects; that helps give it a more realistic look. But lighting is everything in animation, so I would put a slightly yellow sun in the seen (I love amber lighting) and see how that works. Just using clever lighting can help hugely in rendering and light calculations. Anyway, I can't WAIT for two papers from now!
*scene
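For anyone who wants to try that grading trick on a still, here's a quick Pillow sketch; the input file name and the exact factors are placeholders, pick them to taste:

```python
# Quick prototype of the "less saturation, nudge toward light blue" trick.
# The factors are taste, not anything from the paper.
from PIL import Image, ImageEnhance

img = Image.open("render.png").convert("RGB")  # hypothetical input frame
img = ImageEnhance.Color(img).enhance(0.6)     # pull saturation down ~40%
r, g, b = img.split()
b = b.point(lambda v: min(255, int(v * 1.06))) # slight cool/blue cast
img = Image.merge("RGB", (r, g, b))
img.save("render_graded.png")
```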
This is not a 2-minute paper, but it is totally worth it!
Wow. Especially with the second dataset it looks amazing. Can't wait for someone to hopefully make this work as a mod in real time...
The future of gaming? interesting paper for sure
"What a time to be alive"
INDEED
Such an amazing time, that one wonders if one is actually in this actual time, rather than in a simulation of it.
(gets enslaved by AI)
GTA V looks like my world when I'm high, and the photorealistic way when I'm sober
This looks amazing. It looks like real dash cam footage. I’m so excited for future gaming technology.
The real problem is that GTA already looks incredibly realistic. Not many games can achieve such realism, so how the AI is going to translate them is the real question here.
Huh? GTA V looks super dated by today's standards.
I'm starting to think that real life is just a simulation by some super advanced AI.
some even call that super AI god or something.
@Progenitor77 No, philosophers in general don't believe that. Nick Bostrom has theorized it in his simulation hypothesis paper, and the conclusions he comes to are extremely thought-provoking.
Assuming it is generated only for you (not multiplayer), it wouldn't even need very advanced hardware to run.
Can't wait for GTA RP streamers to implement it during streams
Doesn't run in real time. Only 2 FPS.
@@theupsider Didn't he say it did run in real time?
I was also thinking of that on my second re-watch.
This channel is gold
The only thing breaking the immersion now is the way the car moves.
This is incredible. You mention it works via post-processing and I'm curious when it renders the lush grass instead of the textured plane is this projected over the original game footage as essentially AI altered video footage? Would lack of persistence be a limitation, like if you were to go past the same location twice, might it still look similar but generate slightly different results?
I would definitely use this as a masterpiece of acuteness in interior design.
Artists 10 years ago: "we are not going to lose our jobs to AI, AIs cant be creative"
AI in 2021: My name is Deez
I can't wait to meet Joe the SuperIntelligent Computer!
@@Umarudon Woah does it has ligma systems?
@@krathya No but it has an alternative Sugondese System!!
@@Umarudon Oh very well known systems developed by Rhidon Co.
@@krathya is Candice the CEO? also i heard that Joe the SuperIntelligent Computer has the new Rydon Technology... it is very powerful when used with a Penny Trading Power Supply!!!
Good lord, this is AMAZING.
Love your videos TMP. Keep it up!
It's very interesting to note the difference between the Mapillary dataset's render and the Cityscapes render, specifically looking at palm trees. It appears, to me at least, that because the Cityscapes dataset has no image data from parts of the world with palm trees, it tends to erase them or fuzz them out significantly more than they would be in real life, whereas the Mapillary data has plenty of data from Cali and the equator and does a great job at accurately tuning them.
I'm also very impressed with the clarity of the images and videos, as well as the retention, no important details that I could find were erased or changed significantly by the algorithm. This really is an awesome paper, I honestly hope they release the renderer as a mod, if it's true that it can be added as post processing, I'd love to play with it.
Oh, and it's also really funny that some of the images in the paper look like they came straight out of GTA IV. Guess it's a testament to how well Rockstar did at keeping NYC realistically grimy and low-saturated 😂😂
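The dataset-coverage theory above is easy to eyeball if you have a segmentation dataset on disk: just count how often each class id appears in the label maps. A rough sketch; the labels/ folder and PNG-encoded class ids are assumptions about how the data is stored:

```python
# Back-of-envelope check of dataset class coverage: tally how often each
# class id shows up across a folder of per-pixel label maps.
from collections import Counter
from pathlib import Path

import numpy as np
from PIL import Image

counts = Counter()
for path in Path("labels/").glob("*.png"):   # per-pixel class-id images
    ids, freq = np.unique(np.array(Image.open(path)), return_counts=True)
    counts.update(dict(zip(ids.tolist(), freq.tolist())))

total = sum(counts.values())
for class_id, n in counts.most_common():
    print(f"class {class_id}: {100 * n / total:.2f}% of pixels")
```

A class that barely appears (or doesn't exist at all, like palm trees in a central-European dataset) is exactly the kind of thing the network would erase or fuzz out.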
Someone should do this with Red Dead Redemption 2.
When games look like real life, everyone will crave stylized cartoony games
imagine being paid to provide GTA V content for this paper
Imagine how many old games could be refreshed with this tool: old Need for Speed games, GTA 3, Vice City and many more, because you can feed the algorithm with any images
I am a developer for an upcoming game engine, and I would love to implement this as a feature. The game engine is modular; I have already developed about 10 modules for it, and I would love to create a module for this kind of feature. We already have real-time NPC interaction and communication, and it's a module.
The game engine uses Taskflow for multi-threading and Diligent Graphics for pre-processing. We also created an EOS module (Epic Online Services) and a Quixel Mixer module.
But a post-processing capability like this would be awesome.