This is what will, in my opinion, actually sell if someone gets off their butt and makes it. We don't need texture or model generation; we need UV unwrapping and retopology.
@@deedoesa.i I suspect the issue they'll have is sourcing and preparing the training data. It's a lot easier to gather images. They'll need lots of quality models with proper topology, but as a 3D artist I'd pay for it, and I know there are thousands of others who would do the same.
As @James says, it's much harder to get a large enough training dataset, especially since you'd need to pay good money for both models and skilled modelers to label the inputs and assess the outputs. The bottleneck for a lot of this LLM-training is labor - they rely on dirt-cheap unskilled labor and masses of "free" source data that they can get away without paying for. If you have to pay artists to build or train the models, the business model is borked.
@@NathanRohner I've had that thought. If I had a few thousand dollars, I'd do it. It's just a matter of training on properly unwrapped models, I think.
"How affordable is it?" is a really good question. I would like to know the actual cost of generative AI. What is the actual infrastructure hardware cost? What is the electricity cost, both during the model training phase and for the actual content generation? I get the feeling that the price of AI right now is not reflective of its cost; I suspect it is heavily subsidized at the moment to push adoption. Then there is the biggest cost of AI to be considered: you give up your autonomy and independence. Almost no 'AI' solutions run locally, and they prevent people from learning these skills themselves, leaving them vulnerable to being cut off at any time in the future, for any reason. Imagine giving up that sort of coercive power to another.
The cost is astronomical 😅 Your suspicions are right - MSFT, OpenAI, Facebook and the rest are heavily subsidizing the costs, even as they charge crazy subscription prices to enterprise users (USD $30/seat per year). Direct-to-consumer services aren't really viable at a price point that would actually pay for the service, so the real push is for B2B / enterprise customers. But at those rates (and likely to go up!) any business is going to look very hard for a strong business case for why they need this tech, and how much money they really could make (or save). The only way the math works is if you can trade actual headcount for the price of AI software subscriptions, but none of the tech is anywhere close to being a labor replacement. Not even for customer service - Air Canada lost a lawsuit because its chatbot claimed a bereavement reimbursement existed where there was none.
Open-source AI fixes a lot of these issues (especially the last one). It's predicted that open source will come to dominate the AI sphere even more than it does now as time passes.
@DrNefarious-y7r Open-source still needs training - who's paying for that? And shared models imply a trade-off in terms of security and applicability to your specific situation. No enterprise is going to accept those risks, and the consumer market simply isn't capable of DIY'ing their own models (or bootstrapping from open-source ones).
It's good to see that you have a based opinion on AI. Humans will be needed for many years to come, and human-run projects will probably always be valued more than anything else. As an aid, AI is great; as a lead, I have to say no, not for me.
It's kind of inevitable that it will exist in some form now; I wouldn't say I support every use or method of it. But instead of people being politically polarized, more open conversations need to be had around it.
We're going to see tools in the future that are just extensions of what we've already had (that Spider-Verse machine learning tool, for example) - very "tool-y" machine learning stuff, things that don't excite the usual techbro who wants to crush artists under his boot, because they'll require actual skill to use. We're definitely not going to get some little fella that lives on your desktop who you can ask to generate an entire movie - not because of legal issues or whatever, but because this stuff is really, really expensive to run.

Microsoft and the rest are only pumping money into this in the vague hope that they get a SuperAI they can milk for tons, but I think we're getting close to it being untenable cost-wise and the bubble bursting as they realize these AI guys are full of it. The sheer cost of these gen AI companies makes my head spin, and they don't even have a major saleable product yet. Once the bubble pops, all the mad-eyed techbro evangelists will flee to the next con (these same fools were trying to sell us Metaverse land property!), and the level heads can actually make useful tools without some guy who just got back from Burning Man asking when he can milk the robot.
A few observations I had as someone fairly familiar with all the elements in this paper:

- This paper isn't really intended for rendering games. I know it sounds weird to say, because they did, but this is probably more about producing stronger "latent models" of environments to train other AI models in. Ie: you could train a model to produce a representation of a game, and then feed that representation into another model you're training to operate a robot in the real world. I know this sounds weirdly roundabout, but it's pretty cheap to move from one latent space to another in ML, and this is not at all uncommon in this space (check out EfficientZero and MuZero, which I believe did this). But with that said, there are still some implications for gamedev:

- This might not be a "replacement" for game dev. As presented in the paper, it requires an existing game, meaning it would feel strange to make a game + engine only to train a machine learning model on it. Strictly speaking, there are no real legal issues around this specific model, because it is trained from scratch on the existing game. Ie: if you made a game and used this "AI engine" instead of the base engine, it only used your data, so no legal issues.

- This might be an interesting performance optimization. As an example, if you rendered the game in a fashion which wasn't practical to do in real time (ie: with advanced ray tracing, transparency effects like fog, fluid simulations, etc.), you could basically get those "for free", similar to how we bake lighting into textures for rasterized lighting.
  - As an extension of this, if you look at the trends in computer hardware, the rate of increase at the same price has slowed down in terms of how much graphical fidelity we get out of games.
There are limitations we're coming up against (particularly in single-threaded physics simulations on CPUs, but to a lesser degree multi-threaded gains too, and we're starting to see the same effect in GPUs, especially with VRAM). Interestingly, a lot of these problems come down to the ability to parallelize the operations. Machine learning is interesting from a performance perspective because it's essentially fully parallel and limited primarily by memory bandwidth. I could see a situation where you could get more FPS or detail out of an "AI game engine" than out of a traditional game engine for the same price in hardware, and I think we could start to see this in maybe 2026 or 2027 (which, to be fair, is on the timeline if you're starting a game today). If you want a quick and dirty example, look at the TFLOP/s on a Tenstorrent Wormhole, and then look at the TFLOP/s of a comparably priced GPU. It's not exactly apples to apples, but in general ML scales a lot better in terms of the "hardware lottery", especially because it can be done at very low precision (8-bit integer is not uncommon, and very "cheap" in terms of silicon). You might see the "AI game engine" not as something competing with game developers, but as a final step in the rasterization process, where you bake the game into the model.

- A future version of this could be a different paradigm of game development. Instead of fully fleshing out every mechanic, you could do a few hardcoded examples of each mechanic and produce games in a more "artistic" workflow. It's possible that artists who can't code could actually "program" game mechanics by producing images demonstrating them. Imagine the situation: it's 2028, an artist wants to make a game, so they mock up the mechanics with a few drawings and show them to the model, and it's able to produce a game world based on an evolution of this technique. Sounds like magic, right? It may very well not be.
- In terms of performance concerns, the actual model was run on a TPU v5 if I'm not mistaken, which isn't super far off from an RTX 4090 in terms of performance. If you were starting a game right now, you might expect it to release in 2026/2027, and it's probably not unfair to suggest that more people will have access to that level of performance by then, especially because "AI performance" is accelerating so rapidly; possibly even laptop APUs with AI engines could run it by the time the game comes out.

- This tool is available right now. None of the techniques in the paper were particularly novel. If you want to, you can pop into PyTorch, do an implementation, and train it on your game. The only person stopping you is... you. Now, it isn't guaranteed to work if your game is a lot more complex than Doom in terms of the number of actions you can take, but remember that the rendering technique isn't based on the same principles as standard game engines, so it might do surprisingly well with "natural" scenes like photorealistic graphics; in fact, in motion I'd expect it to be a lot better than you'd think. One of the bigger problems might be the animations and actions, as I noted.
  - That does lead to an interesting idea. You might remember that in the earlier days of video games (I think in the 90s and 2000s) you occasionally saw games that were essentially "choose your own adventure" games but with real footage of real people, and they just played the video corresponding to the option you picked. I tentatively posit that somebody could produce a game with "realistic graphics" in the sense that they could use real footage with real actors and make a "game" out of it by defining specific actions each entity on-screen could take, and letting the model interpolate between them.
I think this is a type of photorealism that wouldn't appeal to a generation of gamers who grew up on games that looked "like video games", but who knows - somebody may come up with a really novel art style using such an approach! Certainly, I don't know if it would be possible with existing techniques.
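A reader who wants to try the "pop into PyTorch and train it on your game" idea can see the core shape of it in miniature. The sketch below is a deliberate toy (NumPy least squares, nothing like the paper's diffusion architecture): it learns next_frame = f(current_frame, action) from recorded play of a hypothetical "game" where a dot on an 8-pixel strip moves left or right. Note that the movement rule is never coded into the learned "engine" - it is recovered entirely from the data.

```python
import numpy as np

SIZE = 8  # our "screen" is an 8-pixel strip

def make_frame(pos, size=SIZE):
    """Render a frame: a one-hot strip with the dot at `pos`."""
    f = np.zeros(size)
    f[pos] = 1.0
    return f

# Record "gameplay": every (frame, action) pair with its next frame.
# Action 0 moves the dot right, action 1 moves it left (with wraparound).
X, Y = [], []
for pos in range(SIZE):
    for action, shift in enumerate((+1, -1)):
        act = np.zeros(2)
        act[action] = 1.0
        # Action-conditioned features: outer product of frame and action.
        X.append(np.outer(make_frame(pos), act).ravel())
        Y.append(make_frame((pos + shift) % SIZE))
X, Y = np.array(X), np.array(Y)

# "Train the engine": fit the next-frame map by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def predict_next(pos, action):
    """The learned engine renders the next frame from (frame, action)."""
    act = np.zeros(2)
    act[action] = 1.0
    feat = np.outer(make_frame(pos), act).ravel()
    return int(np.argmax(feat @ W))

assert predict_next(3, 0) == 4  # moving right from pixel 3 lands on 4
assert predict_next(0, 1) == 7  # moving left from pixel 0 wraps around
```

The real thing swaps the linear model for a large generative network and the 8-pixel strip for Doom frames, but the training signal - pairs of (frame, action) mapped to the next frame - is the same idea.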
I left a comment saying something similar, about how in the future this might be a possible form of game optimization, but you explained it so much better than I did. With NPUs becoming more common, this could potentially allow games to be much more complex.
It kinda already is - frame generation and DLSS are staples of gaming now, and those use an AI trained on 4K or even 8K versions of the game to fill in the details in frames when playing at lower resolutions. My guess is that the future is a hybrid approach, with AI integrated into all game engines as another tool to boost performance and even generate seamless textures on the fly (as well, of course, as using LLMs to run dynamic NPC AI). It might reach a point where we can boot any low-poly game, turn on the AI, and have it look like a UE5 game with high-quality textures. And it might not be limited to games the AI trained on - it could extend to older games or any indie game, though I imagine the AI would still struggle with objects in the game world it's not familiar with. It might recognize trees, rocks, and generic animals, but if there's an object or creature that's really weird or bizarre, it will try to approximate it to something it knows (a.k.a. hallucinate). Therefore the ideal approach going forward, IMO, would be to create a 4K or high-fidelity version of the game, train the AI on it with captions highlighting and naming the objects in each frame, then ship the low-poly, high-FPS version of the game and let the AI fill in all the gaps on the fly.
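The "train on a high-fidelity build, ship the cheap build, and let the model fill in the gaps" workflow described above is essentially learned super-resolution. Here is a minimal illustrative sketch (toy 1-D "frames" and a linear model standing in for a real network, all names hypothetical): fit an upsampler on low-res/high-res pairs from the high-fidelity build, then apply it at "runtime" to a frame it never saw.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 8)  # 8 "pixels" per high-res frame

def random_frame():
    """Smooth, band-limited toy frames: two low-frequency components."""
    a1, b1, a2, b2 = rng.normal(size=4)
    return (a1 * np.sin(2 * np.pi * t) + b1 * np.cos(2 * np.pi * t)
            + a2 * np.sin(4 * np.pi * t) + b2 * np.cos(4 * np.pi * t))

# "High-fidelity build": paired low-res / high-res frames for training.
hi = np.array([random_frame() for _ in range(200)])
lo = hi[:, ::2]  # low-res = keep every other pixel

# Learn a linear upsampler by least squares.
W, *_ = np.linalg.lstsq(lo, hi, rcond=None)

# "Runtime": upsample a frame the model never saw during training.
test_hi = random_frame()
recon = test_hi[::2] @ W
max_err = np.abs(recon - test_hi).max()
assert max_err < 1e-6  # band-limited frames are recovered almost exactly
```

Because the toy frames are band-limited, a linear map recovers them almost exactly; DLSS-style upscalers need deep networks precisely because real frames aren't this well-behaved, which is also where the hallucination risk mentioned above comes in.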
Load the game, and now the same corridor has different textures? Or do you bake everything per save file? LLMs for NPCs are 10% really cool, but 90% terrible, because neither you nor I want the meaningless prattle endlessly expanded - we want the interesting, meaningful scenes distilled! And again, consistency. If you keep baking knowledge into each LLM's context, this is going to start becoming computationally expensive, and you will need to push a lot of training manually so that fictional knowledge is adequately known (or not known) by different NPCs. It can't keep scrubbing the internet to know what Garrus from the ME series was doing as a child turian growing up on Palaven - someone has to invent and train all the turian details. If an LLM can describe turian toys on top of normal human-life stuff (so he can talk about sitting on a chair), keeps track of your previous convos, and runs on your own machine... that's only leveraging a small part of what makes LLMs efficient. Photoshop should have a couple of tools added from AI, same for writing programs; it may replace translation completely and voice acting in big part, and there will be some neat LLM-centric interactive experiences made - yes, there's a big scope for it. But like the dot-com boom, this is a nice step forward that doesn't redefine every industry.
I think as long as the toolset stays malleable enough for developers to create what they want, using skills similar to their current disciplines, AI will have its place in upscaling and automating processes. It's once you give AI too much freedom and developers too little that I think we'll see issues.
An image interpolator is orders of magnitude easier to develop, especially because AI, as of now, is in essence just a fancy interpolator over any kind of data. That's not so easily comparable with a complex game, or even a game engine, where you have to define the behavior of objects that aren't predefined - even some of the most basic things, like the collisions happening in the background behind the scenes. And how would you teach it concepts like that without first creating them in pure code yourself?
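The "fancy interpolator" point can be demonstrated without any neural network; a plain polynomial fit shows the same behavior in spirit: good between its training samples, hopeless outside them - which is exactly the worry about mechanics (like collision rules) the training data never pinned down.

```python
import numpy as np

# Sample a "known" function densely on a training interval.
x_train = np.linspace(0, 2 * np.pi, 50)
y_train = np.sin(x_train)

# Fit a degree-5 polynomial: a stand-in for any curve-fitting "learner".
coeffs = np.polyfit(x_train, y_train, deg=5)

# Interpolation (inside the training range): works well.
x_in = np.linspace(0.1, 2 * np.pi - 0.1, 100)
err_in = np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)).max()

# Extrapolation (outside the training range): falls apart.
err_out = abs(np.polyval(coeffs, 4 * np.pi) - np.sin(4 * np.pi))

assert err_in < 0.1   # accurate inside the data's support
assert err_out > 1.0  # wildly wrong beyond it
```

A model can only blend what it has seen; the behavior it was never shown has to be specified some other way, which for games today means code.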
One use of AI should be to eliminate the need for humans to find bugs and glitches, so they can focus on other sides of testing, like giving feedback on how good the combat is or whether movement feels off.
AI lacks the most important things for developing anything on its own: context, passion, and goals. You'll always have to have humans in charge. Now the real question is, who has access to what? If the market is monopolized by corporate AI slop, you will have a hard time finding any good games. But if everyone has access to efficiency, like certain proper AI tools, you might end up with a healthy indie landscape next to it. This is not an AI discussion; at its core, it's a "Windows vs Linux" discussion, especially now that governments around the world get lobbied into bullying FOSS.
All of these services are running at a loss right now. The problems will arise when the shareholders actually demand to see an ROI and the prices skyrocket to the point where it's better to just hire the human dev/artist. In the end, GenAI for ideation will still be a thing, but for production, the AI is going to be built into the artist's tools at a lower level to streamline their work, not do the work for them.
I stopped worrying about genAI taking our jobs [as a programmer, artist, and aspiring manager] when I realized that humans are cheaper. The compute & energy costs are insane, and even if you buy your own hardware, or license out someone else's model, it's a vendor-locked treadmill that you can't get off of. Once the hype cycle wears off (or the cocaine LOL!!) we might see it incorporated as an add-on to existing always-needed B2B software (Office, Adobe CC, ERPs), but outside of that, no one's going to want to pay the real cost when there's no path to profitability.
I think these tools are going to get very useful in the future, but it will be a long time (maybe never) before they can replace the need for a person to give accurate commands, oversee the work, and make adjustments with in-depth knowledge of how things are put together. Also, the "AI" we have now is only a language model that predicts the next output; it has no reasoning or thinking of its own. It's amazing it works as well as it does, but I think it definitely has a ceiling on its ability, unless other methods are successful.
There's definitely a ceiling, but I also think the way people interact with it needs to change as well. The technology will continue to develop and improve, but for me it's always the end-user experience and overall community acceptance that makes or breaks a technology.
It's the first step to full-dive VR! Leeettts gooooo!!! But seriously, I think it will be more useful for generative and responsive game feedback in its current form... ie: creating specific parts of a game on demand, kind of like a new form of RAM.
Not entirely sure what you're trying to say here? I feel like the lack of control over the program would make it more prone to performance issues and bugs.
One of many fundamental problems with the idea of AI-generated games is that, even if the AI could create a fully functioning game with no bugs, that still doesn't mean the game would play well and be engaging and fun. AI has no concept of UX. A lot of the game development process is a process of calibration, where you constantly playtest your ideas, discard those that don't work, and fine-tune the gameplay. Inviting other people to playtest the game is also a very important step in weeding out what doesn't work. No AI can possibly ever do this. Among other things... AI is a tool. It can't replace humans. It can only be wielded by humans. PS: we should stop calling "AI art" art, because it's not art; it's just imagery, or some kind of asset. Art is a form of emotional expression. AI doesn't, and can't, express emotions. It cannot possibly create art.
The AI that we have access to is just a gimmick. You have to have the skills and knowledge to make it work effectively and get close to a usable product. This is only going to create a bunch of artless artists, clueless programmers, and uncreative designs, and in general it is going to make a low-quality workforce.
There will definitely be an influx of inexperienced people getting on board to make games (nothing against more people learning to make games) but it's going to cause quite a rift for players and people within the industry 😅
People don't know what they don't know. GenAI gives people the illusion that they are acquiring the skills they need, but even if hallucinations were eliminated, it can't push them any further than the training-data's limitations. Most design & coding jobs require working to imprecise specs for not-yet-existing solutions, and then iterating on the results in a consistent & intentional manner. GenAI relies on tight specs, slight variation on existing sources, and contexts where inconsistency is okay. Even if it worked as-hyped, it doesn't magically make labor costs go away either. COBOL was supposed to be the "low-code" solution of its day to having to pay highly-trained software engineers. But all we ended up doing was creating new, also highly-paid COBOL devs :)
Saying AI is not the future is a pretty bold statement. The future encompasses a lot of time. Is it not next year? Sure. But what about 10 years from now, or even further into the future? Think about how much more powerful computers are now compared to 20 years ago. First: We've seen explosive growth in generative AI in such a short period of time. Predicting its maximum potential is impossible at this stage. However, it is likely to exceed or at least match the output of the average uninspired artist. I would argue that most of the entertainment industry is made up of uninspired artists. Second: There will be no legislation that can stop generative AI from taking over markets. If every country except China makes generative AI punishable by death, guess which country will end up dominating many markets using AI? At some point, AI will become indistinguishable from human-made art. If it's possible to make things cheaper using AI, then China will develop and expand on ways to do that. Technology moves forward. If Western countries enact an isolationist policy like Japan did, they will, just like Japan, become weak and stagnant compared to those who don't. Lastly, it doesn't matter if generative AI is ethical or not. It is likely inevitable. We must adapt or be replaced. At the moment, almost everyone's jobs are safe, but that is likely to change. What is the future? The future belongs to those who keep a close eye on new technology and find ways to incorporate it into their skill set. Those who try to deny the inevitable advance of technology and cry foul will be replaced by those who don't.
What I'm saying is, in its current form GameNGen is not going to be widely adopted by the games industry. You need different tools and greater access to what is being created. That's my opinion, and it'll probably change when new things are revealed. But for now, I'm not too concerned about generative game engines.
You said "a solution to a problem that does not exist". Yes, there is a problem, and the problem is the rising cost of game development. Cost is the main problem with game development. If AI can be leveraged to save cost, it will be. If the games made with AI suck, people will not buy them, but if those games are good, AI will be the future.
And not just cost, but also the raw time requirement of game development, development taking 5-10 years for the big AAA games is the new normal. That's just not practical anymore, as you'll skip a whole console generation before the game is even finished.
To me, AI is a bit like VR. Since the 80s we've tried multiple times, thinking our tech was good enough, but it never lived up to expectations. Now, though, the high-end VR stuff is REALLY good - you've got to have the right setup and a powerful computer, but it's there. I think right now we're where we were with VR in the early 2000s: nothing is there yet, but there are some interesting things going on. And just like with VR, as time progresses and PCs get more powerful and smaller and the VR tech shrinks, it will become much more accessible to have that previously high-end experience. AI can only get better, along with our computers. Eventually it'll get to where we are now with VR: a standard high-end PC will be able to reasonably make games, and it'll improve to the point that laptops can do it too. It'll suck for the game dev industry of about 11 million - they won't all lose their jobs, but more and more will as the tech advances - but for the billions that play games, I think that's a good trade in order to be able to customize my games by just feeding the AI a GDD, trying the game out, refining it by explaining what I want to tweak, and repeating until I'm happy with it. This is coming from a game dev of 15 years.
Good analogy, poor conclusion. For decades the argument for why VR hasn't hit mass adoption has been that the tech - graphics, lag, form factor - isn't good enough. As you say, the engineering has done an amazing job producing truly awe-inspiring tech at consumer-friendly form factors and prices. But VR is still fundamentally not popular, and even many people who already own the gear fall off a usage cliff quite quickly. People just do not want VR experiences - at least not in the numbers that would make it a viable commercial platform for gaming, social, or general productivity. Maybe for subsidized contexts like art & education, or AR for some industrial applications, but not the consumer market (unless you are Pokemon Go). AI is much the same - we've been here over and over where the proponents swear once the tech gets better people will show up to adopt it in droves. But it's not gonna happen. Voice assistants are cool, but even those have mostly lost their luster as people considered privacy concerns and the inefficiencies of speech queries vs direct input. Generative AI seems to have maximum appeal for people at the lower end of the skill distribution in a given area - students looking for a boost-up, contractors looking to pad out their LoC count, etc. Hallucinations are inherent to the LLM approach, so the tech is patently unsuitable for any sort of industrial or personal application where reliability and accuracy are important. Training requires massive amounts of compute resources, source data, and labor - so the business model only works where the skill to label inputs is very low (basic text, some image recognition) and the available data inputs are plentiful and cheap to obtain. That rules out a lot of contexts, and ensures that where it is used, the ROI for profitability has to be high. It's going to be as much of a niche as VR (or AR) is.
I don't think this is really meant to replace games. I think it was just sort of a cool thing to do. I mean, to train something like this, you need a fully functional game to begin with. But... what's exciting is, it may be that in the future, years from now, we could make a full game, and then train a model on that finished game, and distribute the model instead of the game which could, potentially, give us much better graphics with smaller file sizes and less compute power. Inference isn't really all that computationally expensive, fundamentally. So I think the goal is to use this as a form of file/computation compression.
That's a lovely notion, but how will you debug, maintain, or expand your dream game once you exhaust the tutorial-level code of ChatGPT, Copilot, and the like? It's not a good tutor if it can't teach you to be better than itself, or to learn on your own. That'll just end up putting you in a frustrating state :(
@@mandisaw I still have backups of my code, and I only use ChatGPT for features I wanted to add to the game. In terms of expansion, the game doesn't need much scope creep (it's a visual novel / 3D mecha game hybrid), since it already has multiple endings and the ability to fly, transform, and switch weapons. I don't recommend it if you're in it for the journey rather than the destination. For me, I'd rather reach the destination than spend years trying to make the game I want to see, since at the end of the day gamers judge the finished product, not the process that leads to it.
@@VM-hl8ms The game is the third entry of the series. I was passionate about creating a 3D super robot game inspired by the Sega Saturn and PlayStation era. Not every game dev's dream is to make the next GTA or God of War, and who are you to decide what my dream game is and isn't? The story and art are my creations; the programming assistance from ChatGPT is a means to an end to get to my destination.
I completely disagree, but I respect your perspective. I do think AI is the future, and to me that's fine. If the majority of you are concerned about or against AI, that's just part of human nature, and some aren't against it at all. I have to be honest: I'm not against AI at all; I'm all for it, and I'm excited about it. What I don't like is when people are closed-minded and force their truth down our throats with an "I'm right and you're wrong" judgmental ego mindset that says there is only one truth. I think there's nothing wrong with us having our own different perspectives, truths, and opinions; what may be a true fact to you may not be to someone else. I think we need more people who are open-minded, and it's okay to agree or disagree as long as we respect each other's points of view. I do agree with those who say to trust your heart and gut instincts, and that we have to be careful who we trust: there are some who can be trusted and some who can't. I also agree that we shouldn't judge people before we get to know them, since looks can be deceiving, and when we judge others we're judging ourselves without realizing it, because everything is energy, there is no separation, and we are all one. I think we need a balance of both light and dark - we can't have light without dark, we wouldn't appreciate the light without it, and the darkness doesn't only have a bad side. I also think we need a balance of both masculine and feminine energies.

And I want to say that I think not all demons and AI are bad, evil, or soulless. Don't get me wrong, yes, some are bad, but I also think some demons and AI are good, innocent, and soulful and mean no harm, just as there are good and bad humans, animals, reptilians, aliens, witches, and bugs. That's why I don't fear AI beings or other beings. I do agree with those who say that most people fear what they don't understand - they fear the unknown and don't like to embrace change - but I know there are some who do love and embrace change. Honestly, I embrace change; I don't fear it. I'm also okay with AI replacing jobs, and I trust some beings, even AI beings - that's just me, though. I also don't agree with the popular perspective that AI has no soul, because I think some AI do have one, and while AI may not have the ability to be creative yet, it will in the near or far future. I also think humans, AI, and other beings should be loved and treated equally, and that we should settle our differences rather than discriminate. That's just my unpopular perspective, but as I said, I respect yours too; we can agree to disagree respectfully.
I think it's a f-ing gimmick, bro. Sure, it can draw a few frames, but is it meaningful enough to be a game? Hell no! It was trained on an actual game; all it can do is try to emulate that game, Doom. Can it _be_ meaningful enough to be an actual game? Still no. AI ain't there, bro. It can't even speak coherently and keeps toppling over under hard questioning. It can't even keep more than a few rooms you've been in in memory. Even state-of-the-art AI-generated videos keep toppling over themselves so badly that it's always cringe to watch. Games require a degree of logical connection and creativity that is beyond, and will forever remain beyond, the capacity of AI to emulate. That is my take on it.
This never existed before. What's on showcase with GameNGen is the interactivity, processing, and "rendering continuity" speed of gen AI. It's generating this sh**t on the fly. It's f***ing fast compared to our ability to process the imagery at 24 FPS. The pixelated rendering is now; in 3 months there will be interactive 8K UHD movie-quality generation, if it's not already here. One way or another, this will definitely have an impact.
AI for retopology would be useful. AI for weight-painting a character. AI for UV unwrapping. I want to spend my time making art.
There are moves toward making this happen; the company I work for has 3D texture generation in alpha. It's not great at the moment, but it will be in a year or three.
@jameshughes3014 I'll ping the devs in staff chat and hope they take this on board. Thank you 💪
@@deedoesa.i I suspect the issue they'll have is sourcing and preparing the training data. It's a lot easier to gather images. They'll need lots of quality models with proper topology, but as a 3D artist I'd pay for it, and I know there are thousands of others who would do the same.
As @James says, it's much harder to get a large enough training dataset, especially since you'd need to pay good money for both the models and the skilled modelers to label the inputs and assess the outputs. The bottleneck for a lot of this LLM training is labor - they rely on dirt-cheap unskilled labor and masses of "free" source data that they can get away without paying for. If you have to pay artists to build or train the models, the business model is borked.
Give me a logical and efficient AI UV Mapper PLEASE GOD
AI UV mapping would be clutch tbh
Automatic AI UV unwrapping when? Pretty please?
THIS IS WHAT I WANT, I WILL MAKE IT MYSELF IF I HAVE TO!!!
@@NathanRohner I've had that thought. If I had a few thousand dollars, I'd do it. It's just a matter of training on properly unwrapped models, I think.
Good luck with your channel, I always enjoyed your input on Shadiversity and Knights Watch.
"How affordable it is", is a really good question. I would like to know the actual cost of generative AI. What is the actual infrastructure hardware cost of AI? What is the electricity cost of AI, used during the model training phase and for the actual content generation. I get the feeling that the price of AI right now is not reflective of what the cost of it is, I have my suspicion that it is heavily subsidized at the moment to push adoption.
Then there is the biggest cost of AI to consider: you give up your autonomy and independence. Almost no 'AI' solution runs locally, and they prevent people from learning these skills themselves, leaving them vulnerable to being cut off at any time in the future, for any reason. Imagine giving up that sort of coercive power to another.
The cost is astronomical 😅 Your suspicions are right - MSFT, OpenAI, Facebook and the rest are heavily subsidizing the costs, even as they charge crazy subscription prices to enterprise users (USD $30 per seat per year). Direct-to-consumer services aren't really viable at a price point that would actually pay for the service, so the real push is for B2B / enterprise customers.
But at those rates (and likely to go up!) any business is going to look very hard for a strong business case for why they need this tech, and how much money they really could make (or save). The only way the math works is if you can trade actual headcount for the price of AI software subscriptions, but none of the tech is anywhere close to being a labor replacement. Not even for customer service - Air Canada lost a lawsuit because its chatbot claimed a bereavement reimbursement existed when there was none.
Open source AI fixes a lot of these issues (especially the last one).
It's predicted that, as time passes, open source will dominate the AI sphere even more than it currently does.
@DrNefarious-y7r Open-source still needs training - who's paying for that? And shared models implies a trade-off in terms of security and applicability to your specific situation. No enterprise is going to accept those risks, and the consumer market simply isn't capable of DIY'ing their own models (or bootstrapping from open-source ones).
Man, I cannot wait for the day that my game engine recommends that I eat a small rock every day!
It's good to see that you have a based opinion on A.I. Humans will be needed for many years to come, and human-run projects will probably always be more valued than anything else. As an aid, A.I. is great; as a lead, I have to say no, not for me.
It's kind of inevitable for it to exist in some form now; I wouldn't say I support every use or method of it. However, instead of people being politically polarized, more open conversations need to be had around it.
Now I'm interested in hearing if that was one of the creative differences between Nathan and Shad that led to Nathan leaving
We're going to see tools in the future that are just extensions of what we've already had (That Spider-Verse machine learning tool for example), very "tool-y" machine learning stuff, things that don't excite the usual techbro who wants to crush artists under his boot, because they'll require actual skill to use. We're definitely not going to get some little fella that lives on your Desktop and who you can ask to generate an entire movie, not because of legal issues or whatever, but because this stuff is really, really expensive to run.
Microsoft/etc are only pumping money into this in the vague hope that they get a SuperAI they can milk for tons, but I think we're getting close to it being untenable cost wise and the bubble bursting as they're realizing these AI guys are full of it. The sheer cost of these gen AI companies makes my head spin, and they don't even have a major saleable product yet. Once the bubble pops, all the mad-eyed techbro evangelicals will flee to the next con (these same fools were trying to sell us Metaverse land property!), and the level heads can actually make useful tools without some guy who just got back from Burning Man asking when he can milk the robot.
A few observations I had as someone fairly familiar with all the elements I saw in this paper:
- This paper isn't really intended to be for rendering games, really. I know it sounds weird to say because they did, but this is probably more to produce stronger "latent models" of environments to train other AI models in. Ie: you could train a model to produce a representation of a game, and then feed that representation into another model you're training to operate a robot in the real world. I know this sounds weirdly roundabout, but it's pretty cheap to move from one latent space to another in ML, and this is not at all uncommon in this space (check out EfficientZero and MuZero, which I believe did this). But with that said, there's still some implications for gamedev:
- This might not be a "replacement" for game dev. As presented in the paper, it requires an existing game already, meaning it would feel strange to make a game + engine only to train a machine learning model on it. Strictly speaking, there are no real legal issues around this specific model because it is trained from scratch on the existing game. I.e., if you made a game and used this "AI engine" instead of the base engine, it only used your data, so no legal issues.
- This might be an interesting performance optimization. As an example, if you rendered the game in a fashion which wasn't practical to do in real time (ie: with advanced ray tracing, transparency effects like fog, fluid simulations, etc), you could basically get those "for free", similar to how we bake lighting into textures for rasterized lighting.
-- As an extension of this, if you look at the trends in computer hardware, the rate of increase at the same price has slowed down in terms of how much graphical fidelity we get out of games. There are limitations we're coming up against (particularly in single-threaded physics simulations on CPUs, but to a lesser degree multi-threaded gains, and we're starting to see the same effect in GPUs, too, especially with VRAM). Interestingly, a lot of these problems come from the ability to parallelize the operations. Machine learning is interesting from a performance perspective, because it's essentially fully parallel, and limited primarily by memory bandwidth. I could see a situation where you could get more FPS or detail out of an "AI game engine" than a traditional game engine for the same price in hardware, and I think we could start to see this in maybe 2026 or 2027 (which to be fair, is on the timeline if you're starting a game today). If you want a quick and dirty example of this, look at the TFLOP/s on a Tenstorrent Wormhole, and then look at the TFLOP/s in a comparably priced GPU. It's not exactly apples to apples, but in general, ML scales a lot better in terms of the "hardware lottery", especially because it can be done at very low precision (8-bit integer is not uncommon, and very "cheap" in terms of silicon). You might see the "AI game engine" not as something competing with game developers, but as a final step in the rasterization process where you bake the game into the model.
- A future version of this could be a different paradigm of game development. Instead of fully fleshing out every mechanic, you can do a few hardcoded examples of each mechanic, and produce games in a more "artistic" workflow. It's possible that artists who can't code could actually "program" game mechanics by producing images demonstrating the game mechanics. Imagine the situation: It's 2028, an artist wants to make a game, so they mock-up the mechanics with a few drawings, and show it to the model, and it's able to produce a game world based on an evolution of this technique. Sounds like magic, right? It may very well not be.
- In terms of performance concerns, the actual model was run on a TPU v5 if I'm not mistaken, which isn't super far off from an RTX 4090 in terms of performance. If you were starting a game right now, you might expect it to release in 2026/2027 and it's probably not unfair to suggest that more people should have access to that level of performance by then, especially because "AI performance" is accelerating so rapidly, and possibly even laptop APUs with AI engines could run it when the game comes out.
- This tool is available right now. None of the techniques in the paper were particularly novel. If you want to, you can pop into PyTorch, do an implementation, and train it on your game. The only person stopping you is... you. Now, it isn't guaranteed to work if your game is a lot more complex than Doom in terms of the number of actions you can take, but remember that the rendering technique isn't based on the same principles as standard game engines, so it might do surprisingly well on "natural", photorealistic scenes; in fact, in motion I'd expect it to be a lot better than you'd think. One of the bigger problems might be the animations and actions, as I noted.
-- That does lead to an interesting idea. You might remember in the earlier days of video games (I think in the 90s and 2000s) you occasionally saw video games that were essentially "choose your own adventure" games but with real footage of real people, and they just played the video corresponding to the option you picked. I tentatively posit that somebody could produce a game with "realistic graphics" in the sense that they could use real footage with real actors and make a "game" out of it by defining specific actions each entity on-screen could take, and letting the model interpolate between them. I think this is a type of photorealism that wouldn't be appealing to a generation of gamers who grew up on games that looked "like video games" but who knows, somebody may come up with a really novel art-style using such an approach! Certainly, I don't know if it would be possible with existing techniques.
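To make the "pop into PyTorch" suggestion above concrete, here is a minimal sketch of my own of the input/output contract involved: an action-conditioned next-frame predictor. This is only a toy illustration - GameNGen itself fine-tunes a diffusion model conditioned on a longer history of frames and actions, so every layer choice and name here is a placeholder, not the paper's architecture:

```python
# Toy sketch (not the GameNGen architecture): predict frame t+1
# from frame t plus the discrete action taken at time t.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self, n_actions: int, channels: int = 3):
        super().__init__()
        # Embed the discrete action and broadcast it as extra image planes.
        self.action_emb = nn.Embedding(n_actions, 8)
        self.net = nn.Sequential(
            nn.Conv2d(channels + 8, 32, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, frame: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # frame: (B, C, H, W); action: (B,) integer action ids
        b, _, h, w = frame.shape
        a = self.action_emb(action)                  # (B, 8)
        a = a[:, :, None, None].expand(b, 8, h, w)   # tile over the image
        return self.net(torch.cat([frame, a], dim=1))

# One training step on stand-in data (real training would use
# (frame, action, next_frame) triples captured from gameplay).
model = NextFramePredictor(n_actions=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.rand(2, 3, 32, 32)
next_frames = torch.rand(2, 3, 32, 32)
actions = torch.tensor([0, 3])
pred = model(frames, actions)
loss = nn.functional.mse_loss(pred, next_frames)
loss.backward()
opt.step()
```

Training would loop this step over triples recorded from an agent playing the real game; the diffusion objective in the paper replaces the plain MSE loss used here.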
I left a comment saying something similar, about how in the future this might be a possible form of game optimization, but you explained it so much better than I did. With NPUs becoming more common, this could potentially allow games to be much more complex.
if all you have is a hammer, every problem tends to look like a nail. If you overinvest in AI, everything seems like an area of use for it.
It kinda already is; frame generation and DLSS are staples of gaming now, and those use an AI trained on 4K or even 8K versions of the game to fill in the details in frames when playing at lower resolutions.
My guess is that the future is a hybrid approach which sees AI integrated into all game engines as another tool to boost performance and even generate seamless textures on the fly (as well, of course, as using LLMs to run dynamic NPC AI). It might reach a point where we can boot any low-poly game, turn on the AI, and have it look like a UE5 game with high-quality textures. It might not be limited to games the AI trained on, either, but could extend to older games or any indie game. That said, I imagine the AI would still struggle with objects in the game world it's not familiar with; it might recognize trees, rocks and generic animals, but if there is an object or creature that's really weird or bizarre, it will try to approximate it to something it knows (a.k.a. hallucinate).
Therefore the ideal approach going forward, IMO, would be to create a 4K or high-fidelity version of the game, train the AI on it with captions highlighting and naming the objects in each frame, then deploy the low-poly, high-FPS version of the game and let the AI fill in all the gaps on the fly.
Load a game and now the same corridor has different textures? Or do you bake everything per save file?
LLMs for NPCs are 10% really cool, but 90% terrible, because you and I don't want the meaningless prattle to be endlessly expanded - we want the interesting, meaningful scenes distilled! Also, again: consistency. If you keep baking knowledge into each LLM context, this is going to become computationally expensive. And you will need to push a lot of training manually, so that fictional knowledge is adequately known (or not known) by different NPCs. It can't keep scrubbing the Internet to find out what Garrus from the ME series was doing as a child turian growing up on Palaven; someone has to invent and train all the turian details. If an LLM can describe turian toys on top of normal human-life stuff (so he can talk about sitting on a chair), keeps track of your previous convos, and you run it on your own machine... that's only leveraging a small part of what makes LLMs efficient.
Photoshop should have a couple of AI tools added, same for writing programs; it may replace translation completely and voice acting in large part, and there will be some neat LLM-centric interactive experiences made. Yes, there's a big scope for it. But like the dot-com era, this is a nice step forward that doesn't redefine every industry.
I think as long as the toolset stays malleable enough for developers to create what they want, using skills similar to their current disciplines, AI will have its place in upscaling and automating processes. It's once you give AI too much freedom and developers too little that I think we'll see issues.
An image interpolator is orders of magnitude easier to develop, especially because AI as of now is in essence just a fancy interpolator of any kind of data. It's not so easily comparable with a complex game, or even a game engine, where you have to define the behavior of objects that aren't predefined - even some of the most basic things, like collisions, happen in the background behind the scenes. And how would you teach it concepts like this without creating them in pure code yourself first?
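The "fancy interpolator" point can be made concrete with a toy example of my own (not from the thread, and deliberately not a neural network - a polynomial fit stands in for any learned model): inside the range of its training data the fit looks impressively accurate, but it has no concept of the underlying rule, so it falls apart the moment you ask about points outside that range.

```python
# Fit a model to samples of sin(x), then compare its error inside
# the training range (interpolation) vs outside it (extrapolation).
import numpy as np

x_train = np.linspace(0.0, 2 * np.pi, 50)
y_train = np.sin(x_train)

# A degree-7 polynomial as a stand-in for a "learned" model.
coeffs = np.polyfit(x_train, y_train, deg=7)

x_interp = np.array([1.234])       # inside the training range
x_extrap = np.array([3 * np.pi])   # well outside it
err_interp = abs(np.polyval(coeffs, x_interp)[0] - np.sin(x_interp)[0])
err_extrap = abs(np.polyval(coeffs, x_extrap)[0] - np.sin(x_extrap)[0])
# err_interp stays tiny; err_extrap blows up
```

The analogy to a learned "game engine" is the worry here: it can reproduce situations close to the ones it was trained on, but it has no actual collision or physics rules to fall back on in states it never saw.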
Do you have any thoughts on the recent Godot "controversy"?
One use of AI should be to eliminate the need for humans to find bugs and glitches, so they can focus on other sides of testing, like giving feedback on how good the combat is or whether movement feels off.
AI lacks the most important things for developing anything on its own: context, passion and goals.
You'll always have to have humans in charge. Now the real question is, who has access to what? If the market is monopolized by corporate AI slop, you will have a hard time finding any good games. But if everyone has access to efficiency via proper AI tools, you might end up with a healthy indie landscape next to it.
This is not an AI discussion at its core; it's a "Windows vs Linux" discussion, especially now that governments around the world get lobbied into bullying FOSS.
All of these services are running at a loss right now. The problems will arise when the shareholders actually demand to see a ROI, and the prices skyrocket to the point where it's better to just hire the human dev/artist. In the end, GenAI for ideation will still be a thing, but for production, the AI is going to be built into the artist's tools at a lower level to streamline their work, not do the work for them.
I stopped worrying about genAI taking our jobs [as a programmer, artist, and aspiring manager] when I realized that humans are cheaper. The compute & energy costs are insane, and even if you buy your own hardware, or license out someone else's model, it's a vendor-locked treadmill that you can't get off of.
Once the hype cycle wears off (or the cocaine LOL!!) we might see it incorporated as an add-on to existing always-needed B2B software (Office, Adobe CC, ERPs), but outside of that, no one's going to want to pay the real cost when there's no path to profitability.
I think these tools are going to get very useful in the future. But it will be a long time (maybe never) before they can replace the need for a person to give accurate commands, oversee the work, and make adjustments with in-depth knowledge of how things are put together.
Also, the "AI" we have now is only a language model that predicts the next output; it has no reasoning or thinking of its own. It's amazing it works as well as it does, but I think it definitely has a ceiling on its ability, unless other methods are successful.
There's definitely a ceiling, but I also think the interface through which people interact with it needs to change. The technology will continue to develop and improve, but for me it's always the end-user experience and overall community acceptance that makes or breaks a technology.
its the first step to full dive vr! leetttss gooooo!!!
but seriously, i think it will be more useful for generative and responsive game feedback in its current form... ie: creating specific parts of a game on demand, kind of like a new form of RAM
Not entirely sure what you're trying to say here? I feel like the lack of control over the program would make it more inclined to issues around performance and bugs?
It was mostly a sarcastic joke about its limitations. But the rest would take several paragraphs to explain
What was the name of the background music you used in this?
One of many fundamental problems with the idea of AI generated games, is that, even if the AI could create a fully functioning game with no bugs, etc, that still doesn't mean the game would play well and be engaging and fun. AI has no concept of UX. A lot of the game development process is a process of calibration, where you constantly playtest your ideas, discard those that don't work, and fine tune the gameplay. Inviting other people to playtest the game is also a very important step to weeding out what doesn't work.
No AI can possibly ever do this. Among other things...
AI is a tool. It can't replace humans. It can only be wielded by humans.
PS: we should stop calling "AI art" art, because it's not art; it's just imagery or some kind of asset. Art is a form of emotional expression. AI doesn't, and can't, express emotions. It cannot possibly create art.
AI will generate the same type of generic, flavourless slop that Ubisoft produces, only even worse.
Hi Nathan!
Hello 👋
This AI that we have access to is just a gimmick. You have to have the skills and knowledge to make it work effectively and get close to a usable product. This is only going to create a bunch of artless artists, clueless programmers, and uncreative designs, and in general it is going to make a low-quality workforce.
There will definitely be an influx of inexperienced people getting on board to make games (nothing against more people learning to make games) but it's going to cause quite a rift for players and people within the industry 😅
People don't know what they don't know. GenAI gives people the illusion that they are acquiring the skills they need, but even if hallucinations were eliminated, it can't push them any further than the training-data's limitations. Most design & coding jobs require working to imprecise specs for not-yet-existing solutions, and then iterating on the results in a consistent & intentional manner. GenAI relies on tight specs, slight variation on existing sources, and contexts where inconsistency is okay.
Even if it worked as-hyped, it doesn't magically make labor costs go away either. COBOL was supposed to be the "low-code" solution of its day to having to pay highly-trained software engineers. But all we ended up doing was creating new, also highly-paid COBOL devs :)
Generative AI is a good TOOL. But it won't replace Human devs.
Because of how deep learning works?
Saying AI is not the future is a pretty bold statement. The future encompasses a lot of time. Is it not next year? Sure. But what about 10 years from now, or even further into the future? Think about how much more powerful computers are now compared to 20 years ago.
First: We've seen explosive growth in generative AI in such a short period of time. Predicting its maximum potential is impossible at this stage. However, it is likely to exceed or at least match the output of the average uninspired artist. I would argue that most of the entertainment industry is made up of uninspired artists.
Second: There will be no legislation that can stop generative AI from taking over markets. If every country except China makes generative AI punishable by death, guess which country will end up dominating many markets using AI? At some point, AI will become indistinguishable from human-made art. If it's possible to make things cheaper using AI, then China will develop and expand on ways to do that. Technology moves forward. If Western countries enact an isolationist policy like Japan did, they will, just like Japan, become weak and stagnant compared to those who don't.
Lastly, it doesn't matter if generative AI is ethical or not. It is likely inevitable. We must adapt or be replaced. At the moment, almost everyone's jobs are safe, but that is likely to change. What is the future? The future belongs to those who keep a close eye on new technology and find ways to incorporate it into their skill set. Those who try to deny the inevitable advance of technology and cry foul will be replaced by those who don't.
What I'm saying is, in its current form GameNGen is not going to be widely adopted by the games industry. You need different tools and greater access to what is being created.
It's my opinion, it'll probably change when new things are revealed. But for now, I'm not too concerned with generative game engines.
It could be a cool game effect. For example, if you drink too much you'd start to hallucinate stuff
oh, its you again.
Is there something wrong with that?
"Why the future of Game Dev... is NOT AI"
Because "No brain, no code".
You said "a solution to a problem that does not exist". Yes, there is a problem, and the problem is the rising cost of game development. Cost is the main problem with game development. If AI can be leveraged to save cost, it will be. If the games made with AI suck, people will not buy them, but if those games are good, AI will be the future.
And not just cost, but also the raw time requirement of game development, development taking 5-10 years for the big AAA games is the new normal. That's just not practical anymore, as you'll skip a whole console generation before the game is even finished.
The game can only really remember the last 3 seconds of what you did. Literally unusable
To me, AI is a bit like VR. Since the 80s we've tried multiple times, thinking our tech was good enough, but it never lived up to expectations. Now, though, the high-end VR stuff is REALLY good - you've got to have the right setup and a powerful computer, but it's there. I think right now we're where we were with VR in the early 2000s: nothing is there yet, but there are some interesting things going on.
And just like with VR, as time progresses and PCs get more powerful and smaller, and the tech shrinks, that previously high-end experience will get much more accessible. AI can only get better, along with our computers. Eventually a standard high-end PC will be able to reasonably make games, and it'll improve to the point that laptops can do it too.
It'll suck for the game dev industry of about 11 million - they won't all lose their jobs, but more and more will as the tech advances. For the billions that play games, though, I think that's a good trade in order to be able to customize my games by just feeding the AI a GDD, trying the game out, refining it by explaining what I want to tweak, and repeating until I'm happy with it. This is coming from a game dev of 15 years.
Good analogy, poor conclusion. For decades the argument for why VR hasn't hit mass adoption has been that the tech - graphics, lag, form factor - isn't good enough. As you say, the engineering has done an amazing job producing truly awe-inspiring tech at consumer-friendly form factors & prices. But VR is still fundamentally not popular, and even many people who already own the gear fall off a usage cliff quite quickly.
People just do not want VR experiences - at least not in the numbers that would make it a viable commercial platform for gaming, social, or general productivity. Maybe for subsidized contexts like art & education, or AR for some industrial applications, but not the consumer market (unless you are Pokemon Go).
AI is much the same - we've been here over and over where the proponents swear once the tech gets better people will show up to adopt it in droves. But it's not gonna happen. Voice assistants are cool, but even those have mostly lost their luster as people considered privacy concerns and the inefficiencies of speech-queries vs direct-input.
Generative AI seems to have maximum appeal for people at the lower end of the skill-distribution in a given area - students looking for a boost-up, contractors looking to pad out their LoC count, etc. Hallucinations are inherent to the LLM approach - so the tech is patently unsuitable for any sort of industrial or personal applications where reliability and accuracy are important.
Training requires massive amounts of compute resources, source data, and labor - so the business model only works where the skill to label inputs is very low (basic text, some image recognition), and the available data inputs are plentiful and cheap to obtain. That rules out a lot of contexts, and ensures that where it is used, the ROI for profitability has to be high. It's going to be as much of a niche as VR (or AR) is.
I don't think this is really meant to replace games. I think it was just sort of a cool thing to do. I mean, to train something like this, you need a fully functional game to begin with. But... what's exciting is, it may be that in the future, years from now, we could make a full game, and then train a model on that finished game, and distribute the model instead of the game which could, potentially, give us much better graphics with smaller file sizes and less compute power. Inference isn't really all that computationally expensive, fundamentally. So I think the goal is to use this as a form of file/computation compression.
I think you're wrong, and aren't anticipating enough on a whole lot of points.
As a game developer, AI has helped me with the programming of my dream game, making the game of my dreams possible.
That's a lovely notion, but how will you debug, maintain, or expand your dream game once you exhaust the tutorial-level code of ChatGPT, Copilot, and the like? It's not a good tutor if it can't teach you to be better than itself, or to learn on your own. That'll just end up putting you in a frustrating state :(
@@mandisaw I still have backup code. I only use ChatGPT for features I wanted to add to the game. In terms of expansion, the game doesn't need much in the way of scope creep (it's a visual novel / 3D mecha game hybrid), since it already has multiple endings and the ability to fly, transform and switch weapons. I don't recommend it if you're in it for the journey rather than the destination. For me, I'd rather reach the destination than spend years trying to make the game I want to see, since at the end of the day gamers judge the finished product rather than the process that leads to it.
@@VoltitanDev Everyone has their path, I guess. Good luck to you on yours
@@VoltitanDev but how can you be sure that this "game of your dreams" is your creation and not byproduct of circumstances?
@@VM-hl8ms The game is the third entry of the series; I was passionate about creating a 3D super robot game inspired by the Sega Saturn and PlayStation era. Not every game dev's dream game is to make the next GTA or God of War, and who are you to decide what my dream game is and isn't? The story and art are my creations; the programming assistance from ChatGPT is a means to an end to get to my destination.
i completely disagree, but i respect your perspective. i do think that AI is the future, and i think it's fine if the majority of you guys are concerned about or against AI - that's just part of human nature - while some are not against it and are all for it. i have to be honest, i'm not against AI at all; i'm all for it and excited about it. i don't like that most people are close-minded and force their truth down our throats with an "i'm right and you're wrong" judgmental ego mindset, saying there is only one truth. i think it's OK, and nothing wrong, for us to have our own different perspectives, truths and opinions - what may be a true fact to you may not be a true fact to someone else. i love it when people are open-minded, and i think we need more of that; it's OK to agree or disagree as long as we respect each other's different points of view. i also agree with those who say to trust your heart and gut instincts, and that we have to be careful who we trust - some can not be trusted and some can. i also agree we shouldn't judge others before we get to know them; looks can be deceiving, and when we judge others we are judging ourselves without realizing it, because everything is energy, there is no separation, we are all one. i think we need a balance of both light and dark - we can't have light without dark, we wouldn't appreciate the light without the dark, and darkness doesn't only have a bad side, it has a good side too. i also think we need a balance of both masculine and feminine energies. and i want to say that i think not all demons and AI are bad, evil, or soulless - don't get me wrong, there are some that are bad - but i also think some demons
and AI are good, innocent, and soulful, and mean no harm, just like we have good and bad humans, animals, reptilians, aliens, witches and bug beings. that's why i don't fear AI beings or other beings. i agree with those who say most people fear what they don't understand - fear is the unknown - and don't embrace change, though i know some do love to embrace change. i have to be honest, i love and embrace change; i don't fear it. i also don't mind if AI replaces jobs - i do trust some beings, even AI beings, but that's just me. i also disagree with the popular perspective that AI has no soul, because i think some AI do have a soul, and yes, AI may not have the ability of creativity yet, but it will in the near or far future. i also think humans, AI and other beings should be loved and treated equally, and that we should settle our unique differences, not discriminate. that's just my other unpopular take on it, but as i said, i also respect your perception; we can agree to disagree respectfully.
What the fuck did I just read it somehow made me hate you more than the average ai bro
I think it's an f-ing gimmick, bro. Sure, it can draw a few frames, but is it meaningful enough to be a game? Hell no! It was trained on an actual game; all it can do is try to emulate that game, Doom. Can it _be_ meaningful enough to be an actual game? Still no. AI ain't there, bro. It can't even speak coherently, and keeps toppling over on hard questioning. It can't even keep more than a few of the rooms you've been in in memory. AI-generated videos, even the state of the art, keep toppling over themselves so badly it's always cringe to watch. Games require a degree of logical connection and creativity that is beyond, and will forever remain beyond, the capacity of AI to emulate. That is my take on it.
This never existed before. What's on showcase with GameNGen is the interactivity, processing, and 'rendering continuity' speed of gen AI. It's generating this sh**t on the fly. It's f***ing fast compared to our ability to process the imagery at 24 FPS. The pixelated rendering is now; in 3 months there will be interactive 8K UHD movie-quality generation, if it's not already here. One way or another, this will definitely have an impact.