Recently I learned about Vram for optimization. My game uses Unity and mixes 2D and 3D. So we have a lot of hand-paint texture, filling up my Vram. Starting to learn how Vram works. I found most tech is a comparison for disk size like jpg but when it sends GPU is still big(I know Crunch but it is just for fast decompression not reduced size). Found the only thing I could do was pack the channel and tell the artist to reduce texture count and separate scenes into small chunks to load. I want to know more about Vram's work and if there is any way non-destructive or lost too much texture quality to be optimized in modern GPU. Anything even not for unity is appreciated!
I think the first arm cpu was so energy efficient that it ran on ambient electricity or something like that, maybe that could be an actual goal point of optimization, to have it run on that.
This is why you can use double buffering, which is kind of like what you are doing but instead of for the framebuffer, you are doing it for the mesh. Normally GPU operations queue up in the buffer, they aren't meant to be completed by the end of the refresh. If you use a double buffered system, it allows everything to render without a stall, because when a stall would have occurred, it just shows the back buffer like normal.
Thank you! so that's where I recognised this technique, it's like a Double Buffering👍❤ Its probably still worth it, but the trade off you have with this technique is with the increased memory usage, as you'd about double the memory needed for the area around the player, and it adds more complexity managing these separate sources of meshes.
@@SirNightmareFuel That's true but GPUs have massive amounts of very-fast memory these days in order to support the heavy texture bandwidth of AAA games. Doubling or even 10x'ing the memory used by the meshes is not a huge deal because they all refer to the same textures and the textures are the most memory-hungry part of rendering unless you go crazy with poly-count.
It's even better with tripple buffering. One framebuffer is displayed on screen, one is being rendered to and the third one is ready to be displayed. When the part of hardware responsible for displaying is done with a display cycle (or VSync kicks in), it looks at the buffers and either displays the same one again or displays the ready buffer if it is newer. When the part that does the rendering is done, it marks it's buffer ready, then renders into whichever of the other buffers is not being displayed. This way the newest ready buffer is always displayed, rendering works continuously and there is no tearing, at least in theory.
The real downside to this type of technique when used for the framebuffer is latency. I can't play games with triple buffering except on very high refresh-rate monitors because the latency causes me nausea (except with an uncapped internal/buffer refresh rate which doesn't seem to be commonly implemented sadly).
Semaphores could also be ideal as fences can cause the CPU to stall waiting for GPU work to finish. Semaphores would allow for exclusive synchronization of GPU tasks ensuring the CPU is constantly writing commands and the GPU is constantly processing them. Great video nonetheless.
In real world implementations these are probably implemented as semaphores, because the transferring of data is being done in a background thread anyway. The busy-waiting done by a fence is probably done here for simplicity purposes because it makes the 'waiting' part easier to explain.
@@Stoney3K Yes. Transfers don't just run as a background thread, but DMA transfers over PCIe can be done completely async to the rest of the system (and GPUs often have dedicated transfer queues to accommodate this, allowing for ultra-fast async transfer from the CPU). Nowadays you can just generate some data yourself on the GPU though in compute, eliminating the need to transfer entirely and keeping everything local to VRAM.
@@joeroeinski1107True if you consider transferring of things like textures and shaders from the CPU to the GPU. But most of the waiting is often caused by the game loading in or parsing new assets from disk, which make it noticeably slow because I/O is *much* slower than CPU/GPU/memory access. It's even more pronounced when the game is an online one and the assets have to be fetched from an offsite server.
I just binged all of your optimisation videos. The metaphors are awesome and easy to follow, the code and graphics are clean, the voiceover is easy to understand. This is what I wish school/university was like. Instant subscribe, you have earned a place among my favorite TH-camrs. Thank you for your work.
It's worth noting that for some games, the additional input delay might be considered unacceptable. Most games it won't matter but in those rare cases it may be necessary to explore alternatives
I think this technique won't increase input delay. It effectively just leaves old geometry on screen until new geometry is available. At worst you'll get "pop ins" instead of stalls.
@@markdmckennait it does add input delay, think of it this way: now you're shipping a whole frame with this old mesh instead of the new one, that might take 16ms or so, but what if 5ms into those 16ms the transfer completes? now you still have to wait for the full frame to be drawn before we can include the new mesh in the next frame. Whereas if we just stalled for those 5ms we'd only have to wait 5+16 ms instead of 16+16ms
@@X606 Maybe a different definition of input delay here? IMO as long as we're providing "reasonably" up to date geometry for the user to interact with, there is no added input delay.
@@markdmckenna Yeah. Earlier thread wasn't making sense to me. Input processing is non-graphical, so dunno why delayed rendering would delay input processing unless you tied input processing to your draw() routine. Albeit which happens a lot in amateur games using generalized game engines, but I presume this topic is scoped to more advanced game programming techniques.
@@AntiAntYTthe eye sees up to 60fps. Everything above is still useful, because the world runs at ∞ FPS, so the closer your pc can be to that, the better it will ~feel~
@@aqua-bery Eyes can see much more than 60fps. The visual difference from 60fps to 120fps is quite noticeable as 120fps is just a lot smoother visually and your eyes can see that difference.
Videos like this are rare to find But It has to be my favorite type of video by far, making code run faster, better gaming experience, GPU goes vroom I love it!
It's know as Asynchronous Buffer Update/Upload, nothing new. The underlying concept of decoupling data transfer from rendering to avoid stalling the graphics pipeline is fundamental to graphics programming and has been a consideration as long as there have been programmable GPUs and sophisticated graphics APIs. it's been a feature of both directx and ogl for decades at this point.
Asynchronous double buffering is used all over in software architecture - from network, i/o, file system, ipc, database, sound & graphics handling & more, in kernel, drivers, in plumbing layers & in user programs, from industrial automated real-time applications to interactive user-centered gui apps like browsers, games & more.
I'm not coding games and stumbled here by chance, but this is a really, really well done description which can be appreciated by anyone with at least a bit of coding experience. Thanks.
Lots of people seem to be mad at the fact that these optimizations are not groundbreaking and already exist in some engines. But as a game designer making solo projects on several engines, I'm super thankful for the amazing explanations of processes I would otherwise be unaware of. I love to see how the technologies we used are built from thousands of smart decisions like these. Also, good job and good luck on your engine project :D
yooooo i love that not only did you make a cool game i enjoyed, but that you're sharing all the knowledge you learned from making it with everyone. Good luck dude! I hope you continue this, It's really interesting and I love it.
Uhm, the best person to ask is the dev, I recomment going to the subreddit and asking Redigit and his crew if this is something they have toyed with or even considered. I'm sure he would be stoked to see some new ideas!@@chickennuggetman2593
Usually the code with fewer lines is actually severely less optimized. Take a look at all the ""clean"" javascript code with a million abstractions and chained method calls. You have your fancy iterators when a simple for loop is literally 20x faster
it depends. if your bottleneck is storage, which used to be true when we had floppies, fewer lines meant more content and thus a more complete experience, a better value overall
I really loved the video and the analogy. I think you should increase the size of the code and animations, because seeing them on mobile is kind of hard. Great video nonetheless!
The master chef has an infinite number of pans, but doesn't want to overwhelm his apprentices, so he keeps only the amount of pans necessary in the kitchen until a greater or lesser amount is needed.
Well, maybe not _infinitely_ many pans. More like, only about 4 billion pans. And usually when the chef needs more, rather than just grabbing the amount needed, usually they'll just grab as many extra pans as they're already using. It wastes a bit more space in the kichen, but it saves on the amount of trips to fetch more pans a way that generally balances out well.
I had a similar issue in my 2D game made in Game Maker Studio. When it tried to render some high resolution sprites I would get those stutters. However, I found a very useful command "texture_prefetch" which as you can guess loads a texture group into VRAM memory even if it's not being drawn yet. And "texture flush" clears a group from VRAM. By being manually proactive about texture loading, I not only removed all the stutter, but I also cut level loading time down to under half a second. I'm pretty sure all engines have asynchronous buffered data loading / rendering, but I think some developers don't utilize it to it's full extend and just try to render a bunch of unloaded data at once. Your explanation is very good, and I hope it reminds other devs to think more carefully about asset loading.
Great video and I am sure it will help me in the future 👍At the first moment I thought the sorting by swapping with the first element only works with 2 meshes but after thinking about it I realized it also works well with longer lists too if the transfer time is equal for every mesh. If not, then a circular array/ring buffer might be a better alternative to avoid rendering fast-to-transfer future meshes before slow-to-transfer past ones. That would cause the future mesh to render 2 frames while skipping the past one with your algorithm.
Excellent point you’re right, a ring buffer would be more appropriate here. Now you have me thinking: I should be removing the element and inserting it at the start, rather than swapping. You’re right, great pickup! I wrote a similar thing for a particle engine, which does skip over the slow-to-transfer ones to ensure the most up to date buffer is always being rendered (even if it means an entire frames worth of data is skipped), i.e. if the cpu runs faster than the GPU and is writing to 2 buffers each frame, every 2nd buffer will never make it to the screen
Won't these extra copy of the mesh consume more memory? I suspect this is a compromise. Here the mesh is extremely simple so it doesn't need much memory. On a more complex mesh this could require you to "just add memory".
@@kyraxx The extra threads would require more cores to be more efficient as well. This solution is literally "just add more cores/memory", but it works.
If only you had posted this video a few months sooner! I recently bought a new pc, and repeatedly sent it back to the company since I was experiencing weird stutters when meshes were loaded / updated in my games. I eventually boiled it down to a game engine issue myself, but I'm so glad that this video confirmed that. No money wasted on a broken machine :).
There's actually an optimization mod for Minecraft that more or less does this where the renderer will keep using an older mesh until the new ones are ready. You did a nice job with the analogies!
This is awesome! Managing stuff like this is still a real challenge and this is a great solution. Thank you for the great easy to understand explanation too!
Long freezes in modern games can be attributed to a variety of factors, often relating to resource-intensive operations or inefficiencies in handling game assets and rendering. Here are some of the common causes: Asset Loading: When games load large assets (like textures, models, or sound files) from the disk into memory, it can cause a noticeable freeze, especially if the game is not using asynchronous loading techniques. Garbage Collection: In games developed with languages that have automatic memory management (like Java or C#), garbage collection can sometimes cause freezes or stutters. This happens when the garbage collector runs to free up memory, temporarily halting other processes. CPU/GPU Synchronization Issues: If the CPU is waiting for the GPU to finish rendering (a scenario known as a GPU bottleneck), or vice versa (CPU bottleneck), it can result in freezes. Efficient parallel processing and synchronization are crucial to avoid such stalls. (the one this video covers part of, and far from the only possible cause) Inefficient Resource Management: Poorly managed resources, such as repeatedly loading and unloading the same assets, can lead to performance issues and freezes. Complex Calculations or Scripts: Intensive computations, like complex AI calculations, physics simulations, or extensive world updates, can cause freezes if they are not efficiently managed or offloaded to separate threads. Network Latency or Hiccups: For multiplayer games, network issues can cause freezes or lag if the game's state is tightly coupled with the timely receipt of network packets. Driver or Hardware Issues: Sometimes, the problem may lie outside the game itself, such as outdated or buggy graphics drivers, or hardware that is overheating or malfunctioning.
Thanks for this, I'm really enjoying watching your videos. When I come to optimizing I'll be circling back to make sure I've covered the things your have. All the best 🙂
Hey small feedback: could you please zoom in when displaying code? Small screens cant see the logic you're trying to showcase (even if the exact text doesn't matter). Awesome video as always❤️
Wow, I really like that underlaying background simulation at 1:51 I'm trying building something similar myself using Openprocessing but this looks like some great end result! Thanks for the inspiration.
also I love your channel, content like yours is rare, unique and very informative, I learn a lot with every Video, optimizing things to the limit is addicting hehe
Really excellent explanation for an issue a lot of players, and even developers, incorrectly associate with optimization. Also, I noticed your mic is picking up a lot of room reverb. A cheap mic shroud behind the mic will help a lot. The poor man's shroud is a closet full of clothes with the mic partially inside. If you can't fix it at the mic, try a "de-room" or "de-reverb" plugin. Your visuals are fantastic, your audio can be just as good without much more effort.
I can't hear the reverb myself but I did notice an echo when standing next to a wall and recording. I recorded while standing in the center of my room instead, and I'll try a shroud too thank you for the recommendation :)
@@Vercidium Cool, glad I could help. I noticed you've got a demo video about raytraced audio. Maybe this is an opportunity to do a game engine video about it to test your audio setup.
Why this works: modern GPU has a lot of bandwidth but with a high latency to transfer data from main memory to gpu.(a lot of car lanes but long way to drive). If we could have 0 latency then we wouldn't need this because any transfer would be finished very quickly (high likelihood of finishing transfer before being drawn). Instead of doing nothing and waiting for the transfer to complete, we effectively stack multiple extra transfers for future frames within the first transfer's wait time (we can do this because we have a lot of bandwidth. aka we are sending more trucks on different car lanes simultaneously instead of waiting for the first one to come back and then send it out again). This is a form of parallelism and this idea of "doing other things or doing more of the same thing while waiting for the first one to finish" is everywhere in both cpu and gpu programming and let's you make more efficient(full) use of your hardware.
I’m trying to understand how the thumbnail relates to the video, but I guess that’s the point, to make it hard to know what will be talked about without watching.
he didnt though, most good engines already do this sort of thing. though it is up to devs to properly use the engines systems to take advantage. you could easily circumvent them. Also, this is only 1 possible cause of freezes. It's known as Asynchronous Buffer Updates/Uploads.
Honestly, that's the future of everything nowadays, almost everything needs to be pipelined, the problem is that the more you pipeline, the more (relative) lag it introduces. Personally I settled for 2 mesh buffers for dynamic meshes, block frame if we catch up and the mesh isn't fully sent yet, though I might change this to non-blocking later. I'm also thinking which mesh data actually needs to be updated anyways. UVs might not need to be re-uploaded to the GPU, it's mostly just the coordinates and vertex normals. So I'm thinking of splitting the vertex into 2 buffers, one for coordinates and normals, and the rest into another struct. This way the transfer size should be smaller and shouldn't cause many issues. This saves space on the VRAM, allows custom attributes. Cache misses are very likely to happen though so it needs to be measured... Lots of options to explore. :)
That is not really seen as a real optimization, more like a technique that (as you said) has been done. It is really interesting to see your approach compared to approaches game studio's make. A GDC talk that is on a technical level really interesting is this one "Marvel's Spider-Man: A Technical Postmortem". I also want to ask a question about your previous video, you mention that indeed triangle strip is way faster than triangle list. The problem that I seem to face, is that there are no tools, as far as I know of that convert triangle list to triangle strip. This is not a problem when you make all other models yourself and create a script that does it for you. But how did you do it in your previous video?
Absolutely, technically this is a timing issue but since it affects game performance, it falls under the broader ‘optimisation’ category For complex models like characters, converting them into triangle strips isn’t easy. Modern renderers will use an index buffer to help, where the model is broken down into triangle strips of the same length (e.g. 3 triangles) and then a GL_PRIMITIVE_RESTART is set to tell the shader to start a new triangle strip every 3 triangles (for example) There will still be some vertices that hold the same data but any reduction in memory is a performance win
@@budgetarms looks like DirectX 10 has this feature, check out the ‘Generating Multiple Strips’ section learn.microsoft.com/en-us/windows/win32/direct3d11/d3d10-graphics-programming-guide-primitive-topologies
Bruv you are the single most valuable resource I have as an indie dev. My games will run buttery smooth because of you and those like you. I've had these stories in mind since i was a child and didn't have the tools or skills to do them right. But with people like you and tools like blender and unreal, I will bring some really fun stuff! For the love of gaming!
@@Vercidium When I grow my company and add to it, you're definitely hired if you want it. I'll respond in the future with a list of works & the studio name.
Great video, I learned a lot! Small comment for future endeavors: watching this on mobile was difficult because I assume you optimized the text size for fullscreen desktop viewing. Other than that, I loved it!
This is the first time I hear the recommendation of using a list instead of an array for a performance boost! I imagine that the list is actually an dynamic array under the hood. The performance boost of caching contiguous data in memory is just way over the benefit you can get from delegating freeing some memory of a linked list to the garbage collector.
Man, I was watching through this and I looked at your name and was like "Huh, his name reminds me of the developer of Sector's Edge." Then a moment later: "Wait... that is really similar..." And then I checked and you actually were the dev for Sector's edge lol. Cool! Man I miss that game, it was my favourite fps! I can't wait for your next project!
Stuff like this is why i love graphics engineering, the interwoven understanding of what the hardware does and how you can get it to do what you want it to in the most efficient manner is just fun. Cool stuff!
Wouldn't there be a chance that each new mesh clogs up the transfer pipeline even more, so you would end up with drawing really old meshes at some point?
@@masheen_ it’s all animated in engine and screen recorded, then edited together in Davinci Resolve I use OpenGL for the 3D animations, and SkiaSharp and RichTextKit for 2D animations
@@masheen_ no worries :) It’s purely because controlling the animations in code is easier for me. I’m sure these animations would look a lot nicer in programs like After Effects
Hey love the video man but the code font is a little small for a pc and almost microscopic for my phone. The animations and everything are also amazing but if you could enlarge the code it’d make the content more accessible.
im not really doing anything in game development these days but this tutorial was so well handled and impressive that you earned a new sub. thanks for that vid lol
Would sorting still be viable if the number of meshes is very big? Also, I loved this video. I never even realized this. Or rather, I knew of it but never was consciously aware of it while coding
It should, I was talking to a Unity dev about how models are loaded behind the scenes onto the GPU, and it seems like there’s a few tricks you can do with offscreen rendering and preloading (not sure these terms are right) so you can know when a model is actually sent to the GPU, before you try to render it If Unity has these features I’d be surprised if Unreal didn’t! It’s a pretty important feature I reckon
This is an epic video man, really liked the way you showcased your iterative design methodology to refine the problem into smaller chunks and solved them one at a time.
Thank you, I’m not a teacher by any means but I’m learning so much creating these videos. Trying to find the best way to explain these things takes a while and I always resort back to food analogies haha
@@VercidiumHonestly, its still really cool that you're able to make parallels with these kinds of arguments. Oftentimes, when I'm trying to explain my code using analogies, they end up just becoming warped, and all of a sudden you have a taxi driver talking about repairing a plane or something :/
@@akhileshchandorkar1807 hahaha I know what you mean. I originally was talking about a restaurant with tables, plus a waiter and a chef, but it all got too complex. Simplifying it down to a chef and their pans worked much better. Takes a lot of iteration and feedback before I’m happy posting it here
I hope more game devs from indie to AAA take into account stuttering more, people seem to only be phased by FPS and are completely uneducated regarding frametime and how a consistent frametime is more important than FPS the majority of the time.
I’m curious: do you ever look at Minecraft optimization mods to get ideas of how to make your own voxel game engine better? People have tried many methods and there’s a lot of source code out there
@@Vercidium Check out Sodium and related projects. They rewrote rendering in Minecraft to work with OpenGL and made it really efficient. There are related projects and extension type mods that are easy to find as well.
@@toxiccan175 Minecraft has used OpenGL since the beginning, Sodium and the like just rewrote core parts of the renderer to use newer OpenGL features and be generally more efficient with memory and GPU usage.
Imagine a UE5.3 game with all these 3 render feature combined: Nanite, Parallax mapping, and of course, the UE5 plugin remake of this render feature you created from your game engine. Now I (and we too) will had higher prediction about what games looks like in future if we had these 3 🤤.
For a tiny amount of items a list is fine, but if we had a global list of all meshes in the scene then 100% should not use a list. A ring buffer that uses an array under the hood would be much better
Just wow! I am more than impressed with your communication and teaching skills (instant sub because of these traits alone). 🙏🏾 To say that I am so excited to consume your content is an understatement. 😁 I look forward to discovering and learning many new things alongside you. Please be encouraged and Thank You for such amazing content.
I hope more of these topic of videos will be known in the near future, our amateurs and semi-pro (or soon pro) game devs who trained themselves in open source, must benefits from it
Congratulations! You've basically (re-)invented Double/Triple Buffering! 😂 JK lol, that's an *excellent* explanation of the inner workings of the graphics pipeline in a video game / really *any* interactive piece of software with complex video output!
Game engine devs 100%. Game engines should take care of this but I’ve heard of some devs talking about stuttering because their models weren’t preloaded/off-screen-rendered (not sure if these terms are right) before they tried rendering them onto the screen
When I saw the title I had to immediately think of why I created my own website. There was a redirection page on our school's IServ server, from _domain/_ to _domain/iserv_ . That redirection page was just a black line of text on white background, saying you're getting redirected. But I wanted to have a link there, like "Click here if you don't get redirected". So I created my own website to have a better redirection page. And still, my website is my JS learning and testing ground, which is why my website breaks every now and then just to suddenly work again after a few seconds to minutes.
Download the source code for all my videos here: patreon.com/vercidium
If you have any rendering or game dev questions, ask them here!
Christ, could you have made the code any smaller? I don't know what 4k 60 inch monitor you have but this is bad on small screens.
Have you looked at Minetest, per chance? Do you have thoughts about it?
Is this technique used by any popular game engines and why?
I feel like they should stop inventing new hardware, and instead try to optimise games so they work with the "older" hardware if you know what i mean
Recently I learned about Vram for optimization. My game uses Unity and mixes 2D and 3D. So we have a lot of hand-paint texture, filling up my Vram. Starting to learn how Vram works. I found most tech is a comparison for disk size like jpg but when it sends GPU is still big(I know Crunch but it is just for fast decompression not reduced size). Found the only thing I could do was pack the channel and tell the artist to reduce texture count and separate scenes into small chunks to load.
I want to know more about Vram's work and if there is any way non-destructive or lost too much texture quality to be optimized in modern GPU. Anything even not for unity is appreciated!
The greater skill here isn't code optimization, but rather how you break it down with tasty metaphors to make for easily digestible content. Kudos.
I too, am now hungry for burgers.
Just let this man cook
Haha, he's cooking fr@@DriftJunkie
Analogy an average american can understand, yes.
I prefer strawberry flavored metaphors 🍓
123 optimization videos later: This game runs so smooth you can get 4k graphics with 120 frames a second on a potato clock even without the potatoes.
Quake 666
I think the first arm cpu was so energy efficient that it ran on ambient electricity or something like that, maybe that could be an actual goal point of optimization, to have it run on that.
@@SethbotStarrebuilds crysis to run on a single double A for a year
@@RowbotMasterand yet it remains un-portable for the switch bc nintendo.
He's gonna get Doom working on an Ancient Roman sun dial
Vercidium trying to explain game engine optimisation to an American:
"So imagine a burger."
They should make a version more at home for Aussies
"Ok so imagine a parma.."
@@emporioalnino4670 I love a good parma
@@emporioalnino4670 What's a parma, a chicken parmesan? That's standard bar food in Australia? I love that
most healthy american breakfest be like
@@violet_broregarde a chicken parmigiana
This is why you can use double buffering, which is kind of like what you are doing but instead of for the framebuffer, you are doing it for the mesh. Normally GPU operations queue up in the buffer, they aren't meant to be completed by the end of the refresh. If you use a double buffered system, it allows everything to render without a stall, because when a stall would have occurred, it just shows the back buffer like normal.
Thank you! so that's where I recognised this technique, it's like a Double Buffering👍❤
Its probably still worth it, but the trade off you have with this technique is with the increased memory usage, as you'd about double the memory needed for the area around the player, and it adds more complexity managing these separate sources of meshes.
@@SirNightmareFuel That's true but GPUs have massive amounts of very-fast memory these days in order to support the heavy texture bandwidth of AAA games. Doubling or even 10x'ing the memory used by the meshes is not a huge deal because they all refer to the same textures and the textures are the most memory-hungry part of rendering unless you go crazy with poly-count.
It's even better with tripple buffering. One framebuffer is displayed on screen, one is being rendered to and the third one is ready to be displayed. When the part of hardware responsible for displaying is done with a display cycle (or VSync kicks in), it looks at the buffers and either displays the same one again or displays the ready buffer if it is newer. When the part that does the rendering is done, it marks it's buffer ready, then renders into whichever of the other buffers is not being displayed. This way the newest ready buffer is always displayed, rendering works continuously and there is no tearing, at least in theory.
The real downside to this type of technique when used for the framebuffer is latency. I can't play games with triple buffering except on very high refresh-rate monitors because the latency causes me nausea (except with an uncapped internal/buffer refresh rate which doesn't seem to be commonly implemented sadly).
Yup, screen buffer was my first thought too.
This is standard behavior in all major game engine used in production for the last 10-15 years.
I was thinking that might be the case because unless I'm missing something, which is likely, he's described async code
I wonder when someone working at Minecraft is going to find out. Split screen with my kids is ridiculous if anyone is crafting. It freezes every time
UE already have that?
I'd be surprised if this was true.
@@greatbriton8425 Yeah it was definitely an exaggeration but he's right that it's not new.
Semaphores could also be ideal as fences can cause the CPU to stall waiting for GPU work to finish. Semaphores would allow for exclusive synchronization of GPU tasks ensuring the CPU is constantly writing commands and the GPU is constantly processing them. Great video nonetheless.
Love semaphores, very useful if a bit scary!
In real world implementations these are probably implemented as semaphores, because the transferring of data is being done in a background thread anyway. The busy-waiting done by a fence is probably done here for simplicity purposes because it makes the 'waiting' part easier to explain.
@@Stoney3K Yes. Transfers don't just run as a background thread, but DMA transfers over PCIe can be done completely async to the rest of the system (and GPUs often have dedicated transfer queues to accommodate this, allowing for ultra-fast async transfer from the CPU). Nowadays you can just generate some data yourself on the GPU though in compute, eliminating the need to transfer entirely and keeping everything local to VRAM.
@@joeroeinski1107True if you consider transferring of things like textures and shaders from the CPU to the GPU. But most of the waiting is often caused by the game loading in or parsing new assets from disk, which make it noticeably slow because I/O is *much* slower than CPU/GPU/memory access.
It's even more pronounced when the game is an online one and the assets have to be fetched from an offsite server.
@@Stoney3K Absolutely, though with emerging technology such as DirectStorage this soon may not be an issue.
I just binged all of your optimisation videos. The metaphors are awesome and easy to follow, the code and graphics are clean, the voiceover is easy to understand. This is what I wish school/university was like. Instant subscribe, you have earned a place among my favorite TH-camrs. Thank you for your work.
Far out, thank you! I'm stoked you like them and hope they help!
I can’t read the tiny font
school/university should teach you how to come up with these ideas by yourself
@@DaStuntChannel Universities teach you 15 years old techniques in a field that is always 5 years ahead and counting.
@@nicosoftnt That as well
It's worth noting that for some games, the additional input delay might be considered unacceptable. Most games it won't matter but in those rare cases it may be necessary to explore alternatives
I think this technique won't increase input delay. It effectively just leaves old geometry on screen until new geometry is available. At worst you'll get "pop ins" instead of stalls.
@@markdmckennait it does add input delay, think of it this way: now you're shipping a whole frame with this old mesh instead of the new one, that might take 16ms or so, but what if 5ms into those 16ms the transfer completes? now you still have to wait for the full frame to be drawn before we can include the new mesh in the next frame. Whereas if we just stalled for those 5ms we'd only have to wait 5+16 ms instead of 16+16ms
@@X606 Maybe a different definition of input delay here? IMO as long as we're providing "reasonably" up to date geometry for the user to interact with, there is no added input delay.
@@markdmckenna instead you get more render latency.
@@markdmckenna Yeah. Earlier thread wasn't making sense to me. Input processing is non-graphical, so dunno why delayed rendering would delay input processing unless you tied input processing to your draw() routine.
Albeit which happens a lot in amateur games using generalized game engines, but I presume this topic is scoped to more advanced game programming techniques.
I love this man's food analogies as much as this guy loves his optimization.
Amazing video, I'm so in love with the animations!
It really was worth buying this 16,000 fps monitor---this is the first video to really use it to its fullest, but boy is it glorious.
Waste of money. Nobody needs 16,000 fps. 12,000 is more than enough.
the eye doesn't see more than 4,000 fps
@@waltonsimons12 Trust me dude, once you've experienced 16,000 fps, 12,000 feels like looking at a slideshow.
@@AntiAntYTthe eye sees up to 60fps. Everything above is still useful, because the world runs at ∞ FPS, so the closer your pc can be to that, the better it will ~feel~
@@aqua-bery Eyes can see much more than 60fps. The visual difference from 60fps to 120fps is quite noticeable as 120fps is just a lot smoother visually and your eyes can see that difference.
Videos like this are rare to find But It has to be my favorite type of video by far, making code run faster, better gaming experience, GPU goes vroom I love it!
Thank you, more videos to come!
This is a highly optimized dev tutorial.
Jokes aside I think you have a true talent for game dev and teaching.
Subscribed.
This kind of sounds like double buffering, but for the geometry instead of the actual framebuffer. Very cool idea!
It's know as Asynchronous Buffer Update/Upload, nothing new.
The underlying concept of decoupling data transfer from rendering to avoid stalling the graphics pipeline is fundamental to graphics programming and has been a consideration as long as there have been programmable GPUs and sophisticated graphics APIs. it's been a feature of both directx and ogl for decades at this point.
That's exactly what I thought! Same basic principles as double buffering!
Asynchronous double buffering is used all over in software architecture - from network, i/o, file system, ipc, database, sound & graphics handling & more, in kernel, drivers, in plumbing layers & in user programs, from industrial automated real-time applications to interactive user-centered gui apps like browsers, games & more.
I'm not coding games and stumbled here by chance, but this is a really, really well done description which can be appreciated by anyone with at least a bit of coding experience. Thanks.
Lots of people seem to be mad at the fact that these optimizations are not groundbreaking and already exist in some engines. But as a game designer making solo projects on several engines, I'm super thankful for the amazing explanations of processes I would otherwise be unaware of.
I love to see how the technologies we used are built from thousands of smart decisions like these.
Also, good job and good luck on your engine project :D
@@lopodyr thank you for the kind words, I’m glad this video has helped!
yooooo i love that not only did you make a cool game i enjoyed, but that you're sharing all the knowledge you learned from making it with everyone. Good luck dude! I hope you continue this, It's really interesting and I love it.
Now I'm hungry.
Bro i need more optimization videos this is amazing af
Working on it! Thank you
@@VercidiumCan this work for terraria? If so, maybe a mod could be made for my dinky glorified potato with a screen..
Uhm, the best person to ask is the dev, I recomment going to the subreddit and asking Redigit and his crew if this is something they have toyed with or even considered. I'm sure he would be stoked to see some new ideas!@@chickennuggetman2593
the biggest thing this video taught me is
optimization =/= fewer lines
*fewer
@@Kyrelel thanks
Usually the code with fewer lines is actually severely less optimized. Take a look at all the ""clean"" javascript code with a million abstractions and chained method calls. You have your fancy iterators when a simple for loop is literally 20x faster
it depends. if your bottleneck is storage, which used to be true when we had floppies, fewer lines meant more content and thus a more complete experience, a better value overall
I really loved the video and the analogy. I think you should increase the size of the code and animations, because seeing them on mobile is kind of hard.
Great video nonetheless!
Will do, thank you for the feedback and glad you liked the video!
The master chef has an infinite number of pans, but doesn't want to overwhelm his apprentices, so he keeps only the amount of pans necessary in the kitchen until a greater or lesser amount is needed.
Haha excellent analogy
Well, maybe not _infinitely_ many pans. More like, only about 4 billion pans. And usually when the chef needs more, rather than just grabbing the amount needed, usually they'll just grab as many extra pans as they're already using. It wastes a bit more space in the kichen, but it saves on the amount of trips to fetch more pans a way that generally balances out well.
something about this video made me super excited to watch, maybe the thumbnail/title.
I had a similar issue in my 2D game made in Game Maker Studio. When it tried to render some high resolution sprites I would get those stutters. However, I found a very useful command "texture_prefetch" which as you can guess loads a texture group into VRAM memory even if it's not being drawn yet. And "texture flush" clears a group from VRAM. By being manually proactive about texture loading, I not only removed all the stutter, but I also cut level loading time down to under half a second.
I'm pretty sure all engines have asynchronous buffered data loading / rendering, but I think some developers don't utilize it to it's full extend and just try to render a bunch of unloaded data at once. Your explanation is very good, and I hope it reminds other devs to think more carefully about asset loading.
My favorite series on youtube, thank you for doing it!
Too kind! Thank you
"Explain double buffer like I'm five"
Great video Vercidium!
I don't code yet but it is something that I find fascinating. I really enjoyed how you made this problem make sense even to me. Thanks for sharing
Great video and I am sure it will help me in the future 👍At the first moment I thought the sorting by swapping with the first element only works with 2 meshes but after thinking about it I realized it also works well with longer lists too if the transfer time is equal for every mesh. If not, then a circular array/ring buffer might be a better alternative to avoid rendering fast-to-transfer future meshes before slow-to-transfer past ones. That would cause the future mesh to render 2 frames while skipping the past one with your algorithm.
Excellent point you’re right, a ring buffer would be more appropriate here. Now you have me thinking: I should be removing the element and inserting it at the start, rather than swapping. You’re right, great pickup!
I wrote a similar thing for a particle engine, which does skip over the slow-to-transfer ones to ensure the most up to date buffer is always being rendered (even if it means an entire frames worth of data is skipped), i.e. if the cpu runs faster than the GPU and is writing to 2 buffers each frame, every 2nd buffer will never make it to the screen
@@VercidiumI would love to see a video where you take all the improvement ideas from the comments and try to apply them and see how they do.
This guy just added Promises to a game engine
Best comment hahaha
Fantastic overview. Right to the point and thorough enough to show off everything without getting bogged down in the details.
I like how you're showing alternatives to the "just add more cores/memory" mentality.
Won't these extra copy of the mesh consume more memory? I suspect this is a compromise.
Here the mesh is extremely simple so it doesn't need much memory.
On a more complex mesh this could require you to "just add memory".
@@kyraxx The extra threads would require more cores to be more efficient as well. This solution is literally "just add more cores/memory", but it works.
Just one more
If only you had posted this video a few months sooner! I recently bought a new pc, and repeatedly sent it back to the company since I was experiencing weird stutters when meshes were loaded / updated in my games. I eventually boiled it down to a game engine issue myself, but I'm so glad that this video confirmed that. No money wasted on a broken machine :).
Great video, loved all the animations. Even as someone who isn't particularly experienced in coding I feel like I understood everything!
There's actually an optimization mod for Minecraft that more or less does this where the renderer will keep using an older mesh until the new ones are ready. You did a nice job with the analogies!
What's it called?
What is it called?
@@nindew21Laughyourassoffit’s either Sodium or Nvidium, there is also a mod Distant Horizons on the relevant topic
This is awesome! Managing stuff like this is still a real challenge and this is a great solution. Thank you for the great easy to understand explanation too!
You lost me at burger
He won me at burger
Mmmm. Burger
The first part also explains why some fast food drive thru allocate an employee taking orders instead of letting customers order at the fixed kiosk.
Long freezes in modern games can be attributed to a variety of factors, often relating to resource-intensive operations or inefficiencies in handling game assets and rendering. Here are some of the common causes:
Asset Loading: When games load large assets (like textures, models, or sound files) from the disk into memory, it can cause a noticeable freeze, especially if the game is not using asynchronous loading techniques.
Garbage Collection: In games developed with languages that have automatic memory management (like Java or C#), garbage collection can sometimes cause freezes or stutters. This happens when the garbage collector runs to free up memory, temporarily halting other processes.
CPU/GPU Synchronization Issues: If the CPU is waiting for the GPU to finish rendering (a scenario known as a GPU bottleneck), or vice versa (CPU bottleneck), it can result in freezes. Efficient parallel processing and synchronization are crucial to avoid such stalls. (the one this video covers part of, and far from the only possible cause)
Inefficient Resource Management: Poorly managed resources, such as repeatedly loading and unloading the same assets, can lead to performance issues and freezes.
Complex Calculations or Scripts: Intensive computations, like complex AI calculations, physics simulations, or extensive world updates, can cause freezes if they are not efficiently managed or offloaded to separate threads.
Network Latency or Hiccups: For multiplayer games, network issues can cause freezes or lag if the game's state is tightly coupled with the timely receipt of network packets.
Driver or Hardware Issues: Sometimes, the problem may lie outside the game itself, such as outdated or buggy graphics drivers, or hardware that is overheating or malfunctioning.
chatgpt has entered the chat
The infographic of the CPU handing off the packet to the GPU while demanding it render it over and over made me chuckle
Got it, I just need enough memory for 15 thousand meshes.
Hahaha noooo
Thanks for this, I'm really enjoying watching your videos. When I come to optimizing I'll be circling back to make sure I've covered the things your have. All the best 🙂
Hey small feedback: could you please zoom in when displaying code? Small screens cant see the logic you're trying to showcase (even if the exact text doesn't matter). Awesome video as always❤️
Very humble explanation style for laymen to understand. Kudos!
Wow, I really like that underlaying background simulation at 1:51 I'm trying building something similar myself using Openprocessing but this looks like some great end result! Thanks for the inspiration.
this is what wgpu does behind the scenes, I love wgpu
also I love your channel, content like yours is rare, unique and very informative, I learn a lot with every Video, optimizing things to the limit is addicting hehe
Feel like I've seen a couple of these coding youtube channels run by fellow Aussies. Makes me happy knowing we're doing our part for the world.
Really excellent explanation for an issue a lot of players, and even developers, incorrectly associate with optimization. Also, I noticed your mic is picking up a lot of room reverb. A cheap mic shroud behind the mic will help a lot. The poor man's shroud is a closet full of clothes with the mic partially inside. If you can't fix it at the mic, try a "de-room" or "de-reverb" plugin. Your visuals are fantastic, your audio can be just as good without much more effort.
I can't hear the reverb myself but I did notice an echo when standing next to a wall and recording. I recorded while standing in the center of my room instead, and I'll try a shroud too thank you for the recommendation :)
@@Vercidium Cool, glad I could help. I noticed you've got a demo video about raytraced audio. Maybe this is an opportunity to do a game engine video about it to test your audio setup.
Why this works: modern GPU has a lot of bandwidth but with a high latency to transfer data from main memory to gpu.(a lot of car lanes but long way to drive). If we could have 0 latency then we wouldn't need this because any transfer would be finished very quickly (high likelihood of finishing transfer before being drawn). Instead of doing nothing and waiting for the transfer to complete, we effectively stack multiple extra transfers for future frames within the first transfer's wait time (we can do this because we have a lot of bandwidth. aka we are sending more trucks on different car lanes simultaneously instead of waiting for the first one to come back and then send it out again). This is a form of parallelism and this idea of "doing other things or doing more of the same thing while waiting for the first one to finish" is everywhere in both cpu and gpu programming and let's you make more efficient(full) use of your hardware.
Isnt this just asyncronous compute?
I’m trying to understand how the thumbnail relates to the video, but I guess that’s the point, to make it hard to know what will be talked about without watching.
that's clickbait for ya
This man spent 4 years of his life fixing the little freezes you get every once in a while. Real dedication!
he didnt though, most good engines already do this sort of thing. though it is up to devs to properly use the engines systems to take advantage. you could easily circumvent them.
Also, this is only 1 possible cause of freezes.
It's known as Asynchronous Buffer Updates/Uploads.
4 decades ago we called this double buffering and it was pretty normal.
Very good, thank you for informing the community
My god it's so hight quality. I subscribe immediately!
Reminds me of a swap chain, only for geometry instead of fully rendered frames. Very cool!
Honestly, that's the future of everything nowadays, almost everything needs to be pipelined, the problem is that the more you pipeline, the more (relative) lag it introduces. Personally I settled for 2 mesh buffers for dynamic meshes, block frame if we catch up and the mesh isn't fully sent yet, though I might change this to non-blocking later.
I'm also thinking which mesh data actually needs to be updated anyways. UVs might not need to be re-uploaded to the GPU, it's mostly just the coordinates and vertex normals. So I'm thinking of splitting the vertex into 2 buffers, one for coordinates and normals, and the rest into another struct. This way the transfer size should be smaller and shouldn't cause many issues. This saves space on the VRAM, allows custom attributes. Cache misses are very likely to happen though so it needs to be measured... Lots of options to explore. :)
the major props for making my brain understand things i wouldnt have fathomed even trying to understand before
I know nothing about code. thank you for making something im still engrossed in despite not knowing anything
That is not really seen as a real optimization, more like a technique that (as you said) has been done.
It is really interesting to see your approach compared to approaches game studio's make.
A GDC talk that is on a technical level really interesting is this one "Marvel's Spider-Man: A Technical Postmortem".
I also want to ask a question about your previous video, you mention that indeed triangle strip is way faster than triangle list.
The problem that I seem to face, is that there are no tools, as far as I know of that convert triangle list to triangle strip.
This is not a problem when you make all other models yourself and create a script that does it for you.
But how did you do it in your previous video?
Absolutely, technically this is a timing issue but since it affects game performance, it falls under the broader ‘optimisation’ category
For complex models like characters, converting them into triangle strips isn’t easy. Modern renderers will use an index buffer to help, where the model is broken down into triangle strips of the same length (e.g. 3 triangles) and then a GL_PRIMITIVE_RESTART is set to tell the shader to start a new triangle strip every 3 triangles (for example)
There will still be some vertices that hold the same data but any reduction in memory is a performance win
@@Vercidium I wish DirectX had something like that.
It's really interesting how other API's do specific things.
@@budgetarms looks like DirectX 10 has this feature, check out the ‘Generating Multiple Strips’ section
learn.microsoft.com/en-us/windows/win32/direct3d11/d3d10-graphics-programming-guide-primitive-topologies
@@Vercidium Thanks, I am looking into it, just wondering do you have a discord server (or is something like that in the works)?
@@Vercidium Just wondering, how do you do networking on your game, do you use an external OpenGL library for that or what?
Bruv you are the single most valuable resource I have as an indie dev. My games will run buttery smooth because of you and those like you. I've had these stories in mind since i was a child and didn't have the tools or skills to do them right. But with people like you and tools like blender and unreal, I will bring some really fun stuff! For the love of gaming!
Thanks so much! I’m glad to help
@@Vercidium When I grow my company and add to it, you're definitely hired if you want it. I'll respond in the future with a list of works & the studio name.
Great video, I learned a lot! Small comment for future endeavors: watching this on mobile was difficult because I assume you optimized the text size for fullscreen desktop viewing. Other than that, I loved it!
I’ll increase the font size in the next video, thank you!
Holy shit, this is spectacular! It's amazing how much more you can do with that one step further.
didn't realize you were sector's edge dev until the end!
Haha hey there! What gave it away?
0:20 Wendy’s thanks you for the inspiration for lunch lol.
Haha I love it
This is the first time I hear the recommendation of using a list instead of an array for a performance boost!
I imagine that the list is actually an dynamic array under the hood.
The performance boost of caching contiguous data in memory is just way over the benefit you can get from delegating freeing some memory of a linked list to the garbage collector.
Love the analogies and especially the visuals, you have such a knack for presenting problems in an easy to understand light! Great vid :)
Thank you, that means a lot!
It is called pipelining. Use in many places, not just games / real time graphics.
Nice video, not everbody knows about it in graphics world.
My guy explains double buffer so my 5 year-olds can actually understand it, kudos to you!
as I 5 year old I can confirm
Man, I was watching through this and I looked at your name and was like "Huh, his name reminds me of the developer of Sector's Edge." Then a moment later: "Wait... that is really similar..." And then I checked and you actually were the dev for Sector's edge lol. Cool! Man I miss that game, it was my favourite fps! I can't wait for your next project!
Thank you! I miss it too, hoping to revisit it again some day
Stuff like this is why i love graphics engineering, the interwoven understanding of what the hardware does and how you can get it to do what you want it to in the most efficient manner is just fun. Cool stuff!
Absolutely, it is a very rewarding kind of programming
Great job breaking down such a complex concept to an easily digestible narrative.
Wouldn't there be a chance that each new mesh clogs up the transfer pipeline even more, so you would end up with drawing really old meshes at some point?
This video has made me realize that I still don't understand coding at all
From now on, my toaster can run crysis.
This presentation is beautiful, would love to know what you used! Also I learned a ton about a game engine!! Thanks/subbed!
@@masheen_ thank you so much, glad to hear!
@@masheen_ it’s all animated in engine and screen recorded, then edited together in Davinci Resolve
I use OpenGL for the 3D animations, and SkiaSharp and RichTextKit for 2D animations
@@Vercidium Hmm never thought of animations in engine for ui charts 🫠. Thanks so much!!!
@@masheen_ no worries :)
It’s purely because controlling the animations in code is easier for me. I’m sure these animations would look a lot nicer in programs like After Effects
Hey love the video man but the code font is a little small for a pc and almost microscopic for my phone. The animations and everything are also amazing but if you could enlarge the code it’d make the content more accessible.
Will do for the next video, thank you for the feedback!
Changing the thumbnail in the name of optimisation on a video about optimisation is pure optimisation #ISawTheBurger
Haha yep I design a few and then test one each day. I think this thumbnail+title is the one though
Everything before 2:57 was basically explaining double buffering right?
im not really doing anything in game development these days but this tutorial was so well handled and impressive that you earned a new sub. thanks for that vid lol
Thank you! It took a while to make
Would sorting still be viable if the number of meshes is very big?
Also, I loved this video. I never even realized this. Or rather, I knew of it but never was consciously aware of it while coding
For heaps of meshes it would get pretty slow, something like a ring buffer would be much better. Thank you!
I love how every metaphor is a pub. “How does quantum computing work?” “Well it’s like a pub with a quantum pan” 😂
youtuber with 30k subs vs multimillion game companies
incredibly well edited, paced and informative video
Wow this is incredible, those this also work in Unreal Engine 5?
It should, I was talking to a Unity dev about how models are loaded behind the scenes onto the GPU, and it seems like there’s a few tricks you can do with offscreen rendering and preloading (not sure these terms are right) so you can know when a model is actually sent to the GPU, before you try to render it
If Unity has these features I’d be surprised if Unreal didn’t! It’s a pretty important feature I reckon
This video is epic. Liked, subscribed. I can't wait to watch your other ones, you have a great mind for being able to teach and explain concepts.
Explaining optimization to an American: "So Imagine a burger..."
This is an epic video man, really liked the way you showcased your iterative design methodology to refine the problem into smaller chunks and solved them one at a time.
Thank you, I’m not a teacher by any means but I’m learning so much creating these videos. Trying to find the best way to explain these things takes a while and I always resort back to food analogies haha
@@VercidiumHonestly, its still really cool that you're able to make parallels with these kinds of arguments. Oftentimes, when I'm trying to explain my code using analogies, they end up just becoming warped, and all of a sudden you have a taxi driver talking about repairing a plane or something :/
@@akhileshchandorkar1807 hahaha I know what you mean. I originally was talking about a restaurant with tables, plus a waiter and a chef, but it all got too complex. Simplifying it down to a chef and their pans worked much better. Takes a lot of iteration and feedback before I’m happy posting it here
Fun fact: Having a taste of this source code is cheaper than having a taste of an actual burger (at least where I live)
I hope more game devs from indie to AAA take into account stuttering more, people seem to only be phased by FPS and are completely uneducated regarding frametime and how a consistent frametime is more important than FPS the majority of the time.
I’m curious: do you ever look at Minecraft optimization mods to get ideas of how to make your own voxel game engine better? People have tried many methods and there’s a lot of source code out there
I haven’t but that’s a good idea, would be interesting to have a look through their source code to see what OpenGL tricks they’re using
@@Vercidium Check out Sodium and related projects. They rewrote rendering in Minecraft to work with OpenGL and made it really efficient. There are related projects and extension type mods that are easy to find as well.
@@toxiccan175 awesome thank you, will do!
@@toxiccan175 Minecraft has used OpenGL since the beginning, Sodium and the like just rewrote core parts of the renderer to use newer OpenGL features and be generally more efficient with memory and GPU usage.
@@jcm2606 i've heard mumbles about the sodium sub-mod, nvidium. what's up with that?
Imagine a UE5.3 game with all these 3 render feature combined: Nanite, Parallax mapping, and of course, the UE5 plugin remake of this render feature you created from your game engine. Now I (and we too) will had higher prediction about what games looks like in future if we had these 3 🤤.
Should you really be using a list? Wouldn't a pool be much preferable if we're considering optimization?
A pool is a list though? (Not all lists are linked lists: this looks like C# where List is a dynamic array (like std::vector, ArrayList, Vec, etc)
For a tiny amount of items a list is fine, but if we had a global list of all meshes in the scene then 100% should not use a list. A ring buffer that uses an array under the hood would be much better
Just wow! I am more than impressed with your communication and teaching skills (instant sub because of these traits alone). 🙏🏾 To say that I am so excited to consume your content is an understatement. 😁 I look forward to discovering and learning many new things alongside you. Please be encouraged and Thank You for such amazing content.
You are too kind, thank you very much!
Bro spend 6 years to create a Game Engine just to told us, What a hero!
I hope more of these topic of videos will be known in the near future, our amateurs and semi-pro (or soon pro) game devs who trained themselves in open source, must benefits from it
Just a note. The code you show is a very tiny font size. It was difficult for me to read.
I'm so glad TH-cam randomly showed this to me. Really cool stuff.
Congratulations! You've basically (re-)invented Double/Triple Buffering! 😂
JK lol, that's an *excellent* explanation of the inner workings of the graphics pipeline in a video game / really *any* interactive piece of software with complex video output!
Damn explained in not even 5 minutes. Good video, keep up the great work!
Does this apply to game engine devs, or also to game devs that use game engines like unreal or unity?
Game engine devs 100%. Game engines should take care of this but I’ve heard of some devs talking about stuttering because their models weren’t preloaded/off-screen-rendered (not sure if these terms are right) before they tried rendering them onto the screen
When I saw the title I had to immediately think of why I created my own website. There was a redirection page on our school's IServ server, from _domain/_ to _domain/iserv_ . That redirection page was just a black line of text on white background, saying you're getting redirected. But I wanted to have a link there, like "Click here if you don't get redirected". So I created my own website to have a better redirection page. And still, my website is my JS learning and testing ground, which is why my website breaks every now and then just to suddenly work again after a few seconds to minutes.