Hey, that's me! :D
Do I have the first or second comment then? 😅 Nice talk 👍🇩🇰
@@Codex_Regius thank you
I really liked it. Thank you.
This was great! I found you through your Nordic videos, and was very pleasantly surprised to see this!
It's good to see video of you when you were younger.
I love how you are basically describing a Tool Assisted Speedrun engine, but it's a debug tool.
Love his comedy sketches on the clock app. Didn't realize he was a conference speaker! Now I've got to track down all his talks! 👍
There's a link on my profile (not the NDC one) to a playlist of all my talks.
Found the link under reels in your bio and was instantly hooked. Don't regret it. Great presentation, enjoyed it very much.
I'm just now learning to code in my free time and found this talk so fascinating and motivating. Thank you!
A short got me here.
Well done on this talk. Interesting even for a non-programmer.
Thank you!
Nice talk. Level 3 of the talk reminded me of the "metamod" (and previously "admin mod") of Half-Life, from 15+ years ago.
I just started studying computer engineering myself, and my goal is to become a game developer. However, since I'm living in Norway, I know the industry isn't that big here, and I'm having doubts about whether it's a realistic dream. I never knew you worked in tech, and you also mentioned having worked in gaming, which is really exciting to hear considering you're Nordic. Do you mind sharing any information about what you did, or any tips you might have? I'd love to just talk to some Norwegian/Nordic game developers who have made it, so I know where to begin.
The way I solved match 3 is simply simulating all the moves, checking the expected score after each, then sorting them. That makes it slightly better than just taking the match closest to the bottom, since it will actively rate multiple matches well.
Going 2 layers deep is actually fairly fast: there are only 224 possible moves (the inner 6x6 gems have 4 moves each, the sides 3, and the corners 2). Squared, that's around 50k simulations maximum, which isn't much for modern computers, but half of them are redundant (swapping 3,4 and 2,4 is the same as swapping 2,4 and 3,4) and most of the moves aren't valid, so in the end you get far fewer (fewer than 1000, most likely around 100 or less). You could go deeper, but at that point you've got a lot of uncertainty from new gems falling in, though going up to 5 might be worth it if you're aiming for a high score, and you can probably still compute each move in under 10 s.
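A minimal sketch of that first layer in Python (score_after_swap is a hypothetical function that would simulate the swap, resolve any matches, and return the points earned, so this is illustrative rather than the commenter's actual bot code):

```python
# Sketch: enumerate every adjacent swap on an 8x8 match-3 board,
# score each one, and sort the moves by expected score.
# score_after_swap(board, a, b) is assumed to return the points the
# swap would earn (0 if it creates no match).

SIZE = 8

def candidate_moves():
    """Yield each unordered pair of adjacent cells exactly once,
    so the 'swap (2,4)/(3,4) vs (3,4)/(2,4)' redundancy never appears."""
    for r in range(SIZE):
        for c in range(SIZE):
            if c + 1 < SIZE:
                yield (r, c), (r, c + 1)   # horizontal swap
            if r + 1 < SIZE:
                yield (r, c), (r + 1, c)   # vertical swap

def rank_moves(board, score_after_swap):
    """Simulate every swap and return the valid moves, best score first."""
    scored = []
    for a, b in candidate_moves():
        points = score_after_swap(board, a, b)
        if points > 0:                     # drop swaps that make no match
            scored.append((points, a, b))
    scored.sort(reverse=True)
    return scored
```

Going two layers deep is then just applying each top candidate to a copy of the board and calling rank_moves again on the result.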
Heck yes! Well done.
@@olafurw🎉🎉🎉🎉🎉🎉
Saw this on an instagram reel and came to see if it's good, cheers to the speaker
So you essentially made your own TAS (Tool Assisted Speedrun) tool.
Awesome!
A TAS is when you use a tool to assist you at speedrunning the game. Sometimes it's just slowing down the game, but more often you are pre-planning perfect inputs that you give to the game, so you end up with the theoretically best speedrun. (Sometimes the inputs are so rapid that it's impossible for a human to perform without assistance, but sometimes it helps human players in finding new strategies.)
Let's not forget how it paved the way for next-gen mods, on software written in 1993. It's one of the first FPS game engines, and it can still be edited for today's tech. Total Chaos is one of those projects.
This was amazing! I ended up working as a DevOps engineer precisely because I played so much with game bots when I was younger. The best one so far was the open-source Botty for Diablo II: Resurrected, which also used OpenCV. They made something very impressive, but unfortunately it's archived now because people abused it online.
Hi! I am excited to listen to this.
Amazing! I was hoping to catch this talk at NDC Oslo earlier this year, but it got swapped with something else, so I missed it >_<
I hope you enjoy!
I have some game projects I've been tinkering with; I'm definitely incorporating some of this.
Thanks for sharing, mate! I feel like the goal in Level 2 was a bit different from the rest, because you actually built a program to solve the puzzle by itself... I was expecting you to do the same for vvv 😅 Loved it!
Oh if I had the time, that would be amazing. But since the talk was "secretly" about testing applications (and having some fun) I felt it was ok. Thank you :D
I need talks like this on PeerTube.
Similar techniques are used to allow AI code to play games in order to experiment with learning in challenging and non-trivial environments.
My guy made a TAS bot for VVV
You know it.
The man built TAS.
8:00 If you're unit-testing the shareware episode only, you won't notice bugs in, e.g., the BFG code.
Oh there are so many interactions that are being missed if you don't cover enough cases. I also ran some demos on the full release locally but I can't host that on GitHub. But even if you did that, how do you know you're covering everything? It's more an example that you can do this.
Ever played Space Ace or Dragon's Lair?
There is malloc in DOOM, which is not deterministic. I call this "deterministic enough" for non-real-time systems.
You look like the short version of The Mountain
bro is straight up teaching people how to bot 💀
So basically, what he said from 18:20 to ~18:56 is that he played Doom and his time was 0 seconds. And he played it on multiple computers simultaneously. LOL
This is not gonna air in China.
Why does it take 10 minutes to build? If you just take Moore's law, that would mean it would have taken over 20 years to build on computers in 1993. Obviously this isn't a perfect estimate, but how is it that far off?
Moore's law is about transistor count, not CPU speed, and even though we run more instructions per clock now, it's not 100% accurate either way. 1990 Intel CPUs ran at 25 MHz and we're now at 5 GHz, so that's only a factor of 200, which would make a full compilation take ~33 hours, maybe 60 or so adjusting for IPC, and that's assuming they were using a single CPU.
Note that you usually don't need to fully recompile the software every time; only the changed parts need to be recompiled, so subsequent compilations might only take an hour or two and can be run overnight. Yes, that means you can't test as much, but it's still doable.
An i860 is ~1M transistors, and an M3 Ultra has 134,000 times that number. Moore's law says the transistor count roughly doubles every 2 years, which means in 32 years it would multiply by 65,536, so we're at about double that (though the M3 Ultra is two M3s in a trenchcoat, so it still kinda holds).
But even if speed scaled linearly with transistor count and we used Moore's law, that'd be 11k hours, not years (still roughly 1 year and 3 months).
I think the mistake was that Moore's law is every 2 years, not every year.
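To spell out the arithmetic above (a sketch; the 25 MHz vs 5 GHz clocks and the doubling-every-two-years reading of Moore's law are the assumptions from this comment, not measurements):

```python
# Back-of-the-envelope scaling of a 10-minute build back to 1990 hardware.

BUILD_MINUTES_TODAY = 10

# Estimate 1: scale by clock speed only (25 MHz then, ~5 GHz now).
clock_factor = 5_000_000_000 / 25_000_000           # = 200
hours_by_clock = BUILD_MINUTES_TODAY * clock_factor / 60
print(f"clock-only estimate: ~{hours_by_clock:.0f} hours")        # ~33 hours

# Estimate 2: scale by Moore's law over 32 years, doubling every 2 years.
moore_factor = 2 ** (32 / 2)                         # = 65,536
hours_by_moore = BUILD_MINUTES_TODAY * moore_factor / 60
years_by_moore = hours_by_moore / 24 / 365
print(f"Moore's-law estimate: ~{hours_by_moore:.0f} hours "
      f"(~{years_by_moore:.2f} years)")              # ~10,923 hours, ~1.25 years
```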
@@satibel
Transistor count is a *much* closer approximation to CPU speed than clock speed (in fact, clock speed is not really an approximation at all; if it were, CPUs wouldn't have changed in speed in 20 years, and CPUs from ~2000 would be faster than modern ones because they used to have higher clock speeds, which is of course nonsense). CPUs now are *much* more than 200 times faster than in 1993.
.obj files and linking were pretty uncommon in the early 90s, and I doubt DOOM used them, so it would have needed a full recompile each time, though regardless this doesn't really have any relevance to my question.
Moore's law is every 18 months, and it comes out to much more than 11k hours: scaling 10 minutes by 30 years (20 doublings), it would be over 20 years.
@@BlueCosmology Regardless of Moore's law, current CPUs have 70k-135k times more transistors than the 1M from 1990, which would make it 22.5k hours at most.
But a big share of transistors is dedicated to memory (cache/registers) and specific operations (e.g. square root, or even more specific things like SSE/AVX), so it's not a good metric either. A Pentium III mobile from 2000, with 44M transistors, is only roughly 20-25 times slower in single thread (using a benchmark) than an M3 Max while using 3,000 times fewer transistors; even if we divide by 16 because the M3 is a 16-core chip, that's 190 times more transistors for a 25 times speed-up. (And yeah, IPC is about as big a factor as clock, since within that 25 times speed-up there's a 5 times clock speed-up.)
Processors in the early 2000s were barely hitting 2 GHz at most, so there's still a large upgrade there, even without counting IPC, cache, and core/thread count.
If we assume multithreading, there's been roughly a 400 times speed-up between 2000 and 2023, and roughly a 20-40 times speed-up between 1990 and 2000, so the total speed-up is between 8,000 and 16k times. That'd be about 2 months on a single CPU, but they probably had a farm or a cluster to compile on, which would drastically reduce the time. And they could probably still do partial compiles, like only compiling the part they want to test.
Also, odds are the 10-minute compile isn't optimized.
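And the same kind of check for the benchmark-based estimate (a sketch; the 20-40x and 400x speed-up figures are the ones quoted in this comment):

```python
# Multiply the quoted 1990->2000 and 2000->2023 speed-ups to see how long
# a 10-minute build would have taken on a single 1990 CPU.

BUILD_MINUTES_TODAY = 10
SPEEDUP_2000_TO_2023 = 400             # assumed multithreaded speed-up
SPEEDUP_1990_TO_2000 = (20, 40)        # assumed range

for s in SPEEDUP_1990_TO_2000:
    total = s * SPEEDUP_2000_TO_2023               # 8,000 or 16,000
    days = BUILD_MINUTES_TODAY * total / 60 / 24
    print(f"{total}x slower: ~{days:.0f} days")    # ~56 and ~111 days
# i.e. roughly two to four months of wall-clock time on one 1990 CPU.
```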
It was a jab at how much slower C++ can be to compile compared to C.
@@olafurw Huh, I've never noticed such a drastic difference in compilation times between C and C++ (obviously there's some, but I never realised it could be quite so huge). Have you ever profiled the compiler or similar to see what in particular is taking so long in the DOOM compilation? I'm guessing if it's a difference between C++ and C, it's something to do with the STL having a lot of overhead?
The YT Shorts dude can talk about things other than funny Northern-country stuff, and for more than one minute?
Bro could be making millions on COD hacks but he’s going the academic route instead
It feels like such a waste seeing that the actual demo runs in basically no time, but you need 10 minutes to set up the whole job to run.