Harder Than It Seems? 5 Minute Timer in C++

  • Published Dec 17, 2024

Comments • 723

  • @TheCherno · 6 months ago · +80

    So… got any more comedy for me to look at? 👇
    Also don’t forget you can try everything Brilliant has to offer, free, for a full 30 days: visit brilliant.org/TheCherno. You’ll also get 20% off an annual premium subscription.

    • @shafiullahptm909 · 6 months ago

      bro i really love your videos can you pls make a c++ one shot video pls

    • @Silencer1337 · 6 months ago

      I'm interested to learn how you would cap the framerate when vsync is off. I've always looked for alternatives to sleep() because it likes to oversleep, but never found anything.

    • @heavymetalmixer91 · 6 months ago · +2

      Given that you're using the standard library in this video I'd like to ask: As a game engine dev what's your opinion on the standard library?
      Most game devs out there tend to avoid it but I'm not sure why.

    • @theo-dr2dz · 6 months ago

      @@heavymetalmixer91
      Standard library design and implementations are optimised for correctness and generality. That can be suboptimal for performance. For example, the standard library calendar implementation is designed to get leap seconds right. That will probably not be relevant for games, but it will never be completely free.
      Also, the standard library uses exceptions quite extensively, and exceptions create some unpredictability in timing. So, if you really need ultimate performance and every CPU cycle counts, like in AAA games, high-frequency trading and those kinds of applications, creating some kind of custom implementation of the standard library (or some kind of alternative to it) can be worth the effort. But generally C++ code is very fast, even without doing all kinds of optimisation tricks. I would say the standard library implementations in leading compilers are fine, except in really cutting-edge performance-critical situations.

    • @Brahvim · 6 months ago

      @@heavymetalmixer91 I don't know as much as other people around here, but I like to think the reason is that there are always new edge cases to learn about, it takes up space wherever it's used, and it may use a few `virtual`s here and there... mostly, it's a library whose implementation they don't know a lot about!
      It _does_ make life easier once one gets into its mindset, though.

  • @dhjerth · 6 months ago · +1680

    I am a Python programmer and this is how I would solve it:
    import os
    import sys
    import time
    # All done, Python takes 5 minutes to start

    • @madking3 · 6 months ago · +192

      I usually create a list with 500 random numbers and sort it with bubble sort; it gives me 5 min best case

    • @jongeduard · 6 months ago · +17

      Yeah, but let's also talk about performance in Python and how you want to compare it to anything like C, C++ or Rust.

    • @thuan-jinkee9945 · 6 months ago · +7

      Hahahah

    • @MunyuShizumi · 6 months ago · +40

      ​@@jongeduard whoosh

    • @iritesh · 6 months ago · +28

      @@jongeduard wooosh

  • @christopherweeks89 · 6 months ago · +2228

    Remember: this is the stuff we’re training our AI on

    • @monad_tcp · 6 months ago · +193

      job security for humans

    • @enzi. · 6 months ago · +12

      @@monad_tcp 😂😂

    • @Avighna · 6 months ago · +7

      💀☠️💀☠️💀

    • @platin2148 · 6 months ago · +16

      It doesn’t matter, as LLMs have inherent fuzziness, being statistical models.

    • @codinghuman9954 · 6 months ago · +4

      good

  • @AJMansfield1 · 6 months ago · +441

    As a firmware engineer, my first instinct was "set the alarm peripheral to trigger an interrupt handler"

    • @jamesblack2719 · 6 months ago · +34

      That was my thought also, but I come at it from a C background and his approach just didn't seem elegant. It seems overly complicated on something that is rather simple to do. Shame AI will be trained on this approach.

    • @cpK054L · 6 months ago · +3

      Wtf is an alarm peripheral?
      Did you mean Timer?

    • @AJMansfield1 · 6 months ago

      @@cpK054L on a system with a free-running continuously-increasing system clock, you set the alarm register to generate an interrupt when that system clock reaches the set value - in this case, you'd take the current time, add two minutes worth of clock ticks to that value, and set the alarm to that value.

    • @3xtrusi0n · 6 months ago · +48

      @@cpK054L MCUs have hardware timers that you can use without consuming a thread. Depending on the CPU and the type of timer implemented (in hardware), you can have it trigger a hardware interrupt which will then kick off a given task/instruction.
      It's a peripheral alarm because it is a peripheral on the hardware/MCU. You can also call it a timer; either name means the same thing. 'Alarm' would indicate you are counting down and 'timer' that you are counting up.

    • @cpK054L · 6 months ago · +4

      @3xtrusi0n I've never heard it called an alarm.
      Also, timers don't have "counters" from what I've seen... they only have flag bits.
      The ISR just waits for the flag to be raised, then you must reset it, otherwise it doesn't work the next cycle

  • @TwistedForHire · 6 months ago · +405

    Funny. I am an office application engineer and my first thought on looking at your code was "noooooo!!!" We try to use as few resources as possible, and a 5ms constant loop is "terrible" for battery life. It's funny how people from different coding worlds approach a problem differently. My first instinct was much closer to the sleep/wait implementation (though I wouldn't waste an entire thread just to wait).

    • @Brenden.smith.921 · 6 months ago · +57

      I was thinking the same thing. I would've had a thread sleeping and then doing whatever needs to be done after the sleep timeout using a callback. If there was a need to share data with the main thread and I didn't want to do safe multithreading, I'd use a signal to interrupt the main thread (unless it was something that wasn't very important, unless, unless, unless).
      Looping over and over like that and sleeping for 10ms is fundamentally the same solution as the second guy's; he just slept for 1s, which is what was laughed at. Just a lot sloppier.

    • @wi1h · 6 months ago · +24

      @@Brenden.smith.921 as for your second point, it's not the same. the "game loop" solution presented is off from the final by at most 5 ms, the second solution from the thread is off by (loop processing time required) * (loop iterations, in that case 300)

    • @RepChris · 6 months ago · +11

      As with anything "engineering" (to clarify: coding and CS have a lot of stuff sitting in the fuzzy zone between science and engineering; not trying to knock your status as an engineer), there isn't one "best" solution, even just with cost and development time in the picture. In a game engine the (relatively) minuscule overhead doesn't matter, since you're doing a lot of other stuff per frame/simulation step that is way, way more costly, and the inaccuracy you're going to get is probably a non-issue, since a game generally doesn't need a 5 minute timer to be accurate down to the millisecond. So the time spent thinking about a better solution and implementing it is better spent working on something more important.
      Completely different picture for something that needs to be very accurate, or actually power/compute efficient (which games certainly are not in any capacity, at least 99+% of them)

    • @youtubehandlesux · 6 months ago · +18

      Me writing a video game and trying to make it stable up to 300 fps: A whopping 5ms??? In this economy???

    • @livinghypocrite5289 · 6 months ago · +9

      Yeah, coming from yet another background, I immediately caught other stuff. Just reading the original problem, my immediate question was: how accurate does the timer need to be? Because I constantly have to explain to people that I can't give them millisecond accuracy on an operating system that isn't a real-time OS. So I saw the Sleep solution and my immediate reflex was: that isn't going to be accurate, because a Sleep tells the OS to sleep at least that amount of time, so the OS can decide to wake my application at a later time. Could be fine, but it depends on how accurate the timer needs to be.
      Also, when seeing the recursive function, I noticed the stack usage of that solution, but also the problem that a loop is simply faster than a recursive function, because a function call has overhead; building that stack takes CPU time, so simply by calling the function recursively the timer will get more inaccurate, without even looking at how long the stuff that is executed while running the timer takes.

  • @systemhalodark · 6 months ago · +862

    Trolling is a art; Topnik1 is a true artist.

    • @mabciapayne16 · 6 months ago · +11

      an* ( ͡° ͜ʖ ͡°)
      And I don't think he made a bad code on purpose.

    • @херзнаетгражданинЕбеньграда · 6 months ago · +69

      @@mabciapayne16 trolling is art, and @systemhalodark is an true artist

    • @mabciapayne16 · 6 months ago

      @@херзнаетгражданинЕбеньграда You should really learn English articles, my dear friend ( ͡° ͜ʖ ͡°)

    • @mabciapayne16 · 6 months ago

      @@херзнаетгражданинЕбеньграда a true artist* ( ͡° ͜ʖ ͡°)

    • @benhetland576 · 6 months ago · +31

      And topping it off with a recursive call _#seconds_ deep instead of iterating, just to increase the chance of stack overflow on long waits, I assume.

  • @akashpatikkaljnanesh · 6 months ago · +384

    You want your users to hate you? Tell the user in the console to set a timer for 5 minutes, wait for them to press space and start the timer, and press space to finish it. :)

    • @no_name4796 · 6 months ago · +33

      Just have the user manually update the timer at this point...

    • @HassanIQ777 · 6 months ago · +16

      just have the user manually write the code

    • @dandymcgee · 6 months ago · +42

      just have the user go touch grass, then they won't need a timer.

    • @akashpatikkaljnanesh · 6 months ago · +1

      @@dandymcgee Wonderful idea

    • @DasHeino2010 · 6 months ago

      Just have the user prompt ChatGPT! :3

  • @asteriskman · 6 months ago · +245

    "Train the AI using the entire internet, it will contain all of human knowledge."
    The AI: "derp, but with extraordinary confidence"

    • @Pablo360able · 6 months ago · +1

      this explains so much

  • @add-iv · 6 months ago · +218

    sleep doesn't take any CPU resources during the sleep time, since the thread will be put into the pending queue (on most OSes). Periodically checking will consume CPU time, even if it is minimal, and is a very game-engine-like solution.

    • @nerdError0XF · 6 months ago · +3

      Isn't creating a thread expensive by itself?

    • @tylisirn · 6 months ago · +45

      @@nerdError0XF Sleep isn't creating any threads, it puts the calling thread to sleep.

    • @nerdError0XF · 6 months ago · +1

      @@tylisirn okay, makes sense

    • @Abc-jq4oz · 6 months ago · +1

      So who checks the OS’s pending queue then? And how often?

    • @tylisirn · 6 months ago

      @@Abc-jq4oz The OS's task scheduler does in conjunction with hardware. The scheduler maintains a priority queue which has all tasks organized by priority and when they need to wake up. When a task finishes its timeslice the scheduler looks at the next task in the priority queue and if it's ready to execute, it executes it. If the next task is not ready to execute the OS sets a hardware timer to raise an interrupt when the next task is scheduled to run and puts the CPU into low power sleep state (usually ACPI state C1 (halted) or C2 (stopped clocks), these days even C3 state (deep sleep) is used for ultra low power computing when on battery power; in C3 state most of the CPU core is powered down and caches are allowed to go stale requiring cache refresh when the CPU reactivates). The hardware interrupt wakes up the CPU at the scheduled time.

  • @cubemaster1298 · 6 months ago · +181

    I am not trying to protect topnik1's code in the video, it is pretty bad indeed BUT I am pretty sure it is not going to be 300 stack frames deep. From the looks of it, it is a tail recursive function, so any major compiler (e.g. clang) will do tail call optimization.
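    For illustration, a minimal sketch of the tail-recursive shape under discussion (hypothetical code, not topnik1's actual snippet). The self-call is in tail position, so gcc and clang with optimizations on typically compile it into a jump rather than a new stack frame; note the standard doesn't guarantee this, so a debug build could still grow the stack:
        #include <chrono>
        #include <iostream>
        #include <thread>

        void countdown(int secondsLeft)
        {
            if (secondsLeft <= 0)
            {
                std::cout << "Done!\n";
                return; // base case
            }
            std::this_thread::sleep_for(std::chrono::seconds(1));
            countdown(secondsLeft - 1); // tail call: nothing runs after it
        }

        int main()
        {
            countdown(300); // 5 minutes
        }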

    • @JFMHunter · 6 months ago · +4

      This should be higher

    • @JuniorDjjrMixMods · 6 months ago · +30

      But then you would be expecting for the compiler to fix a problem that shouldn't exist...

    • @MeMe-gm9di · 6 months ago · +27

      @@JuniorDjjrMixMods Tail Call Optimization is often required to write certain algorithms "pretty", so it's often guaranteed.

  • @Kazyek · 6 months ago · +151

    Good video overall, but the part about precision at 15:21 is a bit lacking. To be honest, precision is most likely not very important when sleeping for 5 minutes, but the overall takeaway of how sleep works is a bit wrong. Sleep will sleep for *AT LEAST* the time specified, but could sleep for quite a bit longer depending on other tasks' CPU utilization, the HPET (High Precision Event Timer) used by the system (or not; some systems might not even have one), the OS's timer resolution settings, the virtual timer resolution thing that Windows does on laptops for power saving, where it will actually stretch the resolution, etc. etc...
    Therefore, when very high precision is desired (for example, a frame limiter in a game, to have smooth frame pacing), you don't want to sleep all the way, but rather sleep for a significant portion of the time and busy-loop at the end.
    This fundamental misunderstanding of how sleeping works is why so many games have built-in frame limiters with absolutely garbage frame pacing, and why you get a much smoother experience by disabling them and using something like RTSS's frame limiter instead.
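    A rough sketch of that sleep-then-spin hybrid (the 2 ms margin is an assumption, to be tuned per platform):
        #include <chrono>
        #include <thread>

        // Sleep for most of the interval, then busy-wait the remainder for precision.
        void precise_wait(std::chrono::steady_clock::duration d)
        {
            using namespace std::chrono;
            const auto deadline = steady_clock::now() + d;
            const auto margin = milliseconds(2); // scheduler wake-up jitter allowance
            if (d > margin)
                std::this_thread::sleep_until(deadline - margin);
            while (steady_clock::now() < deadline)
                ; // spin for the last couple of milliseconds
        }

        int main()
        {
            precise_wait(std::chrono::milliseconds(16)); // e.g. one 60 FPS frame
        }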

    • @Kazyek · 6 months ago · +38

      And by "quite a bit longer", I mean that on a windows laptop in default configuration, a sleep(1ms) might sleep for over 15ms sometimes!

    • @Fs3i · 6 months ago · +11

      Yeah, “make something happen at x time” is a hard problem, and really hard (near impossible) to write in a portable fashion

    • @shadowpenguin3482 · 6 months ago · +4

      When I was younger I was always surprised how sleeping for 0ms is much slower than sleeping for 1 ms

    • @JohnRunyon · 6 months ago

      You can get pre-empted anyway. If you need to guarantee it'll happen at an exact moment then you should be using an RTOS. Thankfully you almost never actually need to guarantee that.
      A frame limiter should be maintaining an average, not using a constant delay, and then it won't even matter if the OS delays you for 15ms.
      Btw, a 15ms jitter is completely and totally unnoticeable.

    • @TheArtikae · 4 months ago · +3

      @@JohnRunyon Bro, that's a whole-ass frame. Two if you're running at 144 Hz.

  • @brawldude2656 · 6 months ago · +129

    I recently made a Discord bot. The task was giving every user a cooldown timer. At first glance it may seem like an insane task, but once you realise time just goes on, you don't have to do any computation in the meantime. You can just compare start and end whenever the user needs to be updated. And this is what many village/base-building games do with their playerbase. For example, say you need a building that takes 3 days to build. When the player is online you can just update every second, but when the player is offline you can store the end date and compare against it when the player logs in again, or when someone interacts with that user.
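    A minimal sketch of that store-the-end-time idea (hypothetical struct, not the actual bot code). system_clock is deliberate here: an offline player's cooldown has to survive restarts, so wall-clock time is what you'd persist:
        #include <chrono>

        using Clock = std::chrono::system_clock;

        struct Cooldown
        {
            Clock::time_point endsAt{};

            void Start(Clock::duration length) { endsAt = Clock::now() + length; }

            // No ticking needed: just compare whenever the user/building is looked at.
            bool Active() const { return Clock::now() < endsAt; }
        };

        int main()
        {
            Cooldown build;
            build.Start(std::chrono::hours(72)); // a 3 day build
            // ...much later, whenever this player is next looked at:
            if (!build.Active())
            {
                // the building finished while they were away
            }
        }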

    • @theairaccumulator7144 · 6 months ago · +22

      Duuh like if you can't figure this out you really shouldn't touch an ide

    • @boycefenn · 6 months ago

      ​@theairaccumulator7144 asshole alert!

    • @brawldude2656 · 6 months ago

      @@theairaccumulator7144 there are many people who can't even get close to figuring this out, I'm not even kidding

    • @Brahvim · 6 months ago · +15

      Lazy-loading, pretty much, right?! Nicely used as always!
      Some things are okay to do right before their consequences are needed...

    • @Brahvim · 6 months ago · +62

      @@theairaccumulator7144 Don't act like that, please...

  • @scowell · 6 months ago · +94

    In embedded land we have real timers! Talk about accurate... sub-nanosecond is easily doable. Overhead? It's a peripheral! Ignore it until it interrupts you.... or have it actually trigger an output without bothering you if you really need that accuracy. Love timers.

    • @JohnSmith-pn2vl · 6 months ago · +5

      time is everything

    • @gonun69 · 6 months ago · +5

      They are great but you better have the datasheet and calculator ready to figure out how you need to set them up.

    • @RepChris · 6 months ago · +14

      @@gonun69 thats the case for pretty much everything embeded

    • @muschgathloosia5875 · 6 months ago · +13

      @@gonun69 I can't imagine you would ever not have the datasheet ready

    • @scowell · 6 months ago · +1

      @@gonun69 Exactly... gets easier when using a PLL to run the clock... I do this for syncing to video.

  • @0xkleo · 6 months ago · +404

    I never thought I would spend 20 minutes watching an 11 year old post about a 5 minute timer, but I learned something
    Edit: 350 likes??? Damn, I must be famous

    • @monkeywrench4166 · 6 months ago · +8

      He doesn't look 11 year old tbh

    • @driz6353 · 4 months ago · +1

      @@monkeywrench4166 11 year old *post*

  • @oleksandrpozniak · 6 months ago · +12

    As an embedded developer, I like to use SIGALRM and a handler when I'm sure I'll only need one timer at a time. If I need several timers, I use timer_create, aka Linux timers.
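    For reference, a minimal one-shot timer_create sketch along those lines (Linux; the callback runs on a library-provided thread; older glibc needs -lrt; error handling omitted):
        #include <cstdio>
        #include <signal.h>
        #include <time.h>
        #include <unistd.h>

        void onExpiry(union sigval) { std::puts("5 minutes are up!"); }

        int main()
        {
            sigevent sev{};
            sev.sigev_notify = SIGEV_THREAD; // call a function instead of raising a signal
            sev.sigev_notify_function = onExpiry;

            timer_t id;
            timer_create(CLOCK_MONOTONIC, &sev, &id);

            itimerspec spec{};
            spec.it_value.tv_sec = 5 * 60; // one-shot: it_interval stays zero
            timer_settime(id, 0, &spec, nullptr);

            pause(); // real code would do its actual work here
        }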

  • @kuhluhOG · 6 months ago · +7

    17:15 Btw, a small nitpick for the C++14 users (and above): move your callback into the lambda capture, because if the callback is an object with a defined operator() (like a lambda), it could have big-ish members (like a large lambda capture).
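    A small sketch of that init-capture (hypothetical helper, assuming C++14):
        #include <chrono>
        #include <cstdio>
        #include <thread>
        #include <utility>

        template<typename F>
        void StartTimer(std::chrono::steady_clock::duration delay, F&& callback)
        {
            // Move the callback into the lambda instead of copying it; this matters
            // when the callable owns big captured state.
            std::thread([delay, cb = std::forward<F>(callback)]() mutable {
                std::this_thread::sleep_for(delay);
                cb();
            }).detach(); // detached for brevity; real code should manage the thread
        }

        int main()
        {
            StartTimer(std::chrono::seconds(2), [] { std::puts("done"); });
            std::this_thread::sleep_for(std::chrono::seconds(3)); // keep main alive
        }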

  • @peterjansen4826 · 6 months ago · +61

    A game-developer who cautions to not use OS-dependent libraries. Music in my Linux-gaming ears. 😉

  • @andersonklein3587 · 6 months ago · +134

    I'm surprised no one brought up interrupts, I don't know about modern C++, but I've seen in old school assembly this concept of setting a "flag" that interrupts execution and calls/executes a function before handing back the CPU.

    • @MrHaggyy · 6 months ago · +51

      On embedded devices this works like a charm, as dozens of timers are running all your peripherals. So you pick one of them and derive logic for all the other timed events.

    • @sopadebronha · 6 months ago · +29

      This was literally the first thing that came to my mind. I think it's the instinctive solution for a firmware programmer.

    • @sinom · 6 months ago · +23

      I'm not an embedded programmer so I might just not know something, but afaik the C++ stl doesn't provide any device agnostic way of handling interrupts, so anything you do with interrupts will always be hardware dependent and non portable.
      If you are using some specific microcontroller and don't care about portability then interrupts would probably be a good way of handling the problem.

    • @fullaccess2645 · 6 months ago · +3

      If I want to run the callback on the main thread, could interrupts avoid the while loop that checks for the task queue?

    • @sopadebronha · 6 months ago · +7

      @@fullaccess2645 That's the whole point of interrupts.

  • @Reneg973 · 6 months ago · +36

    ... And then you notice that your 5sec timer needs 5.03sec on your first PC. On the second it takes 5.1s and after some debugging you find out the OS moved the thread onto an E core and that your thread priority was not high enough. Would be nice to extend this video to handle more details. Like higher+highest accuracy or lower+lowest CPU usage.

    • @dozog · 2 months ago · +3

      Please let 2013 know about this issue.😂

    • @htpc002Weirdhouse · 1 month ago

      @@dozog Nah, 2013 is dealing with clocks that jump forwards and backwards as they're moved from core to core.

    • @htpc002Weirdhouse · 1 month ago

      Once had to write a Bayesian model to determine the inverse function for the sleep-like function to (gradually) determine what delay to request to actually get the desired delay.

    • @deltamico · 1 month ago

      Why not just use exponential checks

  • @rogercruz1547 · 6 months ago · +6

    25 years ago when I started coding, I took setTimeout and setInterval in ActionScript for granted. I was 8.
    Now I was thinking of a thread with a loop, and events that trigger callbacks as other threads depending on the timers you set, which would mimic that behaviour; but when you mentioned Promises I realized it would be way easier to open a thread for each timer and just sleep...

  • @KieranDevvs · 6 months ago · +28

    The best solution for this is asynchronous execution. That way you can decide how the execution is performed, i.e. on the same thread or on a separate thread, and when the execution/timer is complete, you can decide if you want to rejoin the execution context (thread) back to main and take the perf hit, or run your logic on the background thread without any perf hit.
    You get all the benefits, i.e. you don't need to worry about thread safety, and it's fully configurable in how you want it to run.

    • @phusicus_404 · 6 months ago

      Wonderful, how to do it in C++?

    • @KieranDevvs · 6 months ago

      @@phusicus_404 std::async? I thought that was pretty obvious.

    • @phusicus_404 · 6 months ago

      @@KieranDevvs he used that in his code, so you'd use it in a different way then?

    • @KieranDevvs · 6 months ago

      @@phusicus_404 Nope, the way shown in the video is correct more or less. The thread sleeping is bad, but apart from a few fixes, the general premise is there. If you put the thread to sleep and don't use a state machine to allow the thread to return, you block the main thread in async cases where you only use one thread (mainly in cases where you're using a UI).

    • @w花b · 1 month ago

      What about C

  • @hi117117 · 1 month ago · +3

    there is what I would argue is a better way than any of the methods described here that does not require async, does not require threads, etc. It's just plain simple single-threaded, and it still allows your program to do other stuff while the timer runs.
    what you do is use the setitimer function from C, and then register a signal handler for SIGALRM. in your signal handler, you unset the timer if you are done, or you can keep it there if you want it to continue for the next 5 minutes.
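    A minimal sketch of that approach (POSIX-only; the handler may only do async-signal-safe work, hence the flag):
        #include <cstdio>
        #include <signal.h>
        #include <sys/time.h>
        #include <unistd.h>

        volatile sig_atomic_t fired = 0;

        void onAlarm(int) { fired = 1; }

        int main()
        {
            signal(SIGALRM, onAlarm);

            itimerval tv{};              // it_interval stays zero => one-shot
            tv.it_value.tv_sec = 5 * 60; // 5 minutes
            setitimer(ITIMER_REAL, &tv, nullptr);

            while (!fired)
                pause(); // the program would do its other stuff here
            std::puts("5 minutes are up!");
        }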

  • @virkony · 6 months ago · +7

    9:21 for that case, tail call elimination should fire unless there were stack allocations in "dowhatuwantinmeantime". So it effectively turns into a jump to the beginning of the function.

  • @valseedian · 6 months ago · +4

    haven't watched for even 1 second, but the answer is a thread that sleeps for nearly 5 minutes, then a few ms until the time is reached, then calls a callback or sets a flag.
    when I was making my from-scratch GUI system in C++ I had to solve the timer issue, so I wrote a whole scheduler and event-handler subsystem.

  • @TryboBike · 6 months ago · +27

    This threaded timer has a subtle bug. If the 'work' performed during the timer duration takes longer than the timer itself, then the work scheduled after the timer concludes will need to wait for the 'join', thus delaying the execution by more than the 5 minutes. On the flip side, moving the 'timer' callback onto the timer thread requires the work of main and the 'timer' to be concurrent, which brings its own set of problems.
    Frankly, having any sort of 'delayed' execution done in a single thread while stuff is happening during the wait period is a pretty difficult problem to tackle. Unless it is something like a game, where there is a game loop, or an event-driven application. But even then, depending on the resolution of the loop, the wait period might be very, very different from what was specified.

    • @delta3244 · 5 months ago · +1

      That's not what thread::join() does. thread::join() has _no effect_ on the thread corresponding to the std::thread it is called on. It only affects the thread which calls thread::join(), by making it block until the std::thread which .join() was called on finishes.
      Without thread::join() at the end of main(), the code following the timer would fail to run if main ended before the timer did. That's why it exists. To reiterate: it does not tell the timed thread to do any work. It tells the main thread to wait for the timed thread's work to finish before ending the program. The timed thread does work on its own, once the OS wakes it up (which will happen sometime after the sleep duration).

  • @not_herobrine3752 · 6 months ago · +13

    My way would be to obtain a timestamp at the beginning, check every iteration of the application loop whether the elapsed time is greater than or equal to the timer duration, then do whatever if said condition is true

    • @ruix · 6 months ago · +3

      This is also what I thought

  • @motbus3 · 6 months ago · +179

    Fork Execve bash -c sleep 5

    • @yoshi314 · 6 months ago · +11

      isn't that 5 seconds wait?

    • @sadhlife · 6 months ago · +23

      sleep 300

    • @ProtossOP · 6 months ago

      @@yoshi314 easy fix, just multiply by 60

    • @Pritam252 · 6 months ago · +1

      MS Windows be like:

    • @no_name4796 · 6 months ago · +3

      Or bash -c sleep 300 on linux...

  • @mike200017 · 6 months ago · +2

    Coming from POSIX land, where anything interesting has a pollfd (file-descriptor) at the bottom of it, event loops consist of something that gathers all the interesting events and then calling "poll" on their pollfd's (or calling "epoll" or "select"). So, in that world, a timer like this is either implemented via a timerfd (you tell the kernel to create a "file" and trigger it at a specific time) or by simply setting the timeout for the poll call to the earliest wake-up time among your active timers (personally, I prefer that, gives more control). No messing around with threads. Coroutines are another way to do the same thing (coroutines are syntactic sugar on top of the same mechanisms).
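    A bare-bones sketch of the timerfd variant (Linux-specific; error handling omitted):
        #include <cstdint>
        #include <cstdio>
        #include <poll.h>
        #include <sys/timerfd.h>
        #include <unistd.h>

        int main()
        {
            // A timer backed by a file descriptor, so it plugs into poll/epoll/select.
            int fd = timerfd_create(CLOCK_MONOTONIC, 0);

            itimerspec spec{};
            spec.it_value.tv_sec = 5 * 60; // fires once, 5 minutes from now
            timerfd_settime(fd, 0, &spec, nullptr);

            pollfd pfd{fd, POLLIN, 0};
            poll(&pfd, 1, -1); // a real event loop would poll many fds here

            std::uint64_t expirations = 0;
            read(fd, &expirations, sizeof expirations); // acknowledge the tick
            std::puts("5 minutes are up!");
            close(fd);
        }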

  • @thelimatheou · 5 months ago · +5

    A fascinating historical snapshot of the Indian application development process. Thanks!

    • @siddy.g6146 · 5 months ago

      What makes it Indian?

    • @thelimatheou · 5 months ago

      @@siddy.g6146 copy, paste and iteration of code on stack exchange...

    • @thelimatheou · 5 months ago

      @@siddy.g6146 copying and pasting crappy code from stack exchange

    • @thelimatheou · 5 months ago

      @@siddy.g6146 copy/paste/stealing code from forums

    • @justsomeguy6336 · 1 month ago

      @@siddy.g6146 Indian code is infamous for being atrociously bad

  • @pastasawce · 6 months ago · +12

    Yeah def getting into thread pool territory. Would love to see more on this.

  • @szirsp · 6 months ago · +1

    20:00 My use cases of timers usually involve programming the interrupt controller, setting up HW timers or RTC alarms in microcontrollers... setting up "sleep"/standby/poweroff states
    What different worlds we live in :)

  • @nenomius1148 · 6 months ago · +15

    Cherno was wandering the internet, saw a forum, took a look inside, and burned up.

  • @satibel · 6 months ago · +3

    note that doing what you did with system_clock or high_resolution_clock (in case it's not steady) instead of steady_clock can work most of the time, but you'll get issues when the time changes due to daylight saving or such, and you can accidentally get a one hour and 5 minute timer

    • @delta3244 · 5 months ago · +1

      or a zero minute timer, for that matter

  • @mikefochtman7164 · 6 months ago · +2

    We had to run code in 'real time' in the sense of training simulators. This meant we had to perform a lot of calculations, then do I/O interfacing with the student's control panels, in a way that the student couldn't tell the difference between the simulator and the actual control room. So: update the I/O with new calculation results at LEAST every 250 ms. I know that sounds slow by gaming standards, but we did a LOT of physics calculations for an entire power plant.
    So we set up what had to be done in each 'frame' and used a repeating interrupt timer configuration. A frame ran, doing calcs and I/O, then slept until the next interrupt. If we occasionally 'missed' an interrupt because the calcs took too long, we had to 'catch up' the next frame. (One way to do this was to have the interrupt service routine increment a simple frame_counter while the main loop checks whether we 'missed' an increment.)
    For time delays, we simply kept a counter in the main code that would count up to some value 'x', because we knew that each time the code executed, a 'delta-time' step had passed since the last execution. So for 5 minutes at a frame time of 250 ms, simply count up to 1200.
    This was a few years back, but you can see it's similar to your 'game engine' concept.

  • @sumikomei · 6 months ago · +90

    at first glance I totally didn't read "using namespace std::cherno_literals;"

    • @ADAM-qd9bi · 6 months ago · +14

      I’ve always read it that way too, and used to always misspell it with “cherno” 😭

  • @xlerb2286 · 6 months ago · +4

    Just shows that nothing is simple. What type of app are you working with? Do you need the thread to remain alive while the timer is running? Do you care about multi-platform? How much accuracy do you need? How important is it that code have low processing overhead? And the list goes on. (And that recursive example is going to keep me awake tonight, it takes a special type of person to write code like that)

  • @jamesmackinnon6108 · 6 months ago · +4

    I remember when I was first starting programming, I learned Visual Basic Script (why I chose that, I have no idea). I was looking up how to wait for a period of time and ended up on a forum that said the way to set a timer was to ping Google, figure out how long that took, then divide the time you want to wait by the length of the ping, and ping Google that many times.

    • @tunk_2ton168 · 6 months ago

      I also chose this path.
      I chose VBS because it doesn't require much. Literally just open Notepad and you are good to go, and it's easy to learn.
      What did you move on to from that?

  • @alexanderheim9690 · 1 month ago · +1

    Create a thread with a binary heap holding deadlines and a condition variable. Pop the binary heap in a loop and block on the condvar at most until the deadline occurs. If a new deadline is inserted, just notify the condvar.
    Add additional logic for repeating timers and for re-pushing deadlines that aren't finished yet but got preempted by a newer, earlier deadline.
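    One way this could be sketched (a simplified version of the described design, assuming C++17; no repeat-timer logic, error handling omitted):
        #include <chrono>
        #include <condition_variable>
        #include <functional>
        #include <mutex>
        #include <queue>
        #include <thread>
        #include <vector>

        class TimerQueue
        {
            using Clock = std::chrono::steady_clock;
            struct Entry
            {
                Clock::time_point deadline;
                std::function<void()> callback;
                bool operator>(const Entry& o) const { return deadline > o.deadline; }
            };

            std::priority_queue<Entry, std::vector<Entry>, std::greater<>> m_Heap;
            std::mutex m_Mutex;
            std::condition_variable m_CV;
            bool m_Stop = false;
            std::thread m_Thread{&TimerQueue::Run, this}; // declared last: members above are ready

            void Run()
            {
                std::unique_lock lock(m_Mutex);
                while (!m_Stop)
                {
                    if (m_Heap.empty())
                        m_CV.wait(lock); // nothing scheduled; wait for Add() or shutdown
                    else if (m_CV.wait_until(lock, m_Heap.top().deadline) == std::cv_status::timeout)
                    {
                        auto cb = m_Heap.top().callback;
                        m_Heap.pop();
                        lock.unlock();
                        cb(); // run outside the lock
                        lock.lock();
                    }
                    // otherwise we were notified: a new (possibly earlier) deadline arrived
                }
            }

        public:
            void Add(Clock::duration delay, std::function<void()> cb)
            {
                {
                    std::lock_guard lock(m_Mutex);
                    m_Heap.push({Clock::now() + delay, std::move(cb)});
                }
                m_CV.notify_one(); // wake the worker so it re-checks the earliest deadline
            }

            ~TimerQueue()
            {
                {
                    std::lock_guard lock(m_Mutex);
                    m_Stop = true;
                }
                m_CV.notify_one();
                m_Thread.join();
            }
        };
    Usage would be something like timerQueue.Add(std::chrono::minutes(5), []{ ... }); keeping in mind the callback runs on the worker thread, so whatever it touches needs to be thread-safe.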

  • @abraxas2658 · 6 months ago · +1

    19:34 If I wanted it to happen on the main thread, I'd probably have a game loop (as you showed) but with an integrated event system. This would be implemented as a min-heap with the time it should be called at as the value being sorted on. Then all timers could be checked with a single comparison. (If the lowest time has not been reached, all the others are guaranteed not to have been reached.) At this point though, we are very close to a full game engine core haha

  • @radumotrescu3832 · 5 months ago

    I think this is one of the situations where Asio (also packaged in Boost) actually makes the most sense, if you are planning to do this kind of thing multiple times in a project. If you have to run multiple callbacks on repeating and variable timers, and you have to handle IO in general, slapping in an Asio io_context and a few steady timers is super easy and extremely reliable. You also get nice functionality like early cancellation, error-code checking and other things that make it nice for production.
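    For comparison, a minimal sketch of the Asio flavour being described (assuming Boost.Asio is available):
        #include <boost/asio.hpp>
        #include <chrono>
        #include <iostream>

        int main()
        {
            boost::asio::io_context io;
            boost::asio::steady_timer timer(io, std::chrono::minutes(5));
            timer.async_wait([](const boost::system::error_code& ec) {
                if (!ec) // ec is set if the timer was cancelled early
                    std::cout << "5 minutes are up!\n";
            });
            io.run(); // runs until all pending work (our timer) completes
        }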

  • @harold2718 · 6 months ago · +20

    TBH I really don't like all those "sleep"-based solutions, which (1) consume an entire thread just to have it do nothing, and (2) make the actual waited time depend on when the kernel decides to schedule the thread after the sleep runs out, depending on CPU load at the time, etc. (At a scale of 5 minutes that's not very important, but still, it's a fundamentally inaccurate approach.) To be fair to the people who suggested it, C++ doesn't really give us the tools to actually build a timer. (but Windows does, so I guess we're back to #include <windows.h> after all)

    • @ashton7981 · 6 months ago · +5

      The waited time depending on the scheduling can be mitigated by using std::this_thread::sleep_until instead of std::this_thread::sleep_for. So instead of going off after the thread has been running for 5 min, it'll go off the first time it's scheduled after the 5 min mark.
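      Roughly the difference, as a sketch:
          #include <chrono>
          #include <thread>

          int main()
          {
              // Fix the wake-up point first; any setup done afterwards eats into
              // the 5 minutes instead of being added on top of them.
              auto wake = std::chrono::steady_clock::now() + std::chrono::minutes(5);
              // ... setup work ...
              std::this_thread::sleep_until(wake);
          }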

    • @anon_y_mousse · 6 months ago · +7

      With a hosted environment, you can't depend on a timer to more than a few milliseconds of resolution anyway. This isn't an unhosted realtime OS that most people will be using this for. Also, there are better OS's than Windows that someone should be using if they don't enjoy having their data stolen and sometimes erroneously deleted by a piracy checking algorithm.

    • @sub-harmonik · 6 months ago · +8

      if your timer is 5 minutes it shouldn't matter too much. It's when you get

  • @sub-harmonik · 6 months ago · +1

    generally the extensible way is to maintain a priority queue that contains time values and callbacks. Every loop, poll the first element of the priority queue and remove it, until its time value > current time. That way you can have as many timers as you like. (sketch below)
    Things get way more complex if you need accurate sleep without spinning, though. You pretty much need to get into platform-specific APIs, as well as setting certain thread priorities/interrupt rates. Recent Windows has pretty weird and relatively undocumented timer handling.
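    A sketch of that polling variant (single-threaded; PollTimers is a hypothetical name, called once per loop iteration):
        #include <chrono>
        #include <cstdio>
        #include <functional>
        #include <queue>
        #include <vector>

        using Clock = std::chrono::steady_clock;
        struct Timer { Clock::time_point when; std::function<void()> fn; };
        struct Later { bool operator()(const Timer& a, const Timer& b) const { return a.when > b.when; } };

        std::priority_queue<Timer, std::vector<Timer>, Later> timers;

        void PollTimers()
        {
            const auto now = Clock::now();
            while (!timers.empty() && timers.top().when <= now)
            {
                auto fn = timers.top().fn;
                timers.pop();
                fn(); // runs on the loop's thread, so no synchronization needed
            }
        }

        int main()
        {
            timers.push({Clock::now() + std::chrono::seconds(1), [] { std::puts("tick"); }});
            while (!timers.empty())
                PollTimers(); // in a real program, this is one step of the main loop
        }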

  • @Templarfreak · 3 months ago

    the best way to handle a timer: if you can avoid calculating the timer yourself, you should.
    what do i mean? if you have to calculate how long the timer has been running, or how much longer it has to run, then your timer *will always* be more inaccurate than when you are *not* doing that, because the calculation itself takes time, and that shifts the moment at which you observe the timer completing. not by a lot, but at best you get a different, more insidious version of an off-by-1 error that can cause problems that are very difficult to debug.
    so, the solution in this video is very good in that it avoids that problem. there are other useful features for generalized timers (pausing/unpausing, getting remaining time, getting current time, having more than one callback, whether to repeat the timer or not, etc.), but this covers the absolute basic necessities to get the timer working and functioning as one would typically expect, and that is good in my book

  • @dennissdigitaldump8619 · 2 months ago · +3

    Really it comes down to accuracy. If it's "tell me in 5 minutes", ms accuracy is probably too many resources. Versus: the rocket needs to launch in 5 minutes. Different techniques for each case. In the extreme: a separate thread that syncs & tests against an atomic clock, calibrate the system interrupt to the atomic clock, do some calculations & set an assembly interrupt. BTW, I had to do this once.

  • @woobilicious. · 6 months ago · +2

    I was thinking about the "busy wait" issue you end up with in game loops, especially if you need to serialize timers / handle game saves when the user quits, and I came up with: store all your deferred functions in a heap/priority queue, then just check the head of the queue and sleep for that amount of time. If you have a DSL, you could potentially have your code look like "bad" code that just calls sleep(), but really it's just a coroutine that yields the CPU.

  • @jongeduard · 6 months ago · +2

    Yeah, people can really think in too simple ways about such a thing, but that forum thread was really bad. LOL.
    As someone with many years of experience in several programming languages, especially C# professionally, but nowadays for example also Rust (and it is my favorite now), I can only say that the modern kind of async code at the end of the video was obviously the solution I was thinking about immediately, even though I didn't know the exact modern C++ implementation for async code.
    But this is how this kind of thing is generally done in modern programming. Many languages do very similar things.
    All of this also relates to programming experience. If you have done enough concurrent and parallel programming, it gradually becomes far more natural to think that way.

  • @robwalker4653 · 6 months ago · +1

    For the first example you showed, I would have just calculated now + 5 min when the timer is created and stored that as the target time. Then check in the loop whether the current time is greater than or equal to the target time; if so, the timer has triggered. Rather than casting a duration of one time minus the other every loop.
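    In std::chrono terms the suggestion is roughly this (a sketch, not the video's code):
        #include <chrono>
        #include <thread>

        int main()
        {
            using namespace std::chrono;
            const auto target = steady_clock::now() + minutes(5); // computed once
            for (;;)
            {
                // ... per-frame work ...
                if (steady_clock::now() >= target)
                    break; // timer triggered: one comparison, no duration math per loop
                std::this_thread::sleep_for(milliseconds(5));
            }
        }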

  • @gn0my · 12 days ago

    Hi, SWE here. Worked for a couple of Fortune 500 companies. In some areas you are interfacing directly with HW, so you don't have the luxury of doing for loops. In fact, you have to be extremely careful with threading. I will say that if your router or cable box says "wait should take about two minutes", that code is running on a while loop, a literal counter with an estimated time, and every mf is trying to poll that specific data, which could be coming in on various frequencies.

  • @sirzorg5728 · 20 days ago

    My solution idea:
    1. establish a boolean for "is the timer up".
    2. Fork off the timer process. It will sleep for 5 minutes, then set the "timer up" variable to true, then the fork will terminate.
    3. In the main thread, have a loop that does whatever, and at the end of each cycle checks if the timer is up. If the timer is up, the loop terminates, and whatever end conditions apply will apply.
    This avoids all race conditions, because the only variable shared by both threads is "blnTimerUp". The only downside is that you might be a bit late to the exact 5 min mark, because it might happen in the middle of the loop your main thread is running.
    If (somehow) the timer subprocess crashes without setting blnTimerUp, you could instead just check every loop if the subprocess still exists. This is a bit safer against random bullshit, although the downside here is premature completion.
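    A sketch of that design in C++, with one caveat: a plain bool shared between threads is a data race in C++, so the flag should be std::atomic:
        #include <atomic>
        #include <chrono>
        #include <thread>

        std::atomic<bool> blnTimerUp{false}; // name borrowed from the comment above

        int main()
        {
            // "Fork off" the timer as a thread that sleeps, sets the flag, and exits.
            std::thread([] {
                std::this_thread::sleep_for(std::chrono::minutes(5));
                blnTimerUp = true;
            }).detach();

            while (!blnTimerUp)
            {
                // ... one cycle of whatever the main thread does ...
            }
            // end conditions apply here
        }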

  • @jdrissel · 2 months ago · +1

    I like the classes in the Qt library, but that is awfully heavyweight if all you are using is a timer. My preference would be to use a mutex to signal between two threads, and use one thread for the timer and the other for the work. The work thread gets the mutex, does some work and releases it (if it doesn't get the mutex, the timer has expired). The timer thread checks the time and then loops, possibly using sleep or usleep if there is a lot of time left. When the timer expires it begins trying to grab the mutex. If it succeeds, the thread terminates.
    When the working thread fails to get the mutex, it cleans up (waits on the timer thread, destroys the mutex) and does whatever needs to happen when the timer expires, though possibly not in that order. This should work on almost any platform or OS. In general, I would sleep in the timer loop for 1 second if there is more than 10 seconds left, then 100ms until 0.1 second is left, then, assuming your processor is fast enough, 10ms, and so on, until I am down to some minimum time, at which point we just spin until the time is up.

  • @aakashgupta6285 · 6 months ago · +12

    As an embedded engineer, I would just use a built-in timer interrupt, which should be available on every platform, although the code itself is not portable.

  • @Tuniwutzi · 6 months ago

    It's interesting I never thought about how involved the simple question "how to delay code execution by X time" actually is.
    I usually work on stuff that is IO heavy and focuses on processing events as they come in (ie: a button was pressed, a socket received data, a cable was connected, ...). More often than not I already have an event loop, for example based on file handles and epoll/select. So my first instinct for a non-blocking timer was: create a timerfd and put it into the existing event loop.
    This video made me realize that I've never considered how many things become more straightforward if you're running a simulation that has one thread continuously running anyway.

  • @leedanilek5191 · 6 months ago · +7

    Yeah... i don't think "most applications" behave like games with a loop that runs at 60hz. At least I've never worked with one, from iOS to CLI tools to backend to database to analytics tools. Game development is a special kind of inefficient

  • @ДмитрийКовальчук-р9и · 5 months ago · +1

    That's a nice video! And what I like the most is that you seem to be one of the very few people I know who actually use steady_clock for timers and stopwatches, which is, by the way, the intended application of this tool. The vast majority resort to high_resolution_clock and then panic when their system time gets updated. And man, is it a pain to search for the root of such a bug, because it's really hard to reproduce on your own machine and the behaviour just seems random.
    By the way, any implementation of sleep only guarantees that you sleep for at least the timespan, or at least until the point in time. There is actually no upper limit on how much time may pass beyond that.
    PS: I guess the so-called expert wanted to do something similar to the main-loop concept, with a step of one second instead of the display frequency, but messed it up so badly that he ended up with recursive calls. As for your point in the video, I've seen a lot of samples of custom games where the time spent actually running the game was just completely forgotten in the wait function at the end of the loop.

  • @sayo9394 · 6 months ago · +2

    This is a great video 👏 I vote Yes for more videos of this format

  • @XiremaXesirin · 5 months ago

    16:11 I do have my own Code Review thoughts. 😉
    Specifically: I would create a time_point object at line 12, before we call std::async, which is the current time + the duration the user specified, and then inside the std::async call I would use this_thread::sleep_until instead of sleep_for. This way, you account for any possible delays in the execution of the lambda function. std::async is not _technically required_ to immediately start execution of the provided functor in a new thread, even when the std::launch::async option is provided. It might be delayed if another functor is running and the thread pool is exhausted. So by determining "this is when the thread should awaken" preemptively, you make it more likely that the time the user provides will end up being accurate.
    Of course, the real solution is using a boost::asio steady_timer with a dedicated executor, which lets you cut the code down to only like 3 lines, but I guess the requirement was to use only vanilla C++, so...

  • @harald4game · 2 months ago

    One point is missing:
    If you are already using a specific environment that has timers, I really recommend using those.
    In a program using the native Windows API, with a message loop and a window, use SetTimer and the WM_TIMER event.
    In a Qt application, use the timers provided by Qt, e.g. QTimer::singleShot.
    In MFC, use CWnd::SetTimer and the OnTimer(..) message handler.
    In a console application the std sleep_for is fine.
    And definitely, if more jobs are needed, don't create a thread per job (thread count is limited). Instead create a single worker thread together with a producer/consumer pattern.
    Also never use time() or any localized or adjustable time for these kinds of timeouts.

  • @sebibence02 · 6 months ago

    Timing in CS is an art form, basically an optimization between precision and CPU usage. The best approach is to go with the lowest-level hardware interrupts and register a callback on the interrupt event. In higher-level code, the more precise timing you want, the more frequently you need to schedule your timer thread, which leads to higher CPU usage. If you optimize for lower CPU usage, the thread will be scheduled less often, therefore decreasing precision (the thread won't be able to check the elapsed time as frequently). Considering this, the == approach in one of the replies is a huge mistake, because it is practically guaranteed that the timer will never compare exactly equal, due to the operating system's added thread-scheduling overhead. Even with hardware interrupts there will be a thread-swap operation losing some time until the instruction pointer is set to the callback method. Good stuff

  • @lukiluke9295 · 6 months ago · +7

    Wow, your first video on multithreading, and you introduced async, threads, sleep and context.
    I was actually looking for a video on the topic of multithreading this morning; couldn't find one, and now here it is, just a little bit more complex ^^

    • @akashpatikkaljnanesh · 6 months ago

      This isn't his first video on multithreading I believe

  • @trbry. · 6 months ago · +1

    love this kind of content almost as much as your other content, be it Hazel, code reviews and more

  • @Chriva · 6 months ago · +13

    Condition signals are probably something you want with huge delays like that.
    Especially if you want to exit cleanly without waiting forever

    • @ccgarciab · 6 months ago

      Do you mean std::condition_variable?

    • @Chriva · 6 months ago

      @@ccgarciab That would also work but it's really finicky to use with non-static bools (ie it's hard to spin up several instances of the same thing)

    • @ccgarciab · 6 months ago · +3

      @@Chriva what's the name of the API that you're referring in your original comment then?

  • @lurgee1706 · 6 months ago · +3

    sleep() is great until you realize you can't cancel your timer and notify the user about it right away; so if you do need to handle cancellations (either manual or due to the process's shutdown), you're screwed. So:
    * If you want a delay in the current thread, just use condition_variable::wait_for (see the sketch below).
    * If you want it to be executed asynchronously, either spawn a thread yourself or spawn an std::async task (which may very well spawn a thread under the hood anyway) and, again, wait on a condvar.
    * If you want your solution to be generic and scalable, you're bound to end up with some kind of scheduler, so you either use a library like Boost.Asio (whose timers do use a scheduler under the hood), or write one yourself.
    As "simple" as that. Frankly, seeing how easy it is to do the same thing in other languages like C#, coming back to C++ is just painful.
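    A minimal sketch of that cancellable wait_for version (hypothetical names, assuming C++17):
        #include <chrono>
        #include <condition_variable>
        #include <iostream>
        #include <mutex>
        #include <thread>

        std::mutex m;
        std::condition_variable cv;
        bool cancelled = false;

        void TimerThread()
        {
            std::unique_lock lock(m);
            // Returns true (right away) if Cancel() ran; false on timeout.
            if (cv.wait_for(lock, std::chrono::minutes(5), [] { return cancelled; }))
                std::cout << "Timer cancelled\n";
            else
                std::cout << "5 minutes are up!\n";
        }

        void Cancel()
        {
            {
                std::lock_guard lock(m);
                cancelled = true;
            }
            cv.notify_all();
        }

        int main()
        {
            std::thread t(TimerThread);
            // Cancel(); // calling this from anywhere wakes the timer immediately
            t.join();
        }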

    • @DerHerrLatz · 6 months ago

      Thank you for pointing out the obvious (since nobody else does). Would be nice to have an event or main loop in the standard library. But it would probably not work if you don't have an OS to provide the underlying functionality.

    • @BitTheByte · 1 month ago

      C# garbage collector kept eating my timers ;~;

  • @56a8d3f5 · 6 months ago

    futures returned by std::async can’t be destroyed while the task is still running (the destructor blocks), so usually there’s no need to check the status just to ‘make sure it doesn’t get destroyed before the thread finishes’ 17:35

  • @pschichtel · 6 months ago · +1

    The 300-stack-frames comment on the recursive function... there is a thing called tail call optimization, which apparently C++ compilers have been doing for a while, that optimizes this into a loop. There are quite a few people who think more in recursion than in iteration, especially in a functional context.
    the async vs thread thing is nitpicking for the sake of it. there is really no advantage to be had _here_ by using async instead of just directly spawning a thread. you don't gain control, you don't gain performance, you are just obscuring the fact that a thread is spawned and suspended by wrapping it up in async. And when this async stuff gets put into a context where it might be scheduled onto a thread pool, now you have a thread from the pool blocked for 5 minutes. From game engines you are probably used to cooperative multitasking, which could have been an interesting spin, and the one solution being bashed from the forum actually describes the idea of cooperative multitasking, albeit with some problems.

  • @sebastianconde1341 · 6 months ago

    Coming from a C background I would actually use an alarm :)
    Sort of like this:
    #include <signal.h>
    #include <unistd.h>
    void handler(int s) {
        /* Whatever you want */
    }
    int main(void) {
        struct sigaction sa;
        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGALRM, &sa, NULL);
        /* Set up the alarm for 5 minutes... */
        alarm(5*60);
        /* Rest of your code... */
    }
    This way, your code (the process executing it) will be interrupted 5 minutes after the alarm() call was made. You can keep doing work until then.
    When the interruption comes (from a SIGALRM signal) your code will execute the handler function.

  • @inulloo · 6 months ago · +2

    Your analysis and explanation were very helpful.

  • @R4ngeR4pidz · 6 months ago · +1

    9:20 yes, you're right, no consideration for how long that takes.
    But just to play devil's advocate, your game engine solution also has this flaw.
    Who says the time in between frames is short?
    In the absolute worst imaginable case (definitely not realistic, but still) it could take 10 minutes to render the frame, so when we get to the next frame, 10 minutes will have passed; we did not get notified directly after 5 minutes had passed

    • @gob9852 · 6 months ago · +1

      That would be indicative of problems that go much deeper than the stopwatch, and in such a situation the stopwatch would be the least of our concerns.
      In the context of this hypothetical though, and assuming this hypothetical is perfectly fine with nothing wrong to it at all, then multithreading would be the solution, since you're thinking of prematurely ending an operation.

  • @Evilanious · 6 months ago · +2

    I think the questions I'd like to see answered here are not 'how to do it in C++', but rather: how does the computer clock work? How do you query it? How do you keep it counting while doing other stuff? The library I'll end up using isn't the most important part. Though I guess if you need to solve this very specific problem, it's time-consuming to take that step back.

  • @rikschaaf · 25 days ago

    If you have a game loop and that game loop gets called often enough to provide a good enough resolution for your timers and you have data that can be processed in such a game loop, then your solution would indeed be viable, but do you see how it does have those 3 requirements? For a game, that's probably fine, because it (preferably) updates at least 60x per second and the processing in between is essentially just there to calculate the next frame based on the given inputs.
    Not all programs have a game loop though and you might not want to introduce one, just to be able to create a timer. In that case, the multi-threading solution is completely fine, as long as the thing you want to activate after the timer runs out is thread-safe.

  • @dawre3124 · 6 months ago

    If you need to wait for an accurate amount of time in a performance-critical multithreaded environment, as briefly mentioned in the video, keep in mind that sleep functions are not accurate (I would assume async cannot fix this). With more threads than CPU cores, the time a sleep oversleeps tends to go up too. For full accuracy, empty loops are the only way I know; for something reasonable, reduce the sleep time and follow it with an empty loop. When I had problems with this, I split the sleep into multiple calls (I felt like shorter sleeps were more accurate).
    I used something like this (C):
    void my_sleep_fast(const int64_t target_t)
    {
        int64_t time_diff;
        int64_t temp_sleep;
        time_diff = target_t - get_microseconds_time();
        temp_sleep = time_diff - (time_diff >> 3); /* sleep ~7/8 of the remaining time */
        while (temp_sleep > SLEEP_TOLERANCE_FAST)
        {
            usleep(temp_sleep);
            time_diff = target_t - get_microseconds_time();
            temp_sleep = time_diff - (time_diff >> 3);
        }
        usleep(SLEEP_TOLERANCE_FAST);
    }

  • @vloudster · 6 months ago

    Great video. You should do more videos like this, looking at fundamental things like timers.
    The video was funny in its treatment of the code suggestions in the forum, but also educational when you explained them and presented your professional solution.

  • @kuhluhOG · 6 months ago

    15:20 Heavily depends on the OS scheduler.
    Some are more accurate than others (and some OSes have multiple schedulers the user can choose from).

  • @MikkoRantalainen · 6 months ago

    15:25 I think if your sleep library call supports a 5 microsecond sleep time, it's just incorrectly implemented if it cannot accomplish it. On Linux, the sleep functions that accept time periods shorter than a full second do support sleeps of any length. However, very short sleeps are implemented as a busy sleep, where the CPU keeps running at 100% and polls the current time until exactly the right amount of time has elapsed. This obviously doesn't end up being accurate if you have more threads than physical CPU cores, because then the OS has to time-multiplex to run all the tasks, and your OS is not going to schedule multiple processes running at 100% within 5 microseconds, because the task-switching overhead in the CPU would kill nearly all the progress.

  • @dr99tm23
    @dr99tm23 6 months ago +2

    How do applications with subscriptions set the free-trial timer such that even if you have no internet connection, change the time on your PC, or shut it down for days, the program still calculates the time correctly and ends the free trial at the specific time 🤔?

    • @SimonVaIe
      @SimonVaIe 6 months ago +3

      If your application can run offline and the user has sufficient permissions on their system, I don't think there's a reliable way to calculate real-world time passing from within your app. As you say, you can look at the system time, but that can be changed. You can then look at system logs for system-time changes, but those can be altered. You can implement an always-running background service that keeps working even when the system time is changed, but a service can be taken down as well.
      Basically you have to trust data from a system you can't trust.

    • @ramiths8171
      @ramiths8171 6 months ago

      Some applications don't even open without internet

    • @jacksonmagas9698
      @jacksonmagas9698 6 months ago

      They just store the date-time when you started the free trial; then, each time you run it, the app checks whether the current date-time is past the stored start date plus the trial period.
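      A minimal sketch of that check (the names and the 30-day period are illustrative, and as noted above it trusts the system clock):
      #include <chrono>

      // trialStart would be loaded from wherever the app persisted it.
      bool TrialExpired(std::chrono::system_clock::time_point trialStart)
      {
          using namespace std::chrono;
          constexpr auto trialPeriod = hours(24 * 30); // assumed 30-day trial
          return system_clock::now() >= trialStart + trialPeriod;
      }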

  • @yabastacode7719
    @yabastacode7719 5 months ago +1

    My idea is to use the observer design pattern to watch for the thread to finish. When the thread finishes, it sends a signal to all subscribed objects to execute their functions (slots). I was inspired by Qt and its QTimer class, which was implemented using the observer design pattern. I am not sure if it should be single- or multi-threaded, though; I need to write code to figure it out.

  • @ciCCapROSTi
    @ciCCapROSTi 4 months ago

    Yeah, my first thought was the same, just a bit more complex: implement a component that can handle any number of timer requests and run it in the game loop just like any other component. Each frame it calls the callbacks for all the timers that expired. It probably needs a priority queue, or more likely a sorted vector. It has the advantage of calling the callbacks on the main thread.

  • @schrottiyhd6776
    @schrottiyhd6776 6 months ago +5

    15:21 With std::this_thread::sleep_for you're able to pass a timer in nanoseconds; however, you're at the mercy of the thread scheduler as to whether you will actually sleep for that time. I remember the function working with microsecond precision on my system when there was nothing else to do; it's a different story if you're at high CPU usage...
    I believe the best way for game engines to implement timers (for gameplay mechanics) is to have a singleton object with a specialised container which stores all created timers sorted by when they trigger; every frame you check which timers have been passed and run their callbacks... which is sort of what a thread scheduler does.
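    A minimal sketch of that idea (illustrative names; a std::priority_queue keeps the timer that expires soonest on top, and Update() is meant to be called once per frame):
    #include <chrono>
    #include <functional>
    #include <queue>
    #include <vector>

    class TimerManager
    {
    public:
        using Clock = std::chrono::steady_clock;

        void Add(Clock::duration delay, std::function<void()> callback)
        {
            m_Timers.push({ Clock::now() + delay, std::move(callback) });
        }

        // Fire every timer whose deadline has passed; call this each frame.
        void Update()
        {
            const auto now = Clock::now();
            while (!m_Timers.empty() && m_Timers.top().Deadline <= now)
            {
                auto timer = m_Timers.top();
                m_Timers.pop();
                timer.Callback();
            }
        }

    private:
        struct Timer
        {
            Clock::time_point Deadline;
            std::function<void()> Callback;
            bool operator>(const Timer& other) const { return Deadline > other.Deadline; }
        };
        // std::greater turns the priority_queue into a min-heap on Deadline.
        std::priority_queue<Timer, std::vector<Timer>, std::greater<Timer>> m_Timers;
    };
    The callbacks run on whichever thread calls Update(), so in a game that's the main thread, with the timer resolution limited to the frame rate.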

    • @UsernameUsername0000
      @UsernameUsername0000 6 months ago

      One doubt (as a non-game-engine dev): since it's a loop, and you're still processing the rest of your (heavy) code sequentially within the loop, what assures you that you'll actually be checking the time, say, every millisecond? At 60fps, doesn't each frame only run every ~17 milliseconds? What if you want a margin of ~1ms? 17ms might be a massive margin then. How do you guys deal with that?

    • @ramiths8171
      @ramiths8171 6 months ago

      ​@@UsernameUsername0000 it doesn't matter right? You don't need that precision when you can't even render a frame fast enough. As long as it is checked within a frame time it should be ok.

    • @schrottiyhd6776
      @schrottiyhd6776 6 months ago

      @@UsernameUsername0000 I can't imagine a case where you'd need such a precise timer that then executes a callback... but even if you did, you could use the same method I showed above and update it at the precision you need (1 ms or less for your example case), or, if you know you're not going to use that many of these precise timers (>20), you could also just fall back to std::thread and std::this_thread::sleep_for, because that's essentially what the thread scheduler is doing.

    • @sub-harmonik
      @sub-harmonik 6 months ago

      @@UsernameUsername0000 I dabble in programming music applications, where the audio thread cannot wait because you'd get audio dropouts. It can be a very complex subject, but basically you don't wait for anything on the audio thread: you set up a lock-free (and hopefully wait-free) queue to communicate between the higher-accuracy audio thread and lower-priority threads, and you put the UI and anything that needs a lot of time to process on those lower-priority threads.
      You don't ever lock the thread or sleep, unless it's for longer than some number of ms (maybe 10-20).
      Basically you shouldn't have unpredictable operations, or anything that makes a system call, on the audio thread while it's running.

  • @anon_y_mousse
    @anon_y_mousse 6 months ago +1

    If you only have a few timers, then the best way, assuming cross-platform is considered better than platform-specific, would be to take the current time, add the timer amount, and use that as the end trigger for that timer. Then it's a simple matter of checking in the main loop whether you've reached or passed the target time. That's basically the way a coroutine would work too, if we're talking about the original working method and not the unholy hidden-thread garbage that is usually used for async code these days. One of the things I love that they added with C++11 is UDLs, so adding 5min to a time is pretty easy and downright enjoyable now. I just wish they'd add that to C, especially since they added constexpr in C23.
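    A toy sketch of that deadline pattern with the chrono UDLs (the loop body is a placeholder):
    #include <chrono>
    #include <thread>

    int main()
    {
        using namespace std::chrono_literals;
        const auto deadline = std::chrono::steady_clock::now() + 5min;
        while (std::chrono::steady_clock::now() < deadline)
        {
            // ... the main loop's real work would go here ...
            std::this_thread::sleep_for(10ms); // placeholder so this toy loop isn't a hot spin
        }
        // Target time reached or passed: the "timer" has fired.
    }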

    • @reddragonflyxx657
      @reddragonflyxx657 6 months ago +1

      If you have a lot of timers you can put them all in a priority queue (sorted by earliest end time) and just check for/remove/process any finished timers from the front of the queue in your main loop.

    • @anon_y_mousse
      @anon_y_mousse 6 months ago

      @@reddragonflyxx657 As long as we're talking about a dozen or so, then yep. Once you get into the couple of dozen and above range, you might want to consider multithreading.

    • @reddragonflyxx657
      @reddragonflyxx657 6 months ago +1

      @@anon_y_mousse Why? You can check whether there's an expired timer in constant time and add/remove timers from the queue in log(n) time (per timer). If you have lots of timers going off, need to do a lot of work when they do, and can't wait for that in your main loop, multithreading is a good idea. Otherwise, based on "On Modern Hardware the Min-Max Heap beats a Binary Heap", you can expect a priority queue to take ~100 ns to pop a timer with ~100k entries in the queue.

    • @anon_y_mousse
      @anon_y_mousse 6 months ago

      @@reddragonflyxx657 If you've got modern hardware, then that's fine, but you should always aim for the most efficient method, because you might not always get to target modern hardware. Although hopefully you wouldn't need so many timers as to clog the main loop, especially on lower-powered devices. Maybe I'm just used to working on devices with speeds measured in Hz.

    • @reddragonflyxx657
      @reddragonflyxx657 6 months ago +1

      @@anon_y_mousse What hardware is slow enough for a heap to be too slow, but also supports multithreading? I think this solution would be excellent on a lot of embedded platforms, with reasonable tuning for cache/branch prediction/memory performance if performance is critical.
      That article should apply to the last decade or two of PCs at least.

  • @ColossusEternum
    @ColossusEternum 6 months ago +1

    Does the std lib have anything like the millis() function in the Arduino IDE?
    I used to create non-blocking timers like this (the event that triggers the timer sets a variable to millis()):
    if (event) {
        startTime = millis();   // event starts the timer
    }
    if (millis() - startTime >= delayDuration) {
        // code to execute
    }
    Sorry if you're unfamiliar with it, but millis() is a native Arduino function that counts up in milliseconds from the instant the MCU boots. The timing source is completely separate from the CPU, runs in the background, and doesn't influence code execution (at least not noticeably).

    • @kakaz98
      @kakaz98 1 month ago

      time_since_epoch() on a std::chrono::time_point gives you similar functionality.
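      For instance, a sketch of a millis()-style helper (steady_clock's epoch is unspecified, but differences between calls behave like Arduino's free-running counter):
      #include <chrono>
      #include <cstdint>

      // Milliseconds since the steady clock's (unspecified) epoch.
      std::int64_t millis()
      {
          using namespace std::chrono;
          return duration_cast<milliseconds>(
              steady_clock::now().time_since_epoch()).count();
      }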

  • @sviatoslavberezhnyi1059
    @sviatoslavberezhnyi1059 6 months ago

    When I was at university in 2006, I had a lab assignment about a timer. I don't remember exactly how I solved it, but the computer has a built-in timer that fires 18.2 times per second. I remember writing the program in C with some inline assembly: it copied the interrupt handler from a certain vector, then replaced it with my own. My handler executed 18.2 times per second, and in it I counted down the time the user had entered. When the timer completed, I sent a certain byte to port 61h (I may be wrong about the port) to make the speaker on the motherboard beep, signalling that the timer was over. Then I put back the handler I had copied earlier. I used C only so the user could enter the timer value and see a success message after it completed. That's the story :)

  • @HelloHigogo
    @HelloHigogo 6 months ago

    13:56 Forgive me if I'm wrong, but the problem I see with the code here is that if the other work you run apart from the timer has to finish each iteration before you check the time again, you'll almost certainly not land on 5 minutes either. What if that work takes 10 minutes itself?
    On an additional thread that example would be fine, but in the code at 13:56 it would be 10 minutes before the timer completed.

  • @Ozzymand
    @Ozzymand 6 months ago

    I never knew (nor did I think to check) whether async and promises exist in C++ after using them in JS. Awesome!

  • @AndrewRedW
    @AndrewRedW 6 months ago +1

    Inexperienced people writing funny code - my favourite form of entertainment :D

  • @JuniorDjjrMixMods
    @JuniorDjjrMixMods 6 months ago

    I have coded for more than a decade in gta3script (the proprietary scripting language that Rockstar Games has used from GTA 2 to GTA V, maybe GTA VI too), and it's just this:
    SCRIPT_START
    {
        WHILE timera < 300000
            WAIT 0
        ENDWHILE
        PRINT_STRING_NOW "ok" 1000
    }
    SCRIPT_END
    Or just WAIT 300000, but that would basically be a Sleep.
    PRINT_STRING_NOW would be PRINT_NOW (for translation support), but I'm using the modding variant for this example. The "NOW" means high priority; it doesn't matter here.
    Another detail: old GTAs like GTA SA have a bug and need a NOP or some other command at the start of the script, before any WHILE.
    But I like how big game companies simplified this.

  • @Sluggernaut
    @Sluggernaut 6 months ago +2

    Are you going to post this code on that forum post with some explanation or a link to this video? Why or why not?
    Edit: Never mind, the forum post is locked. That's a great reason NOT to.
    You never see the code at the end of the TimerAsync function. I THINK this is what it should be: std::future<void> TimerAsync(std::chrono::duration<Rep, Period> duration, std::function<void()> callback)
    I'm not 100% sure, but I have written the code exactly as TheCherno has it, except I also use "using namespace std;" because I'm a rebel. So apart from the portion that can't be seen, I have written the code exactly as shown (oh, and I renamed Period to TimePeriod), and it appears to run and work the same. So I'm fairly sure my guess is correct.
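    For anyone else recreating it, here is one self-contained guess at the full function consistent with that signature (a reconstruction, not necessarily TheCherno's exact code):
    #include <chrono>
    #include <functional>
    #include <future>
    #include <thread>

    template<typename Rep, typename Period>
    std::future<void> TimerAsync(std::chrono::duration<Rep, Period> duration,
                                 std::function<void()> callback)
    {
        // std::launch::async runs the task on its own thread. Note that a
        // future returned by std::async waits in its destructor, so discarding
        // the return value would block the caller until the timer fires.
        return std::async(std::launch::async, [duration, callback = std::move(callback)]()
        {
            std::this_thread::sleep_for(duration);
            callback();
        });
    }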

    • @thomasknapp7807
      @thomasknapp7807 6 months ago

      Sluggernaut, thanks for your suggestion on the missing code. As I'm sure you know, your suggestion works, producing the same result Cherno demonstrated when he set the timer to 5 seconds.

    • @Sluggernaut
      @Sluggernaut 6 months ago

      @@thomasknapp7807 no idea

    • @Sluggernaut
      @Sluggernaut 6 months ago

      @@thomasknapp7807 OK, I misunderstood your comment and have re-read it. Yes, the code I suggested did work. Just wanted to lend some help, potentially, to anyone struggling to recreate this as I was.

  • @J.D-g8.1
    @J.D-g8.1 6 months ago

    A sleep function is literally a timer, unless it's very accurate in embedded systems, in which case it can be no-ops tuned to clock cycles.
    But a human-time-scale sleep function just needs a basic:
    startTime = sysTime
    if (sysTime - startTime > x)

  • @stonebubbleprivat
    @stonebubbleprivat 2 months ago

    A busy-waiting while loop is a bad idea, as it uses a lot of resources. Sleeping instead of checking every 5 ms gives other threads time to run, and our thread doesn't get throttled by the scheduler. The scheduler puts threads that use up all of their time slice into a lower-priority queue and prioritizes I/O-bound threads that yield early via an interrupt.
    By checking constantly we waste our limited processing time. A thread that sleeps for five minutes keeps a high priority and is therefore likely to be scheduled very close to the 5-minute mark.

  • @josnardstorm
    @josnardstorm 6 months ago +1

    But the one downside to your method, as opposed to a multithreaded solution with sleep(), is that you might run into an issue with timezones. The best approach would seem to me to be a loop (multithreaded or single-threaded) that uses a chrono::duration object to measure 5 minutes directly.

    • @delta3244
      @delta3244 5 months ago

      How could there be an issue with timezones? steady_clock always increases at the same steady (constant) rate, hence its name. system_clock would have problems if the time were to suddenly change, but that's not what was used here.
      edit: what do you mean by "using a chrono::duration object to measure 5 minutes directly", anyway? Isn't that what was proposed in this video? Take a start time, subtract it from the current time to get a duration, and compare that duration to 5 minutes?

  • @KeyYUV
    @KeyYUV 6 months ago

    This really makes me appreciate the convenience of QTimer::singleShot(Duration msec, Functor &&functor). Implementing the event loop manually is such a pain.
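    For comparison with the hand-rolled approaches above, a minimal usage sketch (assumes Qt 5.8+ for the std::chrono overload of singleShot):
    #include <QCoreApplication>
    #include <QTimer>
    #include <chrono>
    #include <iostream>

    int main(int argc, char* argv[])
    {
        QCoreApplication app(argc, argv);
        using namespace std::chrono_literals;
        // Qt's event loop does the waiting; the functor fires once after 5 minutes.
        QTimer::singleShot(5min, [&app] {
            std::cout << "Timer finished\n";
            app.quit();
        });
        return app.exec();
    }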

  • @Xudmud
    @Xudmud 6 months ago

    I know I've done a similar thing using Boost (boost::asio::deadline_timer(), with boost::posix_time::seconds() to get the timer value), and that worked for me; it also stayed asynchronous, so it wouldn't hold up the rest of the system.
    (Of course, part of that was having to use C++0x. I'm sure there's a better way to do it now, but I had to work with what I had.)
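    Roughly what that looks like (a sketch against the older Boost API to match the C++0x constraint; newer code would use io_context and steady_timer):
    #include <boost/asio.hpp>
    #include <iostream>

    int main()
    {
        boost::asio::io_service io;
        boost::asio::deadline_timer timer(io, boost::posix_time::seconds(300));
        timer.async_wait([](const boost::system::error_code& ec) {
            if (!ec)
                std::cout << "Timer finished\n";
        });
        io.run(); // blocks until the pending asynchronous waits complete
    }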

  • @JkCxn
    @JkCxn 6 months ago

    You can put your timer-finished code inside the if (status == ready) block, or after the loop; then your timer class is responsible for fewer things.

  • @Omnifarious0
    @Omnifarious0 6 months ago

    There is another case you didn't exactly mention. I've often designed my programs around event loops (I started writing programs before multiple cores were common). In that case, you need a timer-expiry heap as part of your event loop, so the loop can easily determine whether it should wait forever for an event or only wait until a specific time.

  • @uNiels_Heart
    @uNiels_Heart 4 months ago

    In your example code you already go std::chrono all the way (including the literals), so your duration_cast is redundant and just clutters up the comparison unnecessarily (it's easier to read without it).
    Comparing any duration with any other duration (values of the duration type, of course, rather than unadorned numbers) works as intuitively expected even if they carry different units with them. Or I guess I should say it works *because* each of them carries its unit with it.
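    A tiny illustration of that point (C++14 chrono literals):
    #include <chrono>

    using namespace std::chrono_literals;

    // Mixed-unit comparisons convert to a common type automatically:
    static_assert(5min == 300s, "same span of time");
    static_assert(300000ms < 6min, "milliseconds compare fine against minutes");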

  • @TheEdmaster87
    @TheEdmaster87 6 months ago

    Timers are easy, especially on hardware; different CPUs and MCUs sometimes even have their own libraries for this, and on others you can set up a function to do it. It really depends on what type of timer you need and for what. The most important thing is not to block the other code that's supposed to run in the "background" while the timer runs.

  • @johnmckown1267
    @johnmckown1267 5 months ago

    Interesting. At 71, I've finished my professional learning time. But I continue to learn. Helps keep the brain functioning.

  • @IgorJoseSantos
    @IgorJoseSantos 2 months ago

    Great explanation. Thank you.

  • @DakkyW
    @DakkyW 6 months ago

    Actually, I'm very curious about what you'd consider a good solution for timing functions; I've been wondering what low-performance-impact options there are.

  • @StefaNoneD
    @StefaNoneD 6 months ago

    std::this_thread::sleep_for() does not use a monotonic clock by default. That is, if you change your system time, it affects the timer (at least on Windows).

  • @IncompleteTheory
    @IncompleteTheory 3 months ago

    Never time your loops using any variation of sleep(duration), because this always results in drift determined by the amount of work you run in your loop. Always look for an OS or language construct that sends you a signal, or runs your callback, at specified intervals. Game engines usually give you some kind of frame-rate synchronisation.
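    If you are stuck with sleeps, one common drift-free pattern is to schedule against absolute deadlines rather than sleeping a fixed duration after the work finishes (a sketch; the interval and tick count are arbitrary):
    #include <chrono>
    #include <thread>

    int main()
    {
        using namespace std::chrono;
        constexpr auto interval = 100ms;
        auto next = steady_clock::now();
        for (int tick = 0; tick < 10; ++tick)
        {
            next += interval;
            // ... per-tick work goes here ...
            std::this_thread::sleep_until(next); // lateness in one tick doesn't accumulate
        }
    }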

  • @gerrykavanagh
    @gerrykavanagh 1 month ago

    As a one-time JavaScript dev, the idea that async callbacks are a new thing in C++ is a bit mind-blowing.

    • @deltamico
      @deltamico 1 month ago

      Apparently they used void* shenanigans for it before