Harder Than It Seems? 5 Minute Timer in C++

  • Published: 26 Sep 2024
  • To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/... . You’ll also get 20% off an annual premium subscription.
    Patreon ► / thecherno
    Instagram ► / thecherno
    Twitter ► / thecherno
    Discord ► / discord
    Timer thread ► cplusplus.com/...
    Why I don't "using namespace std" ► • Why I don't "using nam...
    Hazel ► hazelengine.com
    🕹️ Play our latest game FREE (made in Hazel!) ► studiocherno.i...
    🌏 Need web hosting? ► hostinger.com/...
    💰 Links to stuff I use:
    ⌨ Keyboard ► geni.us/T2J7
    🐭 Mouse ► geni.us/BuY7
    💻 Monitors ► geni.us/wZFSwSK
    This video is sponsored by Brilliant.

Comments • 635

  • @TheCherno
    @TheCherno  4 months ago +65

    So… got any more comedy for me to look at? 👇
    Also don’t forget you can try everything Brilliant has to offer, free for a full 30 days, at brilliant.org/TheCherno . You’ll also get 20% off an annual premium subscription.

    • @shafiullahptm909
      @shafiullahptm909 4 months ago

      Bro, I really love your videos. Can you please make a C++ one-shot video?

    • @Silencer1337
      @Silencer1337 4 months ago

      I'm interested to learn how you would cap the framerate when vsync is off. I've always looked for alternatives to sleep() because it likes to oversleep, but never found anything.

    • @heavymetalmixer91
      @heavymetalmixer91 4 months ago +2

      Given that you're using the standard library in this video, I'd like to ask: as a game engine dev, what's your opinion on the standard library?
      Most game devs out there tend to avoid it, but I'm not sure why.

    • @theo-dr2dz
      @theo-dr2dz 4 months ago

      @@heavymetalmixer91
      Standard library design and implementations are optimised for correctness and generality. That can be suboptimal for performance. For example, the standard library calendar implementation is designed to get leap seconds right. That will probably not be relevant for games, but it will never be completely free.
      Also, the standard library uses exceptions quite extensively, and exceptions create some unpredictability in timing. So, if you really need ultimate performance and every CPU cycle counts, like in AAA games, high-frequency trading and those kinds of applications, creating some kind of custom implementation of the standard library (or some kind of alternative to it) can be worth the effort. But generally C++ code is very fast, even without doing all kinds of optimisation tricks. I would say the standard library implementations in leading compilers are fine, except in really cutting-edge performance-critical situations.

    • @Brahvim
      @Brahvim 4 months ago

      @@heavymetalmixer91 I don't know as much as other people around here, but I like to think the reason is that it brings new edge cases to learn about, it takes up space wherever it's used, and it may use a few `virtual`s here and there, I think... so, mostly because it's a library whose implementation they don't know a lot about!
      It _does_ make life easier once one gets into its mindset, though.

  • @christopherweeks89
    @christopherweeks89 4 months ago +1639

    Remember: this is the stuff we’re training our AI on

    • @monad_tcp
      @monad_tcp 4 months ago +148

      job security for humans

    • @enzi.
      @enzi. 4 months ago +10

      @@monad_tcp 😂😂

    • @Avighna
      @Avighna 4 months ago +6

      💀☠️💀☠️💀

    • @platin2148
      @platin2148 4 months ago +12

      It doesn't matter, as LLMs have inherent fuzziness, being statistical models.

    • @codinghuman9954
      @codinghuman9954 4 months ago +3

      good

  • @dhjerth
    @dhjerth 3 months ago +769

    I am a Python programmer and this is how I would solve it:
    import os
    import sys
    import time
    # All done, Python takes 5 minutes to start

    • @madking3
      @madking3 3 months ago +84

      I usually create a list with 500 random numbers and sort it with bubble sort; it gives me 5 min best case.

    • @jongeduard
      @jongeduard 3 months ago +10

      Yeah, but let's also talk about performance in Python and how you want to compare it to anything like C, C++ or Rust.

    • @thuan-jinkee9945
      @thuan-jinkee9945 3 months ago +5

      Hahahah

    • @MunyuShizumi
      @MunyuShizumi 3 months ago +15

      @@jongeduard whoosh

    • @iritesh
      @iritesh 3 months ago +11

      @@jongeduard wooosh

  • @akashpatikkaljnanesh
    @akashpatikkaljnanesh 4 months ago +256

    You want your users to hate you? Tell the user in the console to set a timer for 5 minutes, wait for them to press space to start the timer, and press space again to finish it. :)

    • @no_name4796
      @no_name4796 4 months ago +24

      Just have the user manually update the timer at this point...

    • @HassanIQ777
      @HassanIQ777 4 months ago +11

      just have the user manually write the code

    • @dandymcgee
      @dandymcgee 4 months ago +35

      just have the user go touch grass, then they won't need a timer.

    • @akashpatikkaljnanesh
      @akashpatikkaljnanesh 4 months ago +1

      @@dandymcgee Wonderful idea

    • @DasHeino2010
      @DasHeino2010 4 months ago

      Just have the user prompt ChatGPT! :3

  • @systemhalodark
    @systemhalodark 4 months ago +738

    Trolling is a art; Topnik1 is a true artist.

    • @mabciapayne16
      @mabciapayne16 4 months ago +10

      an* ( ͡° ͜ʖ ͡°)
      And I don't think he wrote bad code on purpose.

    • @херзнаетгражданинЕбеньграда
      @херзнаетгражданинЕбеньграда 4 months ago +49

      @@mabciapayne16 trolling is art, and @systemhalodark is an true artist

    • @mabciapayne16
      @mabciapayne16 4 months ago

      @@херзнаетгражданинЕбеньграда You should really learn English articles, my dear friend ( ͡° ͜ʖ ͡°)

    • @mabciapayne16
      @mabciapayne16 4 months ago

      @@херзнаетгражданинЕбеньграда a true artist* ( ͡° ͜ʖ ͡°)

    • @benhetland576
      @benhetland576 4 months ago +27

      And top it off with a recursive call _#seconds_ deep instead of iterating, just to increase the chance of stack overflow on long waits, I assume.

  • @asteriskman
    @asteriskman 4 months ago +167

    "Train the AI using the entire internet, it will contain all of human knowledge."
    The AI: "derp, but with extraordinary confidence"

  • @TwistedForHire
    @TwistedForHire 3 months ago +209

    Funny. I am an office application engineer, and my first thought on looking at your code was "noooooo!!!" We try to use as few resources as possible, and a 5ms constant loop is "terrible" for battery life. It's funny how people from different coding worlds approach a problem differently. My first instinct was much closer to the sleep/wait implementation (though I wouldn't waste an entire thread just to wait).

    • @Brenden.smith.921
      @Brenden.smith.921 3 months ago +38

      I was thinking the same thing. I would've had a thread sleeping and then doing whatever needs to be done after the sleep timeout using a callback. If there was a need to share data with the main thread and I didn't want to do safe multithreading, I'd use a signal to interrupt the main thread (unless it was something that wasn't very important, unless, unless, unless).
      Looping over and over like that and sleeping for 10ms is fundamentally the same solution as the second guy's, which was laughed at, except he slept for 1s. Just a lot sloppier.

    • @wi1h
      @wi1h 3 months ago +16

      @@Brenden.smith.921 As for your second point, it's not the same: the "game loop" solution presented is off from the target by at most 5 ms, while the second solution from the thread is off by (loop processing time) * (loop iterations, in that case 300).

    • @RepChris
      @RepChris 3 months ago +6

      As with anything "engineering" (to clarify: coding and CS have a lot of stuff sitting in the fuzzy zone between science and engineering; not trying to knock your status as an engineer), there isn't one "best" solution, even with just cost and development time in the picture. In a game engine the (relatively) minuscule overhead doesn't matter, since you're doing a lot of other stuff per frame/simulation step that is way more costly, and the inaccuracy you're going to get is probably a non-issue, since a game generally doesn't need a 5 minute timer to be accurate down to the millisecond. So the time spent thinking about a better solution and implementing it is better spent working on something more important.
      It's a completely different picture for something that needs to be very accurate, or actually power/compute efficient (which games certainly are not in any capacity, at least 99+% of them).

    • @youtubehandlesux
      @youtubehandlesux 3 months ago +10

      Me writing a video game and trying to make it stable up to 300 fps: A whopping 5ms??? In this economy???

    • @livinghypocrite5289
      @livinghypocrite5289 3 months ago +3

      Yeah, coming from yet another background, I immediately caught other stuff. Just reading the original problem, my immediate question was: how accurate does the timer need to be? I constantly have to explain to people that I can't give them millisecond accuracy on an operating system that isn't a real-time OS. So when I saw the Sleep solution, my immediate reflex was: that isn't going to be accurate, because a Sleep tells the OS to sleep at least that amount of time, so the OS can decide to wake my application at a later time. Could be fine, but it depends on how accurate the timer needs to be.
      When seeing the recursive function, I also noticed the stack usage of that solution, and also that a loop is simply faster than a recursive function: a function call has overhead, and building that stack frame takes CPU time, so simply by calling the function recursively the timer gets more inaccurate, without even looking at how long the work executed while the timer runs takes.

  • @Kazyek
    @Kazyek 4 months ago +122

    Good video overall, but the part about precision at 15:21 is a bit lacking. To be honest, precision is most likely not very important when sleeping for 5 minutes, but the overall takeaway of how sleep works is a bit wrong. Sleep will sleep for *AT LEAST* the time specified, but could sleep for quite a bit longer depending on other tasks' CPU utilization, the HPET (High Precision Event Timer) used by the system (or not; some systems might not even have one), the OS's timer resolution settings, the virtual timer resolution thing that Windows does on laptops for power saving, where it will actually stretch the resolution, etc...
    Therefore, when very high precision is desired (for example, a frame limiter in a game, for smooth frame pacing), you don't want to sleep all the way, but rather sleep for a significant portion of the time and busy-loop at the end (see the sketch after this thread).
    This fundamental misunderstanding of how sleeping works is why so many games have built-in frame limiters with absolutely garbage frame pacing, and why you get a much smoother experience by disabling them and using something like RTSS's frame limiter instead.

    • @Kazyek
      @Kazyek 4 months ago +30

      And by "quite a bit longer", I mean that on a windows laptop in default configuration, a sleep(1ms) might sleep for over 15ms sometimes!

    • @Fs3i
      @Fs3i 3 months ago +8

      Yeah, “make something happen at x time” is a hard problem, and really hard (near impossible) to write in a portable fashion

    • @shadowpenguin3482
      @shadowpenguin3482 3 months ago +2

      When I was younger I was always surprised how sleeping for 0ms is much slower than sleeping for 1 ms

    • @JohnRunyon
      @JohnRunyon 3 months ago

      You can get pre-empted anyway. If you need to guarantee it'll happen at an exact moment then you should be using an RTOS. Thankfully you almost never actually need to guarantee that.
      A frame limiter should be maintaining an average, not using a constant delay, and then it won't even matter if the OS delays you for 15ms.
      Btw, a 15ms jitter is completely and totally unnoticeable.

    • @TheArtikae
      @TheArtikae 1 month ago

      @@JohnRunyon Bro, that's a whole-ass frame. Two if you're running at 144 Hz.
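
  A minimal sketch of the hybrid wait Kazyek describes: sleep for most of the interval, then busy-loop the rest. The 2 ms spin margin is an assumed value; real code would tune it to the platform's measured wake-up jitter.

      #include <chrono>
      #include <thread>

      // Wait until `deadline`: coarse sleep first, then spin for the final
      // stretch so the kernel's oversleep cannot overshoot the target.
      void PreciseWaitUntil(std::chrono::steady_clock::time_point deadline)
      {
          using namespace std::chrono;
          constexpr auto spinMargin = 2ms; // assumed margin, tune per platform

          if (steady_clock::now() < deadline - spinMargin)
              std::this_thread::sleep_until(deadline - spinMargin);

          while (steady_clock::now() < deadline)
              ; // busy-wait for the remainder
      }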

  • @kleoCodes
    @kleoCodes 4 months ago +378

    I never thought I'd spend 20 minutes watching a video about an 11-year-old post about a 5 minute timer, but I learned something.
    Edit: 350 likes??? Damn, I must be famous

    • @monkeywrench4166
      @monkeywrench4166 3 months ago +6

      He doesn't look 11 year old tbh

    • @driz6353
      @driz6353 1 month ago

      @@monkeywrench4166 11 year old *post*

  • @AJMansfield1
    @AJMansfield1 3 months ago +143

    As a firmware engineer, my first instinct was "set the alarm peripheral to trigger an interrupt handler"

    • @jamesblack2719
      @jamesblack2719 3 months ago +15

      That was my thought also, but I come at it from a C background and his approach just didn't seem elegant. It seems overly complicated on something that is rather simple to do. Shame AI will be trained on this approach.

    • @cpK054L
      @cpK054L 3 months ago +1

      Wtf is an alarm peripheral?
      Did you mean Timer?

    • @AJMansfield1
      @AJMansfield1 3 months ago

      @@cpK054L On a system with a free-running, continuously increasing system clock, you set the alarm register to generate an interrupt when that system clock reaches the set value. In this case, you'd take the current time, add two minutes' worth of clock ticks to that value, and set the alarm to that value.

    • @3xtrusi0n
      @3xtrusi0n 3 months ago +14

      @@cpK054L MCUs have hardware timers that you can use without consuming a thread. Depending on the CPU and the type of timer implemented (in hardware), you can have it trigger a hardware interrupt, which then kicks off a given task/instruction.
      It's called an alarm peripheral because it is a peripheral on the hardware/MCU. You can also call it a timer; either name means the same thing. "Alarm" would indicate you are counting down, and "timer" would indicate you are counting up.

    • @cpK054L
      @cpK054L 3 months ago +2

      @3xtrusi0n I've never heard it called an alarm.
      Also, timers don't have "counters" from what I've seen... they only have flag bits.
      The ISR just waits for one to be raised, then you must reset it, otherwise it doesn't work the next cycle.

  • @cubemaster1298
    @cubemaster1298 4 months ago +125

    I am not trying to defend topnik1's code in the video, it is pretty bad indeed, BUT I am pretty sure it is not going to be 300 stack frames deep. From the looks of it, it is a tail-recursive function, so any major compiler (e.g. clang) will do tail-call optimization (see the sketch after this thread).

    • @jfmhunter375
      @jfmhunter375 3 months ago +2

      This should be higher

    • @JuniorDjjrMixMods
      @JuniorDjjrMixMods 3 months ago +13

      But then you would be expecting the compiler to fix a problem that shouldn't exist...

    • @MeMe-gm9di
      @MeMe-gm9di 3 months ago +8

      @@JuniorDjjrMixMods Tail-call optimization is often required to write certain algorithms "pretty", so it's often guaranteed.
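
  For illustration, a hedged reconstruction of the two shapes this thread compares (names assumed, not the forum's exact code). The recursive call is in tail position, so an optimizing build (e.g. clang or gcc at -O2) can turn it into the loop below; note that unlike RVO, tail-call optimization is not guaranteed by the C++ standard, so a debug build may still grow the stack.

      #include <chrono>
      #include <thread>

      // Tail-recursive countdown: the recursive call is the last thing the
      // function does, so the compiler may reuse the current stack frame.
      void TimerRecursive(int seconds)
      {
          if (seconds == 0)
              return;
          std::this_thread::sleep_for(std::chrono::seconds(1));
          TimerRecursive(seconds - 1); // tail call
      }

      // The equivalent loop, which stays at one stack frame with no optimizer help.
      void TimerLoop(int seconds)
      {
          while (seconds-- > 0)
              std::this_thread::sleep_for(std::chrono::seconds(1));
      }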

  • @Reneg973
    @Reneg973 4 months ago +20

    ... And then you notice that your 5 sec timer needs 5.03 sec on your first PC. On the second it takes 5.1 s, and after some debugging you find out the OS moved the thread onto an E-core and that your thread priority was not high enough. It would be nice to extend this video to handle more details, like higher/highest accuracy or lower/lowest CPU usage.

  • @andersonklein3587
    @andersonklein3587 4 months ago +120

    I'm surprised no one brought up interrupts. I don't know about modern C++, but I've seen in old-school assembly this concept of setting a "flag" that interrupts execution and calls/executes a function before handing back the CPU.

    • @MrHaggyy
      @MrHaggyy 4 months ago +46

      On embedded devices this works like a charm, as dozens of timers are running all your peripherals. So you pick one of them and derive the logic for all the other timed events.

    • @sopadebronha
      @sopadebronha 4 months ago +24

      This was literally the first thing that came to my mind. I think it's the instinctive solution for a firmware programmer.

    • @sinom
      @sinom 4 months ago +20

      I'm not an embedded programmer so I might just not know something, but AFAIK the C++ standard library doesn't provide any device-agnostic way of handling interrupts, so anything you do with interrupts will always be hardware-dependent and non-portable.
      If you are using some specific microcontroller and don't care about portability, then interrupts would probably be a good way of handling the problem.

    • @fullaccess2645
      @fullaccess2645 4 months ago +3

      If I want to run the callback on the main thread, could interrupts avoid the while loop that checks the task queue?

    • @sopadebronha
      @sopadebronha 4 months ago +5

      @@fullaccess2645 That's the whole point of interrupts.

  • @scowell
    @scowell 4 months ago +63

    In embedded land we have real timers! Talk about accurate... sub-nanosecond is easily doable. Overhead? It's a peripheral! Ignore it until it interrupts you.... or have it actually trigger an output without bothering you if you really need that accuracy. Love timers.

    • @JohnSmith-pn2vl
      @JohnSmith-pn2vl 3 months ago +4

      time is everything

    • @gonun69
      @gonun69 3 months ago +4

      They are great but you better have the datasheet and calculator ready to figure out how you need to set them up.

    • @RepChris
      @RepChris 3 months ago +6

      @@gonun69 That's the case for pretty much everything embedded.

    • @muschgathloosia5875
      @muschgathloosia5875 3 months ago +5

      @@gonun69 I can't imagine you would ever not have the datasheet ready

    • @scowell
      @scowell 3 months ago

      @@gonun69 Exactly... gets easier when using a PLL to run the clock... I do this for syncing to video.

  • @KieranDevvs
    @KieranDevvs 4 months ago +22

    The best solution for this is asynchronous execution. That way you can decide how the execution is performed, i.e. on the same thread or on a separate thread, and when the execution/timer is complete, you can decide whether you want to rejoin the execution context (thread) back to main and take the perf hit, or run your logic on the background thread without any perf hit.
    You get all the benefits, i.e. you don't need to worry about thread safety, and it's fully configurable in how you want it to run.

    • @phusicus_404
      @phusicus_404 3 months ago

      Wonderful, how to do it in C++?

    • @KieranDevvs
      @KieranDevvs 3 months ago

      @@phusicus_404 std::async? I thought that was pretty obvious.

    • @phusicus_404
      @phusicus_404 3 months ago

      @@KieranDevvs He used that in his code, so you must mean using it in some other way then.

    • @KieranDevvs
      @KieranDevvs 3 months ago

      @@phusicus_404 Nope, the way shown in the video is correct, more or less. The thread sleeping is bad, but apart from a few fixes, the general premise is there. If you put the thread to sleep and don't use a state machine to allow the thread to return, you block the main thread in async cases where you only use one thread (mainly in cases where you're using a UI).

  • @kuhluhOG
    @kuhluhOG 3 months ago +3

    17:15 Btw, a small nitpick for C++14 users (and above): move your callback into the lambda capture, because if the callback is an object with a defined operator() (like a lambda), it could have big-ish members (like a lambda capture).
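
  A sketch of that nitpick applied. The TimerAsync name and signature are modeled on the video but assumed, not copied from it; the point is the C++14 init-capture `callback = std::move(callback)`, which moves the callable into the lambda instead of copying it.

      #include <chrono>
      #include <functional>
      #include <future>
      #include <iostream>
      #include <thread>

      std::future<void> TimerAsync(std::chrono::milliseconds duration,
                                   std::function<void()> callback)
      {
          return std::async(std::launch::async,
              [duration, callback = std::move(callback)]()
              {
                  std::this_thread::sleep_for(duration);
                  callback(); // runs on the worker thread after the delay
              });
      }

      int main()
      {
          auto future = TimerAsync(std::chrono::minutes(5),
                                   [] { std::cout << "5 minutes elapsed\n"; });
          future.wait(); // block here; real code would do other work first
      }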

  • @mike200017
    @mike200017 3 months ago +1

    Coming from POSIX land, where anything interesting has a pollfd (file descriptor) at the bottom of it, event loops consist of gathering all the interesting events and then calling "poll" on their pollfds (or calling "epoll" or "select"). So, in that world, a timer like this is either implemented via a timerfd (you tell the kernel to create a "file" and trigger it at a specific time) or by simply setting the timeout of the poll call to the earliest wake-up time among your active timers (personally, I prefer that; it gives more control). No messing around with threads. Coroutines are another way to do the same thing (they are syntactic sugar on top of the same mechanisms).
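
  A minimal sketch of the timerfd approach described above (Linux-specific; error handling omitted):

      #include <sys/timerfd.h>
      #include <poll.h>
      #include <unistd.h>
      #include <cstdint>
      #include <cstdio>

      int main()
      {
          // One-shot timer that expires 5 minutes from now.
          int fd = timerfd_create(CLOCK_MONOTONIC, 0);
          itimerspec spec{};
          spec.it_value.tv_sec = 5 * 60;
          timerfd_settime(fd, 0, &spec, nullptr);

          // Block in poll until the timer fd becomes readable; in a real
          // event loop this pollfd would sit alongside sockets, pipes, etc.
          pollfd pfd{ fd, POLLIN, 0 };
          poll(&pfd, 1, -1);

          uint64_t expirations;
          read(fd, &expirations, sizeof(expirations)); // consume expiration count
          std::puts("5 minutes elapsed");
          close(fd);
      }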

  • @TryboBike
    @TryboBike 4 months ago +26

    This threaded timer has a subtle bug. If 'work' performed during the timer duration takes longer than the timer itself, then after the timer concludes, its scheduled work will need to wait for the 'join', thus delaying the execution by more than the 5 minutes. On the flip side, moving the 'timer' callback to the timer thread will require the work of main and 'timer' to be concurrent, which brings its own set of problems.
    Frankly, having any sort of 'delayed' execution done in a single thread while stuff is happening during the wait period is a pretty difficult problem to tackle. Unless it is something like a game, where there is a game loop, or an event-driven application. But even then, depending on the resolution of the loop, the wait period might be very, very different from what was specified.

    • @delta3244
      @delta3244 3 months ago +1

      That's not what thread::join() does. thread::join() has _no effect_ on the thread corresponding to the std::thread it is called on. It only affects the thread which calls thread::join(), by making it block until the std::thread which .join() was called on finishes.
      Without thread::join() at the end of main(), the code following the timer would fail to run if main ended before the timer did. That's why it exists. To reiterate: it does not tell the timed thread to do any work. It tells the main thread to wait for the timed thread's work to finish before ending the program. The timed thread does work on its own, once the OS wakes it up (which will happen sometime after the sleep duration).

  • @szirsp
    @szirsp 3 months ago +1

    20:00 My use cases for timers usually involve programming the interrupt controller, setting up HW timers or RTC alarms in microcontrollers... setting up "sleep"/standby/power-off states.
    What different worlds we live in :)

  • @nenomius1148
    @nenomius1148 4 months ago +11

    Cherno was walking along the internet, saw a forum, looked inside, and burned up.

  • @pastasawce
    @pastasawce 4 months ago +11

    Yeah def getting into thread pool territory. Would love to see more on this.

  • @virkony
    @virkony 4 months ago +4

    9:21 For that case, tail-call elimination should fire unless there were stack allocations done in "dowhatuwantinmeantime". So it effectively turns into a jump to the beginning of the function.

  • @Templarfreak
    @Templarfreak 15 days ago

    The best way to handle a timer: if you can avoid calculating the timer yourself, you should.
    What do I mean? If you have to calculate the current time the timer has run for, or how much longer it has to run, then your timer *will always* be less accurate than when you are *not* doing that, because the calculation itself takes time, and that changes what you observe when you check whether the timer is completed. Not by a lot, but at best you get a different and more insidious version of an off-by-one error that can cause problems that are very difficult to debug.
    So the solution in this video is very good in that it avoids that problem. There are other useful features for generalized timers (pausing/unpausing, getting remaining time, getting current time, having more than one callback, whether to repeat the timer, etc.), but this covers the absolute basic necessities to get the timer working as one would typically expect, and that is good in my book.

  • @robwalker4653
    @robwalker4653 3 months ago +1

    For the first example you showed, I would have just calculated now + 5 min when the timer is created and stored that as the target time. Then check in the loop whether the current time is greater than or equal to the target time; if so, the timer has triggered. Rather than computing a duration of one time minus the other each loop.
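
  A minimal sketch of that deadline approach, assuming a steady (monotonic) clock:

      #include <chrono>
      #include <iostream>

      int main()
      {
          // Compute the target once; the loop only compares time_points.
          const auto deadline = std::chrono::steady_clock::now() + std::chrono::minutes(5);

          while (std::chrono::steady_clock::now() < deadline)
          {
              // ... per-iteration work goes here ...
          }
          std::cout << "Timer finished\n";
      }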

  • @pschichtel
    @pschichtel 3 months ago +1

    On the 300-stack-frames comment about the recursive function: there is a thing called tail-call optimization, which C++ compilers have apparently been doing for a while, that optimizes this into a loop. There are quite a few people who think more in recursion than in iteration, especially in a functional context.
    The async vs. thread thing is nitpicking for the sake of it. There is really no advantage to be had _here_ by using async instead of just directly spawning a thread: you don't gain control, you don't gain performance, you are just obscuring the fact that a thread is spawned and suspended by wrapping it up in async. And when this async stuff gets put into a context where it might be scheduled onto a thread pool, now you have a thread from the pool blocked for 5 minutes. Coming from game engines you are probably used to cooperative multitasking, which could have been an interesting spin, and the one solution being bashed on the forum actually describes the idea of cooperative multitasking, albeit with some problems.

  • @abraxas2658
    @abraxas2658 3 months ago +1

    19:34 If I wanted it to happen on the main thread, I'd probably have a game loop (as you showed) but with an integrated event system. This would be implemented as a min-heap sorted on the time each event should be called at. Then all timers could be checked with a single comparison (if the lowest time has not been reached, all the others are guaranteed not to have been reached). At this point though, we are very close to a full game engine core haha

  • @rogercruz1547
    @rogercruz1547 3 months ago +1

    25 years ago when I started coding, I took setTimeout and setInterval in ActionScript for granted; I was 8.
    Now I was thinking of a thread with a loop and events that trigger callbacks on other threads, depending on the timers you set, to mimic that behaviour, but when you mentioned promises I realized it would be way easier to open a thread for each timer and just sleep...

  • @lukiluke9295
    @lukiluke9295 4 months ago +6

    Wow, your first video on multithreading, and you introduced async, threads, sleep and context.
    I was actually looking for a video on the topic of multithreading this morning - couldn't find one, and now here it is, just a little bit more complex ^^

  • @oleksandrpozniak
    @oleksandrpozniak 4 months ago +9

    As an embedded developer, I like to use SIGALRM and a handler when I'm sure I'll only need one timer at a time. If I need several timers, I use timer_create, a.k.a. Linux timers.
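
  A sketch of the timer_create route mentioned above (POSIX; older glibc needs -lrt; error handling omitted). SIGEV_THREAD delivers the expiry on a new thread, which avoids the usual signal-handler restrictions:

      #include <signal.h>
      #include <time.h>
      #include <unistd.h>
      #include <cstdio>

      void OnTimer(union sigval)
      {
          std::puts("5 minutes elapsed");
      }

      int main()
      {
          sigevent sev{};
          sev.sigev_notify = SIGEV_THREAD;
          sev.sigev_notify_function = OnTimer;

          timer_t timerId;
          timer_create(CLOCK_MONOTONIC, &sev, &timerId);

          itimerspec spec{};
          spec.it_value.tv_sec = 5 * 60; // one-shot, fires once
          timer_settime(timerId, 0, &spec, nullptr);

          pause(); // keep the process alive; real code would do other work
      }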

  • @lainiwakura3741
    @lainiwakura3741 4 months ago +2

    Maybe I'm ignorant here, but I don't understand why anyone would use something like
    while(true) { if(time_condition) { break;} do_your_processing(); }
    Wouldn't
    while(time_condition) { do_your_processing(); }
    be strictly better? I guess there could be external conditions (like while the window of your game is open) but that was neither asked for nor shown in any example. So, is there any good reason to use while(true){...}?

  • @leedanilek5191
    @leedanilek5191 3 months ago +5

    Yeah... I don't think "most applications" behave like games, with a loop that runs at 60 Hz. At least I've never worked on one, from iOS to CLI tools to backends to databases to analytics tools. Game development is a special kind of inefficient.

  • @sumikomei
    @sumikomei 4 months ago +81

    at first glance I totally didn't read "using namespace std::cherno_literals;"

    • @ADAM-qd9bi
      @ADAM-qd9bi 4 months ago +13

      I’ve always thought of us, and used to always misspell it with “cherno” 😭

  • @xlerb2286
    @xlerb2286 3 months ago +2

    Just shows that nothing is simple. What type of app are you working with? Do you need the thread to remain alive while the timer is running? Do you care about multi-platform? How much accuracy do you need? How important is low processing overhead? And the list goes on. (And that recursive example is going to keep me awake tonight; it takes a special type of person to write code like that.)

  • @motbus3
    @motbus3 4 months ago +167

    Fork Execve bash -c sleep 5

    • @yoshi314
      @yoshi314 4 months ago +11

      isn't that 5 seconds wait?

    • @sadhlife
      @sadhlife 4 months ago +22

      sleep 300

    • @ProtossOP
      @ProtossOP 4 months ago

      @@yoshi314 Easy fix, just multiply by 60

    • @Pritam252
      @Pritam252 4 months ago +1

      MS Windows be like:

    • @no_name4796
      @no_name4796 4 months ago +3

      Or bash -c sleep 300 on linux...

  • @mikefochtman7164
    @mikefochtman7164 3 months ago +2

    We had to run code in 'real time' in the sense of training simulators. This meant we had to perform a lot of calculations, then do I/O interfacing with the student's control panels in a way that the student couldn't tell the difference between the simulator and the actual control room. So we updated the I/O with new calculation results at LEAST every 250 ms. I know that sounds slow by gaming standards, but we did a LOT of physics calculations for an entire power plant.
    So we set up what had to be done in each 'frame' and used a repeating interrupt timer configuration. A frame ran its calcs and I/O, then slept until the next interrupt. If we occasionally 'missed' an interrupt because the calcs took too long, we had to 'catch up' the next frame. (One way to do this was to have the interrupt service routine increment a simple frame counter, and the main loop check whether we 'missed' an incremental step.)
    For time delays, we simply kept a counter in the main code that would count up to some value 'x', because we knew each execution of the code was one 'delta-time' step since the last execution. So for 5 minutes at a frame time of 250 ms, simply count up to 1200 (a minimal sketch follows this comment).
    This was a few years back, but you can see it's similar to your 'game engine' concept.
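
  A minimal sketch of that counting scheme, assuming the 250 ms frame period from the comment (the per-frame work is elided):

      #include <cstdio>

      int main()
      {
          constexpr int FrameMs     = 250;
          constexpr int TargetCount = 5 * 60 * 1000 / FrameMs; // 1200 frames = 5 minutes

          int frames = 0;
          while (frames < TargetCount)
          {
              // ... run one simulation frame, then sleep until the next interrupt ...
              ++frames;
          }
          std::puts("5 minutes of frame time elapsed");
      }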

  • @not_herobrine3752
    @not_herobrine3752 4 months ago +8

    My way would involve obtaining a timestamp at the beginning, checking every iteration of the application loop whether the time elapsed since that start is greater than or equal to the target duration, then doing whatever if said condition is true.

    • @ruix
      @ruix 4 months ago +3

      This is also what I thought

  • @aakashgupta6285
    @aakashgupta6285 4 months ago +11

    As an embedded engineer, I would just use a built-in timer interrupt, which should be available on all platforms, although not portable.

  • @jamesmackinnon6108
    @jamesmackinnon6108 3 months ago +3

    I remember when I was first starting programming I learned Visual Basic Script (why I chose that, I have no idea), and I was looking up how to wait for a period of time and ended up on a forum that said the way to set a timer was to ping Google, figure out how long that took, and then divide the time you want to wait by the length of the ping and ping Google that many times.

    • @tunk_2ton168
      @tunk_2ton168 3 months ago

      I also chose this path.
      I chose VBS because it doesn't require much: literally just open Notepad and you are good to go, and it's easy to learn.
      What did you move on to from that?

  • @thelimatheou
    @thelimatheou 2 months ago +2

    A fascinating historical snapshot of the Indian application development process. Thanks!

    • @siddy.g6146
      @siddy.g6146 2 months ago

      What makes it Indian?

    • @thelimatheou
      @thelimatheou 2 months ago

      @@siddy.g6146 copy, paste and iteration of code on stack exchange...

    • @thelimatheou
      @thelimatheou 2 months ago

      @@siddy.g6146 copying and pasting crappy code from stack exchange

    • @thelimatheou
      @thelimatheou 2 months ago

      @@siddy.g6146 copy/paste/stealing code from forums

  • @satibel
    @satibel 3 months ago +1

    Note that doing what you did with system_clock or high_resolution_clock (in case it's not steady) instead of steady_clock can work most of the time, but you'll get issues when the time changes due to daylight savings or such, and you can accidentally get a one-hour-and-5-minute timer.

    • @delta3244
      @delta3244 3 months ago

      or a zero minute timer, for that matter

  • @ДмитрийКовальчук-р9и
    @ДмитрийКовальчук-р9и 3 months ago +1

    That's a nice video! And what I like the most is that you seem to be one of the very few people I know who actually use steady_clock for timers and stopwatches, which is, by the way, the intended application of this tool. The vast majority resort to high_resolution_clock and then panic when their system time gets updated. And man, is it a pain to search for the root of such a bug, because it's really hard to reproduce on your own machine and the behaviour just seems random.
    By the way, any implementation of sleep only guarantees that you sleep for at least the timespan, or at least until the point in time. There is actually no upper limit on how much time can pass after that.
    PS: I guess the so-called expert wanted to do something similar to the main-loop concept, with a step of a second instead of the display frequency, but just messed it up so badly that he ended up with recursive calls. As for your point in the video, I've seen a lot of samples of custom game loops where the time spent actually running the game was just completely forgotten in the wait function at the end of the loop.

  • @Chriva
    @Chriva 4 months ago +13

    Condition signals are probably something you want with huge delays like that,
    especially if you want to exit cleanly without waiting forever (see the sketch after this thread).

    • @ccgarciab
      @ccgarciab 3 months ago

      Do you mean std::condition_variable?

    • @Chriva
      @Chriva 3 months ago

      @@ccgarciab That would also work, but it's really finicky to use with non-static bools (i.e. it's hard to spin up several instances of the same thing).

    • @ccgarciab
      @ccgarciab 3 months ago +3

      @@Chriva What's the name of the API you're referring to in your original comment, then?
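
  A sketch of the condition-variable idea from this thread: a timer that sleeps the full duration but can be cancelled from another thread without waiting it out. Class and method names here are invented for illustration.

      #include <chrono>
      #include <condition_variable>
      #include <iostream>
      #include <mutex>
      #include <thread>

      class CancellableTimer
      {
      public:
          // Returns true if the full duration elapsed, false if Cancel() won.
          bool Wait(std::chrono::steady_clock::duration duration)
          {
              std::unique_lock<std::mutex> lock(m_Mutex);
              return !m_CV.wait_for(lock, duration, [this] { return m_Cancelled; });
          }

          void Cancel()
          {
              {
                  std::lock_guard<std::mutex> lock(m_Mutex);
                  m_Cancelled = true;
              }
              m_CV.notify_all();
          }

      private:
          std::mutex m_Mutex;
          std::condition_variable m_CV;
          bool m_Cancelled = false;
      };

      int main()
      {
          CancellableTimer timer;
          std::thread worker([&] {
              if (timer.Wait(std::chrono::minutes(5)))
                  std::cout << "5 minutes elapsed\n";
              else
                  std::cout << "Timer cancelled\n";
          });

          std::this_thread::sleep_for(std::chrono::seconds(1));
          timer.Cancel(); // cancelled early here purely for demonstration
          worker.join();
      }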

  • @josnardstorm
    @josnardstorm 3 months ago +1

    But the one downside to your method, as opposed to a multithreaded solution with sleep(), is that you might run into an issue with timezones. The best scenario would seem to me to be a loop (multithreaded or single-threaded) that uses a chrono::duration object to measure 5 minutes directly.

    • @delta3244
      @delta3244 3 months ago

      How could there be an issue with timezones? steady_clock always increases at the same steady (constant) rate, hence its name. system_clock would have problems if the time were to suddenly change, but that's not what was used here.
      edit - what do you mean by "us[ing] a chrono::duration object to measure 5 minutes directly," anyways? Isn't that what was proposed in this video? Take a start time, subtract that from the current time to get a duration, compare duration to 5 mins?

  • @radumotrescu3832
    @radumotrescu3832 3 months ago

    I think this is one of the situations where Asio (also packaged in Boost) makes the most sense, if you are planning to do this kind of thing multiple times in a project. If you have to run multiple callbacks on repeating and variable timers, and you have to handle IO in general, slapping in an Asio io_context and a few steady timers is super easy and extremely reliable. You also get nice functionality like early cancellation, error-code checking and other things that make it nice for production.
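
  A sketch of that Asio setup (Boost.Asio). The handler runs when the timer expires, or early with operation_aborted if it is cancelled:

      #include <boost/asio.hpp>
      #include <chrono>
      #include <iostream>

      int main()
      {
          boost::asio::io_context io;
          boost::asio::steady_timer timer(io, std::chrono::minutes(5));

          timer.async_wait([](const boost::system::error_code& ec) {
              if (!ec)
                  std::cout << "5 minutes elapsed\n";
              else if (ec == boost::asio::error::operation_aborted)
                  std::cout << "Timer cancelled\n";
          });

          io.run(); // drives the event loop until no work remains
      }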

  • @valseedian
    @valseedian 3 months ago +1

    Haven't watched for even 1 second, but the answer is a thread that sleeps for nearly 5 min, then a few ms at a time until the target is reached, then calls a callback or sets a flag.
    When I was making my scratch GUI system in C++ I had to solve the timer issue, so I wrote a whole scheduler and event-handler subsystem.

  • @woobilicious.
    @woobilicious. 4 months ago +2

    I was thinking about the "busy wait" issue you end up with in game loops, especially if you need to serialize timers / handle game saves when the user quits, and I came up with storing all your deferred functions in a heap/priority queue, then just checking the head of the queue and sleeping for that amount of time. If you have a DSL, you could potentially have your code look like the "bad" code that just calls sleep(), but really it's a coroutine that yields the CPU.

  • @sub-harmonik
    @sub-harmonik 4 months ago +1

    Generally the extensible way is to maintain a priority queue that contains time values and callbacks. Every loop, poll the first element of the priority queue and remove entries until the first time value > current time. That way you can have as many timers as you like.
    Things get way more complex if you need accurate sleep without spinning, though. You pretty much need to get into platform-specific APIs as well as setting certain thread priorities/interrupt rates. Recent Windows has pretty weird and relatively undocumented timer handling.
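
  A sketch of that scheduler shape (single-threaded; callbacks run on whichever thread calls Poll(); the names are invented for illustration):

      #include <chrono>
      #include <functional>
      #include <queue>
      #include <vector>

      struct Timer
      {
          std::chrono::steady_clock::time_point deadline;
          std::function<void()> callback;
          bool operator>(const Timer& other) const { return deadline > other.deadline; }
      };

      // Min-heap keyed on deadline: the soonest timer is always at the top.
      class TimerQueue
      {
      public:
          void Add(std::chrono::steady_clock::duration delay, std::function<void()> callback)
          {
              m_Timers.push({ std::chrono::steady_clock::now() + delay, std::move(callback) });
          }

          // Call once per loop iteration: fires every timer whose deadline passed.
          void Poll()
          {
              const auto now = std::chrono::steady_clock::now();
              while (!m_Timers.empty() && m_Timers.top().deadline <= now)
              {
                  auto callback = m_Timers.top().callback;
                  m_Timers.pop();
                  callback();
              }
          }

      private:
          std::priority_queue<Timer, std::vector<Timer>, std::greater<Timer>> m_Timers;
      };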

  • @ColossusEternum
    @ColossusEternum 3 months ago +1

    Does the std lib have anything like the millis() function within the Arduino IDE?
    I used to create non-obstructive timers like this:
    // an event that should (re)start the timer records the current millis()
    if (event) {
        startTime = millis();
    }
    if (currentTime - startTime >= delayDuration) {
        // code to execute
    }
    Sorry if you're unfamiliar, but millis() is a native Arduino function that counts up in milliseconds from the instant the MCU boots. The timing source is completely separate from the CPU, runs in the background, and doesn't influence code execution (at least noticeably).
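
  On the question above: the standard library can reproduce millis() with steady_clock. A minimal sketch, with the caveat that its epoch is the first call rather than boot:

      #include <chrono>
      #include <cstdint>

      // Milliseconds since the first call; steady_clock never jumps backwards.
      int64_t Millis()
      {
          using namespace std::chrono;
          static const auto start = steady_clock::now();
          return duration_cast<milliseconds>(steady_clock::now() - start).count();
      }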

  • @trbry.
    @trbry. 4 months ago +1

    Love this kind of content almost as much as your other content, be it Hazel code reviews or anything else.

  • @FastRomanianGypsies
    @FastRomanianGypsies 4 months ago

    Now pause it. Get the amount of time left. Modify the time left. Change the callback. Allow for async callbacks. Allow for long-running callbacks that survive power disruptions and errors by writing to disk. And by writing to disk, include the callback with its payload as a serialized file, so when the timer resumes it doesn't require resetting the callback. Allow for greater precision by decreasing the sleep at the end of the loop as the timer nears completion. There's a lot more to creating a useful timer than just calling sleep on a separate thread, and implementing a timer with all the aforementioned features is quite the challenge, but in many cases absolutely necessary to handle real-world business logic.

  • @JuniorDjjrMixMods
    @JuniorDjjrMixMods 3 months ago

    I've coded for more than a decade in gta3script (the proprietary scripting language Rockstar Games has used from GTA 2 to GTA V, maybe GTA VI too), and it's just this:
    SCRIPT_START
    {
        WHILE timera < 300000
            WAIT 0
        ENDWHILE
        PRINT_STRING_NOW "ok" 1000
    }
    SCRIPT_END
    Or just WAIT 300000, but that would basically be a Sleep.
    PRINT_STRING_NOW would be PRINT_NOW (for translation support), but I'm using the modding variant for this example. The "NOW" means high priority; it doesn't matter here.
    Another detail: old GTAs like GTA SA have a bug and need a NOP or some other command at the start of the script, before any WHILE.
    But I like how big game companies simplified this.

  • @lurgee1706
    @lurgee1706 4 months ago +3

    sleep() is great until you realize you can't cancel your timer and notify the user right away, so if you do need to handle cancellations (either manual or due to process shutdown), you're screwed. So:
    * If you want a delay in the current thread, just use condition_variable::wait_for.
    * If you want it to be executed asynchronously, either spawn a thread yourself or spawn an std::async task (which may very well spawn a thread under the hood anyway) and, again, wait on a condvar.
    * If you want your solution to be generic and scalable, you're bound to end up with some kind of scheduler, so you either use a library like Boost.Asio (whose timers do use a scheduler under the hood), or write one yourself.
    As "simple" as that. Frankly, seeing how easy it is to do the same thing in other languages like C#, coming back to C++ is just painful.

    • @DerHerrLatz
      @DerHerrLatz 3 months ago

      Thank you for pointing out the obvious (since nobody else does). It would be nice to have an event loop or main loop in the standard library. But it would probably not work if you don't have an OS to provide the underlying functionality.

  • @Sluggernaut
    @Sluggernaut 4 months ago +2

    Are you going to post this code onto that forum post with some explanation or a link to this video? Why or why not?
    Edit: Never mind. The forum post is locked. That's a great reason NOT to.
    You don't ever see the code at the end of the TimerAsync function. I THINK this is what it should be: std::future<void> TimerAsync(std::chrono::duration duration, std::function<void()> callback)
    I'm not 100% sure, but I have written the code exactly as TheCherno has it, except I use "using namespace std;" as well, because I'm a rebel. So apart from the portion that can't be seen, I have written the code exactly as it is (oh, and I renamed Period to TimePeriod), and it appears to run and work the same. So I'm fairly sure my guess is correct.

    • @thomasknapp7807
      @thomasknapp7807 4 months ago

      Sluggernaut, thanks for your suggestion on the missing code. As I am sure you know, your suggestion works, producing the same result that Cherno demonstrated when he set the timer to 5 seconds.

    • @Sluggernaut
      @Sluggernaut 3 months ago

      @@thomasknapp7807 no idea

    • @Sluggernaut
      @Sluggernaut 3 months ago

      @@thomasknapp7807 OK, I misunderstood your comment and re-read it. Yes, the code I suggested did work. Just wanted to lend some help, potentially, to anyone struggling to recreate this as I was.

  • @sayo9394
    @sayo9394 4 months ago +2

    This is a great video 👏 I vote Yes for more videos of this format

  • @ciCCapROSTi
    @ciCCapROSTi 1 month ago

    Yeah, my first thought was the same, just a bit more complex: implementing a component that can handle any number of timer requests and runs in the game loop just like any other component. It calls the callbacks for all the timers that expired that frame. Probably needs a priority queue, or more likely a sorted vector. It has the advantage of calling the callbacks on the main thread.

  • @inulloo
    @inulloo 4 months ago +2

    Your analysis and explanation were very helpful.

  • @grimvian
    @grimvian 2 months ago

    A little rant: until two years ago, I was a happy C++ hobby programmer. I failed to understand how cout could replace printf, but chrono was the piece of C++ weirdness that finally convinced me that my IQ is way too low for C++. Inheritance, composition and file handling I felt okay with, although I also never understood why the heck C++ needs a 'million' ways to handle files...

  • @johnmckown1267
    @johnmckown1267 2 months ago

    Interesting. At 71, I've finished my professional learning time. But I continue to learn. Helps keep the brain functioning.

  • @vloudster
    @vloudster 4 months ago

    Great video. You should do more videos like this, looking at fundamental things like timers etc.
    The video was funny in relation to the code suggestions on the forum, but also educational when you explained them and presented your professional solution.

  • @IncompleteTheory
    @IncompleteTheory 24 days ago

    Never time your loops using any variation of sleep(duration), because this always results in a drift determined by the amount of stuff you run in your loop. Always look for an OS or language construct that sends you a signal, or runs your callback, at specified intervals. Game engines usually give you some kind of frame-rate synchronisation.

  • @sviatoslavberezhnyi1059
    @sviatoslavberezhnyi1059 3 months ago

    When I was at university in 2006, I had a lab about a timer. I don't remember exactly how I solved it, but the computer has a built-in timer that ticks 18.2 times per second. I remember that I wrote the program in C with some inline assembly. It copied the interrupt handler from a certain vector, then replaced it with my own; my interrupt ran 18.2 times per second, and in it I counted down the time the user had entered. When the timer completed, I sent a certain byte to port 61h (though I may be wrong) to make the speaker on the motherboard beep, signalling that the timer was over. Then I restored the interrupt handler I had copied earlier. I used C only so the user could enter the timer and see a success message after it completed. That's the story :)

  • @R4ngeR4pidz
    @R4ngeR4pidz 3 months ago +1

    9:20 Yes, you're right, no consideration for how long that takes.
    But just to play devil's advocate, your game engine solution also has this flaw.
    Who says the time in between frames is short?
    In the absolute worst imaginable case (definitely not realistic, but still) it could take 10 minutes to render the frame, so when we get to the next frame 10 minutes will have passed; we did not get notified right after 5 minutes had passed.

    • @gob9852
      @gob9852 3 months ago +1

      That would be indicative of problems that go much deeper than the stopwatch, and in such a situation the stopwatch would be the least of our concerns.
      In the context of this hypothetical, though, and assuming the hypothetical is perfectly fine with nothing wrong with it at all, multithreading would be the solution, since you're thinking of prematurely ending an operation.

  • @alice20001
    @alice20001 3 months ago

    1:16 THANK YOU!

  • @sebibence02
    @sebibence02 3 months ago

    Timing in CS is an art form, basically an optimization between precision and CPU usage. The best approach is to go with the lowest-level hardware interrupts and register a callback on the interrupt event. In higher-level code, the more precise timing you want, the more frequently you need to schedule your timer thread, which leads to higher CPU usage. If you optimize for lower CPU usage, the thread will be scheduled less often, decreasing precision (the thread won't be able to check the elapsed time as frequently). Considering this, the == approach in one of the replies is a huge mistake, because the timer is guaranteed never to be exactly equal, due to the operating system's added thread-scheduling overhead. Even with hardware interrupts there will be a thread-swap operation losing some time until the instruction pointer is set to the callback method. Good stuff.

  • @s3vster
    @s3vster 4 months ago +2

    std::this_thread::sleep_for(std::chrono::minutes(5))

  • @johnmckown1267
    @johnmckown1267 2 months ago +1

    9:35 holy s..t. I hope the person giving that answer isn't writing code as a profession.

  • @stefanm578
    @stefanm578 4 months ago +1

    Wouldn't the "recursive" function be tail-call optimized, thus using constant stack memory?

  • @XiremaXesirin
    @XiremaXesirin 2 months ago

    16:11 I do have my own Code Review thoughts. 😉
    Specifically: I would create a time_point object at line 12, before we call std::async, which is the current time + the duration the user specified, and then inside the std::async call I would use this_thread::sleep_until instead of sleep_for. This way, you account for any possible delays in the execution of the lambda function. std::async is not _technically required_ to immediately start execution of the provided functor in a new thread, even when the std::launch::async option is provided. It might be delayed if another functor is running and the thread pool is exhausted. So by determining "this is when the thread should awaken" preemptively, you make it more likely that the time the user provides will end up being accurate.
    Of course, the real solution is using a Boost.Asio steady timer with a dedicated executor, which lets you cut the code down to only about 3 lines, but I guess the requirement was to use only vanilla C++, so...

  • @stephenkamenar
    @stephenkamenar 4 months ago +1

    Sleeping is not accurate to 1ms; it's worse than that, more like 15ms.
    (But there is a way to make it 1-2 ms accurate with timeBeginPeriod.)
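
  A hedged sketch of that fix (Windows-only; link against winmm.lib). The raised timer resolution affects the whole system and costs power, so the calls should always be paired:

      #include <windows.h> // declares timeBeginPeriod/timeEndPeriod (winmm)

      int main()
      {
          timeBeginPeriod(1);   // request ~1 ms scheduler granularity
          Sleep(5 * 60 * 1000); // now typically wakes within 1-2 ms of target
          timeEndPeriod(1);     // always undo the request
      }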

  • @J.D-g8.1
    @J.D-g8.1 3 months ago

    A sleep function is literally a timer, unless it's very accurate in embedded systems, in which case it can be no-ops tuned to clock cycles.
    But a human-time-scale sleep func needs a basic
    startTime = sysTime
    if (sysTime - startTime > x)

  • @Xudmud
    @Xudmud 3 months ago

    I know I've done a similar thing using Boost (boost::asio::deadline_timer(), and then boost::posix_time::seconds() to get the timer value), and that worked for me, plus it kept things asynchronous so it wouldn't hold up the rest of the system.
    (Of course, part of that was having to use C++0x; I'm sure there's a better way to do it, but I had to work with what I had.)

  • @Yulenka-
    @Yulenka- 3 months ago

    The simplest way to fix the last solution is to just move the finish code after "timer.join()". Boom. This does exactly what was asked and doesn't actively waste CPU time (and battery) if there is no more work to be done (compared to the game loop approach, which is constantly spinning and checking).

  • @Tuniwutzi
    @Tuniwutzi 3 months ago

    It's interesting; I never thought about how involved the simple question "how do I delay code execution by X time" actually is.
    I usually work on stuff that is IO-heavy and focuses on processing events as they come in (i.e. a button was pressed, a socket received data, a cable was connected, ...). More often than not I already have an event loop, for example based on file handles and epoll/select. So my first instinct for a non-blocking timer was: create a timerfd and put it into the existing event loop.
    This video made me realize that I've never considered how many things become more straightforward if you're running a simulation that has one thread continuously running anyway.

  • @Ozzymand
    @Ozzymand 3 months ago

    I never knew (nor did I think to check) whether async and promises exist in C++ after using them in JS. Awesome.

  • @dawre3124
    @dawre3124 4 months ago

    If you need to wait for an accurate amount of time in a performance-critical multithreaded environment, as briefly mentioned in the video, keep in mind that sleep functions are not accurate (I would assume async cannot fix this). With more threads than CPU cores, the amount by which sleep oversleeps tends to go up too. For full accuracy, empty loops are the only way I know; for something reasonable, reduce the sleep time and follow it with an empty loop. When I had problems with this I split the sleep into multiple calls (I felt like shorter sleeps were more accurate).
    I used something like this (C):
    void my_sleep_fast(const int64_t target_t)
    {
        int64_t time_diff;
        int64_t temp_sleep;

        time_diff = target_t - get_microseconds_time();
        temp_sleep = time_diff - (time_diff >> 3);
        while (temp_sleep > SLEEP_TOLERANCE_FAST)
        {
            usleep(temp_sleep);
            time_diff = target_t - get_microseconds_time();
            temp_sleep = time_diff - (time_diff >> 3);
        }
        usleep(SLEEP_TOLERANCE_FAST);
    }

  • @mikeyoung3870
    @mikeyoung3870 3 months ago

    Could you get the current time and create an alarm function that compares the current epoch time to the stop time? Initialize the stop time by taking the time of its creation and adding the amount of time you'd want it to stop at. Do a comparison between the two and return a boolean when the time is over.

  • @sebastianconde1341
    @sebastianconde1341 4 months ago

    Coming from a C background, I would actually use an alarm :)
    Sort of like this:
    #include <signal.h>
    #include <unistd.h>
    void handler(int s) {
        /* Whatever you want */
    }
    int main(void) {
        struct sigaction sa;
        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGALRM, &sa, NULL);
        /* Set up the alarm for 5 minutes... */
        alarm(5 * 60);
        /* Rest of your code... */
    }
    This way, your code (the process executing it) will be interrupted 5 minutes after the alarm() call was made. You can keep doing work until then.
    When the interruption comes (from a SIGALRM signal), your code will execute the handler function.
  • @mcawesome9705
    @mcawesome9705 3 months ago

    among my first thoughts was something like:
    std::chrono::time_point<std::chrono::high_resolution_clock> stop =
        std::chrono::high_resolution_clock::now() + std::chrono::minutes(5);
    while (std::chrono::high_resolution_clock::now() < stop)
    {
        // do stuff
    }
    // do other stuff
    for most things™, this should be fine, but it's worth noting that it won't interrupt whatever it's in the middle of doing when 5 minutes pass.

  • @yabastacode7719
    @yabastacode7719 2 months ago

    My idea is to use the observer design pattern to watch for the thread to finish. When the thread finishes, it sends a signal to all subscribed objects to execute their functions (slots). I was inspired by Qt and its QTimer class, which was implemented using the observer design pattern. I am not sure if it should be single- or multi-threaded, though; I need to write code to figure it out.

  • @Evilanious
    @Evilanious 4 months ago +1

    I think the questions I'd like to see answered here are not 'how do I do it in C++', but rather: how does the computer clock work? How do you call it? How do you keep it counting while doing other stuff? The library I end up using isn't the most important part. Though I guess if you need to solve this very specific problem, it's time-consuming to take that step back.

  • @56a8d3f5
    @56a8d3f5 3 months ago

    A future returned by std::async can't be destroyed while its task is still running (the destructor blocks), so usually there's no need to check its status just to 'make sure it doesn't get destroyed before the thread finishes' 17:35

  • @KeyYUV
    @KeyYUV 3 months ago

    This really makes me appreciate the convenience of QTimer::singleShot(Duration msec, Functor &&functor). Implementing the event loop manually is such a pain.

  • @charl3782
    @charl3782 3 months ago

    At the risk of stating the obvious, the stuff that the user wants to do after the timer should be executed in the main loop after the future status is ready, and not in the callback function or the TimerAsync function.

  • @lePoMo
    @lePoMo 3 months ago

    On recursion: look up tail recursion.
    The example given won't go 500 stack frames deep, because the recursive call is the last call of the function. Nothing needs to be preserved, so the compiler will optimize it.
    (Still a horrible use of recursion, I agree on that.)

  • @andreyv116
    @andreyv116 1 month ago

    First thought was alarm(), but that introduces a non-Windows* OS dependency, and signals are also a mess when multithreaded code is involved.
    Edit: also, only one active timer per program, unless you implement some form of multiple-timer management.
    * OSes that are neither Windows nor Linux/BSD/Mac are generally outside the scope of desktop apps.

  • @johnmckown1267
    @johnmckown1267 2 months ago

    Hmm, I don't know if it is possible here, but the OS I used at work had a built-in facility to send a "signal" to the main program which could invoke a routine on the main thread, suspending the "regular" code. Returning from that async routine would let the main code resume exactly where it was interrupted.

  • @HelloHigogo
    @HelloHigogo 4 months ago

    13:56 Forgive me if I'm wrong, but the problem I see with the code here is that if the other work you run apart from the timer needs to finish every time before you check the time again, you'll almost certainly not land on 5 minutes either. What if the work you run takes 10 minutes itself?
    On an additional thread that example would surely be fine, but in the code at 13:56 it would be 10 minutes before the timer completed.

  • @sakuyarules
    @sakuyarules 3 months ago

    If you run things in a game based on the framerate, you get funny results when people go for really high or really low framerates. I believe there was an issue with shield recharge in one of the Mass Effect games because of this.
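    The usual fix is to scale updates by elapsed time instead of per frame; a minimal sketch with made-up names (shield, rechargeRate):
    #include <chrono>

    void RunGameLoop(bool& running)
    {
        float shield = 0.0f;
        const float rechargeRate = 5.0f; // units per second, illustrative
        auto last = std::chrono::steady_clock::now();

        while (running) {
            auto now = std::chrono::steady_clock::now();
            float dt = std::chrono::duration<float>(now - last).count(); // seconds
            last = now;

            shield += rechargeRate * dt; // same speed at 30 fps or 300 fps
            // ... rest of the frame ...
        }
    }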

  • @whydoineedausername1386
    @whydoineedausername1386 2 months ago

    But timer() is tail recursive. It will optimise to a loop. Almost all of the FP world relies on this. (And in those languages it's a guaranteed optimisation, like RVO is in C++, so it'll happen in debug builds)

  • @andy02q
    @andy02q 3 months ago

    Do I understand correctly that, assuming your loop runs at 100 fps, you're going to call your function, including the ratio lookup, the multiplication and the if-clause, 30,000 times for a 5-minute timer?
    If that's true, your solution is still much better than the other ones, but I feel like there must be a more efficient way to do this.
    I imagine a scenario in which the *process main thread queue here* starts a 4-5 minute timer every single frame, and as soon as the first timer ends something should happen and all the other timers should be discarded. I think if you tried to do that with your code, the program might become a bit unresponsive. Of course, that's not the original specification, and I don't want to take away from your relatively good solution.

  • @Omnifarious0
    @Omnifarious0 3 months ago

    There is another case you didn't exactly mention. I've often designed my programs around event loops (I started writing programs before multiple cores were common). In that case, you need a timer-expiry heap as part of your event loop, so the loop can easily determine whether it should wait forever for an event or only until a specific time.
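    A compressed sketch of that structure (illustrative names, using std::priority_queue as the expiry heap):
    #include <chrono>
    #include <functional>
    #include <optional>
    #include <queue>
    #include <vector>

    using Clock = std::chrono::steady_clock;

    struct TimerEntry
    {
        Clock::time_point deadline;
        std::function<void()> callback;
        // Order the heap so the earliest deadline sits on top.
        bool operator>(const TimerEntry& other) const { return deadline > other.deadline; }
    };

    std::priority_queue<TimerEntry, std::vector<TimerEntry>, std::greater<TimerEntry>> s_Timers;

    // The event loop asks: how long may I block waiting for events?
    // std::nullopt means "no timers pending -- wait indefinitely".
    std::optional<Clock::duration> NextWait()
    {
        if (s_Timers.empty())
            return std::nullopt;
        return s_Timers.top().deadline - Clock::now();
    }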

  • @rafazieba9982
    @rafazieba9982 4 months ago

    Those are two different ways to build timers:
    1) You have a loop (a UI loop, for example) and you need your code to execute on that thread.
    2) You need something to be done in some amount of time and you don't care which thread it runs on.
    Relying on a UI loop for timers is only marginally different from relying on "windows.h", unless you are doing it for UI-specific functionality (updating an animation).

    • @TurtleKwitty
      @TurtleKwitty 4 months ago

      it's VERY different if you care about being available on other platforms at all

  • @StefaNoneD
    @StefaNoneD 3 months ago

    std::this_thread::sleep_for() does not use a monotonic clock by default. That is, if you change your system time, it affects the timer (at least on Windows).
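    One way to make the monotonic intent explicit is to sleep until a std::chrono::steady_clock deadline instead (whether the underlying wait is truly immune to system-clock changes remains implementation-specific):
    #include <chrono>
    #include <thread>

    int main()
    {
        // steady_clock never jumps, unlike the adjustable system clock.
        auto deadline = std::chrono::steady_clock::now() + std::chrono::minutes(5);
        std::this_thread::sleep_until(deadline);
    }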

  • @anon_y_mousse
    @anon_y_mousse 4 months ago +1

    If you only have a few timers, then the best way (assuming cross-platform is considered better than platform-specific) would be to take the current time, add the timer amount, and use that as the end trigger for that timer. Then it's a simple matter of checking in the main loop whether you've reached the target time or beyond. That's basically the way a coroutine would work too, if we're talking about the original working method and not the unholy hidden-thread garbage that is usually used for async code these days. One of the things I love that they added with C++11 is UDLs, so adding 5min to a time is pretty easy and downright enjoyable now. I just wish they'd add that to C, especially since they added constexpr with C23.

    • @reddragonflyxx657
      @reddragonflyxx657 4 months ago +1

      If you have a lot of timers, you can put them all in a priority queue (sorted by earliest end time) and just check for/remove/process any finished timers at the front of the queue in your main loop.
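      A minimal sketch of that main-loop check (deadline-only entries for brevity; real ones would carry a callback or an ID):
      #include <chrono>
      #include <functional>
      #include <queue>
      #include <vector>

      using Clock = std::chrono::steady_clock;

      // Min-heap: the earliest deadline is always at the front.
      std::priority_queue<Clock::time_point,
                          std::vector<Clock::time_point>,
                          std::greater<>> s_Deadlines;

      void DrainExpiredTimers()
      {
          auto now = Clock::now();
          // Only the front can have expired first, so no wasted scanning.
          while (!s_Deadlines.empty() && s_Deadlines.top() <= now) {
              s_Deadlines.pop();
              // ... process whatever this timer was meant to trigger ...
          }
      }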

    • @anon_y_mousse
      @anon_y_mousse 4 months ago

      @@reddragonflyxx657 As long as we're talking about a dozen or so, then yep. Once you get into the couple-of-dozen-and-above range, you might want to consider multithreading.

    • @reddragonflyxx657
      @reddragonflyxx657 4 months ago +1

      @@anon_y_mousse Why? You can check whether there's an expired timer in constant time, and add/remove timers from the queue in log(n) time (per timer). If you have lots of timers going off, need to do a lot of work when they do, and can't wait for that in your main loop, multithreading is a good idea. Otherwise, based on "On Modern Hardware the Min-Max Heap beats a Binary Heap", you can expect a priority queue to take ~100 ns to pop a timer with ~100k entries in the queue.

    • @anon_y_mousse
      @anon_y_mousse 4 months ago

      @@reddragonflyxx657 If you've got modern hardware, then that's fine, but you should always aim for the most efficient methods, because you might not always get to target modern hardware. Although hopefully you wouldn't need so many timers as to clog the main loop, especially on lower-powered devices. Maybe I'm just used to working on devices with speeds measured in Hz.

    • @reddragonflyxx657
      @reddragonflyxx657 4 months ago +1

      @@anon_y_mousse What hardware is slow enough for a heap to be too slow, but also supports multithreading? I think this solution would be excellent on a lot of embedded platforms, with reasonable tuning for cache/branch prediction/memory performance if performance is critical.
      That article should apply to the last decade or two of PCs at least.

  • @oschonrock
    @oschonrock 3 months ago

    consider avoiding std::function and the likely heap allocation... take the callback as a template parameter (template <typename Callback>) instead.
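    A sketch of that suggestion, borrowing the TimerAsync name mentioned elsewhere in the thread (the detached-thread body here is illustrative, not the video's implementation):
    #include <chrono>
    #include <thread>
    #include <utility>

    // A template parameter stores the callable inline, avoiding
    // std::function's type erasure and possible heap allocation.
    template <typename Callback>
    void TimerAsync(std::chrono::milliseconds delay, Callback&& callback)
    {
        std::thread([delay, cb = std::forward<Callback>(callback)]() mutable {
            std::this_thread::sleep_for(delay);
            cb();
        }).detach();
    }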

  • @Mahm00dM0hanad
    @Mahm00dM0hanad 4 months ago +1

    Sorry, off topic: there is a small tear/wrinkle/fold, or I don't know what exactly, right in the center of your sweater's collar. I just couldn't stop focusing on it the entire video; it's like a small talking mouth, especially when it's aligned with the microphone. Super funny

  • @LundBrandon
    @LundBrandon 3 months ago

    I love embedded because there I have direct access to the cpu's timers/counters and interrupts ;)

  • @dr99tm23
    @dr99tm23 4 months ago +2

    How do applications with subscriptions set the free-trial timer so that, even if you have no internet connection, change your PC's time, or shut the machine down for days, the program still calculates the time correctly and ends the free trial at the right moment 🤔?

    • @SimonVaIe
      @SimonVaIe 4 months ago +3

      If your application can run offline and the user has sufficient permissions on their system, I don't think there's a reliable way for your app to measure real-world time passing. As you say, you can look at the system time, but that can be changed. You can then look at system logs for system-time changes, but those can be altered. You can implement an always-running background service that works even when the system time is changed, but a service can be taken down as well.
      Basically you have to trust data from a system you can't trust.

    • @ramiths8171
      @ramiths8171 4 months ago

      Some applications don't even open without internet

    • @jacksonmagas9698
      @jacksonmagas9698 4 months ago

      They just store the date/time when you started the free trial; then, whenever you run it, the app checks whether the current date/time is more than the trial period past the stored start date
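      A hedged sketch of that check (LoadTrialStart() is a hypothetical stand-in for however the app persisted the first-launch timestamp):
      #include <chrono>

      // Hypothetical helper: reads the stored first-launch timestamp
      // back from disk, the registry, etc.
      std::chrono::system_clock::time_point LoadTrialStart();

      bool TrialExpired()
      {
          using namespace std::chrono;
          constexpr auto trialLength = hours(24 * 30); // e.g. a 30-day trial
          return system_clock::now() - LoadTrialStart() > trialLength;
      }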

  • @kuhluhOG
    @kuhluhOG 3 months ago

    15:20 This heavily depends on the OS scheduler.
    Some are more accurate than others (and some OSes offer multiple schedulers the user can choose from).

  • @ender-gaming
    @ender-gaming 3 months ago

    I don't code in C++ (I mostly do PowerShell scripting), but when I saw this I thought of a simple while loop, like your final solution with a running timer. Though I'm interested why you used 'while (true)' instead of 'while (status != std::future_status::ready)'.
    I will say timing code is always an interesting challenge with deep rabbit holes, at least in the languages I've played with; usually built-in functions have some noticeable overhead.
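    The loop being asked about might look like this (a sketch; the lambda stands in for the video's timer task):
    #include <chrono>
    #include <future>
    #include <thread>

    int main()
    {
        std::future<void> timer = std::async(std::launch::async, [] {
            std::this_thread::sleep_for(std::chrono::minutes(5));
        });

        std::future_status status;
        do {
            // Poll without blocking the loop for long.
            status = timer.wait_for(std::chrono::milliseconds(100));
            // ... other per-iteration work ...
        } while (status != std::future_status::ready);
    }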