Probably do a follow-up to this about ReaderWriterLockSlim. In most real-world cases, you want reads to be concurrent but writes to block. Another option would be covering cache strategies for use in threading.
Thank you graciously.
That's true; however, it is a thread-affine lock type, so unlike SemaphoreSlim it does not have async support.
There is another thing to consider. If you accidentally call the Release method from any other thread, the semaphore allows it. To ensure that Release can only be called from the owner thread, the Mutex class can be used.
A normal lock (Monitor) can be used to ensure release from the same thread. Release from any thread is specifically the reason a semaphore is used. Or am I missing something?
@@filiplaubert5001 You are right.
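For reference, a minimal console sketch of the ownership difference being discussed (the names are just illustrative):
using System;
using System.Threading;

class OwnershipDemo
{
    static void Main()
    {
        // SemaphoreSlim has no notion of an owning thread: any thread may call Release.
        var semaphore = new SemaphoreSlim(1, 1);
        semaphore.Wait();                                  // acquired on the main thread
        var t1 = new Thread(() => semaphore.Release());    // released on a different thread: allowed
        t1.Start();
        t1.Join();

        // Mutex is owned by the acquiring thread: releasing from elsewhere throws.
        var mutex = new Mutex();
        mutex.WaitOne();                                   // acquired on the main thread
        var t2 = new Thread(() =>
        {
            try { mutex.ReleaseMutex(); }                  // wrong thread tries to release
            catch (ApplicationException ex) { Console.WriteLine(ex.Message); }
        });
        t2.Start();
        t2.Join();
        mutex.ReleaseMutex();                              // the owner releases normally
    }
}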
Glad to see you took it down and reuploaded with my feedback implemented! It's important to lead by example, and it's a much better scenario to have a beginner ask "wait why is the Wait happening outside of the try?" and end up learning about this pitfall through inquisitive learning.
Good job Nick. I'm proud to share your videos with my coworkers consistently
I would say it is really hard to explain something like that in such a short period of time... but you did it. Wow.
You missed one important thing: the classic lock allows the same thread to re-enter the synchronized section, whereas SemaphoreSlim can only be entered the specified number of times. SemaphoreSlim is not the same kind of lock as lock is for sync code.
If I understand you correctly, you can put the task from WaitAsync() into a variable and await it multiple times. It will, however, be more difficult to manage the release part. But you could build something with the LogicalCallContext (/AsyncLocal) or similar, since there is "no thread".
This is only true if you don't specify the number 1 in the SemaphoreSlim constructor. If you do, the lease can only be acquired once at a time, giving the same experience: each thread will wait for the lease to be released before acquiring a new one. You can be extra strict and also pass the second parameter as 1, which sets the max count to 1 as well.
Re-entrancy is actually surprisingly rare, because recursion is rare in general and we tend to lock on the public method calls at the point of entry.
But you're still correct
@@AvenDonn Recursion can easily happen if you use a callback argument that can call a method of the same class.
The algorithm be smoking crack if it thinks I'm gonna understand a word of this
HAHAHA
Beautiful, thank you. Always learning something new from you Nick.
It's not just a good idea to use a finally block, it's a must-have.
THIS... This is why I subscribed to Nick. Always something new and useful. 😎💪
Waiting for a more detailed video about this 👍
This video is detailed enough. Lock is syntactic sugar over the Monitor class, and SemaphoreSlim works similarly here. One little note: don't forget to limit the number of concurrent tasks to 1.
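A minimal sketch of that pattern, assuming a class field named _semaphore and a hypothetical OnlyOneAtATimeAsync method standing in for the guarded work:
private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1);  // initial count 1; whether to also pass maxCount: 1 is debated further down

public async Task DoWorkAsync()
{
    await _semaphore.WaitAsync();        // acquire outside the try: if this throws, nothing was acquired
    try
    {
        await OnlyOneAtATimeAsync();     // the critical section; awaiting here is fine
    }
    finally
    {
        _semaphore.Release();            // always release what was actually acquired
    }
}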
You're the man! This is such valuable advice 🥳
You probably want to use the 2-integer constructor in this scenario, the second parameter being "maxCount": new SemaphoreSlim(1, maxCount: 1);
This prevents extra, erroneous calls to Release() from allowing concurrent access through the critical section. By specifying a max count, these extra calls to Release() will instead fail-fast (via exception).
Example:
var ss = new SemaphoreSlim(1); // note: not SemaphoreSlim(1, 1)
await ss.WaitAsync();
ss.Release();
ss.Release(); // oops! .. now 2 concurrent accessors allowed
ss.Release(); // oops! .. now 3
It's an edge case, and probably TMI for the short video -- just FYI.
There is no scenario where two locks can be acquired, so there is no need for the max count. In all exception-throwing scenarios of WaitAsync and Wait, the lock was never acquired.
@@nickchapsas
I'm not sure if you're saying "there is no scenario" in terms of the code you shared. If so, I agree -- your trivial example correctly manages Wait & Release calls.
My point is that setting maxCount is being explicit about the maximum amount of concurrent access you want to allow in the critical section, and ensuring an exception is thrown if that condition could be violated. Without maxCount, you are allowing (in terms of what SemaphoreSlim enforces) any amount of concurrency (int.MaxValue) to hit the critical section.
Suppose there was some system with the following:
try
{
SomethingThatCouldRandomlyFail(); // shouldn't be in the try
await semaphoreSlim.WaitAsync(); // might not be called if above fails
await OneAtATimeAsync();
}
finally
{
semaphoreSlim.Release(); // always called
}
If this type of code existed (I hope it doesn't), and maxCount was not specified, the system may unknowingly be allowing concurrent access to OneAtATimeAsync(). However, if maxCount was specified (e.g., value of 1), the system would throw an exception on a Release without a Wait.
I don't know why you wouldn't specify maxCount for the scenario your video discusses (using SemaphoreSlim for async, single-access locking) -- what's the downside of having a fail-fast?
@@patrick_1719 Where your thought process is flawed is in assuming that await semaphoreSlim.WaitAsync() should be inside the try/finally that releases the lock. It shouldn't.
@@nickchapsas We may be talking past each other. No, I don't think Wait should be in the try/finally, but I do think some real-world project may make that mistake (someone new to .NET, something missed during a refactor, a bad merge conflict resolution, etc). When that mistake is made, the lack of a maxCount will allow the mistake to go unnoticed.
Overall, my point is that when you are using SemaphoreSlim as a lock { }, you should be explicit and set maxCount to 1. I believe it more closely models the intended concurrency constraints, and as I've attempted (and failed) to describe, it prevents a class of errors from existing.
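A small sketch of the fail-fast behavior being described here (variable names are illustrative):
var strict = new SemaphoreSlim(1, maxCount: 1);
await strict.WaitAsync();
strict.Release();                 // count is back at its maximum of 1
try
{
    strict.Release();             // over-release: throws SemaphoreFullException
}
catch (SemaphoreFullException)
{
    Console.WriteLine("extra Release was caught instead of silently allowing more concurrency");
}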
Suggestion for the future: how about a quick intro to TPL DataFlow? (ActionBlock)
Or is it "outdated" nowadays?
Not 100% for the described scenario, but one of the best libraries in my opinion. It replaces all my blocking collections, which ran in a separate thread…
Re-upload, eh? Still going to point to AsyncLock in the AsyncEx library. ;)
Yeah, fixed a bug I accidentally had in there. AsyncEx is fine for the most part; I will be making a video on it at some point.
That's quite good information, thank you.
Why haven't you used a full YouTube video for this? It is not convenient to use Shorts for education - you cannot rewind, you cannot change the speed...
You can open Shorts like any other YouTube video...
ruclips.net/video/mr8kdAauc7E/видео.html
Just replace /shorts/ID with watch?v=ID
@@msddddd Even easier: Replace /shorts/ with /v/, which also works.
@@Black-Dawg-Jesus wow thanks
Good job exemplifying how little you understand how YouTube works
@@dabbopabblo I also didn't know that, and I should probably avoid talking about how much time I waste on yt
Implementing this with AOP and just passing an attribute like [Safe(1)] on top of the method would be great :)
Very neat tip
Any concerns with implementing IDisposable in the SemaphoreSlim class to do the release on dispose so it’s a bit cleaner to lock with a using block and avoid a try-finally block? It would also visually highlight what code is locked.
I don't think that you release it on Dispose. It's not instantiated in the same scope and persists along with the class instance. Also, if it can manage the number of threads allowed and it's not 1, you can be sure you are not disposing it.
Sure you can do this, just make a class that implements IDisposable and you can use a using block
To clarify, I know how to do it, but not sure if it would cause any subtle issues in the timing or potential thread blocking under the covers.
@@noneofyourbusiness76 IDisposable with using block just uses try/finally under the hood
There's a slight behavioral difference though. The lock keyword allows reentrant code to enter the critical region several times in the call stack, while with SemaphoreSlim you'd need some way to tell that the current task is already allowed to enter the region without attempting to wait on an exhausted semaphore.
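A short sketch of that difference, with illustrative names:
private readonly object _gate = new object();
private readonly SemaphoreSlim _sem = new SemaphoreSlim(1, 1);

public void Outer()
{
    lock (_gate)
    {
        Inner();                    // fine: Monitor-based locks are reentrant for the owning thread
    }
}

private void Inner()
{
    lock (_gate) { /* ... */ }      // re-entering the same lock on the same thread succeeds
}

public async Task OuterAsync()
{
    await _sem.WaitAsync();
    try
    {
        // await InnerAsync();      // if InnerAsync also waited on _sem, this would deadlock:
        //                          // SemaphoreSlim has no idea the current caller already holds it
    }
    finally { _sem.Release(); }
}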
Thanks, just learned about this because of your comment
Love this! That's why I subscribed!
Did this at work with requesting JWT tokens to prevent multiple threads from all requesting tokens at the same time.
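For illustration, a rough sketch of that kind of token caching (all names, including FetchTokenAsync, are hypothetical):
private readonly SemaphoreSlim _tokenLock = new SemaphoreSlim(1, 1);
private string? _cachedToken;
private DateTimeOffset _expiresAt;

public async Task<string> GetTokenAsync()
{
    if (_cachedToken is not null && DateTimeOffset.UtcNow < _expiresAt)
        return _cachedToken;                              // fast path: cached token is still valid

    await _tokenLock.WaitAsync();                         // only one caller refreshes at a time
    try
    {
        if (_cachedToken is null || DateTimeOffset.UtcNow >= _expiresAt)
        {
            var (token, lifetime) = await FetchTokenAsync();   // hypothetical call to the auth server
            _cachedToken = token;
            _expiresAt = DateTimeOffset.UtcNow.Add(lifetime);
        }
        return _cachedToken;
    }
    finally
    {
        _tokenLock.Release();
    }
}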
It would be interesting to know what the technical reasons behind not having a lock-statement in async code are.
That's a good idea actually. I'll have to investigate that cuz I don't know the answer
The technical reason is that after an await the code can run in another thread. Depending on the context, this is even normal.
All "classic" synchronization mechanisms (lock, monitor, semaphore, etc.) are based on the current thread and block accesses from other threads.
Lock and await would therefore be a big deadlock risk.
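A quick way to see this (the IDs may or may not differ depending on the scheduler, but nothing guarantees they match):
// In a console app with no synchronization context, the continuation after the
// await is free to resume on a different thread-pool thread.
Console.WriteLine($"before await: thread {Thread.CurrentThread.ManagedThreadId}");
await Task.Delay(100);
Console.WriteLine($"after await:  thread {Thread.CurrentThread.ManagedThreadId}");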
You can have a lock statement inside async code. What you cannot do is await inside a lock statement. It makes sense, because the lock is basically saying that the code inside it should be held by one thread, but when you await you are effectively delegating a piece of code that may run on another thread.
Lock uses Monitor.Enter/Exit, which are designed for single-threaded access in the critical section. The method cannot return without exiting the lock, which is what await might do, and if you exit the lock, that would allow another thread to enter the critical section while you're half way through your work.
AsyncLock holds the lock even across await boundaries while allowing the method to return without blocking the thread. It does this by hooking up callbacks to the lock that will be invoked one at a time when the lock is released (the lock is re-taken before the callback is invoked).
An example: if before the await we are in the thread-pool context (for example via Task.Run), then after the await we are guaranteed to be in the same context, so the continuation is guaranteed to run on a thread-pool thread - BUT that could be a different thread than the one before the await. So we are potentially releasing the lock on a different thread than the one that acquired it.
Superb!! Subscribed!!
Using SemaphoreSlim in my latest management app for write-to-cache scenarios.
Thread safety is quite a complex thing.
It works…. But requiring the manual release is unfortunate. The best thing about the lock statement IMO is the automatic unlock. It also makes the bounds of the locked region very apparent.
Hmm… maybe something could be done with an IDisposable/IAsyncDisposable and a using statement?
Sure you can do that. That’s what the AsyncEx package is doing
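Roughly like this with Nito.AsyncEx's AsyncLock (a sketch from memory, so check the package docs):
using Nito.AsyncEx;                    // AsyncLock comes from the Nito.AsyncEx package

private readonly AsyncLock _mutex = new AsyncLock();

public async Task DoWorkAsync()
{
    using (await _mutex.LockAsync())   // the returned disposable releases the lock when disposed
    {
        await OnlyOneAtATimeAsync();   // hypothetical guarded work
    }
}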
Astonishing that such a high level language has worse ergonomics for async locks than Rust.
Well, at least C# doesn't have the worst-looking syntax in the universe.
@@viktorstojanovic9007 Yeah. Google's languages have that covered.
Amazing
Awesome
me writing rust: "wait, not being thread safe was even an option?"
My condolences
New Logo... Nice so new merch coming soon ;)
I was under the assumption that async/await already implements mutex and semaphore locking. I was wrong.
C# also has a Mutex class; what is the difference between the Mutex and the SemaphoreSlim class?
Wow! Thank you! Very useful.
Which thread does the code after the await run on?
Isn't a Mutex better for this case?
Do you have a c# course I can purchase?
Great
Why not use a Mutex?
The dude is a walking Stack Overflow.
Love your content :)
Monitor.Enter?
Maybe I missed something but aren't you supposed to construct the SemaphoreSlim with (1,1) for this to work?
You don't need to, because the WaitAsync is outside the try, so there is no scenario where more than one lock can be acquired or released.
@@nickchapsas Hmmm... I've always done (1,1) and I see it in most of the examples. The docs are a bit confusing, but I remember years ago getting bit by not using (1,1). Maybe you can clear the air in another video ;)
@@nickchapsas With locks like these, it is absolutely important (not trivial) to put them in a try finally block. Nito.AsyncEx (Stephen Cleary) does it nicely with disposables.
@@urbanelemental3308 The only reason Stephen uses (1,1) is that in order to offer the "cleaner" API with the using statement you NEED to add a max count, because using will put the WaitAsync call in the try block, and since that WaitAsync can throw, you might over-release locks, leading to a problem. You don't need the second parameter if you keep the WaitAsync outside of the try, because the lock is never acquired on a throw and so never needs to be released.
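Spelled out, the shape being recommended (illustrative names; a CancellationToken is just one way WaitAsync can throw):
await _semaphore.WaitAsync(cancellationToken);   // outside the try: if this throws, the lock was never taken
try
{
    await GuardedWorkAsync();                    // hypothetical critical section
}
finally
{
    _semaphore.Release();                        // only runs when the Wait above actually succeeded
}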
@@nickchapsas Hmmm... Ok. So you're suggesting that in the rare case that you might not want it inside a try block because... you're certain your code will work without fault (is there a perf benefit?), then the (1) is just fine. Fair enough.
We can't use await in a lock because the async task may be finished on a different thread.
Haha, I recently had to find out about this. I had some logic that should only run once a day no matter how many times the endpoint is called, so I was using the database to check whether the last record was recorded on the same day, etc. I made a mistake in JS and called this endpoint a couple of times, and I noticed the task ran more than once in the same day. Ugh. I thought that awaiting would be blocking for all the calls. Stupid of me.
Oh, this is handy.
continue; 😉
nice. (@line 4)
I have an IDisposable SemaphoreSlim wrapper that I can use with the using keyword to guarantee that I'm going to release:
using(await something.Lock()) {
}
Lock returns a new disposable wrapper for the SemaphoreSlim contained in the "something" instance; its Dispose method releases the semaphore.
Does that mean that your WaitAsync is inside the try part? That's not good.
@@nickchapsas No, there's no try there. Something is a wrapper for the SemaphoreSlim; something.Lock() does WaitAsync and then returns a disposable wrapper for the SemaphoreSlim. When the using block exits and Dispose is called, the Dispose method releases the lock.
Yeah, I assume there are 2 allocations (one for the lock wrapper and one for the disposable wrapper), but being able to guarantee my locks are released without a try/finally feels good and clean.
@@figloalds I think you don't understand how using works. Using will be lowered to try/finally, so your lock will be acquired inside the generated try code, which is bad because if there is an exception thrown there, you are releasing a lock you never acquired.
@@nickchapsas I appreciate the insight, I wrote the idea in sharplab
If I paste the full URL then YT will delete it: sharplab io #gist:3e1774255c57e2300badaff9fc0cc97f
I can see that the whole async/await state machine madness creates some very complicated IL code, but on the surface it seems to me that the only way this fails is if WaitAsync itself throws an exception.
@@figloalds Which is why it's flawed: WaitAsync CAN throw an exception, and all the exception-throwing scenarios happen when the lock is NOT acquired. The way to "fix" this is to also set the maxCount value in the semaphore constructor so you don't over-release locks.
Nito.AsyncEx is just better