Circuit breaker + retry extensions with jitter + timeout + cached fallback == $$. I love Polly.
When we learned about transistors/diodes in electronics class one of the first things the teacher said was "we won't speak of open and closed because these terms are horribly ambiguous, we will speak of pass-through and blocking".
I cannot begin to describe how much this naming messed with me when I was first introduced to the topic
As an electronic engineer turned game developer, I'd like to point out that the correct terminology for transistor operating regions is cut-off ("off"), saturation ("on"), and active (signal amplifier). I never heard of pass-through/blocking.
For diodes, they're called forward ("on") and reverse ("off") bias.
@@galandilvogler8577 In reality my teacher used Dutch words. I freely translated them to English.
In my case this was useful for sending error notifications when something goes wrong. Sometimes sending too many errors via email/Pushover is annoying. This pattern will definitely help me solve that problem without having to code any complex logic.
This is brilliant, easily understandable. Would like to see how you integrate this with DI.
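For what it's worth, one common way to wire a circuit breaker into DI is via HttpClientFactory's Polly extensions (the Microsoft.Extensions.Http.Polly package). A minimal sketch — the client name "weather" and the thresholds are illustrative, not from the video:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;
using Polly;

var services = new ServiceCollection();

services.AddHttpClient("weather")
    // Break after 3 consecutive transient failures (5xx, 408, HttpRequestException),
    // stay open for 30 seconds, then allow a trial call (half-open).
    .AddTransientHttpErrorPolicy(builder =>
        builder.CircuitBreakerAsync(
            handledEventsAllowedBeforeBreaking: 3,
            durationOfBreak: TimeSpan.FromSeconds(30)));
```

Because HttpClientFactory caches the handler pipeline, every client resolved under that name shares the same policy instance, which is exactly what a circuit breaker needs.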
A good way to protect the APIs you need to call and well explained, good job.
This is brilliant. Definitely something I wanna try implementing/experimenting with.
At 7:20 you talk about delegates and wrapping in a delegate — do you mean a lambda function, or are these the same thing? I'm confused!
You demonstrated the cases very well.
I would hate to implement Polly from scratch and the more I learn, the more I realize, I need to learn more.
How is it different from a cancellation token? Why do we need to introduce the circuit breaker?
Why do you need to wait for 3 queries to fail when you can return the cached version immediately in the catch block for the method?
Because you cannot control what a remote API does when you try to use it while it is down.
Sometimes APIs IP-ban a remote client that keeps failing. Also, if you are allowed a certain number of calls per day or month, the failed requests do count. So it is to prevent your app from using up all the requests allotted for that time period.
@@Xamdify That doesn't answer his question. I think he's asking why doesn't Nick just return the cached data whenever an API call fails and my guess is this might not be the best example to showcase the package.
I think this was more a demo of Polly and its circuit-breaking abilities rather than all the appropriate ways to handle failure, but yes I agree - it seems if you have a cached value and the API you're forwarding your request onto fails, you should return that cached value and minimize disruption to the client. In this example, the API call had to fail several times in a row before falling back to the cached value, which seems inefficient.
Because you don't have to make an expensive api call and wait for it to fail, possibly wasting limited api calls before returning the cached version.
Because it's just an example?
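For anyone wanting the "return the cache immediately" behaviour discussed above, Polly does have a Fallback policy for exactly that. A rough sketch — GetCachedWeather() is a hypothetical stand-in for whatever cache lookup the service uses:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;
using Polly.CircuitBreaker;

// Hypothetical helper standing in for the service's cache lookup.
string GetCachedWeather() => "{ \"temp\": 21 }";

// Fall back to the cached value as soon as the call fails,
// or immediately while the circuit is open.
var fallback = Policy<string>
    .Handle<HttpRequestException>()
    .Or<BrokenCircuitException>()
    .FallbackAsync(ct => Task.FromResult(GetCachedWeather()));
```

Wrapped around the circuit breaker, this gives the best of both: the breaker stops hammering the dead API, and the caller still gets a cached answer on every failure rather than only after the circuit opens.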
This circuit breaker is easy to implement with your own code - no need to depend on external libraries for every tiny feature. I always prefer to implement all my main functionality myself - HTTP client, web server, SIP server, etc.
Hi Nick, which editor are you using ?
Hi, Nick! Great video! I assume in real-world applications you combine both this and the retry techniques. Is my assumption correct? And, if so, do you have any recommendations / best practices on how to implement both properly? (P.S.: I haven't gone deeply into Polly's documentation yet, so if there is anything there which answers my question, feel free to simply point it out) ;-)
I would say it depends on the situation. If you're facing a flaky service or a third-party call with timeout issues, then a retry policy would be sufficient. A circuit breaker is more convenient when the service isn't managed by you and an outage could cause cascading failures in your distributed system.
The answer to your question is yes: you can combine multiple policies with the PolicyWrap extension. Again, it depends on the need, but a wait-and-retry policy first, followed by a circuit breaker policy, could be a strong handler for this type of situation.
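That retry-then-breaker combination can be sketched with Policy.WrapAsync — the thresholds and backoff here are made-up illustrations, not recommendations:

```csharp
using System;
using System.Net.Http;
using Polly;
using Polly.Wrap;

// Retry up to 3 times with exponential backoff: 2s, 4s, 8s.
var retry = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

// Open the circuit after 5 consecutive failures, for 30 seconds.
var breaker = Policy
    .Handle<HttpRequestException>()
    .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30));

// Outermost policy first: each retry attempt passes through the breaker,
// so repeated failures still count towards opening the circuit.
AsyncPolicyWrap wrap = Policy.WrapAsync(retry, breaker);
```

With the retry on the outside, once the circuit opens the retries fail fast with BrokenCircuitException instead of hitting the struggling service again.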
Great video as always!
While Polly is a great tool, in a real-world application of a decent size you would want to get rid of this kind of code from your services and instead push the problem to the infrastructure level.
That way, your web service code has one less concern to take care of. Of course, your Ops will instead inherit that concern =O)
What I'm talking about here is of course service meshes.
It depends on the problem. Your fallback logic can have domain-specific logic in it; in that case I wouldn't expose it to something like the API gateway and would instead use something like Redis to manage the state. It really depends on the problem.
Nice :), thanks polly rocks
Very nicely explained, thanks a lot!
Couldn’t you have combined Polly’s cache policy with a circuit breaker policy, instead of using a memory cache manually?
Wow, really interesting
Great explanation
My only concern to raise around this example is that one bad actor or poor integration can lead to the circuit breaker blocking the API for everyone.
Thanks for the video, it is very useful. I have a question: how does Polly keep track of the number of failed requests to "open the circuit"? I am just thinking of the scenario in which we have round-robin load-balanced servers - let's imagine we only have 2. The first few hits could go to Server 1 and pop open the circuit, but a subsequent hit may go to Server 2 and succeed, then a third hit goes back to Server 1 and fails. So, in this case, will the client application behave weirdly?
You’d need to plug in distributed extensions with Redis to keep track of the state on scaled out systems
@Nick Chapsas please make a video on communication between microservices
How is the state shared across pods, for instance? Is it per pod if this was deployed in Kubernetes? Newbie here
You can use the extension points to use something like Redis to distribute the state
Very useful thing. Didn't know about this. Thank you, Nick.
If the application has multiple instances, how can we manage the state of the circuit across instances?
You hook up Redis
@@nickchapsas Any chance you make a video demonstrating this?
How come you declared the circuit breaker policy as a non-static variable, but it's still able to keep track of the count of failed requests?
Because the class it is used in is a singleton
@@nickchapsas oh, i see now :) thanks for explaining
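To spell out the singleton point from this thread: an instance field lives as long as its owning object, and a singleton registration means only one such object ever exists, so the failure count survives across requests without being static. A minimal sketch — WeatherService is a stand-in name, not the class from the video:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;
using Polly;
using Polly.CircuitBreaker;

var services = new ServiceCollection();
// One instance for the whole app lifetime, hence one shared policy.
services.AddSingleton<WeatherService>();

public class WeatherService
{
    // Not static, yet still shared: with the singleton registration above,
    // only one WeatherService (and therefore one policy, with one running
    // failure count) exists in the process.
    private readonly AsyncCircuitBreakerPolicy _breaker = Policy
        .Handle<Exception>()
        .CircuitBreakerAsync(3, TimeSpan.FromSeconds(30));
}
```

Had the class been registered as transient or scoped, each resolution would get a fresh policy with a zeroed failure count and the circuit would effectively never open.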
Hi Nick, I became a Patreon member but am not able to access the code for this video. Please let me know ASAP.
The code is there if you search the repo for any of the final code snippets
@@nickchapsas Hi Nick, can you please tell me the Git repository name along with the root folder name where I can find the code for this video. Thanks
Thanks!
Do you have any recommendation for fallback scenarios for database calls when you are dealing with millions of records?
Also, is this cache variable global or local to the process?
Local: each instance of your app would have a separate InMemoryCache
Thank you!
Good Stuff well explained.
How can we use this in scoped services?
I half expected the example near the end to show using the real site as a fall-back option.
you're just great...
Hi, can you make a video about how to handle stale cookies when authentication is done through an identity provider like Azure AD 😄 thanks 🙏
I am still trying to understand why we need circuit breaker.
Because when you keep calling services that fail or are down, you can cause further damage and not let them recover and also your service will be degraded as a result of that.
How can we use this in the case of getting an access token from a third-party API and making requests to their APIs? If the token is expired, how can we get a new token and continue where we stopped?
The token would be valid for an hour or 30 minutes.
Don't we need to dispose of the HttpClient?
No
When David Warner becomes Programmer :D
I’m not sure this is useful at all when a microservice is using a reverse proxy side car, which already does this for you.
This is my thought as well - this is better handled in the reverse proxy or API gateway than added to every service you've got.
It is very much useful even with a reverse proxy. There are limits to how much logic you can dump there, and you need to be very careful not to leak business logic into your reverse proxy's configuration.
This is my recommended approach as well. Let the infrastructure handle as much of the dumb retrying/circuit breaking as possible. All the service meshes/modern proxies have extensive retry/circuit breaker features built in.
Doing it all inside your services leads down the road of creating shared libraries to avoid duplicating it in every service and before you know it, you now have a core platform set of libraries/clients that MUST be used by every service so that they behave correctly.
Doing it in the service is still useful when you need more context and logic around specific retries/circuit breakers. However, that should be the exception and not the rule.
polly again?
Yep. Polly doesn't only support one resilience policy.
Get that private variable underscore out of here!!
Super cool though, will have to check that library out.
Some people like their code to look nice and be easily understandable by others
@@maskettaman1488 Those people shouldn't put arbitrary prefixes everywhere then. Use your IDE.
@@lost-prototype What IDE highlights/indicates private variables by default? What is the benefit of relying on an IDE feature?
Waiting for datetime and datetimeoffset
😱
Was it just me that noticed "Port:6969"?..🤭
First comment
I don't understand whether the circuit covers an API call including the query string or not. I mean, are "endpoint/api/action?arg=1" and "endpoint/api/action?arg=2" two different and independent circuits, or just one circuit based on the responses of "endpoint/api/action" regardless of the input parameters?
It is a single circuit. Otherwise his examples would not have worked.
@@EvaldasNaujikas my fault 😅
Would this be beneficial to use with Azure Functions as well? We have a lot of HTTP clients with retry policies set up in our Startup.cs, but we're just using the standard Polly library.
It would be harder to maintain the state of the circuit breaker in an Azure function so you’d need to be careful with that