Is there a reason for choosing Agent over Genserver?
Well, we only needed simple storage of state for this, which is a great opportunity to use an Agent. I definitely could've used a GenServer instead, but I didn't need the whole client/server/message passing interface.
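To make the Agent-vs-GenServer point concrete, here is a minimal sketch of a ticket cache built on an Agent. The module and function names are hypothetical, not from the video; it just shows that simple state storage needs no hand-rolled client/server callbacks.

```elixir
defmodule TicketCache do
  use Agent

  # Start with an empty list of ticket ids, registered under the module name.
  def start_link(_opts) do
    Agent.start_link(fn -> [] end, name: __MODULE__)
  end

  # Read the whole cached list.
  def all, do: Agent.get(__MODULE__, & &1)

  # Prepend a ticket id.
  def put(ticket_id), do: Agent.update(__MODULE__, &[ticket_id | &1])

  # Drop everything (e.g. after a write invalidates the cache).
  def reset, do: Agent.update(__MODULE__, fn _ -> [] end)
end
```

A GenServer version would need `init/1`, `handle_call/3`, and `handle_cast/2` callbacks to do the same thing; the Agent collapses all of that into anonymous functions.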
Was the Discord stuff too much? I thought working within a real problem would make things interesting. We're curious as to whether you thought it was fun or if it was a distraction. Let us know!
I understood exactly what was done, but I imagine some newer developers could have a hard time following it. In that case, I think a little more introduction would help.
Gotcha. Thank you!
@gabrielcontra Your comment is on point here. Being a mid-level developer, I would consider setting up a tenant cache to be an excellent take-home interview question. I might struggle a bit in a live coding session, but I feel that a video like this would get me thinking more along the lines of how a senior-level developer would address the problem.
For example, I am considering building a service in my portfolio that hits an external API (athenahealth or Calendly) and pulls the available appointments. One of the things I want to include (especially for athenahealth) is a tenant cache that I could populate by choosing a healthcare provider. That way, while performing scheduling duties in a very large medical organization, I could speed things up by not having to sift through data from all of the providers. It might be over-baking the biscuits in smaller shops, but on larger projects a technique like this could definitely get my brain going in the right direction.
This will work if you have only one node running. What do you use to manage processes in a setup with more than one node?
You can store a map in your Agent instead, where the key is the Discord guild id (or tenant id):
%{
1 => [5, 6, 7],
2 => [8, 9, 4],
3 => [10, 11, 2]
}
And access the tickets for each guild by passing in the guild id.
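That per-guild map could be sketched like this (module and function names are illustrative, not from the video):

```elixir
defmodule GuildTicketCache do
  use Agent

  # The Agent holds %{guild_id => [ticket_ids]}, one entry per Discord
  # guild (tenant), starting empty.
  def start_link(_opts) do
    Agent.start_link(fn -> %{} end, name: __MODULE__)
  end

  # Tickets for one guild; an unknown guild id just returns [].
  def tickets(guild_id) do
    Agent.get(__MODULE__, &Map.get(&1, guild_id, []))
  end

  # Add a ticket to a single guild's list without touching the others.
  def put(guild_id, ticket_id) do
    Agent.update(__MODULE__, fn state ->
      Map.update(state, guild_id, [ticket_id], &[ticket_id | &1])
    end)
  end
end
```

Each guild's list is read and written independently, so one tenant's churn never forces another tenant's cache to be rebuilt.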
@liveviewmastery I mean how you reference your Agent by name. It works as long as you only have one Phoenix app running. If you deploy your app in a distributed manner, i.e. 3 nodes of the same app running, you would have to use some kind of registry, correct? I'm curious to see how you usually go about solving that.
I've been thinking about this. I think you do nothing. You'll have the same cache on all the nodes, each hitting an in-memory list. I don't see much downside to just having a process in each Phoenix server that maintains its own cache.
@liveviewmastery Yeah, worst case is you'll get a cache miss 3 times (if you have 3 instances of your app running). It can work.
I have a question: what would happen in the case of parallel access? Imagine that I'm creating a new ticket at the same time that someone is closing a ticket; I think there is a chance that someone gets the wrong information. What if, instead of resetting the cache, you invalidate it? So when you search for the tickets, you would first check if the cache is valid; if not, you would fill in the information, set it to valid, and return that info. What do you think?
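The invalidation scheme described in this comment could be sketched as follows (a hypothetical module, not from the video): the Agent keeps a `{valid?, tickets}` pair, writers only flip the flag, and readers refill on a miss.

```elixir
defmodule InvalidatingCache do
  use Agent

  # State is {valid?, tickets}; we start invalid with no data.
  def start_link(_opts) do
    Agent.start_link(fn -> {false, []} end, name: __MODULE__)
  end

  # Writers just flip the flag; they never clear the data themselves.
  def invalidate do
    Agent.update(__MODULE__, fn {_valid?, tickets} -> {false, tickets} end)
  end

  # Readers check validity and refill with the given loader on a miss.
  # Doing the check-and-refill inside one get_and_update keeps it atomic,
  # since all callers are serialized through the Agent's mailbox.
  def tickets(load_fun) do
    Agent.get_and_update(__MODULE__, fn
      {true, tickets} ->
        {tickets, {true, tickets}}

      {false, _stale} ->
        fresh = load_fun.()
        {fresh, {true, fresh}}
    end)
  end
end
```

One caveat of this sketch: the loader runs inside the Agent process, so a slow refill blocks every other caller for its duration; that is also exactly what makes the check-then-fill race-free.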
I thought about this and I was concerned, so I looked into it.
I probably should do a video about this specifically, but:
1. The Agent's state read is actually implemented as a GenServer.call (the synchronous, blocking one), so it won't get the wrong state.
2. With Erlang message passing, the process mailbox is a queue (FIFO). The first message in gets handled first. So if they happen at nearly the same time, the messages will be handled in the proper order!
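The serialization described above can be demonstrated with a quick script: every read and write funnels through the Agent's single mailbox, so concurrent callers are queued rather than racing.

```elixir
# Start an Agent holding a plain counter.
{:ok, counter} = Agent.start_link(fn -> 0 end)

# Fire 100 increments from 100 separate concurrent processes.
1..100
|> Enum.map(fn _ ->
  Task.async(fn -> Agent.update(counter, &(&1 + 1)) end)
end)
|> Enum.each(&Task.await/1)

# Every update was applied exactly once; no lost writes.
IO.inspect(Agent.get(counter, & &1))
# => 100
```

If the updates could interleave mid-operation, some increments would be lost and the result would be less than 100; the single-process mailbox makes that impossible.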