This is literally what I needed today. Cramming this playlist, hopefully an offer pulls up. I never comment on posts but u are the goat bro, the goat broski. If I get this offer I will send u some OnlyFans money
Haha please take the only fans money and donate it to charity
The cache design that you mention, with the doubly linked list and hashmap, is basically the implementation of an LRU cache
You're correct
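For reference, a minimal sketch of that hashmap-plus-doubly-linked-list structure as an LRU cache (illustrative only, not from the video; Python's OrderedDict is itself backed by exactly this pairing):

```python
from collections import OrderedDict

class LRUCache:
    """Hashmap + doubly linked list, via OrderedDict, with LRU eviction."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used entry
```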
Hey Jordan, instead of maintaining a cache of pending payments to handle the scenario where our webhook server goes down, would placing a message in a queue when the webhook is called be sufficient?
The pros:
- We don't need to configure CDC
- We don't need to poll and determine the right parameters for it
The cons:
- We're depending on Stripe to call our webhook at least once for each payment
At least for the con I mentioned, we could have a cron job that, say, every hour looks through our pending payments table (rough sketch below).
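A rough sketch of that fallback job, assuming a `payments` table with `status`/`created_at` (epoch seconds) columns, an injected `db` handle, and a hypothetical `fetch_stripe_status` helper; none of these names come from the video:

```python
import time

PENDING_GRACE_SECONDS = 600  # skip very recent payments that may still be in flight

def reconcile_pending_payments(db, fetch_stripe_status):
    """Backstop for missed webhooks: re-check every stale pending payment."""
    cutoff = time.time() - PENDING_GRACE_SECONDS
    rows = db.execute(
        "SELECT id, idempotency_key FROM payments "
        "WHERE status = 'pending' AND created_at < %s",
        (cutoff,),
    )
    for payment_id, idempotency_key in rows:
        status = fetch_stripe_status(idempotency_key)  # e.g. wraps Stripe's API
        if status in ("succeeded", "failed"):
            db.execute(
                "UPDATE payments SET status = %s WHERE id = %s",
                (status, payment_id),
            )

# Run from cron, e.g. hourly: reconcile_pending_payments(db, fetch_stripe_status)
```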
I think your solution works, but it basically boils down to my solution due to your cron job remark at the end there haha
I hate polling too, alas sometimes it is inevitable
I'm not sure I understand your solution correctly. With your solution, in the scenario where the webhook server is down, is the Stripe payment status completely lost? And if the pending payments table is the strongly consistent source-of-truth table, adding read load to it via a cron job sounds less optimal than querying a separate derived DB.
Reads do not necessarily have to interfere with writes. When the payment status changes, your DB can emit a CDC event that triggers an email, or the Stripe webhook can have a callback action that sends an email when payment processing is done.
They would if I'm constantly doing a linear scan of the table to check for pending payments (unless we used snapshot isolation as opposed to two phase locking, which in retrospect also would have worked nicely here).
But even despite that, it's just extra resources used by a table that you want to keep fast.
Maybe another option is to hit a read replica. Since we only look up orders that are still pending after some X minutes anyway, the replica can be assumed to be eventually consistent with the leader by that point.
@@RS7-123 That's fine too! Note that the replica is probably part of the strong consistency piece of things, so slowing it down could slow down our writes a bit.
Great video Jordan! Any plans for a video on api gateways, jwts and identity providers?
Not tentatively, but I'll keep this one in the back of my mind!
"Well just go to Stripe", LOL I wish I could answer that on system design interviews... I recently was asked to "Design Datadog", "Design S3" and in other to design a "Load Balancer such as ELB"... I wanted to answer something similar.
Well, I say Stripe because a lot of the work there is just dealing with every credit card network's API
Thanks Jordan for your awesome video! Hope I can see the topic about 'design some meeting scheduler' thing one day~~
Can you make a video on designing Spotify?
Functional requirements:
- Ability to play any song, with super low latency.
- Users can create and share playlists.
- Follow other playlists and artists; get notified of any song updates by an artist or in a playlist.
At least off the cuff I'll say
1) CDNs and precaching when going down a playlist
2) Use a database
3) This feels like twitter
You think there are any other unique pieces to it?
@@jordanhasnolife5163 How about live streaming and podcasts? I believe this is similar to Zoom, but I would like to know if there are any optimizations that can be done.
The assertion behind the need for a derived Pending Transactions cache - that reads will hurt write throughput due to row locking - is not necessarily true if you're using a DB with MVCC (like Spanner or Cockroach). So I question the need for the separate cache.
The other reason is that you then need to run a query on disk to figure out all of the pending transactions. I'd rather just have them all precached, but agreed that if you're using snapshot isolation locking is unnecessary for such a read
@@jordanhasnolife5163 could add a local covering index to speed up the query and ensure consistency, which would slow down writes a bit, but per the original requirements that's not a problem. In any case pending transactions would be a great topic to deep dive in a real interview and discuss tradeoffs, so thanks for calling it out explicitly in your video
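As a concrete version of that suggestion, assuming a Postgres-compatible DB and made-up table/column names, a partial index only covers pending rows, so the poller's scan stays cheap while the write-path cost is limited to rows that are actually pending:

```python
import psycopg2  # assumes a Postgres-compatible payments DB

conn = psycopg2.connect("dbname=payments")
with conn, conn.cursor() as cur:
    # Index only the pending rows; completed payments never touch this index.
    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_payments_pending "
        "ON payments (created_at) "
        "WHERE status = 'pending'"
    )
```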
Hey Jordan! Thanks as always for these awesome videos! I was wondering if you could do a quick video about how to actually structure and talk about these on an interview. Is there a structure to it that you do (or have seen), such as laying out requirements first, then doing some considerations, then diving into the design? Is it really not one size fits all? Either way could be interesting to see what tips you might have around this. Keep it up, you're incredibly helpful!
Here ya go m8
ruclips.net/video/IY2EPjShgc4/видео.htmlsi=Xw4uwvd4iDBbpp_w
You can always just ask your interviewer too: "Hey, is it ok if I start with X?"
@@jordanhasnolife5163 oh shit I missed that, thanks! 🙏🏼
Can you make a separate video elaborating on the Change Data Capture part? Like log-based, trigger-based...
I don't really know how much there is to elaborate on here beyond what I've discussed in my concepts videos. I'd take a look at something like debezium.
Deleting the pending payment in the database when receiving an unrecognized result from Stripe potentially leads to inconsistency when your request is still in transit and has not yet reached the Stripe server at the time the poller checks. After deleting the record in the DB, the request arrives at Stripe and modifies your bank balance. How do we solve this problem completely?
Can you give me a timestamp here? I never propose deleting a payment from the actual payments DB, just from the cache, when it hears from either Stripe or the payments DB. The payments DB is the source of truth; the cache can be wrong.
Thanks a lot for another amazing video... I have a question: how does the payment reach the seller?
Well I guess that's a detail for Tipalti, but Amazon probably makes batch payments to them every month via an ACH transfer
17:18 - Should pending payments that are `not recognized` by Stripe at poll time really just be deleted from the payments table? This case might require special processing since, at this point, the payment has a local DB status of `pending` but Stripe has not recognized it. What would be a possible solution for this inconsistency?
I don't really think there is any solution; network requests to Stripe can always fail. Do we want to delete the event? Maybe not, but then we may find ourselves doing a lot of polling after a while.
Why not retry the payment, assuming this never hit Stripe in the first place? Our intent is to clear the order and make some money, isn't it?
The other video I’d like to see: A distributed system for generating unique IDs, akin to Twitter Snowflake. Also with the functional requirement of how to allow people to bring along their own IDs.
This does feel somewhat similar to what we do in the payment gateway video: shard the key range and allow users to bring their own.
Thanks for the video Jordan. How is the polling going to work? Is there a cron job or scheduler running every X minutes in Flink? Moreover, maintaining the doubly linked list in order of creation timestamp would be O(log n), right, as each event reaching Flink could be out of order?
Yeah, something like that. Or you could just say: on a new event, if we haven't polled in X amount of time, poll again.
I would think that events reaching Flink should be coming in order by timestamp per partition, so we could always just do a Flink node per partition to maintain that invariant.
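A tiny sketch of that poll-on-event throttling, one tracker per partition, with illustrative names throughout:

```python
import time

MIN_POLL_GAP_SECONDS = 30  # the "X amount of time" from the comment above

class PendingTracker:
    """Per-partition state: pending payments kept in arrival (timestamp) order."""

    def __init__(self, poll_stripe):
        self.pending = []  # stands in for the doubly linked list
        self.poll_stripe = poll_stripe  # hypothetical callback that re-checks Stripe
        self.last_poll = 0.0

    def on_new_event(self, event):
        # Events arrive ordered per partition, so appending keeps timestamp order.
        self.pending.append(event)
        now = time.monotonic()
        if now - self.last_poll >= MIN_POLL_GAP_SECONDS:
            self.last_poll = now
            self.poll_stripe(self.pending)  # re-check the oldest pending payments
```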
How does Kafka CDC capture pending payments if it's triggered only by database changes? If payments remain in a pending state without any status change, how will Kafka CDC detect them for Flink to process?
Well, payments are added to the DB with a status of "pending", so that initial insert is itself a change that the CDC stream picks up
Why do we need two Load Balancers? Couldn't we pass the request to the previous one?
It's just for the sake of the diagram, you can use the same load balancer for both
Thanks for the great video! Question about web hooks. So the Payment system is listening for web hook callbacks and the polling mechanism is only triggered when a pending payment hasn’t received a callback in a specified amount of time? Is that the idea?
Yep!
What happens if the Flink cache fails? I think we would somehow have to redrive the CDC stream to repopulate the new cache instance?
Please see the Flink concepts video. State is periodically checkpointed to S3.
Thanks for the video! Why not just use ZooKeeper to give us a monotonically increasing u64 for the idempotency key? This way we are guaranteed not to have any conflicts, and a u64 should be enough until the end of time.
Hey friend, can you please explain what u64 is? Is it like a UUID?
@@lalasmith2137 haha sorry, an unsigned 64-bit integer
@@huguesbouvier3821 thank you for clarifying that, helped me understand your answer :)
1) Using a monotonically increasing sequence number implies that all writes must go through the same choke point (meaning you can't shard ZooKeeper, which is potentially fine if we really don't care about performance)
2) We basically do this anyway, as our payments DB is using a consensus algorithm, making it effectively the same as ZooKeeper
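For concreteness, the ZooKeeper approach from the parent comment could look roughly like this with the kazoo client (host address and paths are made up); sequential znodes yield monotonically increasing suffixes, and the single parent path is exactly the choke point described above:

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")  # illustrative address
zk.start()
zk.ensure_path("/idempotency")

def next_idempotency_key() -> int:
    # Every allocation funnels through this one path, so it cannot be sharded.
    path = zk.create("/idempotency/key-", b"", sequence=True)
    return int(path.rsplit("-", 1)[1])  # e.g. ".../key-0000000042" -> 42
```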
I have a dumb question. Why would row read locks on the pending payments slow down the write throughput of the table, given the writes' idempotency keys are different from the pending ones?
Great point, there probably wouldn't be many conflicts IRL, but the reads themselves would be quite expensive and would take resources away from the DB
I don't think you can pre-materialize the idempotency key. If a user clicks the pay button twice, the second pay request will ask your "pre-materialized key service" for a new idempotency key. Now the problem goes back to the original one: how can you generate an idempotency key for the request in the first place?
The idempotency key is generated on page load, not on user click. If they reload the page that's a different story.
Even if the client reloads, the key could be persisted on the client to avoid double payments. It can be stored with the checkout data the first time checkout is hit
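One way that could look on the server side, sketched with made-up endpoint, helper, and table names: the key is minted when the checkout page renders, and the pay handler treats it as a unique constraint so retries and double clicks become no-ops:

```python
import uuid

def render_checkout_page(order_id):
    # Mint the key once, at page load, and ship it down with the checkout data
    # so reloads and retries reuse the same key.
    return {"order_id": order_id, "idempotency_key": str(uuid.uuid4())}

def handle_pay(db, order_id, idempotency_key, amount):
    # Assumes db.execute reports whether a row was actually inserted.
    inserted = db.execute(
        "INSERT INTO payments (idempotency_key, order_id, amount, status) "
        "VALUES (%s, %s, %s, 'pending') "
        "ON CONFLICT (idempotency_key) DO NOTHING",
        (idempotency_key, order_id, amount),
    )
    if inserted:
        submit_to_stripe(idempotency_key, amount)  # hypothetical helper
```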
Please make a video on designing AWS CloudWatch
See distributed logging and metrics video
One more banger system design video!
Thanks Jordan, I have been watching your system design videos each week. Two quick questions regarding your design:
1. Could we use the Snowflake algorithm to generate an ID as the idempotency key?
2. Was the Flink processing part of the payment service code? If so, for the fault-tolerance case where the payment service is down, how is that going to affect the Flink processing?
Thanks
1) Not familiar with this method, feel free to send me a link to what it is
2) Not sure what you mean by this question. Flink is just getting data from our payments DB and occasionally polling Stripe to see the status of the payment; it is independent of any synchronous operation to do with the payment service.
@@jordanhasnolife5163 thanks for replying
1) en.wikipedia.org/wiki/Snowflake_ID
2) Let me rephrase my question a little bit: is the application code that generates the idempotency key and saves the payment one microservice, with the Flink processor as another microservice, or are they all clustered as one payment service? If both processes are treated as one service and the payment service goes down, the Flink process will also halt, right?
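For reference, the linked Snowflake scheme packs a millisecond timestamp, a machine ID, and a per-machine sequence into a single 64-bit integer; a minimal sketch (real generators also handle clock skew and sequence rollover):

```python
import time

EPOCH_MS = 1288834974657  # Twitter's custom epoch; any fixed epoch works

def snowflake_id(machine_id: int, sequence: int) -> int:
    """41 bits of millisecond timestamp | 10 bits of machine ID | 12 bits of sequence."""
    timestamp_ms = int(time.time() * 1000) - EPOCH_MS
    return (timestamp_ms << 22) | ((machine_id & 0x3FF) << 12) | (sequence & 0xFFF)
```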
Not sure if a TinyURL/hash-based approach is best suited for a payment idempotency key; usually an order ID, or some combination of order ID and UUID, would suffice, and it's simpler and more efficient. Wdyt?
fine by me
How do you use a cron job to search the doubly linked list in Flink? Isn't the data stored in memory in Flink? In that case, would the cron job be inside Flink? Also, why use Flink instead of some in-memory database like Redis?
It's not a cron job, it's just a loop that waits a few seconds. I need more functionality than what Redis is able to offer me here.
@@jordanhasnolife5163 A loop that waits a few seconds sounds like a cron job.
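Either way, the shape is roughly this, with illustrative names; the list is ordered oldest-first, so only the head ever needs checking:

```python
import time

def poller_loop(pending_list, poll_stripe, timeout_seconds=300, sleep_seconds=5):
    """Re-check any pending payment older than timeout_seconds, then nap."""
    while True:
        now = time.time()
        while pending_list and now - pending_list[0].created_at > timeout_seconds:
            payment = pending_list.pop(0)  # stands in for unlinking the list head
            poll_stripe(payment)
        time.sleep(sleep_seconds)
```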
For the payments DB, you have chosen Spanner, Cockroach, Yugabyte... Do these DBs provide strong consistency and ACID, and are they relational? Quick question: do we need a relational DB here?
1) yes
2) I don't believe we have any relationships in the data, nonetheless I'm still in favor of using SQL barring a reason to do otherwise
@@jordanhasnolife5163 Got it, thanks.
Can you do a privacy/visibility controls system design?
Perhaps, how do you see this one being a challenge after we put everything in a strongly consistent table?
In a future video, could you do an RSS newsfeed aggregator? Maybe throw keyword search in there.
Oh man I'll have to look into this one, you may be aging yourself by asking for an RSS feed and I may be aging myself by saying I've never used one lol
I am wondering how you think the design would be different for something like PayPal (payments from account A to account B), maybe also allowing for scheduled payments?
I think it would basically be the same, you're putting all of the individual transactions in a log that is maintained by Paxos or some other consensus algorithm and then using derived data from there.
@@jordanhasnolife5163 Thanks! I am wondering what your take is on SAGA and TC/C as mentioned in Alex's book, compared with CDC, for updating both the payer's and receiver's account balances?
@@Anonymous-ym6st That's more or less exactly what I'd do here. I don't think there's much other choice.
Maybe I wouldn't say two phase commit or saga, but rather using a consensus algorithm to build out a distributed transaction log.
Jordan, can you also add GitHub sample code if possible...
Realistically, no - it's a lot of extra work to do that for what I consider to be minimal benefit to viewers. I think pseudocode is present where necessary, but if you begin generating your own sample code I'd be happy to amend it in my video descriptions!
Sorry about that
why are you so smart, my love Jason?
😙
Could you suggest a database that would match the consistency requirements? Or are we rolling our own?
I see Cassandra can be configured into a strong consistency mode?
I think Cassandra's "strong consistency" is probably quorum consistency. I'd look into spanner, cockroach, yugabyte, as it seems they lean towards using distributed consensus within a replication group.
First! Happy Saturday!