29: Amazon Payment Gateway | Systems Design Interview Questions With Ex-Google SWE

  • Published: 24 Nov 2024

Comments • 87

  • @tejasvenky5538
    @tejasvenky5538 4 months ago +6

    This is literally what I needed today. Cramming this playlist, hopefully the offer pulls up. I never comment on posts but u are the goat bro, the goat broski. If I get this offer I will send u some only fans money

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago +2

      Haha please take the only fans money and donate it to charity

  • @JLJConglomeration
    @JLJConglomeration 4 months ago +5

    the cache design that you mention with the doubly linked list and hashmap is basically the implementation of an LRU cache
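The structure this comment points at can be sketched in a few lines. This is an illustrative Python sketch (not code from the video): `OrderedDict` internally pairs a hashmap with a doubly linked list, giving O(1) lookup plus O(1) recency updates and eviction, which is exactly the hashmap-plus-DLL combination described.

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: hashmap for O(1) lookup, doubly linked
    list (inside OrderedDict) for O(1) recency tracking and eviction."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry
```

For the pending-payments use case in the video, keys would be idempotency keys and values the pending payment records, with eviction happening on confirmation rather than on capacity alone.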

  • @itsslo
    @itsslo 28 days ago +1

    Hey Jordan, instead of maintaining a cache of pending payments to handle the scenario where our webhook server goes down, would placing a message in a queue when the webhook is called be sufficient?
    The pros:
    - We don't need to configure CDC
    - We don't need to poll or determine the right parameters for it
    The cons:
    - We're depending on Stripe to call our webhook at least once for each payment
    At least for that con, we could have a cron job look through our pending payments table, say every hour.
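The two pieces this commenter proposes can be sketched as follows. This is a hypothetical illustration (function and variable names are invented, and the Stripe call is a stand-in): the webhook handler only enqueues, so a slow consumer never blocks Stripe's callback, and an hourly sweep covers payments whose webhook was never delivered.

```python
import queue

# Durable queue in a real system (e.g. a broker); in-process here for illustration.
webhook_events = queue.Queue()

def on_stripe_webhook(payload):
    """Webhook handler: enqueue immediately so the HTTP response is fast
    and the event survives a slow consumer. A worker drains the queue and
    updates the payments table."""
    webhook_events.put(payload)

def hourly_reconciliation(pending_keys, check_with_provider):
    """Fallback for webhooks that were never delivered: sweep rows still
    pending and ask the provider directly. check_with_provider stands in
    for a real API call keyed by idempotency key."""
    return {key: check_with_provider(key) for key in pending_keys}
```

The con noted above still applies: the sweep is itself a form of polling, which is why it ends up close to the cache-plus-poller design from the video.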

    • @jordanhasnolife5163
      @jordanhasnolife5163  24 days ago

      I think your solution works, but it basically boils down to my solution due to your cron job remark at the end there haha
      I hate polling too, alas sometimes it is inevitable

    • @bingqinghuang9318
      @bingqinghuang9318 15 days ago +1

      I’m not sure I understand your solution correctly. With it, in the scenario where the webhook is down, is the Stripe payment status completely lost? And if the pending payments table is the strongly consistent source-of-truth table, adding read load via a cron job sounds less optimal than querying a separate derived DB.

  • @rongrongmiao4638
    @rongrongmiao4638 29 days ago +1

    Reads do not necessarily have to interfere with writes. When payment status changes your DB can have a CDC event that triggers an email, or the Stripe webhook can have a callback action that sends an email when payment processing is done.

    • @jordanhasnolife5163
      @jordanhasnolife5163  24 days ago

      They would if I'm constantly doing a linear scan of the table to check for pending payments (unless we used snapshot isolation as opposed to two-phase locking, which in retrospect also would have worked nicely here).
      But even despite that, it's extra resources used by a table that you want to keep fast.

    • @RS7-123
      @RS7-123 6 days ago +1

      Maybe another option is to hit the replica for reading. Since we anyway look up orders that are still pending after some X minutes, the replica can be assumed to have become eventually consistent with the leader by that point

    • @jordanhasnolife5163
      @jordanhasnolife5163  6 days ago

      @RS7-123 that's fine too! Note that the replica is probably part of the strong consistency piece of things, so slowing it down could slow down our writes a bit

  • @mark-6572
    @mark-6572 14 days ago +1

    Great video Jordan! Any plans for a video on api gateways, jwts and identity providers?

    • @jordanhasnolife5163
      @jordanhasnolife5163  13 days ago

      Not tentatively, but I'll keep this one in the back of my mind!

  • @RolopIsHere
    @RolopIsHere 2 months ago +2

    "Well just go to Stripe", LOL I wish I could answer that in system design interviews... I was recently asked to "Design Datadog", "Design S3", and in another interview to design a "Load Balancer such as ELB"... I wanted to answer something similar.

    • @jordanhasnolife5163
      @jordanhasnolife5163  2 months ago

      Well, I say Stripe because a lot of the work there is just dealing with every credit card network's API

  • @yuanyizhang8228
    @yuanyizhang8228 4 months ago +1

    Thanks Jordan for your awesome video! Hope I can see the topic about 'design some meeting scheduler' thing one day~~

  • @thestarbahety
    @thestarbahety 4 months ago +3

    Can you make a video on designing Spotify?
    Func Requirements:
    - Ability to play any song. super low latency while playing any song.
    - User can create playlist, share playlist.
    - Follow other playlist, artist, get notified for any song updates by artist or in a playlist.

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago

      At least off the cuff I'll say
      1) CDNs and precaching when going down a playlist
      2) Use a database
      3) This feels like twitter
      You think there are any other unique pieces to it?

    • @thestarbahety
      @thestarbahety 4 months ago

      @jordanhasnolife5163 how about live streaming & podcasts? I believe this is similar to Zoom, but I'd like to know if there are any optimizations that can be done

  • @htm332
    @htm332 4 months ago +1

    The assertion behind the need for a derived Pending Transactions cache - that reads will hurt write throughput due to row locking - is not necessarily true if you're using a DB with MVCC (like Spanner or Cockroach). So I question the need for the separate cache.

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago

      The other reason is that you then need to run a query on disk to figure out all of the pending transactions. I'd rather just have them all precached, but agreed that if you're using snapshot isolation locking is unnecessary for such a read

    • @htm332
      @htm332 4 months ago

      @jordanhasnolife5163 you could add a local covering index to speed up the query and ensure consistency, which would slow down writes a bit, but per the original requirements that's not a problem. In any case, pending transactions would be a great topic to deep dive on in a real interview and discuss tradeoffs, so thanks for calling it out explicitly in your video
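One concrete way to make the "query the table for pending rows" path cheap, in the spirit of the index idea above, is a partial index that covers only rows still pending. A hypothetical sketch using SQLite (table and column names are invented for illustration; production systems like Spanner or Postgres have their own index syntax, and Postgres supports the same `WHERE` clause on indexes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE payments (
        idempotency_key TEXT PRIMARY KEY,
        amount_cents    INTEGER NOT NULL,
        status          TEXT NOT NULL  -- 'pending' | 'succeeded' | 'failed'
    )
""")
# Partial index: only pending rows are indexed, so the poller's scan walks
# a small structure instead of the whole payments table, and the index
# shrinks as payments resolve.
conn.execute("""
    CREATE INDEX idx_pending ON payments (idempotency_key)
    WHERE status = 'pending'
""")
conn.execute("INSERT INTO payments VALUES ('k1', 500, 'pending')")
conn.execute("INSERT INTO payments VALUES ('k2', 900, 'succeeded')")
pending = conn.execute(
    "SELECT idempotency_key FROM payments WHERE status = 'pending'"
).fetchall()
```

The tradeoff matches the comment: every status transition now also maintains the index, slightly slowing writes, in exchange for pending-row reads that no longer scan the table.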

  • @aforty1
    @aforty1 4 months ago +1

    Hey Jordan! Thanks as always for these awesome videos! I was wondering if you could do a quick video about how to actually structure and talk about these on an interview. Is there a structure to it that you do (or have seen), such as laying out requirements first, then doing some considerations, then diving into the design? Is it really not one size fits all? Either way could be interesting to see what tips you might have around this. Keep it up, you're incredibly helpful!

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago +2

      Here ya go m8
      ruclips.net/video/IY2EPjShgc4/видео.htmlsi=Xw4uwvd4iDBbpp_w
      You can always just ask your interviewer too: hey, is it ok if I start with x?

    • @aforty1
      @aforty1 4 months ago

      @jordanhasnolife5163 oh shit I missed that, thanks! 🙏🏼

  • @siddharthsingh7281
    @siddharthsingh7281 4 months ago +2

    Can you make a separate video elaborating on the Change Data Capture part? Like log-based, trigger-based, etc.

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago

      I don't really know how much there is to elaborate on here beyond what I've discussed in my concepts videos. I'd take a look at something like Debezium.

  • @nhancu3964
    @nhancu3964 1 month ago +1

    Deleting the pending payment from the database upon receiving an unrecognized result from Stripe potentially leads to inconsistency when your request is still in the network route and has not yet reached the Stripe server at the time the poller checks. After the record is deleted from the DB, the request reaches Stripe and modifies your bank balance. How do we solve this problem completely?

    • @jordanhasnolife5163
      @jordanhasnolife5163  24 days ago +1

      Can you give me a timestamp here? I never propose deleting a payment from the actual payments DB, just from the cache when it hears from either Stripe or the payments DB. The payments DB is the source of truth; the cache can be wrong.

  • @chawlagarima
    @chawlagarima 4 months ago +1

    Thanks a lot for another amazing video... I have a question: how does the payment reach the seller?

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago

      Well, I guess that's a detail for Tipalti, but Amazon probably makes batch payments to sellers every month via an ACH transfer

  • @uday3patel
    @uday3patel 4 months ago +2

    17:18 - should pending payments that are `not recognized` by Stripe at poll time really just be deleted from the payments table? This case might require special processing, since at this point the payment has a local DB status of `pending` but Stripe has not recognized it. What would be a possible solution for this inconsistency?

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago +1

      I don't really think there is any solution; network requests to Stripe can always fail. Do we want to delete the event? Maybe not, but then we may find ourselves doing a lot of polling after a while.

    • @RS7-123
      @RS7-123 6 days ago +1

      Why not retry the payment, assuming it never hit Stripe in the first place? Our intent is to clear the order and make some money, isn't it?

  • @jporritt
    @jporritt 4 months ago +1

    The other video I’d like to see: A distributed system for generating unique IDs, akin to Twitter Snowflake. Also with the functional requirement of how to allow people to bring along their own IDs.

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago

      This does feel somewhat similar to what we do in the payment gateway video, shard the key range, allow users to bring their own

  • @rationallearner
    @rationallearner 4 months ago +1

    Thanks for the video Jordan. How is polling going to work? Is there a cron job or a scheduler running every x minutes in Flink? Moreover, maintaining the doubly linked list in order of create timestamp would be log(n), right, since events reaching Flink could be out of order?

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago

      Yeah something like that, or you could just say something like on a new event, if we haven't polled in x amount of time, poll again.
      I would think that events reaching Flink should be coming in order on a timestamp per partition, so we could always just do a flink node per partition to maintain that invariant.
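The "on a new event, poll again if we haven't polled in x amount of time" idea from this reply can be sketched as a small throttle. This is an illustrative sketch (class and parameter names are invented, and `poll_fn` stands in for the real Stripe status check); an injected clock makes the throttle testable:

```python
import time

POLL_INTERVAL_SECONDS = 300  # assumed interval: poll the provider at most every 5 minutes

class PendingPaymentPoller:
    """Event-driven polling: each incoming CDC event updates local state,
    and a provider poll fires only if enough time has passed since the
    last one, so busy periods don't cause a poll per event."""

    def __init__(self, poll_fn, interval=POLL_INTERVAL_SECONDS, clock=time.monotonic):
        self.poll_fn = poll_fn      # callable that checks pending payments with the provider
        self.interval = interval
        self.clock = clock
        self.last_poll = float("-inf")

    def on_event(self, event):
        # ...update the in-memory pending-payments structure from `event` here...
        now = self.clock()
        if now - self.last_poll >= self.interval:
            self.last_poll = now
            self.poll_fn()
```

A timer-driven fallback would still be needed for the case where no events arrive at all, which is the loop-that-waits variant mentioned further down the thread.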

  • @harinimurali1180
    @harinimurali1180 3 months ago +1

    How does Kafka CDC capture pending payments if it's triggered only by database changes? If payments remain in a pending state without any status change, how will Kafka CDC detect them for Flink to process?

    • @jordanhasnolife5163
      @jordanhasnolife5163  3 months ago +1

      Well payments are added to the DB with a status of "pending"

  • @raihanulalamhridoy4714
    @raihanulalamhridoy4714 1 month ago +1

    Why do we need two Load Balancers? Couldn't we pass the request to the previous one?

    • @jordanhasnolife5163
      @jordanhasnolife5163  1 month ago +1

      It's just for the sake of the diagram, you can use the same load balancer for both

  • @NBetweenStations
    @NBetweenStations 4 months ago +1

    Thanks for the great video! Question about web hooks. So the Payment system is listening for web hook callbacks and the polling mechanism is only triggered when a pending payment hasn’t received a callback in a specified amount of time? Is that the idea?

  • @abhishekmiet
    @abhishekmiet 4 months ago +1

    What happens if Flink cache fails? I think we will somehow have to redrive the CDC stream to repopulate the new cache instance?

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago +3

      Please see the Flink concepts video. State is periodically checkpointed to S3

  • @huguesbouvier3821
    @huguesbouvier3821 4 months ago +1

    Thanks for the video! Why not just use ZooKeeper to give us a monotonically increasing u64 for the idempotency key? That way we are guaranteed not to have any conflicts, and a u64 should be enough until the end of time.

    • @lalasmith2137
      @lalasmith2137 4 months ago +1

      hey friend, can you please explain what u64 is? is it like a uuid?

    • @huguesbouvier3821
      @huguesbouvier3821 4 months ago +2

      @lalasmith2137 haha sorry, an unsigned 64-bit integer

    • @lalasmith2137
      @lalasmith2137 4 months ago +1

      @huguesbouvier3821 thank you for clarifying that, helped me understand your answer :)

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago +2

      1) Using a monotonically increasing sequence number implies that all writes must go through the same choke point (meaning you can't shard ZooKeeper, which is potentially fine if we really don't care about performance)
      2) We basically do this anyways, as our payments DB is basically using a consensus algorithm, making it effectively the same as ZooKeeper

  • @DivyanshRana265
    @DivyanshRana265 4 months ago +1

    I have a dumb question. Why would row read locks on the pending payments slow down the write throughput of the table, given that the writes' idempotency keys are different from the pending ones?

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago

      Great point, there probably wouldn't be many conflicts IRL, but the reads themselves would be quite expensive and would take resources away from the DB

  • @shangma9176
    @shangma9176 3 months ago +1

    I don't think you can pre-materialize the idempotency key. If a user clicks the pay button twice, the second pay request will ask your "pre-materialize key service" for a new idempotency key. Now the problem goes back to the original one: how can you generate an idempotency key for the request in the first place?

    • @jordanhasnolife5163
      @jordanhasnolife5163  3 months ago

      The idempotency key is generated on page load, not on user click. If they reload the page that's a different story.

    • @rubenlicio
      @rubenlicio 1 month ago

      Even if the client reloads, the key could be persisted on the client to avoid double payments. It can be stored with the checkout data the first time checkout is hit
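The page-load idempotency flow this thread settles on can be sketched server-side in a few lines. This is a hypothetical illustration (class and method names invented; `processed` would be a database table, not a dict, in a real system): the key is minted when the checkout page renders, so both halves of a double-click carry the same key and only one charge is recorded.

```python
import uuid

class CheckoutService:
    """Sketch of page-load idempotency: mint the key when rendering the
    checkout page, then deduplicate submissions on that key."""

    def __init__(self):
        self.processed = {}  # idempotency_key -> payment result (a DB table in reality)

    def render_checkout_page(self) -> str:
        # Key is generated on page load, not on click, and embedded in the page.
        return str(uuid.uuid4())

    def submit_payment(self, idempotency_key: str, amount_cents: int) -> dict:
        if idempotency_key in self.processed:
            # Duplicate click (or retry): return the original result, charge nothing.
            return self.processed[idempotency_key]
        result = {"charged": amount_cents}
        self.processed[idempotency_key] = result
        return result
```

Persisting the key client-side with the checkout data, as suggested above, extends the same guarantee across page reloads.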

  • @easward
    @easward 4 months ago +2

    Please make a video on designing AWS CloudWatch

  • @nikhilm9494
    @nikhilm9494 4 months ago +1

    One more banger system design video!

  • @shuozhang236
    @shuozhang236 4 months ago +1

    Thanks Jordan, I have been watching your SD videos each week. Two quick questions regarding your design:
    1. Could we use the Snowflake algorithm to generate an ID as the idempotency key?
    2. Was the Flink processing part of the payment service code? If so, for the fault-tolerance case where the payment service is down, how is that going to affect the Flink processing?
    Thanks

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago

      1) Not familiar with this method, feel free to send me a link to what it is
      2) Not sure what you mean by this question. Flink is just getting data from our payment db, and occasionally polling stripe to see the status of it, it is independent of any synchronous operation to do with the payment service.

    • @shuozhang236
      @shuozhang236 4 months ago

      @jordanhasnolife5163 thanks for replying
      1) en.wikipedia.org/wiki/Snowflake_ID
      2) Let me rephrase my question a bit: is the application code that generates the idempotency key and saves it to the payments DB one microservice and the Flink processing another, or are they clustered together as one payment service? If both processes are one service and the payment service goes down, the Flink processing will also halt, right?
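For reference, the Snowflake scheme linked above packs a timestamp, a machine id, and a per-millisecond sequence into one 64-bit integer, so each machine mints strictly increasing IDs without coordination. A minimal Python sketch of that layout (41 timestamp bits, 10 machine bits, 12 sequence bits; a sketch, not Twitter's implementation):

```python
import threading
import time

class SnowflakeGenerator:
    """Snowflake-style 64-bit ID: [41 bits ms since epoch | 10 bits machine | 12 bits seq].
    IDs from one machine are strictly increasing; machines never coordinate."""

    EPOCH_MS = 1_288_834_974_657  # Twitter's original custom epoch

    def __init__(self, machine_id: int):
        assert 0 <= machine_id < 1024  # must fit in 10 bits
        self.machine_id = machine_id
        self.last_ms = -1
        self.sequence = 0
        self.lock = threading.Lock()

    def next_id(self) -> int:
        with self.lock:
            now = int(time.time() * 1000) - self.EPOCH_MS
            if now == self.last_ms:
                self.sequence = (self.sequence + 1) & 0xFFF  # 12-bit sequence
                if self.sequence == 0:
                    # Sequence exhausted this millisecond: spin until the next one.
                    while now <= self.last_ms:
                        now = int(time.time() * 1000) - self.EPOCH_MS
            else:
                self.sequence = 0
            self.last_ms = now
            return (now << 22) | (self.machine_id << 12) | self.sequence
```

As an idempotency key this works, with the caveat Jordan raises elsewhere in the thread about sequence generation: here the choke point is per machine rather than global, at the cost of trusting machine clocks not to move backwards.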

  • @Keira77L-t3b
    @Keira77L-t3b 3 months ago +1

    Not sure if the tinyurl/hash-based approach is best suited for a payment idempotency key; usually an order id, or some combination of order id and uuid, would suffice, simpler and more efficient. wdyt?

  • @smithalan9487
    @smithalan9487 3 months ago +1

    How do you use a cron job to search the doubly linked list in Flink? Isn't the data stored in memory in Flink? In that case, will the cron job be inside Flink? Also, why use Flink instead of some in-memory database like Redis?

    • @jordanhasnolife5163
      @jordanhasnolife5163  3 months ago +1

      It's not a cron job, it's just a loop that waits a few seconds. I need more functionality than what Redis is able to offer me here.

    • @anshulkatare
      @anshulkatare 2 months ago

      @jordanhasnolife5163 A loop that waits a few seconds sounds like a cron job.

  • @anshulkatare
    @anshulkatare 2 months ago +1

    For the payments DB, you have chosen Spanner, Cockroach, Yugabyte... Do these DBs provide strong consistency and ACID, and are they relational? Quick question: do we need a relational DB here?

    • @jordanhasnolife5163
      @jordanhasnolife5163  2 months ago

      1) yes
      2) I don't believe we have any relationships in the data, nonetheless I'm still in favor of using SQL barring a reason to do otherwise

    • @anshulkatare
      @anshulkatare 2 months ago

      @jordanhasnolife5163 Got it, thanks.

  • @Kevin-jt4oz
    @Kevin-jt4oz 4 months ago +1

    Can you do a privacy/visibility controls system design?

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago

      Perhaps, how do you see this one being a challenge after we put everything in a strongly consistent table?

  • @jporritt
    @jporritt 4 months ago +1

    In a future video, could you do an RSS newsfeed aggregator? Maybe throw keyword search in there.

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago

      Oh man I'll have to look into this one, you may be aging yourself by asking for an RSS feed and I may be aging myself by saying I've never used one lol

  • @Anonymous-ym6st
    @Anonymous-ym6st 2 months ago +1

    I am wondering how you think the design would differ if it were "design a PayPal" (payments from account A to account B, maybe also allowing scheduled payments)?

    • @jordanhasnolife5163
      @jordanhasnolife5163  2 months ago +1

      I think it would basically be the same, you're putting all of the individual transactions in a log that is maintained by Paxos or some other consensus algorithm and then using derived data from there.

    • @Anonymous-ym6st
      @Anonymous-ym6st 1 month ago +1

      @jordanhasnolife5163 Thanks! I am wondering what your take is on SAGA and TC/C as mentioned in Alex's book, compared with CDC, for updating both the payer's and receiver's account values?

    • @jordanhasnolife5163
      @jordanhasnolife5163  1 month ago

      @Anonymous-ym6st That's more or less exactly what I'd do here. I don't think there's much other choice.
      Maybe I wouldn't say two-phase commit or saga, but rather using a consensus algorithm to build out a distributed transaction log.

  • @mani8586
    @mani8586 3 months ago +1

    Jordan, can you also add GitHub sample code if possible?

    • @jordanhasnolife5163
      @jordanhasnolife5163  3 months ago +1

      Realistically, no - it's a lot of extra work to do that for what I consider to be minimal benefit to viewers. I think pseudocode is present where necessary, but if you begin generating your own sample code I'd be happy to amend it in my video descriptions!
      Sorry about that

  • @tomtran6936
    @tomtran6936 4 months ago +1

    why are you so smart, my love Jason?

  • @jporritt
    @jporritt 4 months ago +1

    Could you suggest a database that would match the consistency requirements? Or are we rolling our own?

    • @jporritt
      @jporritt 4 months ago +1

      I see Cassandra can be configured into a strong consistency mode?

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago

      I think Cassandra's "strong consistency" is probably quorum consistency. I'd look into Spanner, Cockroach, Yugabyte, as it seems they lean towards using distributed consensus within a replication group.

  • @kokoromarudi7717
    @kokoromarudi7717 4 months ago +3

    First! Happy Saturday!