MySQL is not designed to handle 250PB of data. I believe storing user information in a cache and using an LSM Tree + SSTable family of database will be better choice. We can join from cache. What do you think? Searches can be performed on the fetched user ids in parallel to filter out.
Isn't the count 500 × 500 = 250K? That's 10 times more. Or are we assuming only ~10% of friends will be mutual, or something like that?
Oof yeah good catch. Point is, fan-out probably won't work here.
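To make the "good catch" concrete, here's the back-of-envelope in Python (the ~500 friends/user figure is the assumption from the thread, not a hard number):

```python
# Back-of-envelope: why naive fan-out explodes (assumes ~500 friends per user).
FRIENDS_PER_USER = 500

# When A and B connect, a naive fan-out rechecks every (friend-of-A, friend-of-B)
# pair for a new mutual connection:
pairwise_updates = FRIENDS_PER_USER * FRIENDS_PER_USER
print(pairwise_updates)  # 250000 derived rows to touch per single new connection
```

That's 250K writes per accepted connection request, which is why precomputing via fan-out on write probably doesn't scale here.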
When adding mutual connections from the Flink nodes. How is it known that the new mutual connections are not already direct connections?
e.g. For 10: 3, 4, 15 you are creating 3,15 and 4,15. What if 3,15 and/or 4,15 are direct connections? These connections could also be on a different Flink node/partition.
Fair point - you can always just hit the database first here. We will have a connections table sharded by user ID, so we know where to look.
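A minimal sketch of that check, using an in-memory dict as a stand-in for the connections table (the schema and `get_direct_connections` lookup are assumptions, not anything from the video): before emitting a derived mutual edge, look up whether the pair is already a direct connection.

```python
# Stand-in for the connections table, sharded by user ID in the real system.
direct = {
    10: {3, 4, 15},
    3: {10, 15},   # note: 3 and 15 are already directly connected
    4: {10},
    15: {10, 3},
}

def get_direct_connections(user_id: int) -> set[int]:
    # In the real design this hits the shard that owns user_id.
    return direct.get(user_id, set())

def emit_mutual_edges(user_id: int) -> list[tuple[int, int]]:
    """For each pair of user_id's friends, emit a mutual edge only if
    that pair is not already a direct connection."""
    friends = sorted(get_direct_connections(user_id))
    edges = []
    for i, a in enumerate(friends):
        for b in friends[i + 1:]:
            if b not in get_direct_connections(a):  # skip direct pairs
                edges.append((a, b))
    return edges

print(emit_mutual_edges(10))  # [(3, 4), (4, 15)] -- (3, 15) skipped, already direct
```

This matches the thread's example: for user 10 with friends {3, 4, 15}, the (3, 15) pair is suppressed because it's already a direct connection.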
Same doubt
Hi Jordan, loving this video. A couple of quick questions: 1. For the adding a connection workflow, is it supposed to be real-time processing or batch? 2. Let's say B accepted A's invite to connect and A wants to view the change right after it, how can we ensure that? 3. Does it make sense if we put the mutual connection data in memory cache servers and have a graph db to store the raw connections so that we can rebuild the cache if any node fails? Any idea or discussion is appreciated. Thanks!
1) Realtime
2) You could first write to a table before using CDC to sink to Kafka, and then read the first-degree connection from that table
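A tiny sketch of that read-your-writes path (all names are assumptions for illustration): the accept goes to the connections table first, so A can immediately read the new first-degree edge back, while CDC ships the change to Kafka asynchronously to rebuild the derived mutual-connections data.

```python
# Source-of-truth table and a stand-in CDC log (consumed later by Flink).
connections_table: dict[int, set[int]] = {}
cdc_log: list[tuple[int, int]] = []

def accept_connection(a: int, b: int) -> None:
    connections_table.setdefault(a, set()).add(b)
    connections_table.setdefault(b, set()).add(a)
    cdc_log.append((a, b))  # asynchronous in reality; mutual data lags behind

accept_connection(1, 2)
assert 2 in connections_table[1]  # A sees the new edge immediately
```

The first-degree view is consistent right away; only the derived mutual-connection data is eventually consistent.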
Awesome , great video🎉
Thanks for making this video Jordan. I have two questions: a) You mention a "Mutual Cache table", but it appears you are using a SQL db for that. Doesn't cache mean keeping things in memory? b) It is mentioned that we need very fast reads ("fast as humanly possible"), shouldn't that engender use of MongoDB or something like that instead of a SQL db?
Cache doesn't inherently mean memory; it just means having the result of a computation easily accessible. Why are Mongo reads faster than SQL?
Keep doing it, this helps us out so much
You are great bro
How mutually awesome
Thanks for the great content again!
Question: In your final diagram, the middle flow (new connection service) shows two layers of Kafka with a stateless consumer in between. Why do we need both layers? Can't the "new connection service(s)" directly push to the corresponding Kafka shard and avoid the extra Kafka layer and the stateless consumer?
If we want atomicity of both messages for a connection (A connecting with B, and B connecting with A), we can basically either two-phase commit to both of those Kafka queues, or push to one Kafka queue and handle any message replay on the back end.
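A sketch of the "one queue" option (the event shape and `partition_for` are assumptions, not a real Kafka API): publish a single event describing both directions of the connection, keyed canonically so (a, b) and (b, a) always land on the same partition. The consumer then fans it out downstream, replaying the whole event on failure instead of coordinating two queues with 2PC.

```python
NUM_PARTITIONS = 16

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    # Stand-in for Kafka's key-based partitioner.
    return hash(key) % num_partitions

def connection_event(a: int, b: int) -> dict:
    lo, hi = sorted((a, b))  # canonical order: same key for (a,b) and (b,a)
    return {"key": f"{lo}:{hi}", "edges": [(lo, hi), (hi, lo)]}

evt = connection_event(42, 7)
# Both directed edges ride in one message, so the write is atomic by construction:
assert partition_for(evt["key"]) == partition_for(connection_event(7, 42)["key"])
print(evt["edges"])  # [(7, 42), (42, 7)]
```

Since both directed edges are in one message on one partition, durability of the pair reduces to durability of a single produce.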
@@jordanhasnolife5163 Makes sense. Won't the first Kafka layer and its stateless consumer be a SPoF (especially the stateless consumer, as Kafka is supposed to be highly fault-tolerant)? And if we replicate them to avoid the SPoF, won't we need 2PC? However, that's solvable if we use TMR (and not DMR).
@@PoRBvG I don't know what TMR and DMR are. Inevitably, you always need some amount of consensus or synchronous replication when attempting to make sure messages are durable. If you're willing to lose messages, you could very much have an asynchronously replicated Kafka queue and consumers on each of those replicas of the queue.
@@jordanhasnolife5163 en.wikipedia.org/wiki/Dual_modular_redundancy. Both DMR and TMR help with the SPoF.
Does profile update mean updating the latest job or education? If yes, why do we need to update the mutual connection DB for that?
Yes - because the data is denormalized in our mutual connections database
I have a question on brokers and message queues.
Do I set up the broker on one server and then set up the consumers on other servers?
Let's say I have a mail server and I need to classify the emails and, after classification, send them to the right system.
Where do I host the broker and the AI classification model?
I mean you can technically set them up wherever, but ideally in different containers, yeah
W as always
How to do full text search?
Can you provide link to that video?
You can look up any of mine regarding elasticsearch.
Thank you :)!
Hey, I have some questions if anyone can please help me :)
1) When Jordan says shard the database by userID, does that mean shard it by the hash of the userID (for consistent hashing)?
2) Sometimes I see the term "partitioned by" instead of "sharded by"; are those the same?
1) yes
2) I think so, others seem to disagree
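To illustrate the "yes" on question 1, here's a toy consistent-hash ring (everything here, the vnode count, shard names, and md5 choice, is an assumption for illustration): hash the userID, then map the hash onto a ring of shards, so adding or removing a shard only moves a small slice of keys.

```python
import bisect
import hashlib

def stable_hash(value: str) -> int:
    # A stable hash across processes (unlike Python's built-in hash() for str).
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, shards: list[str], vnodes: int = 100):
        # Each shard gets many virtual nodes on the ring for evenness.
        self._ring = sorted(
            (stable_hash(f"{s}#{i}"), s) for s in shards for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    def shard_for(self, user_id: str) -> str:
        # Walk clockwise to the first vnode at or after the key's hash.
        idx = bisect.bisect(self._keys, stable_hash(user_id)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["shard-a", "shard-b", "shard-c"])
print(ring.shard_for("user:12345"))  # deterministic: same ID always maps to the same shard
```

The point of hashing first is even distribution: raw userIDs are often sequential, so range-based sharding would hotspot the newest shard.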
@@jordanhasnolife5163 thank you so much for taking the time to answer :) also, can't thank you enough for all the knowledge i gained since finding your channel
Watched.