You come up with things that no one else makes.
You are awesome Piyush.
Hey you are doing great work! A request- Can you please continue such tutorials and also teach about scalability, microservices, chat servers with rooms, video calls or deployment on docker-k8s etc.
Like what software engineering looks like irl. People on yt just doing nextjs stuff and I can't understand anything...
Amazing tutorial, helping us to do better engineering. Industry level standards !!
Thanks Piyush, I was waiting for this video. I love your scalability and system design videos.
Finally, completed this project and gained a lot of knowledge Thank you Piyush Sir ❤️👍🎉
Do provide a GitHub link.
Fresh, unique stuff, no one is teaching this on YouTube.
Also, can you teach more about Turborepo in detail, like testing, linting, etc. in a Turborepo?
Please make a full video about rabbitMQ🙏
Man! So much of learning in these 2 videos. Thanks!
Just wow 🤩 🤩 . I have learned something new that no one teaches us. Highly appreciable work. Thank you . 🙏
You are the best, Piyush bhaiya ❤
That is something I was planning to build and I had a lot of confusion; now everything is clear. Thank you, brother.
Love you brother ❤❤❤,
And my request is: please make a series on a microservices project, showing how to build a project using this architecture.
Man, I love how professionally you do your work ❤
Really something awesome. Practically answering all the system design questions.
Better than paid course ❤
Hi, great work Piyush. Can you please create a video on how we can deploy a Turborepo project, like the current scalable realtime chat app, to servers (Vercel)?
I like your teaching style. ❤
Thanks man. Learned a lot.
Love you ❤❤❤
Great video, please continue this series.
Piyush, please make an introduction video on what Turborepo is; I'm confused about it.
Very helpful videos Piyush.
Just have one doubt: how do we retrieve data when the user refreshes the page and we have to fetch the last few messages? We can't query the DB in that case.
awesome course and thank you so much.
Make more awesome, valuable content on monorepo architecture in Node.js.
God bless you sir and thank you once again.
Very nice learning ❤🎉 It will definitely have an impact on the community.
Bro is literally creating his own empire in backend mastery.
Great content !! Keep sharing your experience ❤
Hi Piyush. Thanks for the amazing video!!!
Just one question: couldn't we use Kafka directly as a pub/sub instead of using Redis separately, where all the servers and the processing server (running write queries against Postgres) subscribe to the 'MESSAGES' Kafka topic?
Please make a video on MVC too, bro.
Piyush, you are inserting data into the database one by one, not in bulk, right?
Awesome content, brother. You can further extend this project ❤
Bro, one request: please also make a video on pagination and infinite scroll; this topic is asked a lot in React.js and Node.js interviews. Also explain which one to use when. Please make it in-depth, bro.
Sir, please make a backend project using microservices.
Why do you use both Redis and Kafka? Can we use Kafka only?
I was curious whether Kafka could have replaced Redis here. I am not sure Redis is required if we are using Kafka. Please let me know your thoughts.
top notch content
You earlier said that the consumer is a separate Node.js server, but you defined the consumer in the primary server itself. Why? Is that for the sake of simplicity? If so, how will we get the same Prisma instance if we had a standalone consumer server?
You can just create a new Prisma instance, and if you are using Turborepo you can just import it from a shared package.
@@opsingh861 A new instance would probably lead to a new connection, I guess, and probably new migrations... Yeah, Turborepo might be a better option.
You are making a new entry in the database every time a new message is produced, and as we know, databases have low throughput. So can we run a consumer (or a second consumer) that consumes the data at a certain interval and stores the produced data in the DB?
Keep posting such content please
Brother, please also do the deployment; it causes a lot of problems otherwise.
Love it, bro.
This is a beneficial video, but I didn't like the ending. With the try/catch, if the DB crashes or something goes wrong with it, we pause for 1 minute and then restart from the beginning. Can we do something else that would be better for the DB?
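One alternative to a fixed one-minute pause (a sketch of my own, not from the video) is retrying the failed write with exponential backoff, so brief DB hiccups recover quickly while a real outage still backs off. The helper name `writeWithBackoff` and the delay values are illustrative assumptions:

```javascript
// Retry a flaky async operation (e.g. a DB insert) with exponential backoff.
// The function name and delay values are assumptions for illustration.
async function writeWithBackoff(writeFn, { retries = 5, baseDelayMs = 100 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await writeFn(); // success: return the write's result
    } catch (err) {
      if (attempt === retries) throw err; // out of retries: surface the error
      const delay = baseDelayMs * 2 ** attempt; // 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

With kafkajs specifically, the consumer could also be paused on failure and resumed after the backoff, so unprocessed messages simply stay in the topic.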
please make more videos for this as continuation !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
awesome value
You Got New Subscriber
Why are we using Redis along with Kafka? Can't we simply use Kafka's pub/sub for the two servers to communicate, instead of Redis? Can someone please explain the advantages or trade-offs of doing so?
I'll give an example based on Google Pub/Sub or a storage queue, which I use in place of Kafka. The reason is that if one consumer takes a message and ACKs it, that message is gone from the topic, and other instances with the same subscription won't get it. Of course, we can create a separate subscription for each instance, but that is a manual process, unlike Redis pub/sub.
Where is the video Piyush sir mentioned, i.e. the first part of this video?
Just one question: why are you using cloud services for Postgres and Kafka? Isn't using Docker containers locally free and less time-consuming as well?
amazing video
Hey @piyushgargdev,
Does Kafka's consumer receive messages incrementally, or is all the data delivered each time? If the latter, how can we handle only the incremental data and not the whole set?
Awesome video ❤🎉
If it is a group chat, then we have to store additional data in Postgres, like the room ID and all the users of that group.
When the messages are consumed, shouldn't they be deleted from the topics in Kafka? But they are still present there. Is it supposed to be like this?
One thing I didn't understand is why you ran the consumer function in the init function of the index.ts file. I couldn't understand the logic behind it. Everything else is top-notch.
One more like... Hey mate, I have a query: how does Kafka know which consumer it needs to send a message to? I did see your Kafka video but I'm struggling to understand this. And how can I build the same for a mobile app?
First comment 😁
I had one doubt, please help anyone: where is the complete frontend and deployment part of this project?
Can you show the deployment process?
👏👏
Can I run two databases with Prisma in the same project, like PostgreSQL and MySQL?
Can you help us learn how to deploy monorepo applications?
Make a video on PostgreSQL.
Hi Piyush and everyone, I have a doubt. When there are multiple servers, each of them will be consuming messages from Kafka and writing to Postgres, thereby creating as many entries in the DB per message as there are servers. Is that desired?
Spin up one more server on a different port and send a message: there will be two entries for that message in the DB.
Yes, you are correct. If the logic for consuming messages and writing to PostgreSQL is placed directly in the message-receiving event, it can lead to duplicated message entries. In my implementation I've used Redis for inter-server communication; I have not used Kafka yet. When a message is sent (triggered by the "send" event), I publish it to the "MESSAGES" channel in Redis, and on the "receive" event I broadcast the message to all connected clients.
Regarding the storage of messages in PostgreSQL, I've introduced a global array named "messageBatch". When a message is sent ("send" event), I push it into this array. The important aspect is using setInterval to periodically process the array (take a copy of messageBatch and empty it so it can collect new messages), writing its contents to PostgreSQL. The data is stored successfully.
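The batching approach described above can be sketched roughly like this. The names (`createBatcher`, `persistMany`) are illustrative, and the bulk write is stubbed out; in the real app it would be something like a Prisma bulk insert:

```javascript
// Accumulate messages in memory and flush them to the DB on an interval,
// as the comment describes. `persistMany` stands in for a bulk insert and
// is an assumption, not code from the video.
function createBatcher(persistMany, intervalMs = 5000) {
  let messageBatch = [];
  const timer = setInterval(async () => {
    if (messageBatch.length === 0) return; // nothing to flush this tick
    const toWrite = messageBatch; // take the current batch...
    messageBatch = []; // ...and reset so new messages keep accumulating
    await persistMany(toWrite); // one bulk write instead of N single inserts
  }, intervalMs);
  return {
    push: (msg) => messageBatch.push(msg),
    stop: () => clearInterval(timer),
  };
}
```

One caveat with this design: messages sitting in the in-memory array are lost if the server crashes before the next flush, which is one reason a durable log like Kafka is attractive for this step.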
This is a valid issue. It won't occur if we put the produceMessageForKafka logic right after publishing to Redis, instead of inside the subscribe handler (which runs on every server).
A doubt: you are able to receive messages in Kafka at high velocity because Kafka is built for that, but when you insert into the DB for eachMessage, how does that make any difference? At high message velocity the eachMessage handler runs one insert query per message, so if you receive 100,000 messages in a 1-2 second interval, your DB gets 100,000 insert operations, which will bring it down. And if that happens, what is the benefit of using Kafka? I understand there will be no downtime because Kafka will still be active, but there should be something that reduces the insert operations on the DB.
Actually, the consumer should be an altogether separate microservice. It would consume the messages in batches and do batch insertion. Say we configured the DB to support 10k writes per second: we'd consume up to 10k messages at a time and insert them into the DB in one go. This is known as async processing.
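The batch-insertion idea can be sketched as chunking the consumed messages to stay under the DB's write budget. The helper names and the batch size are assumptions; `bulkInsert` stands in for something like a Prisma bulk write:

```javascript
// Split an array into fixed-size chunks.
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Insert consumed messages in batches so the DB sees a few large writes
// instead of one write per message. `bulkInsert` is a stand-in for a real
// bulk-insert call; the 10000 default mirrors the 10k figure above.
async function insertInBatches(messages, bulkInsert, batchSize = 10000) {
  for (const batch of chunk(messages, batchSize)) {
    await bulkInsert(batch); // one write query per batch
  }
}
```

With kafkajs, a similar effect can be had by consuming with its batch API rather than per-message, then bulk-inserting each batch.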
please make the same videos on python.
I have one question. Since you introduced Kafka into the project, couldn't we remove Redis? Because Redis was being used for pub/sub, which Kafka can do as well.
You mean the servers can subscribe to Kafka topics directly?
Redis supports push-based delivery: messages published to Redis are delivered to subscribers automatically and immediately. Kafka supports pull-based delivery: messages published to Kafka are never pushed directly to consumers; consumers subscribe to topics and ask for messages when they are ready to process them.
Yes, one might think of using only Kafka or only Redis, but here we need both.
Here we have two requirements:
1. Inter-server communication.
A message sent by user1 on server1 should be received by all the users on the other servers. Here we can use the Redis pub/sub model: the Redis publisher publishes the message to the channel "MESSAGES", and all Redis subscribers of this channel receive it, including the server that sent the message. Thus inter-server communication is achieved.
If we used Kafka in this case, the Kafka producer would produce the message to the topic "MESSAGES", but all the Kafka consumers (one per server) would belong to the same consumer group, because they have the same groupId. Hence only one consumer would receive the message on the "MESSAGES" topic, and the other Kafka consumers (servers) would not receive it.
2. Storage of messages in the database.
Here we can use Kafka: the Kafka producer produces the message to the topic "MESSAGES", and only one Kafka consumer of this topic receives it. This consumer stores it in the database. As said earlier, all the consumers here have the same groupId, so only one of them can receive each message.
If we used Redis here, all the Redis subscribers would receive the messages and store them in the database, resulting in duplicate messages.
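The delivery-semantics difference described above can be illustrated with a tiny in-memory simulation. This models the semantics only (real Redis and Kafka clients work very differently), and all the names are my own:

```javascript
// Pub/sub fan-out (Redis-style): every subscriber receives every message.
function fanOut(message, subscribers) {
  return subscribers.map((subscriber) => ({ subscriber, message }));
}

// Consumer group (Kafka-style, same groupId): each message is delivered to
// exactly one consumer in the group; here we model the assignment as a
// simple round-robin over the message index.
function consumerGroupDeliver(messages, consumers) {
  return messages.map((message, i) => ({
    consumer: consumers[i % consumers.length],
    message,
  }));
}
```

So with two servers, `fanOut` gives each server a copy of the message (good for broadcasting to sockets, bad for DB writes), while `consumerGroupDeliver` hands each message to a single consumer (good for DB writes, bad for broadcasting), which is why the two systems end up covering different requirements.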
Here, all the server instances subscribe to a Redis channel for incoming messages. I think we could simply remove Redis and make every server long-poll Kafka for messages.
@@catchroniclesbyanik Yes, because Kafka also provides a pub/sub mechanism. I think Piyush bhai made the first video just to solve the basic problem, and this one is for full scalability.
Kafka itself has a pub/sub model. Rather than passing data through two places (Redis and Kafka), can we create a data-aggregation function that caters to the user messaging service, working alongside yours for handling the DB update queries?
We should use either Kafka or Redis, right? Not both. @rohitpandey4411
🙏👍
Can I do this with Mongo?
Can anyone give me the previous video link?
When the DB is down or unable to insert a message, will the consumer resume from that message or from the next one?
From that message, because the message is still stored inside Kafka.
How do you handle real-time notifications in Vue and Node, like how FB handles its post notifications? For example, I have an assignment management system and I'm logged in as admin; when someone uploads/sends a new assignment, the notification should arrive in real time and show in a toast. How is that possible?
love you.
Can anyone reply: can I make a chat app using Java networking concepts? Is it possible? Please reply.
Yes. Explore about Netty.
Didn't enjoy it, bro.
Video Title In English... Video audio in Hindi... No offense... but Bruh... What are you doing?
Is there any way we can create virtual load to test our application?
Then I can die peacefully 🎉