Best explanation I have come across after going through approximately 10 other Kafka videos 🎉🎉😊
Thank you so much 🙂
Really nice comparison of RabbitMQ and Kafka at the end. One can easily remember that RabbitMQ is push-based, which helps recall all its mechanisms and benefits.
Glad you liked it !
Nice explanation; though basic, it's good for newbies.
Great explanation. nice presentation. 👏
Glad you liked it!
Really the best explanation
Glad it was helpful!
Explained very well. Thank you!
Glad you liked it !
Awesome content and great explanation. I was searching for an IoT back-end architecture and landed on your video. It would be great if you could do a series on an IoT track-and-trace back end and any simple application like current position and geofencing.
Thank you so much for appreciating the effort. Input noted for starting a series on IoT.
@@vkscoding There are many videos on Google IoT, IBM Watson, Cisco, and AWS IoT, but hardly anything where solutions are independent of such platforms. I haven't seen content where the middleware is explained: the reverse proxy for incoming connections, traffic load balancing, a cached database for live updates of the IoT devices, and the final permanent storage/database for report retrieval. Also the interconnect between these services, and how other microservices are deployed for redundancy and interconnected. I am a flight engineer by profession but love technology and its applications. Just jamming with you for some cool stuff that would be worth sharing. Anyway, subscribed and notifications added. Cool stuff here...😇😇
Great explanation
Glad it was helpful!
Keep up the good work.
Thank you so much for appreciating my effort !
Thank you.
Glad it was helpful !!
Excellent
Hi,
If I want very low latency, which would you suggest: RabbitMQ or Kafka? How many vCPUs and how much RAM are necessary for the broker?
Apache Kafka is best known for its high throughput, whereas RabbitMQ is best suited for low-latency message delivery and complex routing.
Recommendations for Kafka:
Kafka broker node: eight cores, 64 GB to 128 GB of RAM, two or more 8 TB SAS/SSD disks, and a 10 GbE NIC.
Minimum of three Kafka broker nodes.
With a minimum of three nodes in your cluster, you can expect about 225 MB/sec of data transfer.
If you need throughput of 50 MB/sec and thousands of events per second:
Nodes: 1 or 2
CPU: 8 or more cores per node, although more is better
Disk: 6 or more disks per node (SSD or spinning)
RAM: 2 GB of memory per node
Network card: 1 GbE NICs
If you need throughput of 100 MB/sec and tens of thousands of events per second:
Nodes: 3 or 4
CPU: 16 or more cores per node, although more is better
Disk: 6 or more disks per node (SSD or spinning)
RAM: 2 GB of memory per node
Network card: 1 GbE NICs
If you need throughput of 200 MB/sec and hundreds of thousands of events per second:
Nodes: 5 to 7
CPU: 24 or more cores per node (effective CPUs)
Disk: 12 or more disks per node (SSD or spinning)
RAM: 4 GB of memory per node
Network card: 10 GbE NICs
If you need throughput of 400 to 500 MB/sec and hundreds of thousands of events per second:
Nodes: 7 to 10
CPU: 24 or more cores per node (effective CPUs)
Disk: 12 or more disks per node (SSD or spinning)
RAM: 6 GB of memory per node
Network card: 10 GbE NICs
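To make the tiers above easier to apply, here is a minimal sketch of a helper that maps a target aggregate throughput to the sizing guidelines quoted in this answer. The function name and the dict fields are made up for illustration; the numbers come straight from the tiers above, and real sizing should still be validated with load testing.

```python
# Hypothetical helper: look up a rough cluster spec for a target throughput,
# following the sizing tiers quoted above (not an official Kafka tool).

def recommend_cluster(throughput_mb_per_sec):
    """Return a rough sizing tier for a target aggregate throughput (MB/sec)."""
    tiers = [
        # (upper bound MB/sec, spec per the guidelines above)
        (50,  dict(nodes="1-2",  cores=8,  disks=6,  ram_gb=2, nic="1 GbE")),
        (100, dict(nodes="3-4",  cores=16, disks=6,  ram_gb=2, nic="1 GbE")),
        (200, dict(nodes="5-7",  cores=24, disks=12, ram_gb=4, nic="10 GbE")),
        (500, dict(nodes="7-10", cores=24, disks=12, ram_gb=6, nic="10 GbE")),
    ]
    for upper_bound, spec in tiers:
        if throughput_mb_per_sec <= upper_bound:
            return spec
    raise ValueError("beyond the quoted tiers; size empirically with load tests")

print(recommend_cluster(150)["nodes"])  # 150 MB/sec falls in the 200 MB/sec tier: 5-7
```

Treat this purely as a mnemonic for the table; actual requirements depend heavily on message size, replication factor, and retention settings.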
@@vkscoding Thank you for your detailed response bro.
Wow
🙏
I have 2 questions about RabbitMQ:
1) How do I handle the case where some messages fail consumption so many times that they cause a bottleneck?
2) How do I automatically scale the consumers when there are many messages in the queue?
Could you please help me clarify this?
1) How do I handle the case where some messages fail consumption so many times that they cause a bottleneck?
Ans: We have to configure a maximum retry count for messages, and once the retries are exhausted, route those messages to a dead-letter queue. We must configure a dead-letter exchange and dead-letter queue to deal with dead messages (TTL-expired messages, messages dropped from the queue due to the queue length limit, and messages whose retries failed).
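As a concrete sketch of the dead-lettering setup described above: RabbitMQ enables it through per-queue arguments (`x-dead-letter-exchange`, `x-message-ttl`, `x-max-length`). The helper and all names below (`dlx.orders`, the TTL and length values) are made-up examples, not from the video; only the argument keys are real RabbitMQ queue arguments.

```python
# Hypothetical helper that builds the RabbitMQ queue arguments enabling
# dead-lettering. Expired or rejected messages, and (depending on the queue's
# overflow setting) messages dropped by the length limit, get re-routed to the
# configured dead-letter exchange instead of being lost.

def dead_letter_arguments(dlx="dlx.orders", ttl_ms=60_000, max_length=10_000):
    """Queue arguments for dead-lettering (example values, tune per workload)."""
    return {
        "x-dead-letter-exchange": dlx,   # where dead messages are re-routed
        "x-message-ttl": ttl_ms,         # messages expire after this many ms
        "x-max-length": max_length,      # cap on queue length
    }

# With a live broker and the pika client you would then declare the queue,
# roughly like this (left commented so the snippet runs standalone):
# channel.queue_declare(queue="orders", durable=True,
#                       arguments=dead_letter_arguments())

print(dead_letter_arguments()["x-dead-letter-exchange"])
```

Retry counting itself is usually done in the consumer (e.g. by inspecting the `x-death` header RabbitMQ adds to dead-lettered messages) since the broker does not track a retry count natively.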
2) How do I automatically scale the consumers when there are many messages in the queue?
Ans: We can subscribe multiple consumers to one queue, and we can bind multiple queues to one exchange as well.
@@vkscoding But the queue size could be unpredictable. I don't know if there is any way to scale the number of consumers automatically, with Docker containers or something like that? Btw, thanks for your solution to the first question.
Regarding consumer auto-scaling for RabbitMQ on Kubernetes, you can try this blog: ryanbaker.io/2019-10-07-scaling-rabbitmq-on-k8s/
@@vkscoding many thanks
be regular
Glad you liked the video! Content will be updated on a weekly basis; will try to increase the frequency :-)
It's good, and how can I reach you for mentorship?
I am glad you liked it. You can Connect via email.