My mind is blown by how well you are thinking and translating it by typing it beautifully into python code. Thank you.
Yes. Please keep doing more vids!
You're very welcome!
Yes, exactly - there's nothing more educational and engaging than watching another programmer iterate over ideas to come up with working code.
Interested in an episode on "From Kafka to Kibana".
Truly incredibly explained. I would love to know where you learned how to think/explain/communicate like that.
Aight epic this dude has a podcast - Developer Voices - time to go monkey mode on those.
@@arisweedler4703 Thanks! And thanks for finding the podcast - I hope you're enjoying it. There are a couple more coding videos on that channel too. 🙂
How did I learn to do this? Interesting question. I think giving a lot of conference talks has certainly helped, as has years of refactoring code - they both push you to think about how you'd explain the code to someone else. 🙂
Thank you for this 🙏. This is a wonderful thing. I'm really enjoying this series.
Glad you enjoy it!
I'd always tried to learn Kafka but was worried about the libraries. QuixStreams is something else, though - it's super easy to get started with.
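For anyone on the fence: a basic consume-transform-produce app is only a few lines. A rough sketch (the broker address and topic names are placeholders for your own setup):

```python
# Minimal QuixStreams app sketch - broker address and topic
# names are placeholders, adjust for your environment.
from quixstreams import Application

app = Application(
    broker_address="localhost:9092",
    consumer_group="demo-group",
)

# Input and output topics with JSON (de)serialization.
input_topic = app.topic("input-events", value_deserializer="json")
output_topic = app.topic("output-events", value_serializer="json")

# A streaming dataframe: transform each record and publish it.
sdf = app.dataframe(input_topic)
sdf = sdf.apply(lambda row: {**row, "processed": True})
sdf = sdf.to_topic(output_topic)

if __name__ == "__main__":
    app.run(sdf)
```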
The day QuixStreams supports joins without internal Kafka topics, I'm sold!! 😊
Joins are in the works. What’s your beef with internal topics?
My beef with internal topics is that when a cloud service like Confluent Cloud bills per partition (internal topics included), costs can balloon quite a bit. I much prefer Flink's model of snapshotting to blob storage for HA over using Kafka topics.
I feel your pain on that front. Thanks for the feedback!
Waiting for more Kafka series - great explanation!
Have you got anything specific you want us to explain or demonstrate?
Awesome videos!! How could I store my data to a local CSV, though?
We've just released a new Sinks API, which includes a CSV sink connector. Please check out the docs.
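Here's roughly what wiring it up looks like - a sketch only, since the exact import path and parameters for the CSV sink are assumptions here; the Sinks docs have the definitive version:

```python
# Sketch: sink a streaming dataframe to a local CSV file.
# The import path and constructor arguments are assumptions based
# on the 2.9 Sinks API - verify against the current docs.
from quixstreams import Application
from quixstreams.sinks.core.csv import CSVSink

app = Application(broker_address="localhost:9092", consumer_group="csv-demo")
topic = app.topic("input-events", value_deserializer="json")

sdf = app.dataframe(topic)
sdf.sink(CSVSink(path="output.csv"))  # append each record to output.csv

app.run(sdf)
```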
@@michaelrosam9271 That sounds great! Thanks!
If you haven't already seen it, check out the video about the 2.9 release: ruclips.net/video/VoDQtO8mirc/видео.html
More kafka😊
I like your videos - please upload more!
Yes sir, will do! We actually have many more in the pipeline. Subscribe and stay tuned!
Cool! A candlestick chart would have been nice. 🙂
We have a template that handles stock tick data and renders it on a candlestick chart: github.com/quixio/template-real-time-data-pipelines-in-python
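The heart of that kind of pipeline is a tumbling-window aggregation from raw ticks to OHLC candles. A rough sketch of the idea (topic names, the tick schema, and the window size are placeholders, and windowing API details may vary by version):

```python
# Sketch: aggregate raw price ticks into 1-minute OHLC candles.
# Topic names and the tick schema ({"price": float}) are assumptions.
from datetime import timedelta
from quixstreams import Application

app = Application(broker_address="localhost:9092", consumer_group="candles")
ticks = app.topic("price-ticks", value_deserializer="json")

def init_candle(tick):
    # The first tick in a window seeds all four OHLC fields.
    p = tick["price"]
    return {"open": p, "high": p, "low": p, "close": p}

def update_candle(candle, tick):
    # Each later tick can raise the high, lower the low,
    # and always becomes the latest close.
    p = tick["price"]
    candle["high"] = max(candle["high"], p)
    candle["low"] = min(candle["low"], p)
    candle["close"] = p
    return candle

sdf = app.dataframe(ticks)
sdf = (
    sdf.tumbling_window(timedelta(minutes=1))
    .reduce(reducer=update_candle, initializer=init_candle)
    .final()  # emit one candle per closed window
)
sdf.print()

app.run(sdf)
```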
Hello, thanks for the video. What if we want to consume data from a topic starting at a certain existing offset? I was looking at the .seek() method but haven't found an effective solution yet. Many thanks.
Hi, thanks for the question. QuixStreams doesn't support that directly at the moment - to do it you'd currently have to seek to and commit that offset manually for each partition.
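If you want to do it manually in the meantime, here's roughly what that looks like with the plain confluent-kafka consumer that QuixStreams sits on top of (broker, topic, partition, and offset values are placeholders):

```python
# Sketch: start consuming a topic from a specific offset with the
# plain confluent-kafka consumer. All connection values below are
# placeholders.
from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "seek-demo",
    "enable.auto.commit": False,
})

# Assign the partition at the desired offset, bypassing the
# consumer group's stored position.
tp = TopicPartition("my-topic", 0, 42)
consumer.assign([tp])

msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print(msg.offset(), msg.value())
    # Commit so the group resumes after this record next time.
    consumer.commit(
        offsets=[TopicPartition("my-topic", 0, msg.offset() + 1)],
        asynchronous=False,
    )

consumer.close()
```

You'd repeat the assign/commit for each partition you care about.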
@@QuixStreams many thanks!
Thank you for the content, keep it up!!
I learnt a lot grandpa 😊