Thanks, Rafael! Especially for the SKIP LOCKED feature; new knowledge learned.
Thank you so much! I am glad the talk was helpful for you! 🥰
And yeah, SKIP LOCKED is fantastic!! 💪🏻
Congratulations on your presentation! You absolutely nailed it. Your thorough research and confident delivery captivated everyone in the room. Your ability to explain complex ideas so clearly is truly impressive. Keep up the fantastic work!
Thanks for the kind words, Eduardo! ❤
This, for me, is the best presentation. Great job!
What a comment! Thanks for that! ❤️
Great talk! There are a few Java libraries that already solve these challenges (db-scheduler, JobRunr or Quartz). At JobRunr we'd love to share your talk as it explains JobRunr's architecture well and can help our users understand the challenges of distributed scheduling even better!
Thanks for your comment! I'm glad you liked it! ☺
Please, I would appreciate it if you shared it! By the way, I received great feedback from Ronald, the creator of JobRunr; he watched my talk! He is a fantastic guy! ❤
@@RafaelPonte You're too kind 🤩!
@marshall143 Thanks for the comment! 😊 I didn't know nFlow, but I understand that if your context allows your team or project to adopt a task scheduler or workflow engine, you should go with it.
Usually, those libs and frameworks make the developer's life easier because they address very well all the issues discussed in the talk.
I've seen the Portuguese version of this presentation by Rafael Pontes on the Zup channel before, and I was able to implement something similar at my job. Great work, bro! Thank you so much
Hi Felipe,
Thanks for this comment and for having watched both versions of the talk. ❤
Congratulations, Rafael! It was a pleasure to watch your presentation in person!
Thank you so much, Rapha! ❤ You're the best!
36:48 Actually, in our example each instance will fight for the first 50 records, not one record as illustrated in the slide.
Thanks for the comment.
Yeah, you're right. The exact number of rows isn't important for understanding how the SQL feature works. The idea was to be didactic and straightforward.
Thank you for the clear and well-structured presentation. It's very useful and important information, even for people with many years of experience. I wish every developer would watch this video every time they put @Transactional on a method.
Thanks for the kind words! I am glad you enjoyed the talk! ☺
Great presentation, great work. Thanks a lot for sharing this knowledge with us!
Thanks so much! I am glad you liked it 🥳
Great talk!! So many learnings, and it addressed real-life problems I faced while writing background scheduled jobs... By the way, we used the ShedLock library, but this is really good insight.
Thanks! Nice you liked it!! 😊
By the way, ShedLock is a very cool library! 👊🏻
Excellent topic! I have some background jobs running here and there, and I'm definitely going to check them again.
Nice! I am glad this talk was helpful to you! 👊🏻
Congratulations, my brother, you put on a show with this presentation. Impeccable! Absolutely top-notch!
Thank you, my brother!
What a great talk! I learned so much I even got lost!
Thanks for the feedback! ☺ Happy there was new, good, and useful content for you!
It was pretty cool talk, thank you for it!
You're welcome! I am glad you liked it! ☺
Great talk! A couple of thoughts. Your statement about entity state and transactions is only true if Spring's "open session in view" is not enabled. I find there is a lot of confusion out there about the Hibernate session, transaction state, OSIV, and entity state. Along similar lines, the call to a repository save() method is unnecessary when updating an attached entity because of Hibernate change tracking, and calling save() leads people to assume that it persists changes, which (counterintuitively) it doesn't. (It adds/merges detached entities to the session/persistence context.) Regarding transaction scope, I would argue it is still too broad. Work for a single user/card should generally happen in its own transaction, at least in an OLTP context.
You are welcome! Thanks for the comment ☺️
I am unsure if I followed your comment about Open Session in View. I mean, OSIV has no relation to a job scheduled by Spring, since OSIV applies to the web/MVC scope. The code in the talk is correct.
You are right about the save() method; it wasn't needed, but the idea was to show simple, didactic code, not to get into details about how to persist entities or their state transitions.
The transaction scope is broader because we are working in batches 😊
@@RafaelPonte Thanks for the response. I spend much time working in a WebMVC context that (unfortunately) uses OSIV and I'm too used to its oddities. 🙂
You are an amazing presenter. Thank you so much, I learned a lot!
Thank you so much!!! I am happy this talk was helpful for you 🥳
Congrats for your amazing presentation, Rafa!
Thanks, Jess! ❤
26:34, 31:59, 32:14, 36:04, 40:31 - key moments
Thanks for the comment and for pointing out the key moments ☺️
Very good! Thank you for the valuable content!
You are welcome 🤗
I really like the way you explained short-running transactions. Nice addition to the jobs! Congratulations on the excellent presentation! It's very useful!
Thanks so much! I am glad you liked it 🥰
Nicely done @RafaelPonte.
Thank you! ☺
Amazing presentation, very useful, thanks Rafael!
You are welcome! 😊
Really great talk!
But I am curious: if two save statements are already wrapped in one small transaction, how can Hibernate batching combine them with save statements from another process?
Thanks for the comment and feedback 😊
I am not sure if I understood your question correctly. Could you elaborate a little bit more on it?
Hi Rafael,
In the scenario of this video, we are using short transactions to save data to the database, so I think each transaction should be isolated; they can't be wrapped in one batch like your example INSERT INTO ... VALUES (A), (B)
@@mindcontrolkmc.3286 Yeah, the idea is precisely that! For each batch (chunk) of 50 rows, Hibernate will group (and reorder if needed) the INSERTs and UPDATEs inside that short-running transaction and convert them into just two statements right at commit time.
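For reference, the JDBC batching and statement reordering described in this thread are opt-in in Hibernate. A sketch of the relevant Spring Boot properties (these are standard Hibernate settings; the value 50 just matches the chunk size discussed here):

```properties
# Group up to 50 statements into a single JDBC batch at flush/commit time
spring.jpa.properties.hibernate.jdbc.batch_size=50
# Reorder INSERTs and UPDATEs so statements for the same table sit together,
# which lets them be batched even when the code interleaves entity types
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true
```

Note that collapsing a batch into a single multi-row `INSERT ... VALUES (A),(B)` additionally depends on the driver; with PostgreSQL's JDBC driver that requires `reWriteBatchedInserts=true` on the connection URL.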
Rafael is a total beast!! Great presentation
Thanks a lot!! ☺
Great talk and a lot of cool new (for me) information about Spring/JPA semantics! But not much of this is specific to background jobs, and there's not much in the talk about generic background job processing, so I'd say the title is a bit misleading.
Thanks for the comment 😊 I am glad the content was helpful for you!
Out of curiosity, what do you understand as background jobs and job processing, and what do you expect from a talk about these subjects?
Congrats, man! It turned out great! Best of luck!
Thank you! Glad you enjoyed it ❤
Beautiful presentation, thank you
Thank you so much! That's very nice you liked it! 🥰
As a side note, the original example program has one further problem which wasn't discussed - if the job runs every 60 seconds, what happens if it takes more than 60 seconds to complete, giving you unintended parallel processing? I've been bitten by that one a few times...
Thanks for the comment! 😊
In the context of the talk, this is not an issue. I mean, Spring will not allow running multiple jobs for the same task, even if it takes longer than 60 sec.
But if the method is annotated with @Async, then we can't say the same 😬
@@RafaelPonte Fair enough... you're right that a good scheduler will avoid the problem (for sync operations, at least). My negative experiences have typically been with more naive scheduling tools...
@@RafaelPonte Does it mean the while-true loop is also fine? And how will Spring behave if the job takes slightly longer than 60 seconds? Will it let it complete the last batch process iteration?
@@michaelchung8102 Spring will not stop or kill the thread running the job. Since the while-true loop has a break statement, the job will run until all the filtered rows from the table are processed, and it's ok if this execution takes more than 1 minute.
After that, the Spring Scheduler will wait 1 minute before starting the same job again.
Does it make sense?
@@RafaelPonte Oh yes, I forgot that without @Async, Spring will just queue the jobs if previous ones are still running. Thanks for your quick reply, and a big thanks to you, as this video answers the doubts about race conditions and performance concerns in distributed scheduling that I had for years. ❤️
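The non-overlapping behavior discussed in this thread can be sketched with plain Java: `ScheduledExecutorService.scheduleWithFixedDelay` measures the delay from the end of one run to the start of the next, the same semantics as Spring's `@Scheduled(fixedDelay = ...)`, so a slow run simply pushes the next one back. A minimal, self-contained sketch (class name and timings are illustrative, not from the talk):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedDelayDemo {
    // [start, end] timestamps of each run, for checking overlap afterwards
    static final List<long[]> runs = Collections.synchronizedList(new ArrayList<>());

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(3);

        // The task deliberately takes longer (50 ms) than the delay (10 ms),
        // mimicking a job that overruns its schedule.
        ses.scheduleWithFixedDelay(() -> {
            long start = System.nanoTime();
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            runs.add(new long[] {start, System.nanoTime()});
            done.countDown();
        }, 0, 10, TimeUnit.MILLISECONDS);

        done.await();
        ses.shutdownNow();
        ses.awaitTermination(1, TimeUnit.SECONDS);

        // Each run must start only after the previous one finished.
        for (int i = 1; i < runs.size(); i++) {
            if (runs.get(i)[0] < runs.get(i - 1)[1]) {
                throw new IllegalStateException("overlapping runs detected");
            }
        }
        System.out.println("runs never overlapped");
    }
}
```

With `@Async` on the scheduled method, each trigger would hand the work to another thread, and this guarantee disappears.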
Nice explanation! But it did not cover a very important case: when your app has more than one job marked with the @Scheduled annotation. That can be a crucial point for performance. Maybe it will be covered in future videos.
Thanks for the comment 😊 Nice you liked it!
I am not sure I understood what you mean. Usually, a single application has multiple @Scheduled jobs running concurrently, doing different things (sometimes on different schedules).
Could you give more details?
@@RafaelPonte If you do not explicitly specify the scheduler's thread pool size in application.yml, all jobs will be run by one single thread.
@@YuliSlabko Thanks for the explanation. Now I got your point! ☺
You're right. If your application runs multiple jobs close together or jobs that take too long to finish, tuning the Scheduler's thread pool size is essential. 👊🏻
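For reference, Spring Boot exposes this setting as a configuration property. A sketch of the application.yml entry (the value 4 is just an example):

```yaml
# By default all @Scheduled jobs share a single scheduler thread;
# raising the pool size lets independent jobs run concurrently.
spring:
  task:
    scheduling:
      pool:
        size: 4
```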
What's the difference between reading and writing with RabbitMQ or Kafka and reading and writing with a database?
Usually I use Redis to solve the same problem, because it's much faster than a typical relational DB.
Thanks for the comment.
I will ignore the trade-offs of having a new component in the infrastructure now and focus only on the developer's perspective.
There are differences, but how they impact your solution depends on your context. I mean, using Kafka or RabbitMQ may make little difference to the job's code from the talk's perspective, but from the perspective of the application producing events to the queue, we may have to deal with a dual-write issue.
The same is true for Redis: it depends on how you're using it, such as a distributed lock provider or a message queue.
Does a single @Transactional annotation on the @Scheduled method (in the case of JPA) fix the original code right away?
Thanks for the comment.
It depends on which problem you're talking about.
In the talk’s context, it solves only part of the problem: it makes the whole operation atomic and recoverable but causes a few side effects.
Amazing! Congrats Rafa!
Thanks, I'm glad you liked it ☺
Perfect! thx Rafael!
Congrats, Rafael! Congratulations, Rafa!
Thanks so much!!! 🥰
Does this approach work for all database models, or are there differences between them, for example Oracle?
Thanks for the comment! 😊
I discussed several problems and solutions in the talk, and most of them run well on most relational databases, but there can indeed be nuances. Which approach are you referring to exactly?
Good morning Rafael. I tried it with Oracle 12c, spinning up 2 instances on ECS, but when running a job that fetches from the database and publishes to a topic, the instances return duplicate records! Congrats on yesterday's live stream on deveficiente! Now I'll also be your student there
@@fonfux0123 Thanks!
So, Oracle supports well what I discussed in the talk. What does the SELECT executed by the application look like? Is it being generated with FOR UPDATE or FOR UPDATE SKIP LOCKED?
@@RafaelPonte With FOR UPDATE SKIP LOCKED! I think it has something to do with how Oracle's cursors work! I could be wrong! Or else it's the fact that I'm limiting the number of records in the query, and it only acquires the lock after obtaining the result for each of the rows
@@fonfux0123 I see.
That's true. With SKIP LOCKED, Oracle only does the locking while fetching the record, not while selecting (filtering) it. But that usually happens when you're explicitly manipulating cursors via PL/SQL or some persistence API.
In this other talk about SKIP LOCKED focused on Oracle, I discuss this "limitation": ruclips.net/video/8DVFc7gXfIQ/видео.html
I hope it helps!
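For reference, a sketch of the query pattern under discussion, in PostgreSQL syntax (the table and column names here are illustrative, not from the talk). Oracle also supports FOR UPDATE SKIP LOCKED, but it combines row limiting and locking differently, as discussed above:

```sql
-- Each instance runs the same query. Rows already locked by another
-- instance are silently skipped, so every instance claims its own chunk
-- instead of blocking on (or duplicating) someone else's rows.
SELECT id, status
  FROM card              -- illustrative table name
 WHERE status = 'PENDING'
 ORDER BY id
 LIMIT 50
   FOR UPDATE SKIP LOCKED;
```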
Great talk! Did not catch all the red flags in this :)
Thanks! I am glad you liked it!! ❤
In my understanding, `select ... limit 50 for update` would lock these 50 rows at once, instead of locking and processing one row at a time. But in the video it seems to be the latter approach. Why is that?
He just presents it like that for presentation purposes. Of course it will lock all 50 rows (as long as they meet the select criteria and are not locked already). Overall, this is a very basic presentation; I'm not sure what the point of it was.
@@wukash999 I think the point is to introduce less experienced people to the possible problems one might encounter, so you can study further (at least for me it worked, since I'd never thought about or known of these problems), not to make a thorough implementation guide
Thanks for your comment ☺
As @wukash999 commented, the idea was to make it as didactic and accessible as possible so that junior and inexperienced developers could understand it.
Do you think it was confusing?
@@wukash999 Thanks for your comment and helping them to understand my intention ☺
Do you think this was an introductory and basic talk? I'm afraid I have to disagree. The talk was designed to simplify the subject and make it accessible for everyone, but it's still a complex, tricky, and detailed theme.
@@wukash999 How is that a very basic presentation? How would you implement it differently?
Nice one. Appreciate it.
Thanks! I am glad you liked it ☺️
@Transactional — will this work if you have to call a MongoRepository and a KafkaTemplate?
Is it all or nothing?
If the Kafka call fails,
does the Mongo call roll back too?
Thanks for your comment ☺
Although MongoDB and Kafka support some level of transactions, I don't know how the @Transactional annotation would work with MongoRepositories or KafkaTemplates. It's worth reading the Spring Data docs.
But it's important to be aware that you do NOT have an atomic operation (all or nothing) when your code mixes different external service calls, like PostgreSQL, Mongo, and Kafka. When you do that, you hit a common issue in distributed systems called "dual write".
@@RafaelPonte I have the same use case, where I need to write to Mongo, Kafka, and also to a Google Cloud Storage bucket within the same transaction. Do you by any chance know how to solve this problem so I get all or nothing? Or, if that's not possible, how would we solve this problem then….
@@RafaelPonte thank you :)
For Mongo, you can manually spin up a new session with a transaction as well. For Kafka, if the produced records are idempotent, you can use the Mongo transaction support above to achieve the same.
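The dual-write hazard mentioned above can be sketched in Java-flavored pseudocode (the repository and template names are hypothetical, and this is an anti-pattern illustration, not runnable code):

```java
// PSEUDOCODE: illustrates the dual-write hazard, not a working example.
// Hypothetical names: cardRepository, kafkaTemplate.
@Transactional
public void process(Card card) {
    cardRepository.save(card);           // joins the database transaction
    kafkaTemplate.send("cards", card);   // goes to the broker immediately:
    // if the DB commit fails afterwards, the event was already published;
    // if the broker is down, the DB may commit without the event.
}
```

A common mitigation is the "transactional outbox" pattern: write the event to an outbox table inside the same database transaction, and let a separate job or relay publish it to the broker afterwards.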
Congratulations, really great!
Way too good. Congratulations, prince of the ocean haha 👏👏👏
Thanks a lot, Junior! 👊🏻
@@RafaelPonte Congrats, Rafael! Sharing it with everyone on my team! Cheers.
@@benicioavila Thank you ☺️ And thanks for sharing!! ❤️
Ummm...
The distribution topic only starts after the 27-minute mark.
Using DB locks is tricky, and it works differently across databases, e.g. lock escalation. Better to use app-level locking.
None of that really had much to do with jobs; just long-running tasks in a distributed system.
Do you have a resource recommendation on app-level locking? I'm studying the topic and it would be awesome to see it in more detail. Thanks
Thanks for your comment ☺
Distributed systems are tricky, and database locks have worked well for over 30 years. Although some databases might differ, an exclusive row-level lock works similarly across them. By the way, a few RDBMSs suffer from lock escalation, but not PostgreSQL (which was used in the talk's context); in addition, the talk used several approaches that mitigate the chances of lock escalation 💪🏻
Regarding application-level locking, PostgreSQL offers Advisory Locks as an excellent alternative to row-level locks. They're very light and are handled by the application side.
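For reference, a minimal sketch of the PostgreSQL advisory-lock API mentioned above (the key 42 is an arbitrary application-chosen number):

```sql
-- Try to take a session-level advisory lock without blocking; returns
-- true only for the first instance, so the others can skip this run.
SELECT pg_try_advisory_lock(42);

-- ... the instance that got 'true' runs the job ...

-- Release the lock when the job finishes.
SELECT pg_advisory_unlock(42);
```

There is also a transaction-scoped variant, `pg_try_advisory_xact_lock`, which releases automatically at commit or rollback, so a crashed instance cannot leave the lock held.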
Great content!
Thanks 🙏🏻
Loved the talk, but I'm not sure if you wanted to talk about Spring Boot or run for office, hahaha... just kidding!
Hahaha, thanks! 😊
Great job Rafael!
Thank you ☺
excellent lecture 💚
Thanks, my friend!
Is he describing Spark 😆?
Thanks for the comment 😊
Do you mean Apache Spark? hehe
it was so good.
thanks
I am glad you liked it 😊
Rafa is humble, a freak, and beautiful
Hehe, you're very kind, my friend! ❤
Well done, congrats!
Thank you, Diego ❤️
The thing I hate the most in this video: "Concluding...". I was so engaged I didn't want him to stop.
hahaha, thank you so much for this lovely comment 🙏🏻😍 I am thrilled after reading it!!
Hello! All good?
All great ☺️☺️
Congratulations, maharajah! 😉
Thanks a lot ☺️☺️
Congrats Rafael, you've beaten the Java game.
hahaha, thanks Bruno!!!
Perfect
Thanks 🙏🏻
What a prince 💛🔥
Thanks, Luis ❤
You're the man! Let's goooo!
Thanks, Mustafa 👊🏻
Very good!
So cool that you liked it 😊
Nice!!
Thanks! ❤
Almost made me want to work with boring tech again ;)
I'm moving back to Java/JVM after 15 years in Node/JS/Python
Boring tech is amazing! 🙌🏻
Nice one, Ponte!!!!!
Thanks, Flávio 😊
👏👏👏
thanks!!!
thank you
you're welcome! ☺
Sorry, the topic and the presentation have, without doubt, high technical value, but this guy's English, the accent, and the way he tries hard to emphasize almost every single word in the sentence come across as highly unnatural... it really sounds tiring on the ear
Congrats, nice job!
Thanks, Barroso! 👊🏻