I have many cases where I depend on many datasets, but I must ensure that the DAG run is attempted only once per day. Can you do that with the DatasetOrTimeSchedule schedule? Something like a DatasetAndTimeSchedule?
Yes, you can do that with DatasetOrTimeSchedule :)
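For reference, a minimal sketch of how `DatasetOrTimeSchedule` is wired up in Airflow 2.9+ (the dataset URIs and DAG name here are made up for illustration):

```python
from pendulum import datetime

from airflow.datasets import Dataset
from airflow.decorators import dag, task
from airflow.timetables.datasets import DatasetOrTimeSchedule
from airflow.timetables.trigger import CronTriggerTimetable

# Hypothetical upstream datasets -- replace with your own URIs
orders = Dataset("s3://bucket/orders.parquet")
customers = Dataset("s3://bucket/customers.parquet")


@dag(
    start_date=datetime(2024, 1, 1),
    # Triggers when the cron fires OR when the dataset condition is met
    schedule=DatasetOrTimeSchedule(
        timetable=CronTriggerTimetable("0 6 * * *", timezone="UTC"),
        datasets=[orders, customers],
    ),
    catchup=False,
)
def consumer():
    @task
    def process():
        ...

    process()


consumer()
```

Note that, as the name suggests, this schedule fires on *either* condition, so it does not by itself cap the DAG at one run per day; to approximate "and" semantics you would have to combine it with something like `max_active_runs` or the dataset condition operators (`&`, `|`) introduced in 2.9.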
Which Udemy training covers this video in more detail? I want to create a dataset by importing data from PostgreSQL, so that when there is an update in the interconnected tables, I can check whether the other tables need to be updated as well.
Hello Marc, I am facing challenges when implementing the sla_miss_callback functionality. Could you please help me?
Hello Marc,
QQ:
Does the "Clear only failed tasks" option rerun the failed task and all of its upstream tasks as well?
Nope, only failed and downstream tasks
Where do you need to store the dataset that's going to be used in the DAG?
I tend to have a datasets.py file in include/ where I define the datasets I use across DAGs
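A sketch of that layout, assuming an Astro-style project with an `include/` folder on the Python path (the dataset names and URIs are illustrative):

```python
# include/datasets.py -- single place where shared datasets are declared,
# so producer and consumer DAGs reference the exact same URI
from airflow.datasets import Dataset

RAW_ORDERS = Dataset("postgres://prod/public/raw_orders")
CLEAN_ORDERS = Dataset("s3://lake/clean/orders.parquet")
```

A DAG file then imports from that module instead of re-declaring the dataset:

```python
# dags/clean_orders.py
from include.datasets import RAW_ORDERS, CLEAN_ORDERS
```

The benefit is that a typo in a URI can't silently break the producer/consumer link, since both sides share one definition.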
Another new feature of 2.9 is the ability to give a name to an expanded task using map_index_template
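A short sketch of that 2.9 feature: the template is rendered after the task runs, so the task can push a value into the context for the template to pick up (the DAG, task, and variable names below are my own):

```python
from pendulum import datetime

from airflow.decorators import dag, task
from airflow.operators.python import get_current_context


@dag(start_date=datetime(2024, 1, 1), schedule=None, catchup=False)
def named_map_indexes():
    # The rendered template replaces the numeric map index in the UI
    @task(map_index_template="{{ customer_name }}")
    def process(customer: str):
        context = get_current_context()
        # Inject the value the template above will render
        context["customer_name"] = customer

    process.expand(customer=["alice", "bob", "carol"])


named_map_indexes()
```

With this, the Mapped Tasks tab shows "alice", "bob", "carol" instead of indexes 0, 1, 2, which makes failures much easier to locate.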
There is a typo in your video description. Version - 2.8, not 2.9
Thank you 🙏