Airflow has been on fire lately - I love the TaskFlow API and dynamic task mapping. Data-aware scheduling came out at the perfect time and simplified a real problem for me.
Other great features are coming. Stay tuned ;)
This is awesome. No more ugly triggers, sensors, and the like. Thanks for the explanation, Marc!
you're welcome :)
I started working more with Airflow at my job, and your videos have been very helpful when I want to switch up the learning format and get exposed to a concept without digging through the docs.
Thank you 🙏
Nice demonstration, I will test MySQL as a dataset to explore this feature.
Great content as always Marc!
Amazing video Marc! This is a truly amazing feature! One thing I couldn't seem to find, though, is a way to pass some parameters to the consumer DAG. Is there a way to access the context of what triggered the DAG? Or can extra params be passed in the Dataset? This could carry useful metadata, such as the latest timestamp of the data that was updated, which the downstream processes could use when triggered. Thank you!
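A minimal sketch of what this could look like - not from the video, and it assumes Airflow 2.6+ where the triggering_dataset_events context variable exists; all DAG and dataset names below are made up:

from datetime import datetime

from airflow import DAG, Dataset
from airflow.decorators import task

# Static metadata can be attached to the Dataset itself via `extra`.
orders = Dataset("s3://my-bucket/orders.csv", extra={"owner": "data-team"})

with DAG("producer", start_date=datetime(2023, 1, 1), schedule="@daily", catchup=False):

    @task(outlets=[orders])
    def update_orders():
        ...  # write the file; a successful run emits a dataset event

    update_orders()

with DAG("consumer", start_date=datetime(2023, 1, 1), schedule=[orders], catchup=False):

    @task
    def process(triggering_dataset_events=None):
        # Each event records when and by which run the dataset was updated,
        # which can stand in for "latest update timestamp" metadata.
        for uri, events in (triggering_dataset_events or {}).items():
            for event in events:
                print(uri, event.timestamp, event.extra)

    process()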
Hi Marc, datasets look very useful, but how can they be used to trigger a DAG based on a SQL database update? Is there an example of that? Thanks in advance.
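For what it's worth, a hedged sketch of one common pattern: datasets don't poll the database on their own, so a producer DAG does the SQL write and declares the table as an outlet. The connection id, table names, and SQL below are placeholders.

from datetime import datetime

from airflow import DAG, Dataset
from airflow.decorators import task
from airflow.providers.postgres.hooks.postgres import PostgresHook

orders_table = Dataset("postgres://my_postgres/public/orders")

with DAG("load_orders", start_date=datetime(2023, 1, 1), schedule="@hourly", catchup=False):

    @task(outlets=[orders_table])
    def upsert_orders():
        # Placeholder SQL; whatever actually updates the table goes here.
        PostgresHook(postgres_conn_id="my_postgres").run(
            "INSERT INTO orders SELECT * FROM staging_orders"
        )

    upsert_orders()

with DAG("orders_downstream", start_date=datetime(2023, 1, 1), schedule=[orders_table], catchup=False):

    @task
    def transform():
        ...  # runs each time load_orders updates the table successfully

    transform()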
Same here, a great (if not directly related) point:
If the producer DAG's task defines the outlet but never actually accesses the file/folder, or has nothing to do with the content of the URI in its task logic, what happens? The consumer DAG still runs.
So the URI is really just used as a link between the two DAGs.
Yeah exactly, it is just the URI that acts as a bridge - it does not actually point to anything.
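A tiny illustrative sketch of that point (names are made up; the task never touches the URI, yet the consumer is still triggered):

from datetime import datetime

from airflow import DAG, Dataset
from airflow.decorators import task

link = Dataset("whatever://just/a/label")  # nothing ever reads or writes this URI

with DAG("producer_no_io", start_date=datetime(2023, 1, 1), schedule="@daily", catchup=False):

    @task(outlets=[link])
    def do_work():
        print("no file I/O here at all")

    do_work()

with DAG("consumer_no_io", start_date=datetime(2023, 1, 1), schedule=[link], catchup=False):

    @task
    def downstream():
        print("runs because do_work succeeded, not because any data changed")

    downstream()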
A very utilitarian feature!
Isn't "task aware scheduling" a more appropriate name for this feature? Since there is no real interaction with data .
I think there will be real interaction with data at some point 😉
Hi Marc, this is indeed an amazing feature.
I tried dataset scheduling, but when the job finishes or fails, it doesn't trigger my on_success_callback/on_failure_callback. With normal scheduling (e.g. @hourly), both callbacks fire. Is there a config I missed, or is it a bug?
Can I use a template var (like ds_nodash) in the Dataset URI?
Hi Marc,
I am watching your Airflow architecture video. In the single-node setup, the executor updates the metastore; in the multi-node architecture, the executor puts the task in a queue. So in the multi-node setup, who updates the metastore after the job is done - the queue or the executor?
Hi Marc, this is indeed a truly amazing feature.
Just wondering whether an instance of the consumer DAG will always be triggered for every file (URI) change. I.e., if the consumer DAG is still running when the producer DAG produces a new change, will that new change cause another consumer DAG run on the new data (while the previous consumer instance is still running)? If that makes sense.
As always, thank you for the content.
yes
How do you use a table from a Postgres database as a dataset instead of a txt file? Is there an example video for this?
How do we trigger it off a SQL table update rather than a file? He mentions it's possible, but I can't seem to find documentation for it anywhere.
Amazing 👀
Very good to point out that an external system updating the dataset file will NOT make the consumer DAG run.
Not yet. But it will be possible very soon
You mentioned that the consumer DAG triggered by the dataset always runs when the producer DAG has run successfully, not when the dataset has actually changed. Let's say the producer has a compare task and only changes the dataset if necessary. In that case the consumer would always run anyway. Any way to solve that?
What if you pass extras to the consumer with a flag saying whether to act or not?
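Another possible workaround, sketched here as an assumption rather than something shown in the video: dataset events are only emitted when the task declaring the outlet finishes successfully, so the compare task can short-circuit the update task when nothing changed. All names below are placeholders.

from datetime import datetime

from airflow import DAG, Dataset
from airflow.decorators import task
from airflow.operators.python import ShortCircuitOperator

changed_data = Dataset("s3://my-bucket/changed.csv")


def data_changed() -> bool:
    return True  # placeholder comparison; return False to skip the update


with DAG("producer_with_compare", start_date=datetime(2023, 1, 1), schedule="@daily", catchup=False):

    compare = ShortCircuitOperator(task_id="compare", python_callable=data_changed)

    @task(outlets=[changed_data])
    def update_dataset():
        ...  # only runs (and only emits the dataset event) when data_changed() is truthy

    compare >> update_dataset()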
Is this only available on Airflow >= 2.4?
Yes
Yes
Cloud Composer doesn't have Airflow 2.4 yet 😞
Please update the Udemy course with datasets.
coming