Hi J, your tutorials are really awesome: easy to understand, hands-on, and immediately helpful. Could you do more video series, such as Redshift (I am waiting for that from you) or a real-world pipeline built with AWS services? That would give a more general look at, and a deeper vision of, the Data Engineer role with AWS. Thank you so much.
Hi Minh.
Thanks for watching!
Redshift is on the plan. It's quite a big topic so I want to get it right.
As for the real-life pipeline, I am planning on showing an example when I cover Airflow as an orchestration tool. Keep an eye out for it.
@@JohnnyChivers Thanks for your response
Hey Johnny, can you make a video on a career path in AWS for data science and machine learning?
I certainly can. I'll put it on the list of future videos.
Great explanation. Thank you for sharing
You are welcome!
Can I store the name of the uploaded file in a new column of the output?
Hello, great video! Is DataBrew considered a data quality tool? Can I use it to set alarms to catch, for example, missing data in a column, and how would you do it? Thanks a lot
Hi chconnect. Yip 100 percent.
The easiest way is to use EventBridge to trigger a Lambda which reads the results of a Glue DataBrew job out of S3 and then uses them to send an alert.
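A minimal sketch of that Lambda, assuming the DataBrew profile job writes its JSON results to a known S3 location and alerts go out via SNS. The bucket, key, topic ARN, and the result field names (`sampleSize`, `columns`, `missingValuesCount`) are assumptions here; check them against your job's actual output:

```python
import json

# Hypothetical helper: given a parsed DataBrew profile result, return the
# names of columns whose fraction of missing values exceeds the threshold.
def columns_over_missing_threshold(profile, threshold=0.0):
    rows = profile.get("sampleSize", 0)
    flagged = []
    for col in profile.get("columns", []):
        missing = col.get("missingValuesCount", 0)
        if rows and missing / rows > threshold:
            flagged.append(col["name"])
    return flagged

def lambda_handler(event, context):
    import boto3  # available in the AWS Lambda Python runtime
    s3 = boto3.client("s3")
    sns = boto3.client("sns")
    # Assumed output location of the profile job -- use the path yours writes to.
    obj = s3.get_object(Bucket="my-databrew-results", Key="profile-run.json")
    profile = json.loads(obj["Body"].read())
    flagged = columns_over_missing_threshold(profile)
    if flagged:
        sns.publish(
            TopicArn="arn:aws:sns:eu-west-1:123456789012:databrew-alerts",
            Subject="DataBrew data quality alert",
            Message="Columns with missing data: " + ", ".join(flagged),
        )
    return {"flaggedColumns": flagged}
```

Wire the EventBridge rule to fire on the DataBrew "Job Run Succeeded" state change so the check runs automatically after each profile job.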
Great stuff. I just started looking at this tool. I am just wondering, how can we apply the recipe to new data as it lands in S3? And then do a merge, or is it advisable to do the merge in Redshift?
So what is the difference between DataBrew and Glue Studio? I think GCP Dataprep is like AWS DataBrew, and GCP Data Fusion is like AWS Glue Studio (visual).