AWS Tutorials - Building ETL Pipeline using AWS Glue and Step Functions

  • Published on 12 Sep 2024
  • The script URL - github.com/aws...
    In AWS, ETL pipelines can be built using AWS Glue Jobs and Glue Crawlers. Glue Jobs are responsible for data transformation, while Crawlers maintain the data catalog. AWS Step Functions is one way to orchestrate such pipelines. In this tutorial, learn how to use Step Functions to build an ETL pipeline in AWS.
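
    As a rough illustration of that idea (not the exact script from the video), the sketch below defines a minimal state machine that runs a hypothetical Glue job and then starts a hypothetical crawler; all names and ARNs are placeholders.

    import json
    import boto3

    sfn = boto3.client("stepfunctions")

    # Minimal two-step pipeline: run a Glue job, then start a crawler.
    definition = {
        "StartAt": "RunCleansingJob",
        "States": {
            "RunCleansingJob": {
                "Type": "Task",
                # ".sync" makes Step Functions wait for the Glue job to finish
                "Resource": "arn:aws:states:::glue:startJobRun.sync",
                "Parameters": {"JobName": "raw-to-cleansed-job"},  # placeholder
                "Next": "StartCleansedCrawler",
            },
            "StartCleansedCrawler": {
                "Type": "Task",
                "Resource": "arn:aws:states:::aws-sdk:glue:startCrawler",
                "Parameters": {"Name": "cleansed-zone-crawler"},  # placeholder
                "End": True,
            },
        },
    }

    sfn.create_state_machine(
        name="etl-pipeline",
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::123456789012:role/StepFunctionsEtlRole",  # placeholder
    )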

Comments • 77

  • @arunr2265
    @arunr2265 2 years ago +19

    your channel is gold for data engineers. thanks for sharing the knowledge

  • @veerachegu
    @veerachegu 2 years ago +1

    Really helpful, and no institute will come and give training on this. Thank you so much!

  • @coldstone87
    @coldstone87 2 years ago +1

    This is amazing. Glad I found this on YouTube. A million thanks.

  • @vaishalikankanala6499
    @vaishalikankanala6499 2 years ago +2

    Clear and concise. Great work, thank you very much!

  • @tejvinder83
    @tejvinder83 several months ago

    very nice explanation

  • @pravakarchaudhury1623
    @pravakarchaudhury1623 2 years ago +1

    It is really awesome. A million thanks to you.

  • @harsh2014
    @harsh2014 2 years ago +1

    Thanks for your session, it helped me!

  • @akhilnooney534
    @akhilnooney534 1 year ago +1

    Very Well Explained!!!!

  • @ravitejatavva7396
    @ravitejatavva7396 4 months ago

    @AWSTutorialsOnline, appreciate your good work. AWS Glue has evolved so much now. How can we incorporate data quality checks into the pipelines, send email notifications to users with DQ failure results such as rules_succeeded, rules_skipped, and rules_failed, and publish the data to a QuickSight dashboard? Do we still need Step Functions? Any thoughts / suggestions please.

  • @nlopedebarrios
    @nlopedebarrios 7 months ago

    Considering the continuous evolution of AWS Glue, what do you think is more suitable for a newbie: orchestrating the ETL pipeline with Glue Workflows or Step Functions?

  • @terrcan1008
    @terrcan1008 2 years ago +2

    Thanks for these kinds of tutorials.
    Could you please share some scenarios for AWS Glue jobs along with Glue Sessions, as well as for AWS Lambda?
    I would also like to understand incremental load scenarios in AWS Glue using a Hudi dataset, and other scenarios on the same topic.

  • @nlopedebarrios
    @nlopedebarrios 7 months ago

    If the purpose of the ETL pipeline is to move data around, and the sources, stages, and destination are already cataloged, why would you need to run the crawlers after each Glue job finishes?

  • @najmehforoozani
    @najmehforoozani 2 years ago +1

    Great work

  • @anuradha6892
    @anuradha6892 1 year ago

    Thanks 🙏 it was a great video.

  • @PipatMethavanitpong
    @PipatMethavanitpong 2 years ago +1

    Thank you. This is a nice ELT demo. I wonder how you handle previously extracted and cleaned data.
    Glue jobs are append-only writers, so the raw bucket will contain both old and new extracts, and the cleaning job will run on both.
    I think there should be some logic to separate old files from new files.

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  2 years ago +1

      You can enable the job bookmark on the Glue job; that way the job will not reprocess already processed data.

    • @PipatMethavanitpong
      @PipatMethavanitpong 2 years ago

      @@AWSTutorialsOnline sounds nice. I'll check it out. Thank you.
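
      For reference, a minimal sketch of enabling that bookmark when starting the job from boto3; the job name is a placeholder.

      import boto3

      glue = boto3.client("glue")

      # With bookmarks enabled, the job tracks already-processed input
      # (e.g. S3 objects) and skips it on the next run.
      glue.start_job_run(
          JobName="raw-to-cleansed-job",  # placeholder
          Arguments={"--job-bookmark-option": "job-bookmark-enable"},
      )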

  • @johnwilliam9310
    @johnwilliam9310 1 year ago +1

    Which one would you recommend for automating the ETL process? I have seen the AWS Glue Workflow video as well, and this video does something similar, which is automating the ETL process. I cannot decide which one I should use: Workflows or Step Functions?

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  1 year ago +1

      Glue Workflows are good for a simple workflow of Glue jobs and crawlers. However, if you want to build a complex workflow where you reuse the same job / crawler and also call other AWS services, then you should choose Step Functions. Hope it helps.

    • @johnwilliam9310
      @johnwilliam9310 1 year ago +1

      @@AWSTutorialsOnline Thank you for providing clarity to me.

  • @picklu1079
    @picklu1079 2 years ago +1

    Thanks for the video. If I use Step Functions to orchestrate Glue workflows, will that slow the whole process down?

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  2 years ago

      Please tell me more. Why do you want to orchestrate Glue workflows?

  • @user-lq6gc1tw2v
    @user-lq6gc1tw2v 1 year ago +1

    Hello, good video. Maybe someone knows when to use Glue Workflows and when to use Step Functions?

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  1 year ago

      Use a Glue Workflow when you want to orchestrate Glue jobs and crawlers only. Use Step Functions when you want to orchestrate Glue jobs and crawlers plus other services as well.

  • @BradThurber
    @BradThurber 2 years ago +1

    It looks like Step Functions Workflow Studio includes AWS Glue StartCrawler and GetCrawler states. Could these be used directly instead of the Lambdas?
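
    A minimal sketch of how that could look, expressed as the "States" fragment of a state machine definition; the crawler name and the "NextStage" state are placeholders. The AWS SDK integrations call Glue directly, so no Lambda is needed for the start-and-poll pattern.

    crawler_states = {
        "StartCrawler": {
            "Type": "Task",
            "Resource": "arn:aws:states:::aws-sdk:glue:startCrawler",
            "Parameters": {"Name": "raw-zone-crawler"},  # placeholder
            "Next": "WaitForCrawler",
        },
        "WaitForCrawler": {"Type": "Wait", "Seconds": 30, "Next": "GetCrawler"},
        "GetCrawler": {
            "Type": "Task",
            "Resource": "arn:aws:states:::aws-sdk:glue:getCrawler",
            "Parameters": {"Name": "raw-zone-crawler"},
            "Next": "CheckCrawlerState",
        },
        "CheckCrawlerState": {
            "Type": "Choice",
            # Loop back to the Wait state until the crawler reports READY
            "Choices": [
                {"Variable": "$.Crawler.State", "StringEquals": "READY",
                 "Next": "NextStage"}
            ],
            "Default": "WaitForCrawler",
        },
    }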

  • @abeeya13
    @abeeya13 2 months ago

    Can we combine batch processing with Step Functions?

  • @rishubhanda1084
    @rishubhanda1084 2 years ago +1

    Amazing video!! Could you please go over how to build something like this with the CDK? The visual editor is helpful, but I find it easier to provision resources with code.

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  2 years ago +1

      Hi - yes, planning a CDK video for setting up a data platform.

    • @rishubhanda1084
      @rishubhanda1084 2 years ago +1

      @@AWSTutorialsOnline Thank you so much! I just watched all your videos on Glue and I think the event driven pipeline with EventBridge would be the most helpful.

  • @veerachegu
    @veerachegu 2 years ago +1

    Really awesome video; this content is available nowhere else. Small request: can you do a lab where files uploaded daily or hourly into S3 trigger the Step Functions pipeline, from the S3 event through to the end of the job?

  • @4niceguy
    @4niceguy 2 years ago

    Great! I really appreciate it!

  • @chatchaikomrangded960
    @chatchaikomrangded960 2 years ago +1

    Good one.

  • @veerachegu
    @veerachegu 2 years ago +1

    One doubt: is the crawler operation mandatory for going from raw data to cleansed?
    Can we transfer the raw data directly to the cleansed layer with the help of a Glue job?

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  2 years ago

      It is not mandatory, but cataloging data at each stage is recommended practice. It makes the data searchable and discoverable at each stage.
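
      A minimal sketch of what cataloging one stage involves: a crawler per zone pointed at that zone's S3 path. The name, role, database, and path below are placeholders.

      import boto3

      glue = boto3.client("glue")

      # One crawler per zone keeps each layer discoverable in the Data Catalog.
      glue.create_crawler(
          Name="cleansed-zone-crawler",                           # placeholder
          Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # placeholder
          DatabaseName="cleansed_db",                             # placeholder
          Targets={"S3Targets": [{"Path": "s3://my-datalake/cleansed/"}]},
      )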

  • @simij851
    @simij851 2 years ago

    Thank you a ton for doing this!!!

  • @veerachegu
    @veerachegu 2 years ago +1

    Please can you explain what job takes place between the raw crawler and the cleansed crawler?

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  2 years ago

      The raw layer is immutable; it presents the data in the format in which it was ingested. From the raw layer to the cleansed layer, you do cleaning operations such as handling missing values, standardizing formats for dates and currency, column naming, etc.
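
      A minimal sketch of such a raw-to-cleansed job in PySpark inside Glue; the database, table, column, and path names are all placeholders.

      from awsglue.context import GlueContext
      from pyspark.context import SparkContext
      from pyspark.sql import functions as F

      glue_context = GlueContext(SparkContext.getOrCreate())

      # Read the raw table as cataloged by the raw crawler
      raw = glue_context.create_dynamic_frame.from_catalog(
          database="raw_db", table_name="orders"
      ).toDF()

      cleansed = (
          raw.withColumnRenamed("ORDER ID", "order_id")   # standardize column names
             .dropna(subset=["order_id"])                 # handle missing values
             .withColumn("order_date",                    # standardize date format
                         F.to_date("order_date", "dd/MM/yyyy"))
      )

      cleansed.write.mode("append").parquet("s3://my-datalake/cleansed/orders/")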

  • @anmoljm5799
    @anmoljm5799 2 years ago +1

    My data source is CSV files dropped into an S3 bucket, which is crawled; I trigger the crawler using a Lambda that detects when an object has been dropped into the bucket. How do I trigger the start of a pipeline of Glue jobs upon completion of the first step, which crawls my source S3 bucket?
    I could use Workflows, which is part of Glue, but I have a Glue DataBrew job that needs to be part of the pipeline.

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  2 years ago

      You need to use an event-based mechanism. I have a tutorial for it here - th-cam.com/video/04BbCLDlvII/w-d-xo.html

    • @anmoljm5799
      @anmoljm5799 2 years ago

      @@AWSTutorialsOnline Thank you for the reply and the awesome video!

  • @Draco-pu4ro
    @Draco-pu4ro 1 year ago +1

    How do we run this as an automated flow in the real world, i.e., in a productionized environment?

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  1 year ago

      You can automate it in two ways: event-based or schedule-based. Event-based would be running the Step Function when data lands in the S3 bucket. Schedule-based would be running the Step Function at a scheduled time (configured with Amazon EventBridge).
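
      A minimal sketch of the schedule-based option with boto3: an EventBridge rule that starts the state machine daily. The rule name, ARNs, and schedule are placeholders.

      import boto3

      events = boto3.client("events")

      events.put_rule(
          Name="run-etl-pipeline-daily",
          ScheduleExpression="cron(0 6 * * ? *)",  # 06:00 UTC every day
      )
      events.put_targets(
          Rule="run-etl-pipeline-daily",
          Targets=[{
              "Id": "etl-state-machine",
              "Arn": "arn:aws:states:us-east-1:123456789012:stateMachine:etl-pipeline",
              # Role allowing EventBridge to call states:StartExecution
              "RoleArn": "arn:aws:iam::123456789012:role/EventBridgeInvokeStepFunctions",
          }],
      )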

  • @kamrulshuhel7126
    @kamrulshuhel7126 2 years ago

    Thank you so much for your nice tutorial.
    I would be grateful if you could respond; I have an understanding issue.
    When I use this condition in the Step Functions workflow - not ($.state == "READY") -
    I get this error:
    An error occurred while executing the state 'Choice' (entered at the event id #13). Invalid path '$.state': The choice state's condition path references an invalid value.
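
    That error typically means the Choice state's effective input contains no "state" field, for example because the preceding task's result was not mapped to where the condition looks. A minimal sketch of one possible fix, projecting the crawler state into the input with ResultSelector; the crawler name and the "WaitAndRetry" / "Done" states are placeholders.

    check_states = {
        "GetCrawlerStatus": {
            "Type": "Task",
            "Resource": "arn:aws:states:::aws-sdk:glue:getCrawler",
            "Parameters": {"Name": "raw-zone-crawler"},   # placeholder
            # Produce exactly {"state": "..."} for the Choice state to inspect
            "ResultSelector": {"state.$": "$.Crawler.State"},
            "Next": "IsCrawlerReady",
        },
        "IsCrawlerReady": {
            "Type": "Choice",
            "Choices": [
                {"Not": {"Variable": "$.state", "StringEquals": "READY"},
                 "Next": "WaitAndRetry"}                  # placeholder state
            ],
            "Default": "Done",                            # placeholder state
        },
    }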

  • @sriadityab4794
    @sriadityab4794 2 years ago

    How do you handle multiple files dropped in S3 at the same time when we need to trigger one Glue job using Lambda? I see some limitations: it throws an error because it can't be triggered for multiple files at a time. How should we handle the Lambda here? Any help is appreciated.

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  2 years ago +1

      Yeah, it is a real pain if you drop multiple files at ingestion time (into the raw layer) and you want the Glue job to start only after all the drops have completed. Past the raw stage, you can hook into Glue job and crawler events to run the pipeline, but at ingestion time you rely on the S3 file-drop event.
      In such cases, the best method is to drop a token file after all the files have been dropped. The S3 event can be configured on the put/post event of this token file. The crawler will be configured to exclude the token file. Similarly, the Glue job, if doing file-based operations, will also exclude the token file. Hope it helps.
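
      A minimal sketch of wiring up that token-file event with boto3; the bucket, function ARN, and token suffix are placeholders.

      import boto3

      s3 = boto3.client("s3")

      # Fire the Lambda only when the token object appears, not per data file.
      s3.put_bucket_notification_configuration(
          Bucket="my-raw-bucket",
          NotificationConfiguration={
              "LambdaFunctionConfigurations": [{
                  "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:start-etl",
                  "Events": ["s3:ObjectCreated:*"],
                  "Filter": {"Key": {"FilterRules": [
                      {"Name": "suffix", "Value": "_DONE.token"}
                  ]}},
              }]
          },
      )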

  • @simij851
    @simij851 2 years ago

    What would you advise if we have 150 tables to move from MySQL into S3 (no business transformation - just a raw dump load)? Should we have them all in one Step Function running in parallel, or create individual pipelines to reduce the risk that if one fails, all fail because they are clubbed together?

  • @anirbandatta2037
    @anirbandatta2037 2 years ago

    Hi, could you please share some CI/CD scenarios using AWS services?

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  2 years ago

      Sure - I will plan some. Thanks for the feedback.

  • @veeru2310
    @veeru2310 1 year ago

    Hi sir, I am passing Glue job arguments in Step Functions to run parallel Glue job operations. Unfortunately the job succeeds but no records are transferred, even though the source path and destination look correct. Please help; the job is not taking the parameters from Step Functions.

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  1 year ago

      Can you show the syntax you use to pass parameters when calling the Glue job?

    • @veeru2310
      @veeru2310 1 year ago

      @@AWSTutorialsOnline I am going to load 18 tables, so I need to pass 18 table parameters, right? Is there a good way? Can you please suggest one?
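
      One hedged way to avoid hard-coding 18 parameter sets is a Map state that fans out over a table list and passes each name as a job argument. The job name and argument key below are placeholders, and the job script would need to read --table_name via getResolvedOptions.

      # Hypothetical Map state; input like {"tables": ["orders", "customers", ...]}
      load_tables_state = {
          "Type": "Map",
          "ItemsPath": "$.tables",
          "MaxConcurrency": 5,          # throttle parallel Glue job runs
          "Iterator": {
              "StartAt": "LoadOneTable",
              "States": {
                  "LoadOneTable": {
                      "Type": "Task",
                      "Resource": "arn:aws:states:::glue:startJobRun.sync",
                      "Parameters": {
                          "JobName": "table-load-job",           # placeholder
                          "Arguments": {"--table_name.$": "$"},  # current table name
                      },
                      "End": True,
                  }
              },
          },
          "End": True,
      }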

  • @InvestorKiddd
    @InvestorKiddd 1 year ago

    How do you create a Glue job using AWS Lambda?

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  1 year ago

      Do you want to create a Glue job or run a Glue job?

    • @InvestorKiddd
      @InvestorKiddd 1 year ago

      @@AWSTutorialsOnline Create a Glue job using AWS Lambda or AWS Step Functions.

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  1 year ago

      @@InvestorKiddd I can probably explain, but I want to understand more. Generally, people have a job configured and want to run it using Lambda / Step Functions. Why do you need to create the job using Lambda / Step Functions? What is the use case?

    • @InvestorKiddd
      @InvestorKiddd 1 year ago

      @@AWSTutorialsOnline I am scraping some files based on cities, and then I want to convert them to Parquet and use Athena queries to get insights.
      I can use the same job for the mapping and conversion, but the input and output path names change. Say the input file name is mumbai.csv (city.csv); the input path will change when we go to Bangalore.csv. To solve this, my idea was to create a new job for each new city - or, if we can change the input and output paths programmatically, that also works for me. I want to automate this process.

    • @AWSTutorialsOnline
      @AWSTutorialsOnline  1 year ago +1

      @@InvestorKiddd In this case, you should create one job and, at run time, pass the source and destination locations as job parameters. Please check my videos - I talked about this in one of them.
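
      A minimal sketch of that approach, with hypothetical argument names: the caller passes the paths, and the Glue script reads them with getResolvedOptions.

      # Caller side (e.g. from Lambda, or as Step Functions Parameters):
      import boto3

      boto3.client("glue").start_job_run(
          JobName="city-to-parquet-job",  # placeholder
          Arguments={
              "--source_path": "s3://my-input/mumbai.csv",
              "--dest_path": "s3://my-output/mumbai/",
          },
      )

      # Inside the Glue job script:
      import sys
      from awsglue.utils import getResolvedOptions

      args = getResolvedOptions(sys.argv, ["source_path", "dest_path"])
      print(f"read from {args['source_path']}, write to {args['dest_path']}")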
