You only got HALF of your data. You ignored the real-world challenge of pagination. The page/per_page/total/total_pages attributes indicate that you need to perform subsequent fetches to get all 12 rows.
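A minimal sketch of that follow-up paging logic, assuming a reqres-style endpoint that returns page, per_page, total, and total_pages alongside a data array (the URL and field names are illustrative); inside ADF the same effect is usually achieved with an Until loop or the REST connector's pagination rules:

```python
import requests

# Hypothetical paginated endpoint; adjust the URL and field names to your API.
BASE_URL = "https://reqres.in/api/users"

def fetch_all_rows(base_url: str) -> list[dict]:
    """Keep requesting pages until total_pages is exhausted."""
    rows: list[dict] = []
    page = 1
    while True:
        resp = requests.get(base_url, params={"page": page})
        resp.raise_for_status()
        body = resp.json()
        rows.extend(body["data"])          # the payload rows for this page
        if page >= body["total_pages"]:    # stop once the last page has been read
            break
        page += 1
    return rows

if __name__ == "__main__":
    all_rows = fetch_all_rows(BASE_URL)
    print(f"Fetched {len(all_rows)} rows")  # should report all rows, not just page 1
```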
Thank you so much! I learnt a lot from this short video.
Thanks brother, it was really useful. I really appreciate it.
Thank you 😊
Perfect 👌 explanation sir
Thanks buddy for such an insightful video
That is a great video!!!!👏👏👏👏
Please make a video on interview questions, and also on the challenges you have faced in ADF, with examples.
You have great content explaining ADF! Can you please create a video on how we can pull multiple Excel files from Microsoft Teams, and another video with SharePoint as the source, loading to a file or database after doing some transformation? There are videos available, but I like the way you make it simple.
Informative! Many thanks
Awesome tutorial. Is there a way to divide a PDF into chunks while copying from source to sink? Like copying a 500-page PDF into 5 PDFs of 100 pages each?
Is there any option to dynamically specify the unroll column in the Flatten transform? Please reply.
Never tried it, but see if the 'Add dynamic content' option is available there. If yes, then you can add your column name using the byName() function.
Hi, the requirement is to truncate the Salesforce table first using ADF and then load the table using ADF. How can we do this? Any idea?
How can we control dev and prod variables in ADF?
Brilliant! Thanks
I have one requirement on this topic. I want to call an API that returns 5 CSV-format files and move these 5 files into a destination (table format). Can you please help me with how to do this?
great video......thanks a lot
We got a request in our project to egress data from an ADLS location to a REST API using ADF. Is that possible? If so, please provide your guidance.
How can I decode the API response?
Nice Video.. 👍
Thank you ☺️
How to copy a table from one storage account to another storage account in Synapse?
Same requirement, but with incremental data: call a watermark table with column 'ApiUrl', and based on 'LastExportDt' the data should be loaded incrementally. Please make a video on this.
Helpful
Thank you ☺️
I have 10 JSON files, same as the above scenario. How can I load them in a single ForEach activity using dynamic column mapping? Can you please explain this?
Have you got a solution for this?
Is it possible to dynamically copy multiple JSON files to CSV files without manually importing schemas?
You should define an input file (this could be a .txt file) that lists the sources and destinations, then use this file as the input for your ADF Lookup activity.
Absolutely you can, especially if you store the source and target metadata as well as the file format options. Then have a stored procedure or function dynamically output the same JSON that ADF would produce when reading the source and target metadata.
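A rough sketch of that metadata-driven idea, written in Python only to illustrate the logic a Lookup + ForEach + Copy pattern would implement. The metadata content, its field names, and the build_translator helper are assumptions; the output shape mirrors the TabularTranslator mapping JSON that ADF's Copy activity accepts as dynamic content.

```python
import json

# Hypothetical metadata a Lookup activity (or a stored procedure) would return:
# one entry per file, with source/sink locations and the column mapping.
METADATA = [
    {
        "source_path": "raw/customers.json",
        "sink_table": "dbo.Customers",
        "columns": [("$.id", "CustomerId"), ("$.name", "CustomerName")],
    },
]

def build_translator(columns: list[tuple[str, str]]) -> dict:
    """Emit a TabularTranslator-style mapping for one source/sink pair."""
    return {
        "type": "TabularTranslator",
        "mappings": [
            {"source": {"path": src}, "sink": {"name": dst}} for src, dst in columns
        ],
    }

# Each item would drive one iteration of a ForEach activity; the JSON printed
# below is what you would pass to the Copy activity's mapping as dynamic content.
for item in METADATA:
    print(item["source_path"], "->", item["sink_table"])
    print(json.dumps(build_translator(item["columns"]), indent=2))
```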
thanks for sharing
How to achieve incremental load from a REST API or through OData?
You would need to know your watermark column name or names. Most likely, you would then need to include these columns and values as key-value pairs in your API call. At the end of your process, I would recommend you take the maximum value of your watermark columns and log it after a successful load. Depending on the source application, some people take the max value of the watermark from their target table, and some take the maximum value of what was extracted; both have their pluses and minuses. 😊 If it is a first-time load, I would suggest a metadata framework that sets an arbitrary starting watermark value for any initial loads.
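A minimal sketch of that watermark flow, assuming the watermark lives in a small metadata file standing in for a watermark table, the API accepts the watermark column as a query parameter, and reusing the 'LastExportDt' column name mentioned above (the file name and the fetch_rows placeholder are illustrative, not a specific ADF feature):

```python
import json
from pathlib import Path

WATERMARK_FILE = Path("watermark.json")   # hypothetical stand-in for a watermark table
DEFAULT_START = "1900-01-01T00:00:00Z"    # arbitrary watermark for first-time loads

def read_watermark() -> str:
    if WATERMARK_FILE.exists():
        return json.loads(WATERMARK_FILE.read_text())["LastExportDt"]
    return DEFAULT_START                   # first-time load falls back to the default

def fetch_rows(since: str) -> list[dict]:
    # Placeholder: call your API with the watermark as a key-value pair,
    # e.g. requests.get(url, params={"LastExportDt": since})
    return []

def save_watermark(rows: list[dict]) -> None:
    # Log the max watermark value from what was extracted (or from the target table).
    if rows:
        new_mark = max(r["LastExportDt"] for r in rows)
    else:
        new_mark = read_watermark()        # nothing new; keep the old watermark
    WATERMARK_FILE.write_text(json.dumps({"LastExportDt": new_mark}))

if __name__ == "__main__":
    since = read_watermark()
    rows = fetch_rows(since)
    # ... load rows into the target here ...
    save_watermark(rows)                   # only update the watermark after a successful load
```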
Make use of the Web and Set Variable activities to copy the data!