Hi Annu, your videos are of great help. Could you also make a video on incremental data load (delta load, watermark table, explaining these technical terms in an understandable way) from Blob to SQL or SQL to SQL, on a priority basis? Thank you in advance!
Hello, thank you for watching the videos. Sure, I will try to create the mentioned scenario. Meanwhile, please check whether the videos in the Synapse playlist help you with incremental load: th-cam.com/video/FXw1gPaa2-M/w-d-xo.html
@@azurecontentannu6399 Sure, thank you for your response. I will watch the content you shared; please also consider my concern and help with that incremental load scenario.
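The watermark pattern asked about above can be sketched in a few lines. This is a minimal illustration using an in-memory SQLite database; the `orders`, `orders_dest`, and `watermark` table names and the `modified_at` column are hypothetical, chosen only to show the idea of loading just the rows changed since the last run:

```python
import sqlite3

# Set up a toy source, destination, and one-row watermark table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, modified_at TEXT);
    CREATE TABLE orders_dest (id INTEGER, modified_at TEXT);
    CREATE TABLE watermark (last_value TEXT);
    INSERT INTO watermark VALUES ('2023-06-01');
    INSERT INTO orders VALUES
        (1, '2023-05-30'), (2, '2023-06-15'), (3, '2023-06-21');
""")

def incremental_load(conn):
    cur = conn.cursor()
    # 1. Read the current watermark (high-water mark of the last load).
    (last,) = cur.execute("SELECT last_value FROM watermark").fetchone()
    # 2. Pull only rows changed after the watermark (the delta).
    delta = cur.execute(
        "SELECT id, modified_at FROM orders WHERE modified_at > ? "
        "ORDER BY modified_at", (last,)
    ).fetchall()
    # 3. Copy the delta to the destination.
    cur.executemany("INSERT INTO orders_dest VALUES (?, ?)", delta)
    # 4. Advance the watermark so the next run skips these rows.
    if delta:
        cur.execute("UPDATE watermark SET last_value = ?",
                    (max(r[1] for r in delta),))
    conn.commit()
    return delta

loaded = incremental_load(conn)
print(loaded)  # rows 2 and 3 only; row 1 predates the watermark
```

In ADF the same steps map onto a Lookup activity (read the watermark), a Copy activity with a parameterized source query, and a stored procedure or script activity to advance the watermark.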
Ma'am, everything is clear. Only one doubt from my side. I have the following folder structure: Input (container) / folderA (directory) / source_file.csv (file). I enabled partition discovery but did not provide the partition root path. Copy behaviour: file path available in the dataset. In this case, how will it behave? I ran with the above configuration and saw the same result as a normal copy activity. Please help me.
Hi, will this help with reading the latest incremental data from ADLS? If incremental data is loaded into the year=2023/month=06/day=21/ folder, will this data be fetched automatically?
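One common way to target the latest daily folder is to build the Hive-style path dynamically from the run date. A minimal sketch of that path construction (in ADF you would typically express the same thing with `concat` and `formatDateTime` in a dynamic dataset parameter; the function name here is illustrative):

```python
from datetime import date

def partition_path(d: date) -> str:
    """Build the Hive-style folder path for one day's incremental data.
    Zero-pads month and day so paths sort and match consistently."""
    return f"year={d.year}/month={d.month:02d}/day={d.day:02d}/"

print(partition_path(date(2023, 6, 21)))  # year=2023/month=06/day=21/
```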
Very informative for a fundamental look. And I appreciate the walkthrough tests. Thank you!
Thank you for your contribution
Nicely explained!
Can you make a video on the common errors that occur when a pipeline breaks, and how to recover from them?
Does the folder nomenclature need to be in the format "column_name=row_data"?
Will it accept anything other than the = operator?
Hi Nitin
Yes, the folder nomenclature needs to be that way; otherwise it will fail.
@@azurecontentannu6399 Great, thanks for the clarification.
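The "column_name=row_data" requirement discussed above is the standard Hive-style partition convention: each path segment containing `=` becomes a partition column. A small sketch of how such a path decomposes into columns (a plain-Python illustration, not ADF's internal implementation):

```python
def parse_partition_columns(blob_path: str) -> dict:
    """Extract partition columns from a Hive-style path.
    Only segments shaped 'column=value' yield columns; the last
    segment is treated as the file name and skipped."""
    columns = {}
    for segment in blob_path.split("/")[:-1]:
        if "=" in segment:
            key, _, value = segment.partition("=")
            columns[key] = value
    return columns

print(parse_partition_columns("year=2023/month=06/day=21/data.csv"))
# {'year': '2023', 'month': '06', 'day': '21'}
```

A path like `Input/folderA/source_file.csv` yields no partition columns at all, which is why plain folder names without `=` behave like a normal copy.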
Can you please start videos on ADLS Gen1 migration to Gen2?
Hi
Is there any way to capture the list of folders and file names recursively for a selected container?
The Get Metadata activity can help you achieve this.
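For the recursive listing question above: blob storage exposes a flat namespace, so a full listing of blob names already contains every file, and the folder tree can be inferred from the `/` separators. A plain-Python sketch of that idea (the container contents here are made up for illustration; outside ADF you could get the flat listing with the `azure-storage-blob` SDK's `list_blobs`):

```python
def folders_and_files(blob_names):
    """Derive the folder tree and file list from a flat blob listing.
    Every prefix ending in '/' is recorded as a folder."""
    folders, files = set(), []
    for name in blob_names:
        parts = name.split("/")
        files.append(name)
        for i in range(1, len(parts)):
            folders.add("/".join(parts[:i]) + "/")
    return sorted(folders), files

listing = [
    "Input/folderA/source_file.csv",
    "Input/folderB/2023/a.csv",
]
print(folders_and_files(listing))
```

Inside ADF itself, Get Metadata with the `childItems` field returns one level at a time, so a fully recursive walk is usually built by looping (ForEach plus a nested pipeline) over subfolders.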
How can we optimize the Copy activity?