Thanks for this 😊 Please add how to connect with SSMS.
Connecting with SSMS is covered in this video: th-cam.com/video/Mn1GB8rSeoQ/w-d-xo.html
In short: copy the SQL endpoint URL, choose Database Engine in SSMS, paste that URL as the server name, and sign in with your user ID and password with MFA.
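If you prefer scripted access instead of the SSMS UI, here is a minimal sketch of connecting to the same SQL endpoint from Python with interactive Azure AD (MFA) authentication. It assumes ODBC Driver 18 for SQL Server and pyodbc are installed; the server, database, and table names are placeholders.

```python
# Minimal sketch: query the Lakehouse SQL endpoint from Python with MFA sign-in.
# Assumes ODBC Driver 18 for SQL Server and pyodbc; names below are placeholders.
import pyodbc

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-sql-endpoint>;"                     # paste the SQL endpoint URL here
    "Database=<your_lakehouse>;"
    "Authentication=ActiveDirectoryInteractive;"      # prompts for user ID / password with MFA
    "Encrypt=yes;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    cursor.execute("SELECT TOP 5 * FROM dbo.my_table")  # placeholder table name
    for row in cursor.fetchall():
        print(row)
```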
Hi Amit, thank you for this introduction. For the approaches shown, I need a web browser and have to upload manually each time I have new data. If I want to automate an update schedule for a local file with Fabric, do I need a Power BI Gateway like with Power BI, or an Azure Integration Runtime like with Synapse?
As of now, the on-premises gateway does not support that, but we can expect it soon.
For Azure sources, you should be able to schedule pipelines and dataflows.
Thanks for the video. Can I upload files using Python in a Fabric notebook?
If the file is on some cloud storage and you can access it with a URL and credentials, you can.
As of now, I doubt the on-premises gateway is supported for that.
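A minimal sketch of that idea, assuming the notebook has a default Lakehouse attached (mounted at /lakehouse/default) and the source file is reachable over a plain HTTPS URL; the URL and file names are placeholders.

```python
# Minimal sketch: pull a file from a URL inside a Fabric notebook
# and drop it into the Files area of the attached (default) Lakehouse.
import requests

source_url = "https://example.com/data/sales.csv"    # placeholder URL; add auth headers if needed
target_path = "/lakehouse/default/Files/sales.csv"   # default Lakehouse mount in Fabric notebooks

response = requests.get(source_url, timeout=60)
response.raise_for_status()

with open(target_path, "wb") as f:
    f.write(response.content)

print(f"Wrote {len(response.content)} bytes to {target_path}")
```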
Great explanation, sir. Is it possible to download the CSV files from a Fabric Lakehouse?
Thanks. For uploading, we have the option in the Lakehouse itself. For downloading, we can use OneLake Explorer; once installed, you will have access to the files, similar to OneDrive. Another method is to use Python code: PySpark in notebooks can help, or local Python code, which I have discussed in later videos of the series, can download files.
I also checked that you can create a pipeline to move data from the Lakehouse to a local folder. You need an on-premises gateway for that, and the data will be downloaded on the on-premises gateway machine.
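For the local-Python route, here is a minimal sketch that downloads a file through the OneLake endpoint. It assumes the azure-identity and azure-storage-file-datalake packages are installed and the signed-in account has access to the workspace; the workspace, lakehouse, and file names are placeholders.

```python
# Minimal sketch: download a CSV from a Fabric Lakehouse via the OneLake endpoint.
# Assumes azure-identity and azure-storage-file-datalake are installed; names are placeholders.
from azure.identity import InteractiveBrowserCredential
from azure.storage.filedatalake import DataLakeServiceClient

credential = InteractiveBrowserCredential()  # browser sign-in, supports MFA
service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=credential,
)

# In OneLake, the file system is the workspace, and paths start at <lakehouse>.Lakehouse
file_system = service.get_file_system_client("MyWorkspace")
file_client = file_system.get_file_client("MyLakehouse.Lakehouse/Files/sales.csv")

with open("sales.csv", "wb") as f:
    f.write(file_client.download_file().readall())
```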
Why is the destination type always string and cannot be changed? Thx
When the UI first came out, we had an option on the left to change the data type. That has since changed, and now we can change it on the destination side.
@amit Great tutorial. How can we transform multiple CSVs in a single dataflow and then sink them all to a lakehouse or warehouse using the same dataflow? Is that even possible? I tried it, but when I select the destination it only takes one CSV to sink, and I am stuck. Does that mean I have to create multiple dataflows to handle multiple CSV transformations? I know that in pipelines we can handle this via a ForEach activity, but I cannot sort out how to transform multiple CSVs using dataflows.
Scenario: the client provides Emp.csv and Dept.csv, and I want to transform them (changing the ID column from string to whole number) and then sink them to the lakehouse. The problem is that I can ingest the files automatically using a pipeline ForEach, but cannot do the same for the transformations.
You can add multiple files in the same dataflow and set a destination for each.
When you start a dataflow from the Lakehouse Explorer, it will automatically set the destination for all tables.
In the above case, you should be able to add all the files to one dataflow, transform them, and load them to the lakehouse or warehouse.
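If a notebook is an acceptable alternative to a dataflow for this step, here is a minimal PySpark sketch of the same idea: loop over the two CSVs, cast the ID column to a whole number, and save each as a Lakehouse table. The file and column names come from the scenario above; the Files/ paths and table names are assumptions, and it relies on the spark session a Fabric notebook provides.

```python
# Minimal sketch (notebook alternative to a dataflow): cast the ID column in
# Emp.csv and Dept.csv to a whole number and save each as a Lakehouse table.
# Assumes the files sit under Files/ in the attached Lakehouse; table names are assumptions.
from pyspark.sql.functions import col

sources = {
    "emp": "Files/Emp.csv",
    "dept": "Files/Dept.csv",
}

for table_name, path in sources.items():
    df = spark.read.option("header", True).csv(path)
    df = df.withColumn("ID", col("ID").cast("int"))  # string -> whole number
    df.write.mode("overwrite").format("delta").saveAsTable(table_name)
```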