Very useful, thank you for this demo!
Nice one mate, need to see your other videos
Thanks for watching!
Could you send the output of the notebook to the next step in the pipeline? Or just write the data somewhere and read it in another step?
thx
Hey! It would be useful to be able to do that, but I’m not sure it’s possible currently; normally you would just write the data out to a table and pick it up from there. You can pass parameters into a notebook from a data pipeline, though. I’ll be making some videos on notebooks in the near future, so I’ll make sure to cover these topics!
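For anyone curious what passing parameters into a notebook looks like, here’s a minimal sketch. The variable names and values are made up for illustration; the idea is that in Fabric you mark a notebook cell as a parameter cell, and values sent from the pipeline’s Notebook activity override the defaults in that cell at runtime:

```python
# Hypothetical parameter cell in a Fabric notebook (toggle the cell as a
# "parameter cell" in the notebook UI). Values supplied by the pipeline's
# Notebook activity override these defaults when the pipeline runs.
table_name = "sales_raw"   # default for interactive runs; assumed name
load_date = "2024-01-01"   # overridden by the pipeline at runtime

# Downstream cells just use the variables like any other Python variable:
output_path = f"Tables/{table_name}_{load_date.replace('-', '')}"
print(output_path)  # -> Tables/sales_raw_20240101
```

The notebook then writes its result to a table at that path, and the next pipeline activity reads it from there, as described above.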
@@LearnMicrosoftFabric Saw your video with parameters for the pipeline. Interesting, thanks. Can you cover the other pipeline elements as well? Just would like to see "Script, Lookup, Webhook and co". Br
Absolutely yes it's on my list to go through each element of pipelines in more detail. WATCH THIS SPACE!
Great video thanks!
How do we get the connection name at the source, like we do in ADF? In Fabric, is it automatically generated? Suppose here the source is Blob Storage. Can you please explain?
Great videos, and thanks for sharing. I was looking for a pipeline that reads data from a text file with a header row and a footer row that both have to be excluded, and that also adds column headers. Is that possible to achieve using the pipeline copy activity? I'm able to achieve the same using Spark, but was looking for a skip rows option.
There is an option to use the first row as a header in a data pipeline copy activity, but in general a data pipeline is an orchestration tool; for file/data cleaning it’s better to use a dataflow or a notebook 👍
Nice video! How do we append data while writing to a destination table in a warehouse using a data pipeline? I don't see an option like we have when writing to a table in a Lakehouse (e.g. "append" or "overwrite"). My source is a REST API.
Hi, thanks! Yes, I believe this is currently not possible from a Fabric data pipeline. It is possible in Azure Data Factory pipelines, so I’m sure they will add the functionality to Fabric soon (hopefully before release). If you need to do this in Fabric now, you can do it using Dataflow Gen2.
The data movement cost in Fabric is very high. Is anyone else having problems with this?