Hi Chris, I have to say I really don’t consider this “fast copy”. It might work fine when loading small tables, but if you need to load billions of rows, it can’t dynamically partition the table. With hand coding I can move 6 billion rows in an hour or two, but with the Copy job I had to kill it after 24 hours. I hope they make it smart enough to partition large tables in the future.
I love the interface but performance isn’t where we need it.
Your video is a great overview!
Scott
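For anyone wondering what the hand-coded, partitioned approach Scott describes can look like, here is a minimal sketch using Spark's JDBC partitioned read in a Fabric notebook. The server, table, and key-column names are placeholders, not details from this thread; the point is that partitionColumn/numPartitions split the source table into parallel range reads instead of one long scan.

```python
# Minimal sketch of a hand-coded, partitioned copy in a Fabric Spark notebook.
# Connection string, table names, and the partition column are placeholders;
# tune numPartitions to the table size and the capacity available.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

jdbc_url = "jdbc:sqlserver://<server>.database.windows.net;database=<db>"  # placeholder
source_table = "dbo.FactSales"                                             # placeholder

# Find the key range so Spark can split the read into evenly sized slices.
bounds = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("query", f"SELECT MIN(SalesKey) AS lo, MAX(SalesKey) AS hi FROM {source_table}")
    .load()
    .first()
)

# Partitioned read: each partition issues its own range query against the
# source, so billions of rows are pulled in parallel.
df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", source_table)
    .option("partitionColumn", "SalesKey")  # numeric, roughly uniform key (assumed)
    .option("lowerBound", bounds["lo"])
    .option("upperBound", bounds["hi"])
    .option("numPartitions", 64)
    .load()
)

# Land the data as a Delta table in the Lakehouse.
df.write.mode("overwrite").format("delta").saveAsTable("FactSales")
```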
I think this is a fair assessment, and it's still clear that we need the Pro-Code side of the house to address these needs. For a surprising number of companies, this will be a more than sufficient solution.
Hi, what is special about Copy job when we already have Fast Copy in Data pipelines in Fabric?
It's a great low-code option for people who are not interested in getting into more complex pipelines.