Misconfigured My Microsoft Fabric Workspace? My Costs Went Up! 😱
- Published Feb 8, 2025
- Are you struggling to manage the costs of your Microsoft Fabric workspace? In this video, I dive into two common misconfigurations that can significantly impact your Fabric performance, pipeline efficiency, and capacity unit (CU) consumption.
We'll cover:
1️⃣ Enabling high concurrency for pipelines running multiple notebooks
2️⃣ Adding an exit function to notebooks to avoid unnecessary resource usage
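The exit-function tip in point 2️⃣ can be sketched like this: calling `mssparkutils.notebook.exit()` at the end of a notebook hands a return value back to the calling pipeline and releases the Spark session immediately, instead of letting it sit idle until timeout and consume CUs. This is a minimal sketch assuming the Fabric notebook runtime; the `finish` wrapper and its fallback stub are my own illustration, not code from the video.

```python
# Minimal sketch, assuming a Fabric notebook runtime: mssparkutils is
# only available inside Fabric/Synapse notebooks, so a stub stands in
# for it elsewhere for illustration.
try:
    from notebookutils import mssparkutils

    def finish(value):
        # Ends the Spark session right away and hands `value` back to
        # the calling pipeline activity, rather than leaving the session
        # idle until timeout while it keeps consuming capacity units.
        mssparkutils.notebook.exit(value)
except ImportError:
    def finish(value):
        # Illustrative stand-in for running this sketch outside Fabric.
        print(f"notebook would exit with: {value}")

# ... notebook work happens here ...
finish("done")
```

Placed as the last cell, this makes the notebook activity in the pipeline finish as soon as the real work is done.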
Whether you're fine-tuning Spark performance in Fabric, optimizing notebooks and pipelines, or trying to lower the cost of Fabric capacity, this video will help you identify and fix costly misconfigurations.
By the end, you'll have actionable insights to optimize your workspace, reduce unnecessary CU usage, and ensure a smoother analytics experience. Perfect for anyone working with Microsoft Fabric, Spark tuning, or managing data pipelines with Notebooks!
🔔 Don’t forget to like, comment, and subscribe for more Fabric performance tips and tricks!
#MicrosoftFabric #FabricPerformance #SparkTuning #FabricNotebooks #DataPipelines #FabricCapacity #CUConsumption #Performance
Nice video! From my experience, the Starter Pool works well without HC mode, while custom Spark pools can benefit from it. I've noticed HC mode improves performance for heavy concurrent workloads, but if you're cost-conscious it may be prudent to avoid HC mode for smaller tasks, since underutilized resources can increase CU costs.
Good point, Nalaka! Since a custom pool takes time to start, enabling HC for pipelines might indeed show its value there; definitely worth testing and comparing the results. Thanks for sharing this.
The one we saw consuming 1.7 million CUs includes only code that updates an Azure SQL DB, but it's part of a pipeline with many activities that typically takes around 20-30 minutes to complete.