Microsoft Fabric Optimistic Job Admission for Apache Spark - maximum compute utilization by default

  • Published on Jan 23, 2025

Comments • 1

  • @VasuN-bc4wc • 6 months ago • +1

    This looks nice. Waiting for it to become available in all tenants so we can test it out.
    We need shared clusters / concurrent Spark sessions in pipelines.
    There are scenarios where we need to run a lot of very lightweight operations, like pulling data from APIs, databases, etc., which barely need 2 cores and 4 GB of memory but have to run every 15 minutes.
    Right now, all we can do is run them on an 8-core, 64 GB node; a small node size exists, but it takes about 3 minutes to start.
    Hope to see a solution to these problems.
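To illustrate the kind of lightweight workload the comment describes, here is a minimal PySpark sketch of a 15-minute API pull that needs only a couple of cores. The endpoint URL and table name are hypothetical placeholders, not anything from the video or Fabric documentation.

```python
# Minimal sketch of a lightweight ingestion job: fetch a small JSON payload
# from a REST API and append it to a Delta table. The endpoint and table
# name below are hypothetical examples.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lightweight-api-pull").getOrCreate()

# Hypothetical API endpoint; the response is assumed to be a small list of records.
records = requests.get("https://example.com/api/metrics", timeout=30).json()

if records:
    df = spark.createDataFrame(records)
    # Hypothetical target table; append this 15-minute batch.
    df.write.format("delta").mode("append").saveAsTable("raw_metrics")

spark.stop()
```

A job like this spends most of its wall-clock time waiting on the API, which is why the commenter wants shared or faster-starting small capacity rather than a full 8-core node per run.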