Advancing Fabric - The Data Engineering Experience

  • Published on 13 Dec 2024

Comments • 11

  • @mwaltercpa • 1 year ago

    Learning PySpark in Fabric, love the tips!

  • @rhambo5554 • 1 year ago +2

    A bit concerning that notebooks are currently unsupported for Git integration and deployment pipelines; hopefully we can get some support, either in Fabric or via an API for DevOps, prior to GA.

  • @DeyvidCabral • 1 year ago +1

    Is it possible to set up version control for notebooks using DevOps?

  • @hellhax • 1 year ago +1

    Does the VS Code extension allow you to run Spark commands remotely, similar to how it works for AzureML? If so, that would be fantastic and a major advantage over the mediocre Databricks VS Code extension...

    • @willwang9673 • 1 year ago

      Yes, it does support this scenario.

  • @keen8five • 1 year ago

    I'd love to learn more about custom pools

  • @joaoluismartins • 1 year ago

    Very nice video! A quick question: when you created the tables from the files, does the data get duplicated in Fabric, i.e., more OneLake usage?
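
For context on the question above, a minimal PySpark sketch of what loading lakehouse files into a table can look like in a Fabric notebook; the path and table name are hypothetical, `spark` is the session the notebook provides, and as far as I understand it the managed-table write produces a converted Delta copy of the data under Tables, which is why OneLake usage grows. If duplication is the concern, OneLake shortcuts reference data in place instead of copying it.

    # Minimal sketch, assuming the Parquet files already sit under the lakehouse
    # Files area; the path and table name below are hypothetical examples, and
    # `spark` is the session a Fabric notebook provides.
    df = spark.read.parquet("Files/raw/sales")

    # Saving as a managed Delta table writes a converted Delta copy under Tables,
    # so OneLake storage grows by roughly the size of that copy.
    df.write.format("delta").mode("overwrite").saveAsTable("sales")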

  • @jorgbender2763 • 1 year ago

    Would have been great if you had shown the initial ingestion step of how to get these Parquet files into the Lakehouse :) All in all a great video! Keep them coming, guys #fabricators

    • @AdvancingAnalytics • 1 year ago +1

      Yep, we'll do a "getting data into Fabric" episode soon, so we didn't cover it here!
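
Until that episode exists, one possible ingestion route is a notebook copy from external storage into the lakehouse Files area; a hedged sketch follows, where the storage account, container, and paths are placeholders rather than anything shown in the video.

    # Hedged sketch of one ingestion option: read Parquet from external ADLS Gen2
    # storage and land it in the lakehouse Files area. The storage account,
    # container, and paths are placeholders; `spark` is the Fabric notebook session.
    src = "abfss://landing@examplestorage.dfs.core.windows.net/sales/*.parquet"

    df = spark.read.parquet(src)   # pull the source Parquet files

    # Write them under Files so they can later be loaded into Delta tables.
    df.write.mode("overwrite").parquet("Files/raw/sales")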

  • @stephenpace7928 • 1 year ago

    How long does a custom Spark cluster generally take to start?

  • @vt1454 • 1 year ago

    From a Databricks perspective, a lakehouse is a logical place encompassing all three zones - bronze | silver | gold - even though on the physical plane these can sit in separate storage accounts or containers. The Fabric terminology of using a separate lakehouse for each of the three layers is confusing.
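
For readers weighing the two layouts, a hedged sketch of the separate-lakehouse-per-layer pattern described above, using OneLake abfss paths; the workspace, lakehouse, table, and column names are placeholders. Physically it behaves much like the separate containers in the Databricks setup the comment describes; the split is mostly organizational.

    # Hedged sketch of a bronze-to-silver hop when each layer is its own lakehouse.
    # Workspace, lakehouse, table, and column names are all placeholders; OneLake
    # abfss paths let one notebook read one lakehouse and write to another.
    bronze = "abfss://SalesWorkspace@onelake.dfs.fabric.microsoft.com/Bronze.Lakehouse/Tables/orders"
    silver = "abfss://SalesWorkspace@onelake.dfs.fabric.microsoft.com/Silver.Lakehouse/Tables/orders"

    df = spark.read.format("delta").load(bronze)   # raw orders from the bronze lakehouse

    # Basic cleansing step before promoting the data to silver.
    cleaned = df.dropDuplicates(["order_id"]).filter("order_total IS NOT NULL")

    cleaned.write.format("delta").mode("overwrite").save(silver)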