Understanding Delta Lake - The Heart of the Data Lakehouse

  • Published Nov 2, 2024

Comments • 17

  • @mainakdey3893 · 4 months ago

    At last, somebody is clearing up the confusion. Good job, Bryan!

  • @amarnadhgunakala2901 · 1 year ago +1

    Thank you, brother, this helps people.

  • @stylish37 · 1 year ago

    Top stuff, Bryan! Thanks a lot for this playlist.

  • @parisaayazi8886 · 5 months ago +1

    Thanks Bryan! I'm wondering how it's possible to create a CSV table using the CREATE TABLE command, which lets us write SQL queries against it, yet we can't use saveAsTable with format('csv') to achieve the same result.

    • @BryanCafferky · 5 months ago

      Originally, Spark could not create updatable tables. It could only lay a schema over a flat file like a CSV. The schema describes the data in the file so SQL SELECT statements can be run against it, but you can't update the table, and it is not a managed table, meaning that if you drop the table for the CSV file, the file remains. Updatable tables (supporting CRUD and ACID) were added with Delta tables. See the sketch after this thread.

    • @parisaayazi8886 · 5 months ago

      @BryanCafferky thanks a lot.
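
For readers following along, here is a minimal PySpark sketch of the distinction described in this thread. Table names and paths are hypothetical; the Delta steps assume a Databricks or Delta Lake-enabled Spark session.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Unmanaged CSV table: just a schema laid over the file (schema-on-read).
# SELECT works against it, but UPDATE/DELETE do not, and DROP TABLE
# removes only the metadata -- the CSV file itself remains.
spark.sql("""
    CREATE TABLE sales_csv (id INT, amount DOUBLE)
    USING CSV
    OPTIONS (header 'true')
    LOCATION '/data/sales'
""")

# Delta table via saveAsTable: a managed table with full CRUD/ACID support.
df = spark.read.option("header", "true").csv("/data/sales")
df.write.format("delta").saveAsTable("sales_delta")

spark.sql("UPDATE sales_delta SET amount = 0 WHERE id = 1")  # works on Delta, not on the CSV table
```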

  • @panzabamboo1901 · 1 year ago +1

    Hi Bryan, would you be able to elaborate more on the file types currently supported for ETL jobs running on Databricks? I'm still using trial and error to figure out the file types and how to load them.

    • @BryanCafferky · 1 year ago

      Hi Panza, assuming you mean source file types to be read, most common file types are supported via Spark, i.e., CSV, JSON, SQL databases (via JDBC), Parquet, Delta, and Avro. Are you looking for a specific type? See the reader sketch after this thread.
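
For reference, a sketch of the corresponding Spark readers for the formats listed above. Paths and connection details are hypothetical; Avro requires the external spark-avro package.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df_csv     = spark.read.option("header", "true").csv("/landing/file.csv")
df_json    = spark.read.json("/landing/file.json")
df_parquet = spark.read.parquet("/landing/file.parquet")
df_delta   = spark.read.format("delta").load("/landing/delta_table")
df_avro    = spark.read.format("avro").load("/landing/file.avro")  # needs org.apache.spark:spark-avro

# SQL databases are read via JDBC (URL and credentials are placeholders).
df_jdbc = (spark.read.format("jdbc")
           .option("url", "jdbc:postgresql://host:5432/db")
           .option("dbtable", "public.sales")
           .option("user", "user")
           .option("password", "pw")
           .load())
```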

  • @WojciechBukowski-m5e · 9 months ago +1

    Thanks, this is a great video and well explained.

    • @BryanCafferky · 9 months ago

      Thanks. In my experience, it is important to keep the original data you loaded into a DW because of: 1) troubleshooting issues, 2) recovery if some part of the data fails to load (you reload from the copy), and 3) auditability (you can show what you loaded). It's especially critical if you cannot go back at a later date and retrieve that data again from the source.

  • @gatorpika · 1 year ago

    Great explanation! Thanks!

  • @rahulberry5341 · 1 year ago

    Thanks for the nice explanation

  • @sajeershahul8361 · 1 year ago

    How can I not subscribe? 👌🏽

  • @gautamgovinda5140 · 5 months ago

    Cool👍