At last somebody is clearing the confusion. Good job, Bryan!
Thank you, brother, this helps people.
Top stuff Bryan! Thanks a lot for this playlist
YW
Thanks Bryan! I'm wondering how it's possible to create a CSV table using the CREATE TABLE command, which lets us write SQL queries against it, yet we can't use saveAsTable with format('csv') to achieve the same result.
Originally, Spark could not create updatable tables. Instead, it could only create a schema over a flat file like a CSV. The schema describes the data in the file so SQL SELECT statements can be run against it. You can't update the table, though, and it is not a managed table, meaning that if you drop the table for the CSV file, the file remains. Updatable tables (supporting CRUD and ACID) were added with Delta tables.
@BryanCafferky thanks a lot.
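To illustrate the distinction Bryan describes above, here is a minimal PySpark sketch; the table names and paths are placeholders, not anything from the video.

# A minimal sketch of the two approaches discussed above.
# Paths and table names are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# External (unmanaged) table over an existing CSV file: Spark stores only the
# schema and location, so SELECT works, but the data is not updatable and
# DROP TABLE leaves the underlying file in place.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_csv (id INT, amount DOUBLE)
    USING CSV
    OPTIONS (header 'true')
    LOCATION '/mnt/raw/sales/'
""")

# Managed Delta table via saveAsTable: supports updates, deletes, and ACID
# transactions, and dropping the table removes the data as well.
df = spark.read.option("header", "true").csv("/mnt/raw/sales/")
df.write.format("delta").mode("overwrite").saveAsTable("sales_delta")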
Hi Bryan, would you be able to elaborate more on the file types? I'm currently supporting ETL jobs running on Databricks and still using trial and error to figure out the file types and how to load them.
Hi Panza, assuming you mean source file types to be read, most file types are supported via Spark, e.g. CSV, JSON, SQL databases, Parquet, Delta, Avro. Are you looking for a specific type?
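For reference, a quick sketch of reading those formats in a Databricks notebook (where spark is predefined); the paths, table, and JDBC settings are illustrative assumptions.

# Reading a few common source formats with Spark.
df_csv     = spark.read.option("header", "true").csv("/mnt/raw/data.csv")
df_json    = spark.read.json("/mnt/raw/data.json")
df_parquet = spark.read.parquet("/mnt/raw/data.parquet")
df_avro    = spark.read.format("avro").load("/mnt/raw/data.avro")   # Avro support is built into the Databricks runtime
df_delta   = spark.read.format("delta").load("/mnt/delta/some_table")

# SQL databases are read over JDBC (the driver must be available on the cluster).
df_jdbc = (spark.read.format("jdbc")
           .option("url", "jdbc:sqlserver://myserver:1433;database=mydb")
           .option("dbtable", "dbo.customers")
           .option("user", "myuser")
           .option("password", "mypassword")
           .load())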
Thanks, this is great video and well explained
Thanks. In my experience, it is important to keep the original data you loaded into a DW because of 1) troubleshooting issues, 2) recovery if some part of the data fails to load (you reload from the copy), and 3) auditability (you can show what you loaded). It's especially critical if you cannot go back at a later date and retrieve that data again from the source.
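As a rough sketch of that practice in PySpark (paths and table names here are hypothetical, and spark is the ambient session in a Databricks notebook): land the untouched source copy first, then load the warehouse from that preserved copy.

# 1) Persist the source data as-is for troubleshooting, reload, and audit.
raw = spark.read.option("header", "true").csv("/mnt/landing/orders_2024_01.csv")
raw.write.format("delta").mode("append").save("/mnt/raw/orders/")

# 2) Transform and load the warehouse table from the preserved raw copy,
#    so a failed load can be re-run without going back to the source system.
cleaned = raw.dropDuplicates(["order_id"]).filter("amount IS NOT NULL")
cleaned.write.format("delta").mode("append").saveAsTable("dw.fact_orders")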
Great explanation! Thanks!
You're welcome!
Thanks for the nice explanation
YW
How can I not subscribe 👌🏽
Cool👍