I never tried duckdb but I'm convinced now thanks to you
Pandas is great but "I can't query a DF like it's a DB using SQL" was one of my first complaints using pandas, and "it needs its own process" is a huge deal breaker for data analysis, especially in finance
This solves all these problems 😮
Great video 😊
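For anyone who hasn't tried it, a minimal sketch of what that looks like (the DataFrame and column names here are made up for illustration):

import duckdb
import pandas as pd

# hypothetical data standing in for a real finance DataFrame
df = pd.DataFrame({"ticker": ["AAPL", "MSFT", "NVDA"], "price": [190.0, 410.0, 120.0]})

# DuckDB runs in-process and can query the local DataFrame by its variable name
result = duckdb.sql("SELECT ticker, price FROM df WHERE price > 150").df()
print(result)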
Big big fan of DuckDB. It is nice to be able to switch to SQL for some things. The engine is great. Pairing it with MotherDuck opens up some nice options.
I like the relational api too.
I switch between duckdb and polars.
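For anyone curious, a small sketch of the relational API side (data made up):

import duckdb
import pandas as pd

df = pd.DataFrame({"job": ["dev", "dev", "ops"], "age": [30, 40, 35]})

# relational API: compose filter/aggregate calls instead of writing one SQL string;
# aggregate() groups by the non-aggregated columns in the expression
rel = duckdb.from_df(df).filter("age > 25").aggregate("job, avg(age) AS avg_age")
print(rel.df())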
Concise, complete, helpful. As usual. Thanks.
I would never have thought duckDB is so versatile and such a cool replacement for Pandas, even when working with local files!
Love duckdb. I love this video. A comparison to polars would have been interesting too!
Love your videos, very helpful and informative! One suggestion: you could consider using AI completion tools like SuperMaven, Copilot, Tabnine, Cursor, etc., to speed up your typing. Watching you type long code lines slowly can be a bit tedious.
These days every new tool seems faster than pandas, even after the 2.0 Arrow-based upgrade
What else is faster?
@@maximilianrausch5193 polars, vaex
@@maximilianrausch5193 1:27 polars and vaex are examples
So if you don't need the sugar, just use a DB already? I can't stress enough how much time this insight has saved me. Especially for live data.
For speed this is very comparable to Polars. I started using SQL within Polars but have switched to DuckDB because it shows where errors come from, and I don't think Polars' SQL has all the functionality of DuckDB's.
Being able to save the result as a DataFrame is nice for changing things such as date columns to UK format, and for the display options of the output table.
Really great intro, thank you for making this.
Duckdb is super convenient for one-off analyses, or for those who don't have extra hardware to run their own database server.
Neat, but: Running the test with the DuckDB process second each time means possible skew due to OS caching. I would have run the test again with Pandas second. Would it have made a difference? One way to find out ....
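One cheap way to control for that, sketched below: repeat each candidate several times and keep the best run, so file-cache warm-up stops favoring whoever goes second.

import time

def bench(fn, repeats=5):
    # the first call warms the OS file cache; taking the min of several
    # runs makes the ordering of the candidates matter much less
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times)

# e.g. bench(lambda: pd.read_csv("big.csv")) vs bench(lambda: duckdb.sql(...).df())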
Those two queries are CTEs (Common Table Expressions) not sub-selects. Nit picking, but there is a difference.
Thanks for the clarification! :)
In my view you can treat a CTE as a sub-select; it's just clearer and more readable, and it can be reused within the query statement. But yes, they do have some differences. In the video the author called them 'sub-selects' or 'sub-queries', which I think is okay.
@@marvinalone Apart from the fact that they're not sub-selects...
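For anyone wondering what the difference looks like in practice, here's the same toy query both ways (data made up); the results match, but only the CTE gives the filtered set a name that could be referenced again later in the statement:

import duckdb
import pandas as pd

df = pd.DataFrame({"job": ["dev", "dev", "ops"], "age": [30, 40, 35]})

# CTE: the filtered set is named up front and reusable within the statement
cte = """WITH grown AS (SELECT * FROM df WHERE age > 25)
         SELECT job, avg(age) AS avg_age FROM grown GROUP BY job ORDER BY job"""

# sub-select: the same filter inlined and anonymous
sub = """SELECT job, avg(age) AS avg_age
         FROM (SELECT * FROM df WHERE age > 25) AS grown
         GROUP BY job ORDER BY job"""

print(duckdb.sql(cte).df().equals(duckdb.sql(sub).df()))  # True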
The speed comparison has inefficient Pandas code. Try this approach: df.loc[df['C']
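A guess at the completed comparison (the column name 'C' is from the comment above; the data and threshold are made up):

import pandas as pd

df = pd.DataFrame({"C": range(1_000_000)})

# slow: row-wise apply
slow = df[df.apply(lambda row: row["C"] > 500_000, axis=1)]

# fast: vectorized boolean mask, presumably what the comment means
fast = df.loc[df["C"] > 500_000]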
Really very informative, thanks for sharing.
When considering speed, how about using pd.read_csv with engine="pyarrow"? That also loads big CSV files very fast.
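Worth timing on your own data; a rough sketch (assumes a local file named big.csv):

import time
import duckdb
import pandas as pd

path = "big.csv"  # hypothetical large file

start = time.perf_counter()
pd.read_csv(path, engine="pyarrow")
print("pandas + pyarrow:", time.perf_counter() - start)

start = time.perf_counter()
duckdb.sql(f"SELECT * FROM '{path}'").df()
print("duckdb:", time.perf_counter() - start)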
Can you compare the speed of polars and duckdb?
Any plans for making a video about airflow?
I would like Dagster as well.
First impression with DuckDB: it doesn't seem to support dictionaries by default, so let's say you just want to dump a Python dictionary into a flattened table in DuckDB: no go (the best workaround so far is to turn it into a pyarrow table/DataFrame first). If you want to import JSON, you first have to save it to a file and then load that file. That's stupid if you are already holding the JSON text from e.g. a web request response. I might just be missing something, but the documentation doesn't seem to give examples of how to load data into DuckDB without going through a data proxy. I was hoping I could avoid all that if I used DuckDB.
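For reference, the detour described above, sketched with a made-up payload standing in for a parsed web response:

import duckdb
import pandas as pd

# JSON already in memory, e.g. response.json() from a web request
payload = {"name": ["alice", "bob"], "age": [30, 25]}

# the workaround: dict -> DataFrame (or pyarrow Table) -> DuckDB table
df = pd.DataFrame(payload)
duckdb.sql("CREATE TABLE people AS SELECT * FROM df")
print(duckdb.sql("SELECT * FROM people").df())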
Thank you for another great video. I learned something new again.
I like this because anything that gives people more options is great.
However, an honest question - is the preference for SQL syntax an organic thing, or is it because of familiarity?
I mean I totally get the dislike for the pandas API and I almost never use it myself now, unless I have to. But I find the cognitive load of both reading and writing SQL far greater than the expression syntax and method chaining of Polars. Is that just because I don't do enough SQL?
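For comparison, the same toy question written both ways (data made up; the DuckDB-to-Polars bridge needs pyarrow installed). Whichever reads easier probably does come down to which one you've used more:

import duckdb
import polars as pl

df = pl.DataFrame({"job": ["dev", "dev", "ops"], "age": [30, 40, 35]})

# SQL via DuckDB, which can scan the Polars frame directly
sql_way = duckdb.sql("SELECT job, avg(age) AS avg_age FROM df WHERE age > 25 GROUP BY job").pl()

# Polars expressions and method chaining (group_by in recent Polars versions)
expr_way = df.filter(pl.col("age") > 25).group_by("job").agg(pl.col("age").mean().alias("avg_age"))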
Thanks for the video. I am considering querying a large result set from BigQuery and then outputting it to a CSV file, ideally zipped. Any suggestions, please? Thanks
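One hedged suggestion: if the result fits in a DataFrame, DuckDB's COPY can write gzip-compressed CSV directly, something like:

import duckdb
import pandas as pd

# stand-in for the BigQuery result pulled into a DataFrame
df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})

# write the query result straight to a gzip-compressed CSV
duckdb.sql("COPY (SELECT * FROM df) TO 'out.csv.gz' (FORMAT CSV, HEADER, COMPRESSION 'gzip')")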
Can we do comparison between DuckDB vs Polars?
Is data transformation still better in Pandas or Polars? I appreciate that the analysis part is better in DuckDB.
15:47 - results of aggregations are different btw.
would be great if you could also cover cozo DB
Well done. Thank you
Hi there, may I ask the theme name that you are using? Thanks
How about duckdb vs polars?
Answer = yes
This makes my belief even more true: "master Python and you master everything"
What is the go-to screen capture tool that he is using that works on Linux these days?
Same as the one for Windows: OBS Studio.
Fun fact: after all these pandas killers, people are still using pandas.
Pandas is a different use case
@@gs-e2d So they get called "Pandas killers" even though it's a different use case.
Does anyone know how would this compare to a PySpark setup?
Wtf. Now I can use CSV files as tables and query them with SQL. I'd only heard about this, but never seen it done so easily
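It really is that direct; a minimal sketch (file name made up, schema inferred from the file):

import duckdb

# the CSV path acts like a table name
duckdb.sql("SELECT region, sum(amount) AS total FROM 'sales.csv' GROUP BY region").show()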
Interesting
Thx.
4:01 Lol he said INSERT INTO person.
not the next, it's the real killer
Hi
For the complex query, I have:
query = """
WITH filtered_data AS (
SELECT job, AVG(age) as avg_age
FROM personnel_df
WHERE age > 25
GROUP BY job
),
job_counts AS (
SELECT job, COUNT(*) as count
FROM personnel_df
GROUP BY job
)
SELECT fd.job, fd.avg_age, jc.count
FROM filtered_data fd
JOIN job_counts jc
ON fd.job = jc.job
WHERE jc,count > 1
ORDER BY fd.avg_age DESC
"""
print(conn.sql(query).df())
I get the following error:
Traceback (most recent call last):
File "D:/python/DuckDB/foo1.py", line 60, in
main()
File "D:/python/DuckDB/foo1.py", line 55, in main
print(conn.sql(query).df())
duckdb.duckdb.ParserException: Parser Error: syntax error at or near ","
I am running Python 3.12, using the IDLE interface. Can someone please explain how to correct this error?
probably this one "WHERE jc,count > 1", it should be "jc.count", period not comma.
@benrontol2010 - wow!!!! I looked that over about 100 times and missed that. Thank you very much.
@@benrontol2010 real chad!