I never tried duckdb but I'm convinced now thanks to you
Pandas is great, but "I can't query a DF like it's a DB using SQL" was one of my first complaints using pandas, and "it needs its own process" is a huge deal breaker for data analysis, especially in finance
This solves all these problems 😮
Great video 😊
Concise, complete, helpful. As usual. Thanks.
Big big fan of DuckDB. It is nice to be able to switch to SQL for some things. The engine is great. Pairing it with MotherDuck opens up some nice options.
I like the relational API too.
I switch between duckdb and polars.
These days every new tool seems faster than pandas, even after the 2.0 Arrow-based upgrade
What else is faster?
@@maximilianrausch5193 polars, vaex
@@maximilianrausch5193 1:27 polars and vaex are examples
I would have never thought DuckDB is so versatile and such a cool replacement for Pandas, even when working with local files!
For speed this is very comparable to polars. I started using SQL within polars but have switched to using duckdb, because duck shows where errors come from and I don't think polars' SQL has all the functionality of duck.
Being able to save the result as a dataframe is nice for changing things such as date columns to UK format and the display options of the output table.
So if you don't need the sugar, just use a db already? I can't stress how much time this insight has saved me. Especially for live data.
Love your videos, very helpful and informative! I have a suggestion that you could consider using AI completion tools like SuperMaven, Copilot, Tabnine, Cursor, etc., to speed up your typing. Watching you type long code lines slowly can be a bit tedious.
Those two queries are CTEs (Common Table Expressions), not sub-selects. Nitpicking, but there is a difference.
Thanks for the clarification! :)
In my view, you can treat a CTE like a sub-select; it's just clearer and more readable, since it can be reused within the query statement. But yes, they do have some differences. In the video the author called them 'sub-selects' or 'sub-queries', which I think is okay.
@@marvinalone Apart from the fact that they're not sub-selects...
Really great intro, thank you for making this.
Really very informative, thanks for sharing.
Any plans for making a video about airflow?
I would like Dagster as well.
Duckdb is super convenient for one-off analysis, or for those who don't have extra hardware to run their own database server.
Can you compare the speed of polars and duckdb?
Thank you for another great video. I learned something new again.
When considering speed, how about using pd.read_csv with engine="pyarrow"? That also loads big CSV files very fast.
Thanks for the video. I am considering querying a large result set from BigQuery and then outputting it to a CSV file, ideally zipped. Any suggestions, please? Thanks
would be great if you could also cover cozo DB
How about duckdb vs polars?
Well done. Thank you
Can we do comparison between DuckDB vs Polars?
Is data transformation still better in Pandas or Polars? I appreciate that the analysis part is better in DuckDB.
This makes my thinking even more true "you master python then you master everything"
Interesting
Answer = yes
Wtf. Now I can use CSV files as tables and query them with SQL. I've only heard about this, but never seen it being done so easily
Thx.
Does anyone know how would this compare to a PySpark setup?
4:01 Lol he said INSERT INTO person.
What is the go to screen capture that he is using that works on linux these days?
Same as the one for Windows: OBS Studio.
Fun fact: after all these pandas killers people still are using pandas.
Pandas is a different use case
@@gs-e2d So they're called "Pandas killers" but it's a different use case.
Hi
not the next, it's the real killer
For the complex query, I have:
query = """
WITH filtered_data AS (
SELECT job, AVG(age) as avg_age
FROM personnel_df
WHERE age > 25
GROUP BY job
),
job_counts AS (
SELECT job, COUNT(*) as count
FROM personnel_df
GROUP BY job
)
SELECT fd.job, fd.avg_age, jc.count
FROM filtered_data fd
JOIN job_counts jc
ON fd.job = jc.job
WHERE jc,count > 1
ORDER BY fd.avg_age DESC
"""
print(conn.sql(query).df())
I get the following error:
Traceback (most recent call last):
File "D:/python/DuckDB/foo1.py", line 60, in
main()
File "D:/python/DuckDB/foo1.py", line 55, in main
print(conn.sql(query).df())
duckdb.duckdb.ParserException: Parser Error: syntax error at or near ","
I am running python3.12, using the IDLE interface. Can someone please explain how to correct this error?
Probably this one: "WHERE jc,count > 1". It should be "jc.count", with a period, not a comma.
@benrontol2010 - wow!!!! I looked that over about 100 times and missed that. Thank you very much.
@@benrontol2010 real chad!