DuckDB in Python - The Next Pandas Killer?

  • Published 21 Nov 2024

Comments • 56

  • @graymars1097
    @graymars1097 2 months ago +16

    I never tried DuckDB, but I'm convinced now thanks to you.
    Pandas is great, but "I can't query a DF like it's a DB using SQL" was one of my first complaints about pandas, and "it needs its own process" is a huge deal breaker for data analysis, especially in finance.
    This solves all these problems 😮
    Great video 😊

  • @jaybakerwork
    @jaybakerwork 2 months ago +4

    Big, big fan of DuckDB. It is nice to be able to switch to SQL for some things. The engine is great. Pairing it with MotherDuck opens up some nice options.
    I like the relational API too.
    I switch between DuckDB and Polars.

  • @rsbenari
    @rsbenari 2 months ago +6

    Concise, complete, helpful. As usual. Thanks.

  • @pawjast
    @pawjast 2 months ago

    I would never have thought DuckDB is so versatile and such a cool replacement for Pandas, even when working with local files!

  • @RobSeward7
    @RobSeward7 1 month ago

    Love duckdb. I love this video. A comparison to polars would have been interesting too!

  • @vuongphaminh2293
    @vuongphaminh2293 2 months ago

    Love your videos, very helpful and informative! I have a suggestion that you could consider using AI completion tools like SuperMaven, Copilot, Tabnine, Cursor, etc., to speed up your typing. Watching you type long code lines slowly can be a bit tedious.

  • @nocturnomedieval
    @nocturnomedieval 2 months ago +13

    This time, every new tool seems faster than pandas, even after the 2.0 Arrow-based upgrade.

    • @maximilianrausch5193
      @maximilianrausch5193 2 months ago

      What else is faster?

    • @nocturnomedieval
      @nocturnomedieval 2 months ago

      @@maximilianrausch5193 polars, vaex

    • @nocturnomedieval
      @nocturnomedieval 2 months ago

      @@maximilianrausch5193 1:27 Polars and Vaex are examples

  • @apefu
    @apefu 2 months ago

    So if you don't need the sugar, just use a DB already? I can't stress how much time this insight has saved me, especially for live data.

  • @Andrew-ud3xl
    @Andrew-ud3xl 2 months ago +1

    For speed, this is very comparable to Polars. I started using SQL within Polars but switched to DuckDB, because DuckDB shows where errors come from, and I don't think Polars' SQL has all the functionality of DuckDB.
    Being able to save the result as a DataFrame is nice for changing things such as date columns to UK format, and for the display options of the output table.

  • @InhumanBean
    @InhumanBean 2 months ago

    Really great intro, thank you for making this.

  • @andybecker5001
    @andybecker5001 2 months ago

    DuckDB is super convenient for one-off analyses, or for those who don't have extra hardware to run their own database server.

  • @thenderaaa9252
    @thenderaaa9252 1 month ago +2

    Neat, but: Running the test with the DuckDB process second each time means possible skew due to OS caching. I would have run the test again with Pandas second. Would it have made a difference? One way to find out ....
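
The caching concern above can be reduced with warm-up runs before timing; a minimal harness sketch (the two lambdas are hypothetical stand-ins, not the video's actual workloads):

```python
import time

def bench(fn, warmup=1, repeats=3):
    # Warm-up runs populate OS and library caches, so whichever
    # contender happens to run second gets no free cache advantage
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times)

# Hypothetical stand-in workloads; swap in the real pandas and DuckDB queries
t_a = bench(lambda: sum(i * i for i in range(100_000)))
t_b = bench(lambda: sorted(range(100_000, 0, -1)))
print(t_a > 0 and t_b > 0)  # True
```

Alternating which contender runs first across repeated trials is another easy check for ordering skew.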

  • @joeingle1745
    @joeingle1745 2 months ago +10

    Those two queries are CTEs (Common Table Expressions) not sub-selects. Nit picking, but there is a difference.

    • @NeuralNine
      @NeuralNine 2 months ago +2

      Thanks for the clarification! :)

    • @marvinalone
      @marvinalone 2 months ago

      From my point of view, you can treat a CTE as a sub-select; it's just clearer and more readable, since it can be reused within the query statement. But yes, they do have some differences. In the video, the author calling them 'sub-selects' or 'sub-queries' is okay, I think.

    • @joeingle1745
      @joeingle1745 2 months ago

      @@marvinalone Apart from the fact that they're not sub-selects...

  • @ravlel1085
    @ravlel1085 2 months ago +1

    The speed comparison uses inefficient Pandas code. Try this approach: df.loc[df['C']

  • @rohitpammi
    @rohitpammi 2 months ago +1

    Really very informative, thanks for sharing.

  • @Jianfeng-ny7to
    @Jianfeng-ny7to 2 months ago +2

    When considering speed, how about using pd.read_csv with engine="pyarrow"? That also loads big CSV files very fast.

  • @dandyexplorer4252
    @dandyexplorer4252 2 months ago +3

    Can you compare the speed of polars and duckdb?

  • @carlmunkby6140
    @carlmunkby6140 2 months ago +6

    Any plans for making a video about airflow?

  • @MrBrannfjell
    @MrBrannfjell 1 month ago

    First impression with DuckDB: they don't seem to support dictionaries by default, so say you just want to dump a Python dictionary to a flattened table in DuckDB: no go (the best workaround so far is to turn it into a pyarrow table or DataFrame first). If you want to import JSON, you first have to save it to a file and then load that file. That's stupid if you are already holding the JSON text from e.g. a web request response. I might just be missing something, but the documentation doesn't seem to give examples of how to load data into DuckDB without going through a data proxy. I was hoping I could avoid all that if I used DuckDB.

  • @lowkeygaming4716
    @lowkeygaming4716 2 months ago

    Thank you for another great video. I learned something new again.

  • @tmb8807
    @tmb8807 1 month ago

    I like this because anything that gives people more options is great.
    However, an honest question: is the preference for SQL syntax an organic thing, or is it down to familiarity?
    I mean, I totally get the dislike for the pandas API, and I almost never use it myself now unless I have to. But I find the cognitive load of both reading and writing SQL far greater than the expression syntax and method chaining of Polars. Is that just because I don't do enough SQL?

  • @goodmanshawnhuang
    @goodmanshawnhuang 2 months ago +1

    Thanks for the video. I am considering querying a large result set from BigQuery, then outputting it to a CSV file, ideally zipped. Any suggestions, please? Thanks

  • @nishantkumar9570
    @nishantkumar9570 2 months ago +1

    Can we do comparison between DuckDB vs Polars?

  • @timz2917
    @timz2917 2 months ago

    Is data transformation still better in Pandas or Polars? I appreciate that the analysis part is better in DuckDB.

  • @andreip9378
    @andreip9378 1 month ago

    15:47 - the results of the aggregations are different, btw.

  • @xade8381
    @xade8381 2 months ago

    It would be great if you could also cover CozoDB.

  • @christopherc4526
    @christopherc4526 2 months ago

    Well done. Thank you

  • @togai-dev
    @togai-dev 25 days ago

    Hi there, may I ask the theme name that you are using? Thanks

  • @BaronCorso
    @BaronCorso 2 months ago +1

    How about duckdb vs polars?

  • @Rothbardo
    @Rothbardo 2 months ago

    Answer = yes

  • @booster404
    @booster404 2 months ago +2

    This makes my belief even stronger: "master Python and you master everything".

  • @jaybakerwork
    @jaybakerwork 2 months ago

    What is the go-to screen capture he is using that works on Linux these days?

    • @nobiado484
      @nobiado484 2 months ago

      Same as the one for Windows: OBS Studio.

  • @egonkirchof
    @egonkirchof 2 months ago +4

    Fun fact: after all these pandas killers, people are still using pandas.

    • @gs-e2d
      @gs-e2d 2 months ago

      Pandas is a different use case

    • @egonkirchof
      @egonkirchof 2 months ago

      @@gs-e2d So they should be called "Pandas killers" even though it's a different use case?

  • @mapleigue
    @mapleigue 2 months ago

    Does anyone know how this would compare to a PySpark setup?

  • @matheuspegorari1094
    @matheuspegorari1094 2 months ago

    Wtf. Now I can use CSV files as tables and query them with SQL. I'd only heard about this, but never seen it done so easily.

  • @AleksaMilic-d2e
    @AleksaMilic-d2e 2 months ago

    Interesting

  • @philtoa334
    @philtoa334 2 months ago +1

    Thx_.

  • @ChristopherBruns-o7o
    @ChristopherBruns-o7o 2 months ago +2

    4:01 Lol he said INSERT INTO person.

  • @harikrishnanb7273
    @harikrishnanb7273 2 months ago

    Not the next; it's the real killer.

  • @SuperLimeWorld
    @SuperLimeWorld 2 months ago

    Hi

  • @iegorshevchenko8365
    @iegorshevchenko8365 2 months ago

    +++++++++

  • @HH-mw4sq
    @HH-mw4sq 2 months ago +1

    For the complex query, I have:
    query = """
    WITH filtered_data AS (
        SELECT job, AVG(age) as avg_age
        FROM personnel_df
        WHERE age > 25
        GROUP BY job
    ),
    job_counts AS (
        SELECT job, COUNT(*) as count
        FROM personnel_df
        GROUP BY job
    )
    SELECT fd.job, fd.avg_age, jc.count
    FROM filtered_data fd
    JOIN job_counts jc
    ON fd.job = jc.job
    WHERE jc,count > 1
    ORDER BY fd.avg_age DESC
    """
    print(conn.sql(query).df())
    I get the following error:
    Traceback (most recent call last):
      File "D:/python/DuckDB/foo1.py", line 60, in <module>
        main()
      File "D:/python/DuckDB/foo1.py", line 55, in main
        print(conn.sql(query).df())
    duckdb.duckdb.ParserException: Parser Error: syntax error at or near ","
    I am running Python 3.12, using the IDLE interface. Can someone please explain how to correct this error?

    • @benrontol2010
      @benrontol2010 2 months ago +2

      Probably this one: "WHERE jc,count > 1". It should be "jc.count": period, not comma.

    • @HH-mw4sq
      @HH-mw4sq 2 months ago +1

      @benrontol2010 - wow!!!! I looked that over about 100 times and missed it. Thank you very much.

    • @amalzulkipli2248
      @amalzulkipli2248 2 months ago

      @@benrontol2010 real chad!