Process HUGE Data Sets in Pandas

  • Published Sep 7, 2024
  • Today we learn how to process huge data sets in Pandas by using chunks (see the sketch after the links below).
    📚 Programming Books & Merch 📚
    🐍 The Python Bible Book: www.neuralnine...
    💻 The Algorithm Bible Book: www.neuralnine...
    👕 Programming Merch: www.neuralnine...
    🌐 Social Media & Contact 🌐
    📱 Website: www.neuralnine...
    📷 Instagram: / neuralnine
    🐦 Twitter: / neuralnine
    🤵 LinkedIn: / neuralnine
    📁 GitHub: github.com/Neu...
    🎙 Discord: / discord
    🎵 Outro Music From: www.bensound.com/
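
The core pattern from the video, as a minimal sketch. The file name data.csv, the column 'A', and the chunk size are placeholders; the actual mechanism is the chunksize argument to read_csv.

    import pandas as pd

    # Hypothetical file and column names; adjust to your data.
    CSV_PATH = "data.csv"
    CHUNK_SIZE = 100_000  # rows per chunk

    total = 0.0
    count = 0

    # read_csv with chunksize returns an iterator of DataFrames,
    # so only one chunk is held in memory at a time.
    for chunk in pd.read_csv(CSV_PATH, chunksize=CHUNK_SIZE):
        total += chunk["A"].sum()
        count += len(chunk)

    print("mean of column A:", total / count)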

Comments • 45

  • @Open5to6
    @Open5to6 7 months ago +4

    I can't always follow everything he says, cause he moves pretty quick and throws a lot at you, but he's always straight to the point, no fluff, and innovative.
    I always glean more things to look up after hearing it from NeuralNine first.

  • @DonaldRennie
    @DonaldRennie 27 days ago

    Very good! I'm a beginner, and this guy spent more time explaining this topic than DataCamp. The only thing I didn't understand was the "series" part.

  • @Roman-kn7kt
    @Roman-kn7kt 14 days ago

    As always, your tutorials are incredible!

  • @thisoldproperty
    @thisoldproperty 1 year ago +3

    I like the simplicity. I wonder if a similar thing could be done with SQL queries, given that databases usually store incredibly large datasets.

    • @jaysont5311
      @jaysont5311 1 year ago +1

      I thought I read that you could, I could be wrong tho

    • @mikecripps2011
      @mikecripps2011 10 months ago

      Yes, I do it all day long. I read 2.5 billion records this week (a new level for me) on a wimpy PC. I normally chunk it by 200K rows.

    • @nuhuhbruhbruh
      @nuhuhbruhbruh 9 months ago

      @@mikecripps2011 The whole point of SQL databases is that you can manipulate arbitrary amounts of data directly, without having to load it all into memory, though. So you don't need to do any chunking; just let the database run the query and retrieve the processed output.
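
Following up on the SQL discussion above: pandas can also stream query results in chunks instead of materializing them. A minimal sketch, assuming a hypothetical SQLite file events.db containing a table events:

    import sqlite3
    import pandas as pd

    # Hypothetical database and table names.
    conn = sqlite3.connect("events.db")

    row_count = 0
    # With chunksize set, read_sql_query yields DataFrames one at a time
    # instead of loading the full result set into memory.
    for chunk in pd.read_sql_query("SELECT * FROM events", conn, chunksize=200_000):
        row_count += len(chunk)

    conn.close()
    print("rows processed:", row_count)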

  • @TomKnudsen
    @TomKnudsen 1 year ago +1

    Thank you. Could you please make a tutorial on how you would strip out certain elements from a file that is not your typical "list", "csv" or "json"? I find this task to be one of the most confusing and difficult things you can do in Python. If needed, I can provide you with a text file which includes information about airports, such as runways, elevation, etc. Perhaps there is some way to clean such a file up or even convert it to JSON/Excel/CSV.

    • @lilDaveist
      @lilDaveist 1 year ago

      Can you explain what you mean? List is a data structure inside Python, csv is a file format (comma separated values), and json is also a file format (JavaScript Object Notation).
      If you have a file which incorporates many different ways of storing data, it was probably produced by copying a file line by line and pasting it into another file, either manually or with a script.

    • @kavetisaikumar
      @kavetisaikumar 1 year ago

      What kind of file are you referring to here?

  • @maloukemallouke9735
    @maloukemallouke9735 9 months ago +2

    Thanks, but how do you deal with dependent rows, like time-series data, or observations like text where the context is correlated across rows?

    • @mainak222
      @mainak222 1 month ago

      I have the same question, do you have an answer?

  • @artabra1019
    @artabra1019 1 year ago

    OMG, thanks. I was trying to open a CSV file with millions of rows and my PC kept crashing, so I had to find an i9 computer with 16GB of RAM to open it. Now I can open big files using pandas.

  • @Ngoc-KTVHCM
    @Ngoc-KTVHCM 8 months ago

    For Excel files, the method "pd.read_excel" has no "chunksize" parameter. How do you handle big data spread across many sheets in Excel? Please help me!
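
Since read_excel really has no chunksize parameter, one possible workaround is to process a workbook one sheet at a time and, for a single huge sheet, page through it with skiprows/nrows. A sketch, assuming a hypothetical workbook big.xlsx; note that each paged call re-parses the file, so this trades speed for memory:

    import pandas as pd

    # Hypothetical workbook name.
    xls = pd.ExcelFile("big.xlsx")

    # Process one sheet at a time instead of the whole workbook at once.
    for sheet in xls.sheet_names:
        df = pd.read_excel(xls, sheet_name=sheet)
        print(sheet, len(df))

    # For a single huge sheet, emulate chunking with skiprows/nrows.
    chunk_size = 50_000
    start = 0
    while True:
        chunk = pd.read_excel(
            "big.xlsx",
            sheet_name=0,
            skiprows=range(1, 1 + start),  # keep the header row
            nrows=chunk_size,
        )
        if chunk.empty:
            break
        # ... process the chunk here ...
        start += chunk_size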

  • @lakshay1168
    @lakshay1168 1 year ago

    Your explanation is very good. Can you do a video on a Python project that tells the position of an eye?

  • @goku-np5bk
    @goku-np5bk 9 months ago +1

    Why would you use the CSV format instead of Parquet or HDF5 for large datasets?

    • @chrisl.9750
      @chrisl.9750 23 days ago

      pandas, for example, doesn't read parquet in chunks.
      CSV is still relevant for small, easy data transfers.
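
For Parquet, chunked reading is still possible one level below pandas via pyarrow. A sketch, assuming a hypothetical file data.parquet:

    import pyarrow.parquet as pq

    # Hypothetical file name.
    pf = pq.ParquetFile("data.parquet")

    total_rows = 0
    # iter_batches yields record batches of bounded size, which can be
    # converted to pandas DataFrames one at a time.
    for batch in pf.iter_batches(batch_size=100_000):
        df = batch.to_pandas()
        total_rows += len(df)

    print("rows:", total_rows)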

  • @aniv6346
    @aniv6346 1 year ago

    Thanks a ton! This is very helpful!

  • @franklimmaciel
    @franklimmaciel 5 months ago

    Thanks!

  • @leythecg
    @leythecg 1 year ago

    As always, top content, perfectly presented!

  • @siddheshphaple342
    @siddheshphaple342 11 months ago

    How can I connect to a database in Python, and how do I optimize it if I have 60 lakh+ (6 million+) records in it?

  • @csblueboy85
    @csblueboy85 1 year ago

    Great video thanks

  • @tcgvsocg1458
    @tcgvsocg1458 1 year ago

    I was literally watching a video when you posted a new one... I like that!

  • @uzeyirktk6732
    @uzeyirktk6732 1 year ago

    How can we work on it further? Suppose, for example, that I want to use the groupby function on column ['A'].

    • @15handersson16
      @15handersson16 9 months ago

      By experimenting yourself
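
On the groupby question above: a common pattern is to compute partial aggregates per chunk and combine them afterwards. A sketch, assuming a hypothetical data.csv with columns 'A' and 'value':

    import pandas as pd

    # Hypothetical file and column names.
    partials = []
    for chunk in pd.read_csv("data.csv", chunksize=100_000):
        # Partial sum/count per group within this chunk only.
        partials.append(chunk.groupby("A")["value"].agg(["sum", "count"]))

    # Combine the partial results into the global per-group mean.
    combined = pd.concat(partials).groupby(level=0).sum()
    combined["mean"] = combined["sum"] / combined["count"]
    print(combined)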

  • @wildchildhep
    @wildchildhep 1 year ago

    it works! thanks!

  • @ramaronin
    @ramaronin 4 months ago

    brilliant!

  • @FabioRBelotto
    @FabioRBelotto 1 year ago +1

    Can we use each chunk to spawn a new process and do it in parallel?

    • @Supercukr
      @Supercukr 1 month ago

      That would defeat the purpose of saving the RAM
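
On the parallel question above: a bounded worker pool only keeps a handful of chunks alive at once, so it can still save RAM compared to loading everything. A sketch using multiprocessing, with hypothetical file and column names:

    import pandas as pd
    from multiprocessing import Pool

    # Hypothetical per-chunk work; here, just a column sum.
    def process(chunk):
        return chunk["A"].sum()

    if __name__ == "__main__":
        reader = pd.read_csv("data.csv", chunksize=100_000)
        with Pool(processes=4) as pool:
            # imap consumes the chunk iterator lazily, so only roughly
            # as many chunks as there are workers (plus a small buffer)
            # exist in memory at any moment.
            results = list(pool.imap(process, reader))
        print("total:", sum(results))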

  • @hynguyen1794
    @hynguyen1794 8 months ago

    i'm a simple man, i see vim, i press like

  • @wzqdhr
    @wzqdhr 3 months ago

    The hard part is how to append the new feature back to the original dataset without loading it all in one shot.
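
One way to handle this is to stream chunks through the transformation and append each processed chunk to a new file, so the full dataset is never in memory at once. A sketch with hypothetical file and column names:

    import pandas as pd

    # Hypothetical input/output files and column names.
    first = True
    for chunk in pd.read_csv("data.csv", chunksize=100_000):
        chunk["new_feature"] = chunk["A"] * 2  # example derived column
        # Write the header only for the first chunk, then append.
        chunk.to_csv("data_with_feature.csv",
                     mode="w" if first else "a",
                     header=first, index=False)
        first = False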

  • @tauseefmemon2331
    @tauseefmemon2331 1 year ago

    Why was the RAM usage increasing? Shouldn't it stop increasing once the data is loaded?

    • @thisoldproperty
      @thisoldproperty 1 year ago +1

      It takes a while to load 4GB into memory, so the example shown was captured while the data was still loading.

  • @JuanCarlosMH
    @JuanCarlosMH 1 year ago

    Awesome!

  • @vishkerai9229
    @vishkerai9229 7 months ago

    is this faster than Dask?

  • @berserker117-o7d
    @berserker117-o7d 1 year ago +1

    Is pickle better?

    • @WilliamDean127
      @WilliamDean127 1 year ago

      It would still load all the data at once.

  • @hkpeaks
    @hkpeaks 1 year ago

    Benchmark (Pandas vs Peaks vs Polars) th-cam.com/video/1Kn665ADSck/w-d-xo.html

  • @RidingWithGerdas
    @RidingWithGerdas 1 year ago

    Or, for really huge datasets, use Koalas; the interface is pretty much the same as pandas.

    • @Zonno5
      @Zonno5 1 year ago

      Provided you have access to scalable compute clusters. Recently Spark got a pandas API so koalas has sort of become unnecessary for that purpose.

    • @RidingWithGerdas
      @RidingWithGerdas 1 year ago

      @@Zonno5 talking about pyspark?

  • @pasqualegu
    @pasqualegu 1 year ago

    all worked

  • @ashraf_isb
    @ashraf_isb 4 months ago

    1000th like 😀

  • @imclowdy
    @imclowdy 1 year ago

    Awesome! First comment :D

  • @driouichelmahdi
    @driouichelmahdi 1 year ago

    Thank you