Process HUGE Data Sets in Pandas

  • Published Dec 15, 2024

Comments • 45

  • @Open5to6 10 months ago +4

    I can't always follow everything he says, because he moves pretty quickly and throws a lot at you, but he's always straight to the point, no fluff, and innovative.
    I always glean more things to look up after hearing them from NeuralNine first.

  • @Roman-kn7kt 3 months ago +1

    As always, your tutorials are incredible!

  • @DonaldRennie 4 months ago

    Very good! I'm a beginner, and this guy spent more time explaining this topic than DataCamp. The only thing I didn't understand was the "series" part.

  • @maloukemallouke9735 1 year ago +3

    Thanks, but how do you deal with dependent rows, like time series data, or observations like text where the context is correlated across rows? (See the sketch after this thread.)

    • @mainak222 4 months ago

      I have the same question, do you have an answer?
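
    A common way to handle row-dependent logic (rolling windows, lags) across chunks is to carry the tail of the previous chunk forward. A minimal sketch, assuming a hypothetical data.csv with a value column and a 7-row rolling window:

      import pandas as pd

      WINDOW = 7           # rolling window length (hypothetical)
      CARRY = WINDOW - 1   # rows to carry over between chunks

      tail = None
      first = True
      for chunk in pd.read_csv("data.csv", chunksize=200_000):
          if tail is not None:
              # prepend the carried rows so the window has enough history
              chunk = pd.concat([tail, chunk], ignore_index=True)
          chunk["rolling_mean"] = chunk["value"].rolling(WINDOW).mean()
          # drop the carried rows again so nothing is written twice
          out = chunk if tail is None else chunk.iloc[CARRY:]
          out.to_csv("out.csv", mode="w" if first else "a", header=first, index=False)
          tail = chunk.iloc[-CARRY:]
          first = False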

  • @thisoldproperty 2 years ago +3

    I like the simplicity. I wonder if a similar thing could be done with SQL queries, given that databases usually store incredibly large datasets. (See the sketch after this thread.)

    • @jaysont5311 2 years ago +1

      I thought I read that you could; I could be wrong though.

    • @mikecripps2011 1 year ago

      Yes, I do it all day long. I read 2.5 billion records this week on a wimpy PC, a new level for me. I normally chunk it by 200K rows.

    • @nuhuhbruhbruh 1 year ago

      @mikecripps2011 The whole point of SQL databases is that you can directly manipulate arbitrary amounts of data without loading it all into memory, though, so you don't need to do any chunking; just let the database run the query and retrieve the processed output.
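
    For reference, pandas can stream query results in chunks via the chunksize parameter of read_sql, and, as the reply above notes, aggregations can often be pushed into the database itself. A minimal sketch against a hypothetical SQLite file big.db with an events table:

      import sqlite3

      import pandas as pd

      conn = sqlite3.connect("big.db")  # hypothetical database file

      # Stream the table through pandas 200k rows at a time
      total = 0
      for chunk in pd.read_sql("SELECT * FROM events", conn, chunksize=200_000):
          total += len(chunk)

      # Or let the database do the heavy lifting and fetch only the result
      summary = pd.read_sql(
          "SELECT user_id, COUNT(*) AS n FROM events GROUP BY user_id", conn
      )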

  • @Ngoc-KTVHCM 1 year ago

    The pd.read_excel method has no chunksize parameter, so how do you handle big data spread across many sheets of an Excel file? Please help me!
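
    pd.read_excel indeed has no chunksize, but openpyxl's read-only mode can stream rows sheet by sheet, and small DataFrames can be built from slices. A minimal sketch, assuming a hypothetical big.xlsx whose sheets each start with a header row:

      import pandas as pd
      from openpyxl import load_workbook

      wb = load_workbook("big.xlsx", read_only=True)  # streams rows lazily
      for ws in wb.worksheets:
          rows = ws.iter_rows(values_only=True)
          header = next(rows)                 # first row holds column names
          batch = []
          for row in rows:
              batch.append(row)
              if len(batch) == 50_000:        # process 50k rows at a time
                  df = pd.DataFrame(batch, columns=header)
                  # ... process df ...
                  batch.clear()
          if batch:                           # leftover rows at sheet end
              df = pd.DataFrame(batch, columns=header)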

  • @goku-np5bk 1 year ago +1

    Why would you use the CSV format instead of Parquet or HDF5 for large datasets?

    • @chrisl.9750 4 months ago

      pandas itself, for example, doesn't read Parquet in chunks (see the sketch below).
      CSV is still relevant for small, easy data transfers.
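
    While pd.read_parquet has no chunked mode, pyarrow can iterate a Parquet file in record batches and convert each one to a DataFrame. A minimal sketch, file name hypothetical:

      import pyarrow.parquet as pq

      pf = pq.ParquetFile("big.parquet")
      for batch in pf.iter_batches(batch_size=200_000):
          df = batch.to_pandas()   # one bounded-size DataFrame at a time
          # ... process df ...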

  • @TomKnudsen 2 years ago +1

    Thank you. Could you please make a tutorial on how you would strip out certain elements from a file that is not your typical "list", "csv", or "json"? I find this task to be one of the most confusing and difficult things you can do in Python. If needed, I can provide you with a text file which includes information about airports such as runways, elevation, etc. Perhaps there is some way to clean such a file up or even convert it to json/excel/csv.

    • @lilDaveist 2 years ago

      Can you explain what you mean? A list is a data structure inside Python, CSV is a file format (comma-separated values), and JSON is also a file format (JavaScript Object Notation).
      If you have a file which mixes many different ways of storing data, someone has probably copied another file line by line, either manually or with a script, and pasted it in.

    • @kavetisaikumar 2 years ago

      What kind of file are you referring to here?

  • @siddheshphaple342 1 year ago

    How can I connect to a database in Python, and how do I optimize it if I have 60 lakh+ (6 million+) records in it?

  • @leythecg 2 years ago

    As always, top content, perfectly presented!

  • @uzeyirktk6732 1 year ago

    How can we work with the chunks further? Suppose I want to use the groupby function on column ['A'].
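
    Groupby works across chunks if you keep decomposable partial aggregates (sums and counts) per chunk and combine them at the end. A minimal sketch, assuming a hypothetical value column alongside column 'A' in data.csv:

      import pandas as pd

      partials = []
      for chunk in pd.read_csv("data.csv", chunksize=200_000):
          # per-chunk sum and count for each group in column 'A'
          partials.append(chunk.groupby("A")["value"].agg(["sum", "count"]))

      # stack the partial tables and re-aggregate by group label
      combined = pd.concat(partials).groupby(level=0).sum()
      group_means = combined["sum"] / combined["count"]   # per-group mean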

  • @hynguyen1794 11 months ago

    i'm a simple man, i see vim, i press like

  • @FabioRBelotto 1 year ago +1

    Can we use each chunk to spawn a new process and do the work in parallel? (See the sketch after this thread.)

    • @Supercukr 5 months ago

      That would defeat the purpose of saving the RAM
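
    Parallelizing the chunks is possible, with the caveat above: memory use grows with the number of chunks in flight at once. A minimal sketch using multiprocessing.Pool, where process_chunk is a hypothetical per-chunk function:

      import pandas as pd
      from multiprocessing import Pool

      def process_chunk(chunk: pd.DataFrame) -> float:
          return chunk["value"].sum()      # hypothetical per-chunk work

      if __name__ == "__main__":
          reader = pd.read_csv("data.csv", chunksize=200_000)
          with Pool(processes=4) as pool:
              # imap feeds chunks to workers as they free up, but note
              # that several chunks are held in memory at the same time
              results = pool.imap(process_chunk, reader)
              print(sum(results))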

  • @aniv6346 2 years ago

    Thanks a ton! This is very helpful!

  • @lakshay1168 2 years ago

    Your explanation is very good. Can you do a video on a Python project that detects the position of an eye?

  • @artabra1019 2 years ago

    OMG, thanks! I was trying to open a CSV file with millions of rows and my PC collapsed, so I found an i9 computer with 16 GB RAM to open it. Now I can open big files using pandas.

  • @vishkerai9229 10 months ago

    Is this faster than Dask?

  • @wzqdhr 6 months ago

    The hard part is how to append the new feature back to the original dataset without loading it all in one shot.
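
    One pattern is to compute the new column chunk by chunk and append each processed chunk to a new file, so only one chunk is in memory at a time. A minimal sketch, column and file names hypothetical:

      import pandas as pd

      first = True
      for chunk in pd.read_csv("data.csv", chunksize=200_000):
          chunk["new_feature"] = chunk["value"] * 2      # hypothetical feature
          # write the header only once, then keep appending
          chunk.to_csv("data_with_feature.csv",
                       mode="w" if first else "a", header=first, index=False)
          first = False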

  • @tauseefmemon2331 2 years ago

    Why was the RAM usage increasing? Shouldn't it stop increasing once the data is loaded?

    • @thisoldproperty 2 years ago +1

      It takes a while to load 4 GB into memory, so the example shown was captured while the load was still in progress.

  • @csblueboy85 2 years ago

    Great video, thanks!

  • @wildchildhep 2 years ago

    It works! Thanks!

  • @franklimmaciel 9 months ago

    Thanks!

  • @TruthBomber42 2 years ago +1

    Is pickle better?

    • @WilliamDean127 2 years ago

      It would still load all the data at once.

  • @tcgvsocg1458 2 years ago

    I was literally watching a video when you posted a new one... I like that!

  • @ramaronin 8 months ago

    brilliant!

  • @hkpeaks 1 year ago

    Benchmark (Pandas vs Peaks vs Polars) th-cam.com/video/1Kn665ADSck/w-d-xo.html

  • @JuanCarlosMH 2 years ago

    Awesome!

  • @RidingWithGerdas 2 years ago

    Or, with really huge datasets, use Koalas; the interface is pretty much the same as pandas. (See the sketch after this thread.)

    • @Zonno5 2 years ago

      Provided you have access to scalable compute clusters. Spark recently got a pandas API, so Koalas has sort of become unnecessary for that purpose.

    • @RidingWithGerdas 2 years ago

      @Zonno5 Are you talking about PySpark?
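
    For reference, since Spark 3.2 the Koalas project lives on inside PySpark as the pandas API on Spark. A minimal sketch, assuming a working Spark installation and a hypothetical data.csv with columns A and value:

      import pyspark.pandas as ps

      # distributed, pandas-like DataFrame backed by Spark
      df = ps.read_csv("data.csv")
      print(df.groupby("A")["value"].mean())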

  • @pasqualegu 2 years ago

    all worked

  • @imclowdy 2 years ago

    Awesome! First comment :D

  • @ashraf_isb 7 months ago

    1000th like 😀

  • @driouichelmahdi 2 years ago

    Thank you