If you enjoyed this video please consider subscribing and check out some of my videos on similar topics:
- Polars Tutorial: th-cam.com/video/VHqn7ufiilE/w-d-xo.html&feature=shares
- Pandas Alternatives Benchmarking: th-cam.com/video/LEhMQhCv3Kg/w-d-xo.html&feature=shares
- Speed up Pandas: th-cam.com/video/SAFmrTnEHLg/w-d-xo.html&feature=shares
One thing you said implicitly is quite important: the footprint of polars is waaayyyy smaller than pandas, which makes polars feel like a good choice for edge or serverless computing. In those cases I often refrain from using pandas because of the resources needed and the startup time. I then end up doing funny stuff with dicts, classes, tuples… I'm considering exploring polars for that.
Very good points! I need to start using polars more honestly.
These are phenomenal, I especially like these short 10-15min videos. Thanks a lot for sharing all these relevant and up to date topics!
Great video! Always curious about Spark and this gave a great overview of these 3 tools! 💡
Thanks for watching Jose!
Great video! I have a Junior Data Engineer interview coming up and I'm stressed. I don't have any previous working experience in this field. I feel somewhat confident in SQL and Pandas and have been practicing on Strata Scratch. I absolutely hate the Data Structures and Algorithms type of questions like the ones on leetcode and I can't even answer the easy ones. I'm worried that my interview will have those kinds of coding problems. My initial goal was to become a Data Analyst but decided to apply for Data Engineer since it is a junior position.
Thanks for the feedback. I hope your interview goes well. It sounds like you are well prepared and will do great! Do let me know how it goes.
@@robmulla Thanks for the reply! I just had the interview but it was just talking with a recruiter, nothing technical. Hopefully, if they proceed with me I'm going to have to solve coding questions in a week or so. I just hope the coding questions are going to be like the ones on Strata Scratch and not the ones on Leetcode. If they proceed with me and I get the coding questions and a technical interview, I'm definitely going to share how it went.
"junior data engineer"
You need a few years of education for that and to learn quite a bit of math, statistics and what not... Programming is a really different animal than statistics.
Companies hiring programmers for this will only increase the risk of wrong analysis.
@@ErikS- Even as a data scientist you do not need a lot of stats. In a data engineering role it's not required at all. The reason demand for actual data engineers is growing is because data engineers and analysts are shit SWEs.
Kinda beginner-intermediate learner here, but how do you manage units in these data frames/sets? Like datatypes are good and all (ints, floats, booleans), but how do you keep track of your units like seconds, hours, kilometers, miles, degrees etc.? Would you just add the units in the header, e.g. "max_delay_minutes"? Sorry if this question is trivial.
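A rough sketch of the simplest approach, with made-up column names: encode the unit in the column name and only ever convert by creating a new, explicitly named column.

```python
import pandas as pd

# Keep the unit in the column name and convert only by adding a new,
# explicitly named column. Column names are hypothetical examples.
df = pd.DataFrame({"dep_delay_minutes": [12, 45, 3]})

df["dep_delay_hours"] = df["dep_delay_minutes"] / 60
print(df)
```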
I like these types of videos as they clear up all confusion.
Glad you like them!
Rob, thank you! It's almost as if you read minds! This video sort of went above-and-beyond here! I'd been toying with trying a local session of Spark, and thanks to you, now have the impetus to give it a go!
Awesome! The problem I've always run into for personal projects with spark is that the data I'm using is small enough not to warrant it. But it's a great skill to brush up on if you intend to work at a large company.
Hello Rob. In your video, you said that you use Anaconda for environment management, but you install all packages through pip. Could you tell me how to make PyPI the main channel in Anaconda and reinstall all packages from it? I currently have an Anaconda setup with channels: 'conda-forge', 'defaults', 'pandas'. How do I move all installed packages over to pip while respecting all dependencies?
It was a great video and very useful. Adding Spark to the mix was just awesome! For the next video, covering DuckDB and its benefits vs Polars, or maybe DuckDB alongside Polars, would be great! The founder of DuckDB said that for most companies it is enough, so testing and discussing that claim would also be great. DuckDB is said to use vectorized execution; a discussion of how that makes it faster or better would also be great. Thanks!
Great tip! I've been hearing a lot about duckdb lately so I need to check that out. I think I saw the twitter thread you are talking about. Interesting that they can be combined.
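A hedged sketch of how the two can be combined: DuckDB can query a Polars frame sitting in local scope by name and hand the result back as Polars (frame and column names here are placeholders).

```python
import duckdb
import polars as pl

# DuckDB picks up the Polars frame `df` from local scope by name, and
# .pl() returns the result as a Polars DataFrame again.
df = pl.DataFrame({"airline": ["AA", "DL", "AA"], "dep_delay": [5, 30, 12]})

avg_delay = duckdb.sql(
    "SELECT airline, AVG(dep_delay) AS avg_delay FROM df GROUP BY airline"
).pl()
print(avg_delay)
```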
Hey Rob, huge fan of your work, keep rolling😀
Thanks. Will do!
Another great video! Thanks Rob! Looking forward to the next stream
Thanks for watching. Glad you liked it!
Great introduction video! Thank you!
Looks like most of the time for PySpark went into initializing the session itself; as far as I understand it is created once and then reused by later getOrCreate() calls. But anyway, for bigger pipelines Spark will work faster.
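A minimal sketch of that behaviour, assuming a local Spark install: the first builder call pays the startup cost and later calls just reuse the running session.

```python
import time
from pyspark.sql import SparkSession

# The first call pays the JVM/session startup cost.
start = time.time()
spark = SparkSession.builder.appName("benchmark").getOrCreate()
print(f"first getOrCreate:  {time.time() - start:.2f}s")

# Later calls return the already-running session, so they are nearly free.
start = time.time()
spark = SparkSession.builder.getOrCreate()
print(f"second getOrCreate: {time.time() - start:.4f}s")
```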
Thanks for such awesome content. I love polars and have been trying it since your video came out; it would be nice to see you use it to do a data exploration video :D
Great video! Do you have any thoughts on duckDB?
I've never used it but people seem to keep mentioning it so I need to take a closer look! I started using polars after it was mentioned in the comments of my previous videos.
@@robmulla I didn't hear about it till this week. I think it's relatively new; I couldn't find anything about it older than a month.
Since a couple of days ago, you can use SQL with Polars as well
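A rough sketch of the Polars SQL interface (the exact API depends on the Polars version; table and column names are placeholders):

```python
import polars as pl

df = pl.DataFrame({"airline": ["AA", "DL", "AA"], "dep_delay": [5, 30, 12]})

# Register the frame under a table name and run SQL against it.
ctx = pl.SQLContext(flights=df)
out = ctx.execute(
    "SELECT airline, AVG(dep_delay) AS avg_delay FROM flights GROUP BY airline",
    eager=True,
)
print(out)
```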
Really useful for me, thank you rob
Glad you found it useful Lumiera. Thanks for watching.
Hi! Thank you for your video!
A question: what version of pandas were you using? I see that you are not using the "arrow" dtype backend when reading the parquet file with pandas.
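For reference, a minimal sketch of the pyarrow-backed read, assuming pandas 2.x and a placeholder file path:

```python
import pandas as pd

# Keep pyarrow-backed dtypes instead of converting everything to NumPy on read.
df = pd.read_parquet("flights.parquet", dtype_backend="pyarrow")
print(df.dtypes)
```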
I really like your content. Absolutely grade A+
Glad you enjoy it!
Thanks for the great video! I'd like to see a comparison with other distributed Python libraries, such as Modin. Thanks!
Hey Rob, this was a great video - clear and concise. Could you explain how you would set up an analysis that would run regularly as the data changed? For example, the flight data you used in this example, let's say that was updated once a week and you needed to update the aggregate stats, and maybe even track the aggregates over time. Thanks!
That's a great question. I'm sure others could answer it better, but from my experience you can solve this with: 1) a batch process that runs your aggregations at set intervals (like daily) and stores them out to summary files/tables, or 2) streaming options that I'm not at all experienced with, like: spark.apache.org/docs/latest/streaming-programming-guide.html
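A minimal sketch of option 1, with placeholder paths and column names: a job scheduled with cron/Airflow/etc. recomputes the aggregates and writes out a dated summary file so you can also track them over time.

```python
import datetime
import pandas as pd

def run_weekly_aggregation(raw_path: str, summary_dir: str) -> None:
    today = datetime.date.today().isoformat()
    df = pd.read_parquet(raw_path)
    summary = df.groupby("airline")["dep_delay"].agg(["mean", "max", "count"])
    summary["as_of"] = today  # lets you track the aggregates over time
    summary.to_parquet(f"{summary_dir}/summary_{today}.parquet")

# Run this on a schedule, e.g. weekly after the new flight data lands.
run_weekly_aggregation("flights.parquet", "summaries")
```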
R is an amazing tool for data pipelines. Its native object is a dataframe and it has dplyr, which is fast and makes the code easy to read.
I agree, but haven't used R in a long time. How does it compare in terms of speed? I thought R was generally slow.
Hi Rob,
What do you recommend if I want to access a 30+ GB sqlite3 database table and display information in, say, a web app or a Jupyter notebook?
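One hedged approach, with placeholder table and column names: push the filtering/aggregation into SQLite so only a small result ever reaches pandas, and stream in chunks if you really need raw rows.

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect("big_database.db")

# Let SQLite do the heavy lifting; only the small summary comes back.
summary = pd.read_sql_query(
    "SELECT airline, AVG(dep_delay) AS avg_delay FROM flights GROUP BY airline",
    conn,
)

# If raw rows are needed, iterate in chunks instead of loading 30+ GB at once.
for chunk in pd.read_sql_query("SELECT * FROM flights", conn, chunksize=100_000):
    pass  # process or display each chunk here

conn.close()
```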
Great work! It would be cool to see how you can use Spark with ML. I have been using pandas to do a lot of ML work recently, but my data grew too large to fit in my RAM. I need to swap to PySpark, but I know my scikit-learn pipelines won't work with it.
Good suggestion. I've done some ML with spark, but that was many years back. Usually with deep learning you can train on batches so having all the data in memory is not important. I believe spark tries to follow similar syntax to sklearn pipelines.
@@robmulla thanks Rob! I thought data needed to be in memory at the same time to do training. It's way easier to split my data and train by chunks.
@@DarkShine101 You can also leverage the pandas API in Spark to run your pandas training code as a UDF in the Spark environment.
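A rough sketch of that idea using applyInPandas, where the grouping column and the per-group "model" are placeholders:

```python
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

sdf = spark.createDataFrame(
    [("AA", 5.0), ("AA", 12.0), ("DL", 30.0)], ["airline", "dep_delay"]
)

def fit_per_airline(pdf: pd.DataFrame) -> pd.DataFrame:
    # Plain pandas (or scikit-learn) code runs here on one group at a time.
    return pd.DataFrame({"airline": [pdf["airline"].iloc[0]],
                         "avg_delay": [pdf["dep_delay"].mean()]})

result = sdf.groupBy("airline").applyInPandas(
    fit_per_airline, schema="airline string, avg_delay double"
)
result.show()
```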
That's great comparison and very relevant. What about 'dask'? Isn't it quite a mature replacement of Spark as well?
Great question, I actually cover dask in my "pandas alternatives" video, you should check it out.
Awesome. What do you think about ibis? It can act as a frontend for Pandas, Polars, Spark, etc
:O
Never heard of it before but will def check it out.
Thanks for the educational content Rob
My pleasure!
3:40 couldn't you solve the memory issue by processing the file in chunks?
With a lot of operations it's not obvious how you do that. Let's say you want to sort a column but you can't load the dataset. Getting the sorted result of each chunk is not enough.
That is true, but also depends on the operation you are working with. Something like standard deviation requires the entire dataset to compute. Obviously if you are doing a groupby std you could chunk the data. Essentially that's what these libraries are attempting to do for you.
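A hedged sketch of a chunked group-by standard deviation, with placeholder file and column names: accumulate count, sum, and sum of squares per group, then combine at the end.

```python
import pandas as pd

totals = None
for chunk in pd.read_csv("flights.csv", chunksize=1_000_000):
    # Per-chunk sufficient statistics for each group.
    part = chunk.groupby("airline")["dep_delay"].agg(
        n="count", s="sum", ss=lambda x: (x ** 2).sum()
    )
    totals = part if totals is None else totals.add(part, fill_value=0)

# Sample standard deviation recovered from the accumulated statistics.
std = ((totals["ss"] - totals["s"] ** 2 / totals["n"]) / (totals["n"] - 1)) ** 0.5
print(std)
```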
Very good video. Can you please make more advanced polars videos? I have started switching to polars from pandas and I really want to learn more about how to do more advanced things with it.
Sure. I need to find some good examples to show. The polars docs have some nice ones.
How do you explain that spark is slower than polars? Theoretically, it should be better, right?
Maybe we would see better results with spark for larger datasets, but Polars aims to be good at that too.
Spark is useful when you can't fit the data in memory, but all the overhead makes it slower when running on medium-sized datasets. I try to mention that in the video. Just to demo, I wanted to show how the syntax works, but if the data is HUGE I wouldn't have been able to even open it in pandas.
I think polars could replace pandas in the future once it matures a bit and the community and support grows. Great video as usual! 👌
I think it's possible. However people are slow to adopt and speed isn't really the main issue for most people writing pandas code right now.
There's a bit of a chicken vs the egg problem there. Pandas is mature, tried and true. Polars can only mature if it is compelling enough to switch, but to be compelling, it needs the user base.
Great content Rob! TKS
Glad you like it!
Hi Mulla, where can I download the flight file? Can you put the URL please?
Which machine learning project should I do for MAAANG companies?
So when it comes to polars or pandas, should I always use polars?
Your videos are so nice Rob, I really love them. Could you please share the dataset or tell us where to find it? Thx
I second this, would love to play with the data myself!
Thanks guys. The dataset is on kaggle here: www.kaggle.com/datasets/robikscube/flight-delay-dataset-20182022
Upvote if you like it!
@@robmulla Thanks a lot, appreciate it! 🙂🙂
Polars is so fast! Great video.
It sure is! Appreciate the feedback.
Bro, please create a playlist on Polars, beginner to expert, for faster processing.
Where can we get datasets like the one you just showed in the video?
The airline dataset is on kaggle here: www.kaggle.com/datasets/robikscube/flight-delay-dataset-20182022
Hi Rob, wonderful video as always! Can you make a video on how to deploy a trained machine learning model (maybe the XGBoost forecaster you made) using Docker?
Thanks for the suggestion. I really need to make a video about MLops but I'm not the most experienced in it. Thanks for the idea I'll keep it in mind.
Best channel by a Grandmaster
Thank you sir!
Hi Rob and thanks for the excellent work, I enjoy each of your videos!
I would be interested in a video explaining how to put several machine learning libraries pulled from GitHub in a row, for example: Object detection + Keypoints estimation + Person identification. Also, how to manage compatible library versions for all these repos that have different (incompatible) requirements.
Thanks!
You should definitely cover Kedro pipeline!
Never heard of Kedro before but I'll give it a look for sure!
@@robmulla They have a demo on their website of a graphical pipeline of a full project (starting from the input of data, the filters applied, the model created, and the analysis). It works in a way that project documentation is built up during the development of the project. It is perfect for recurring projects that many people will look at independently, without you being there to explain how it works.
Nice job, Rob. Keep them coming 👍
I'll try my best!
Excellent. Great content.
Thanks for sharing.
Rob, thank you so much!
Absolutely Alejandro!
Thanks for this great video
Glad you liked it!
Hi, I recently found your channel and it's amazing, the best thing I've found. I want to ask if you know of like a game or a page where I can find data cleaning exercises.
Glad you found my channel. Do you mean something like leetcode but for data science? I think there are a few out there but I've never used any of them.
Thanks!!! Great JOB!
Glad you liked it!
Polars is Rust-based, which explains the speed.
Yep! I have a whole video on polars/rust you should check out.
Hi there! Is it possible to request the file for practice purposes? Thank you!
Please create a video on GANs creating artificial images.
I would also add dask to the comparison
I compare dask on my pandas alternatives video!
What about dask?
Nice MKBHD shirt!
😊
Can spark be useful if I'm running on a single machine? (like my personal computer)
Let's say my PC has 8gb of RAM and I need to work with a 20gb dataset. Can spark split the data somehow and make it work?
It should but I would instead 1) Try splitting the data manually and working on it in chunks with pandas or 2) Try polars streaming to see if it would work.
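A rough sketch of option 2 with Polars streaming (method and flag names vary a bit between Polars versions; file and column names are placeholders):

```python
import polars as pl

# Scan lazily and let collect process the file in batches instead of
# loading everything into RAM at once.
lazy = (
    pl.scan_parquet("flights.parquet")
      .group_by("airline")
      .agg(pl.col("dep_delay").mean().alias("avg_delay"))
)
result = lazy.collect(streaming=True)
print(result)
```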
Can you do a tutorial for building a data pipeline using industry standard tools?
Could you maybe do a similar video and compare polars with datatable? Thanks a lot!
I use Dask instead of PySpark.
I've used dask in previous videos with poor performance on a single machine. But it is an option for distributed. Check out this video: th-cam.com/video/LEhMQhCv3Kg/w-d-xo.html
TLDR; Polars was the fastest and Pandas was the slowest
What about spark?
In the middle? That's what the video says, isn't it?
Pandas works much better on unclean data. pyarrow gives so much headache with data conversion errors like:
ArrowInvalid: Could not convert '230' with type str: tried to convert to double
which makes many operations unusable:
- to_parquet()
- converting pandas to polars
- opening a csv in Data Wrangler and saving it as parquet in Data Wrangler
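A hedged sketch of the usual workaround: that ArrowInvalid error typically means a column mixes strings and numbers, so coercing it explicitly before to_parquet() avoids the crash (column name and values are made up).

```python
import pandas as pd

df = pd.DataFrame({"dep_delay": ["230", "15", "bad value"]})

# Force numeric; unparseable values become NaN instead of raising an error.
df["dep_delay"] = pd.to_numeric(df["dep_delay"], errors="coerce")

df.to_parquet("clean.parquet")
```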
Thanks a lot but Spark is a nightmare for me to install on my Windows PC.
Oh man. I can't help you there. Why not dual boot Ubuntu?
@@robmulla How did you install it?
That's a great video.
Love it
Thanks for watching Nadav!
Excellent
Thank you so much 😀
thx!
Lovely
Thanks!
have you ever tried ibis?
I have not. Others have mentioned it, along with DuckDB.
Koalas is good too
Whoa! First time I've heard of this but googled and it looks cool. Pandas API on spark... I need to check it out more.
@@robmulla It's really good. Runs on top of PySpark. Have you also tried dask?
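A minimal sketch of the pandas API on Spark (the successor to Koalas), with a placeholder file path: familiar pandas-style syntax, executed by Spark.

```python
import pyspark.pandas as ps

# Reads with Spark under the hood but exposes a pandas-like interface.
psdf = ps.read_parquet("flights.parquet")
print(psdf.groupby("airline")["dep_delay"].mean())
```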
I really hate these stock video interruptions
What do you mean? The advertisements from youtube?
Never mention....Rust 😂
U using Py 3.8? Srsly, go update to 3.11 and test again
Why?
@@robmulla Performance in 3.11 is really much better.
Hello! How can I contact you directly via telegram or email?
Hello! How can I contact you directly via email or telegram for business inquiries?