I didn't know about this pandas functionality! Great video!
Wow, it's Ken Jee! Thanks for the comment and kind words! I also subscribe to your channel, great content by the way, especially the 6-part DS project from scratch series.
@@DataProfessor Thanks! I am loving your stuff as well. I need to start using colab more. Keep up the good work, the tutorials are very helpful!
You're great, bro. Down to earth.
Please don't stop making videos. These videos really help a lot.
Thank you, glad it was helpful!
Great explanation of each step, right from opening the file to the end. As newbies, we sometimes find it difficult to figure out which file to use from GitHub. Thank you, great video!
Wow thanks for the encouraging words, glad you’ve found the video helpful 😊
I used this before, but I didn't know that you can select the table using the brackets, awesome! Thanks for the video!
Glad it's helpful, thanks for watching!
The video is great, but the screen text is way too small to read. I suggest enlarging the font or reducing the white space on the screen to make the video more readable.
Thanks for the suggestion, greatly appreciate it, yes in recent videos I have increased the font size.
Amazing! I am totally new to web scraping. I have been trying to scrape websites using the Beautiful Soup library for 4 days now, but I can't get past the basics. You have greatly simplified it for me. For instance, I just scraped data from Wikipedia about the list of countries and their populations and got the whole table on the first attempt. Thank you so much! I wonder if this can be used for other pages like LinkedIn or Glassdoor data collection, because there are no tables there. Professor, thank you so much once again!
Glad to hear that the video was helpful! For non-tabular pages you may have to use beautifulsoup and/or selenium
Wow, your video is the best. It took me forever to run this, and your video helped me in 5 minutes. Thank you!!!
Great video, well explained, clear, and with excellent sound quality. Thanks for doing this, keep it up!
Thanks for the encouragement 😃
Thanks a lot. I am doing a machine learning project and doing web scraping in the same code. This is better, thanks!
Wow this is a great video! Very well organised!
Excellent work breaking this down. I have only used R, but this seemed incredibly intuitive. Thank you!
A query: in row 12, why are we using .index along with df.drop? Why wouldn't df.drop work without it?
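For anyone else wondering about this step: df.drop removes rows by index label, not by a boolean condition, so you first select the offending rows and pass their .index labels. A minimal sketch with made-up toy data (the repeated "Age" header row mimics what scraped sports tables often contain):

```python
import pandas as pd

# Scraped tables often repeat the header row inside the data;
# here row 1 is such a repeat.
df = pd.DataFrame({"Player": ["A", "Player", "B"],
                   "Age": ["23", "Age", "31"]})

# df[df["Age"] == "Age"] selects the repeated-header rows;
# .index extracts their labels, which df.drop knows how to remove.
clean = df.drop(df[df["Age"] == "Age"].index)
print(clean["Age"].tolist())  # ['23', '31']
```

Without .index, df.drop would be handed a whole DataFrame instead of row labels and raise an error.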
Fabulous - it's soooo easy when you know how!
Thanks for watching Roger, absolutely agreed with that 😃
Thanks for the video, it was really helpful. I wish you more subscribers, man ;)
Thanks for the support! 😃
Amazing! Your video helped me with my first homework in Data Mining. I'm also thinking of jumping into data science, so thank you so much! Liked and subscribed!
Glad I could help! And welcome to Data science!
This tutorial gets my subscription. Thank you, Professor. :)
Wow, glad to hear that, welcome aboard 😃
Bravo Data Professor, nice lecture!
what would be best for comparing prices between competitors?
Awesome work by the hero! Keep teaching like this
Thanks for the encouragement 😃
Hi Data Professor, thanks for this video. It's very helpful. I'm a newbie starting out in data science and web scraping. Just wondering, can you use pandas functionality for scraping data that is not laid out in a table? And how would you do that? Could you perhaps create a video on scraping non-tabular data if you haven't already?
Great question! To web scrape non-tabular data, you can look into using the Beautiful Soup and Selenium libraries for Python.
@@DataProfessor thank you for the pointer, much appreciated!
Exactly the question I was gonna ask. Thanks.
Hi Professor, I truly enjoy your videos and have learnt a lot. May God keep you successful in life.
A question that's been on my mind: what laptop do you use? I really like the keyboard sound when you type, unless you are using an external keyboard.
Is it possible for you to show us your desk setup?
Kind regards
Hi, I'm using a MacBook Pro (2016), and yes, the keyboard feel is good on this laptop, although it is a bit flat. That's a good thing, as it takes minimal effort to move from one key to the next.
Thank you for the clear explanation !
A pleasure! Thanks for watching 😃
Thank you so much for this concept, it was a real time saver!
Glad it was helpful!
Hi, Professor! Thank you for the content you bring to us, it really helps! \o/
Lately, I've been asking myself: How important is web scraping for a data scientist? How often do you web scrape?
I just started learning it, I'll keep going and I wanted to know your thoughts about its relevance.
Hi Paulo, web scraping comes in handy when you want to create your own dataset from data available on the internet. For example, if you want to analyze the salaries of data scientists from the Glassdoor database, you can do that with web scraping. Hope this helps 😃
Hi Professor, does the original data need to be an HTML file to start with? Does it always need to have a table to extract data from?
Yes to both questions; that's the limitation of this approach. Beyond that, Selenium + Beautiful Soup is a good combo to look into.
I see. Thank you very much for the guidance!!@@DataProfessor
Thanks bro, for your nice tutorials
It's my pleasure
Hey Professor, thank you for the content.
I was wondering: when we are scraping by just passing the link, how does it know to only read data from the table and not any other information?
Hi, the function detects HTML syntax. Tables in HTML are marked up with <table> ... </table> tags, and the read_html() function finds these to figure out where the tables are and extracts the data.
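To illustrate the reply above with a self-contained sketch (the HTML snippet here is made up): read_html scans the page for <table> elements and returns one DataFrame per table, ignoring all the surrounding content.

```python
from io import StringIO

import pandas as pd

# A minimal page: some prose plus one table. Only the <table>
# element is picked up by read_html.
html = """
<html><body>
<p>Some text that read_html ignores.</p>
<table>
  <tr><th>Player</th><th>Points</th></tr>
  <tr><td>A</td><td>10</td></tr>
  <tr><td>B</td><td>20</td></tr>
</table>
</body></html>
"""

# read_html returns a *list* of DataFrames, one per table found
dfs = pd.read_html(StringIO(html))
print(len(dfs))                    # 1
print(dfs[0].columns.tolist())     # ['Player', 'Points']
```

Note that read_html needs an HTML parser installed under the hood (lxml, or html5lib/Beautiful Soup).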
Thank you so much for this concept, it was really helpful. Respect!
Thanks a lot - this helped a lot.
Is there an API for sports results, or do you have to get them via web scraping?
Thank you for your video. My question: if there are many tables across many pages (20,000 pages), what should I do?
The pandas read_html function is suitable for a simple webpage with relatively few tables. For more complex pages or a large volume of pages, I would recommend looking into Beautiful Soup and Selenium.
Great video Professor!
Glad you liked it!
This is exciting. I love pandas!
Really awesome.. Data Professor
Salik, Thanks!
Superb, let me bring you some more guys to your channel
Awesome, welcome to the channel!
@@DataProfessor 🙏
Hello Professor, I would like to suggest that you publish a video about RSelenium, which is used with Selenium WebDriver for automated system testing :D Hope it benefits others. This is just my humble suggestion.
Great suggestion! I have played around with Selenium for Python and have found it pretty powerful. What I've made so far is a short script that takes screenshots of my YouTube channel's page (or any webpage).
f-strings are more readable compared to the .format() method.
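A quick side-by-side of the two styles mentioned here; the URL pattern is purely illustrative (not from the video):

```python
year = 2019
base = "https://example.com/stats/{}.html"  # hypothetical URL pattern

# .format() style: the placeholder and the value live apart
url_format = base.format(year)

# f-string style: the variable reads inline, where it is used
url_f = f"https://example.com/stats/{year}.html"

print(url_f)  # https://example.com/stats/2019.html
```

Both produce the same string; the f-string just keeps the value next to its slot.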
Great content !
Any idea how I can scrape data, for example from LinkedIn job postings? I found Octoparse for this, any ideas?
Thanks Mert for the kind comment. pandas works only for tabular data from webpages. For LinkedIn posts, we'll probably have to use Beautiful Soup. I might make a future video about that; I'll put it on the to-do list.
Data Professor thank you 🙏
@@mj7146 A pleasure!
@@DataProfessor Hi Data Professor, we are still expecting this 😁
Hi Ken Jee, I tried your web scraping code on Kaggle but I'm getting a URLError. I tried to solve it but cannot resolve it... please give me your suggestions.
Hi Piyush,
The pandas library allows scraping webpages that have tabular data, such as from Wikipedia. It is really limited to pages with a predefined table format. To scrape other webpages, I'd recommend looking into Selenium and Beautiful Soup.
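To make the suggestion above concrete, here is a minimal Beautiful Soup sketch for non-tabular content. Everything in it is hypothetical: the HTML snippet and the class names ("job", "title", "loc") are made up to stand in for whatever structure the real page uses.

```python
from bs4 import BeautifulSoup

# A made-up non-tabular snippet, e.g. job postings in <div> elements.
html = """
<div class="job"><span class="title">Data Analyst</span><span class="loc">Bengaluru</span></div>
<div class="job"><span class="title">Data Scientist</span><span class="loc">Mumbai</span></div>
"""

soup = BeautifulSoup(html, "html.parser")

# Instead of relying on <table> tags, you pick out elements by
# tag name and class, then pull the text you want from each one.
jobs = [
    {"title": d.find("span", class_="title").text,
     "location": d.find("span", class_="loc").text}
    for d in soup.find_all("div", class_="job")
]
print(jobs)
```

For a real site you would first fetch the page (e.g. with requests or Selenium for JavaScript-heavy pages) and inspect it in the browser to find the actual tags and class names.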
Can you please explain how to read all the retrieved URLs?
A great tutorial!
Thank you!
Do you know if we can use this to scrape sites built with dynamic JS? And how do we do this if we have to log in?
Very nice video
Thanks :)
Can you also use df2019(df2019['Age'] == 'Age') to find the ages containing the word 'Age'?
Very helpful thank you!
Thanks Badr for the kind words!
very cool thanks!
This helped thanks!
Glad it helped!
Hey, I tried using the code to scrape tables on Wikipedia. When scraping a page with loads of other data, where I just want to pull one table alone, is there a method for that? With the current code I'm pulling the whole page, and I just want the playoff stats... I think I'm supposed to create a dictionary and then assign it to a dataframe, but I don't know how when it comes to URLs and websites.
What are the prerequisites to watch this tutorial? I know some python, is this ok?
Yes, beginner’s level of Python is sufficient to follow along.
Thanks for the knowledge.
A pleasure, thanks for watching
How do we deal with the error "HTTP Error 403: Forbidden" when reading a URL with pandas? How should we proceed in this case?
Kindly advise.
It's not working for other sites. I tried it on TripAdvisor and nothing came back.
How do I keep the URL that the column tm has in my dataframe?
Thank you very much, my friend....
Why did you use str.format instead of string concatenation?
Thank you so much, sir.
Informative.
Thanks Shweta for the kind comment!
Thanks !
Thanks for watching!
I got this error: "ImportError: lxml not found, please install it". What is the solution?
Hi, you can install lxml via pip install lxml
@@DataProfessor Done, thank you!
You're awesome, sir.
Thanks for the kind words
What if there is no table on a web page?
You look like jomatech's big brother :O
Haha, I get that a lot. Joma and I should do a collab video 😆
@@DataProfessor But sir, I learned a week's worth of lessons from one of your 10-minute videos. I can't be more grateful to you. Thank you.
@@tareqmahmud3902 Thanks, glad to hear that they're helpful! 😊
Every link turns into a df. How can I concatenate all the dfs?
Hi, dfs can be concatenated using the pd.concat() function. You can play around with axis=0 or axis=1 depending on how you want to combine the dfs (stacked on top of each other, or side by side).
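A small illustration of the reply above, with toy frames standing in for the per-URL dataframes:

```python
import pandas as pd

# Two toy dataframes standing in for tables scraped from two URLs
df_2018 = pd.DataFrame({"Player": ["A", "B"], "PTS": [10, 20]})
df_2019 = pd.DataFrame({"Player": ["A", "B"], "PTS": [12, 18]})

# axis=0 stacks them on top of each other;
# ignore_index renumbers the rows 0..n-1
stacked = pd.concat([df_2018, df_2019], axis=0, ignore_index=True)

# axis=1 puts them side by side (columns from both frames)
side_by_side = pd.concat([df_2018, df_2019], axis=1)

print(stacked.shape)       # (4, 2)
print(side_by_side.shape)  # (2, 4)
```

In a scraping loop you would typically collect each df into a list and call pd.concat once on the whole list at the end.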
How to save a df to Excel? Please help.
Awesome
I tried it on your channel (just for testing lol).
Don't name your variables str or you will shadow the string builtin
You're right, many thanks for pointing that out. Why did I do that? I've changed it to url_link now.
Is this useful for every situation? I am trying to fetch data from Glassdoor but this method is not working.
Link: "www.glassdoor.co.in/Job/bengaluru-data-analyst-jobs-SRCH_IL.0,9_IC2940587_KO10,22.htm"