Finally, a video I can understand that doesn't make me feel dumb.
Thank you good sir!
same for me.
Literally the best video on web scraping... I have watched hundreds of videos, but this is the best.
Thank you very much Abdul Wali for your nice words. Very encouraging :)
I have been searching for a video like this forever. Thank God I landed on your page. Really wonderful and amazing video showing step by step. You are a living legend. Just subscribed as well.
This is the most on-point tutorial I've ever watched. No bullshit, no jargon, just pure knowledge. Thank you, sir, I learnt a lot from this small video.
This is by far THE best and easiest to understand explanation I’ve heard about using python to scrape data. Thank you for your effort in creating this video. You got a new subscriber!
I just wanted to let you know that I really enjoyed this video. I was feeling like learning python was stupid. Then I found you doing a cool project and it was easy to follow. I am inspired again thank you.
Finally! Really clean and easy to follow scraping video.
Your work needs to be appreciated man. The way you explain things in a calm, composed and soothing voice. The simplicity of the tutorial indicates your grasp on the web scraping. Thank you.
Appreciate it, Sandeep.
The power of web scraping is absolutely jaw-dropping. Congrats on the wonderful and comprehensive video. Waiting for more!!!
I watch a lot of videos about programming and most of them are really good. However, this really is a standout piece. The way it combines theory and practice is second to none. Well done, sir.
Wow, thanks!❤
Great video!
You're not getting enough credit for how well this is made.
Thank you so much for this video. I have watched several web scraping videos but this is absolutely the best so far.
Multiple pages start at 21:20
Ooooohhh, I really love this video; you saved me big time. This is really outstanding, well detailed, and your explanations are very logical and clear.
Thank you very much, sir. I was watching many tutorials and getting confused trying to understand the HTML structure; then I found your video, and you explained everything beautifully. I completed my project successfully. Thanks a lot, sir ❤
Thanks a lot. With a basic level of Python and zero background in programming, I was able to successfully do a project for my master's thesis related to media coverage of a certain topic.
Finally, a video that puts paid courses to shame! Hats off to you for the great tutorial! You did not just explain; the way you went back and forth helped me understand a lot. Kudos! Could not resist the urge to hit the like and subscribe button. Will definitely visit your channel for more guides and tutorials! ♥
OMG, I am so impressed. Thank you so much for this wonderful lesson. I can't believe I got this for free. God bless you.
Wish I could like this twice, I had a web scraping class that didn't explain this as well as you did in half an hour
Such a great explanation, dear.
Just love it 😘
Love from India 🇮🇳 Namaste 🙏
After so much searching, I finally found a video that is so easy to grasp on scraping multiple pages. Thank you
wow - best tutorial so far on beautifulsoup! Thank you!
This video is such a relief, absolutely the best material about scraping! Thank you so much!
24:53 what a vim move 😄.
Thanks, great video. Excellent explanation and great English.
The best tut on web scraping. Very beginner friendly. Keep it up
Thanks a lot for this detailed video. Hoping to see more videos like this.
Thank you so much for this video! It's literally an answered prayer for me. 🙏
I just loved it. I used to think web scraping was too hard, but your video makes it so simple that even a 10-year-old could understand. Simply great job 👏
Thank you for this clear and easy-to-follow video.
Great video! I wrote the code while you were explaining it, and I kinda grasped the idea behind what you were doing. The only thing I don't understand is the indentation and how it affects the for structure. In other languages, you end the for with some code and nest them like any while-do or if-else-endif type of stuff. I also thought that Python was like JavaScript, where data would automatically be typed on each variable based on its content, Var1 = Here you go (text) or Var1 = 12 (num), but as I saw in your example, you have to transform data into numbers even if they are actually numbers already. Interesting!
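On both points: in Python the loop body is marked purely by indentation (no endfor), and scraped values always arrive as strings, so the conversion is explicit. A tiny sketch, with made-up values:

prices = []
for page in range(1, 4):
    price_text = "51.77"              # scraped text is always a str, even if it looks numeric
    prices.append(float(price_text))  # explicit conversion, unlike JavaScript's coercion
print(prices)                         # this line is back at the left margin, so the loop has ended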
this is just crazyyy. loved the tutorial
Wonderful! Simple and concise 🥰
Great, sir. Today I learned how to do web scraping. Nicely explained 👍. Please make more content.
Glad you liked it
It's just really awesome and very easy to understand, and I have submitted this as a mini project. Thank you, brother.
This video is an absolute gem. Thank you for this..
Great lesson...Very resourceful
I like the way you teach while talking; it makes me understand. Thank you very much.
You explain everything very clearly. Everything makes sense now!
This is the best web scraping video on the internet.
Thank you very much, you helped me a lot with your vid. 🙏
Great tutorial yet again... This channel is so valuable for people who want to learn programming but do not have the money to go to school for it... Are there any other similar channels on YouTube or outside the platform (websites, etc.) that offer such great value but may not be popular? Please reply even if you have only one suggestion. It would be really helpful.
Fantastic video on web scraping
You really did make a great video. Thank you.
OMG.. this is such a perfect, informative, easy-to-understand explanation! Thx a lot.
Thank you so much, sir... I learned a lot... It's so helpful to me 🙏
I love the way you get into my brain. You are awesome; you explain so easily without making me feel dumb or crazy. This tut is the best tut ever, better than the tut I put my money on. Thank you very much, friend :)
Pagination starts at 21:30
So awesome! Concise & crystal clear! You are absolutely a legend. ❤
very informative video, thank you for your efforts.
I use Jupyter Notebook and I wrote the exact code, yet it doesn't scrape all pages; it scrapes only the last number in the range. Do you have any idea what could cause this error?
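A common cause of this (just a guess from the description): the request and parsing lines sit outside the for loop, so they run once with the last value of i. A minimal sketch with everything correctly indented, assuming the tutorial's example site and URL pattern:

import requests
from bs4 import BeautifulSoup

for i in range(1, 6):
    url = f"https://books.toscrape.com/catalogue/page-{i}.html"  # assumed pattern
    response = requests.get(url)   # indented, so it runs on every iteration
    soup = BeautifulSoup(response.text, "html.parser")
    print(i, soup.title.get_text(strip=True))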
This was great content. You made web scraping super easy.
Really helpful, thank you!
Wow, this video is so helpful, thank you!
Thank you very much Pythonology. This was well-explained and very easy to understand.
Great tutorial, thanks. Now, what if the pages have different/variable names, like site/brand/VariableBrandname, and I only have a list of the pages?
How do I set the "i" variable to loop over a set of "variablebrandname" values?
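Something like this might work: instead of a numeric i, iterate over the list of names directly. The brand names are made up and the URL is a placeholder standing in for site/brand/<name>:

import requests

brands = ["acme", "globex", "initech"]           # your list of page names
for brand in brands:
    url = f"https://example.com/brand/{brand}"   # placeholder for site/brand/<name>
    response = requests.get(url)
    print(brand, response.status_code)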
top notch. I managed to follow this, so thankyou!
What happens if I get a 403 response? I think it's Forbidden Access??
Reasons for 403:
1- The URL you are trying to scrape is forbidden, and you need to be authorized to access it.
2- The website detects that you are a scraper and returns a 403 Forbidden HTTP status code as a ban page (the website could be protected by Cloudflare, for example).
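For the second case, a browser-like User-Agent header sometimes gets past the check. A minimal sketch; the header string is just an example value and the URL is a placeholder:

import requests

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
response = requests.get("https://example.com/", headers=headers)  # placeholder URL
print(response.status_code)  # 200 if the site accepted the browser-like request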
Thank you so much, sir!
Great video. I really like the way you explain the concepts. Everything working fine and easy to understand
Thanks Nikhil
Just found it and love it. Thank you!
Very productive, thank you
Thank you very much. Is there a good book you can recommend?
Great tutorial. Can we scrape secured content, or, say, text we're not allowed to scrape?
I want to access the text in a span tag, and this span tag is within an li tag. Please, how can I go about it? I tried using the span tag, but it's not giving the right text.
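One way to do it is to find the li first and then the span inside it. The HTML below is made up to mirror the question:

from bs4 import BeautifulSoup

html = "<ul><li>Price: <span>$12.99</span></li></ul>"  # stand-in for the real page
soup = BeautifulSoup(html, "html.parser")
li = soup.find("li")
print(li.find("span").get_text())  # -> $12.99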
Wonderful, sir! Learnt a lot.
Very well explained ...thank u..
Please, in a situation where I have multiple p tags and I want the text of the second p tag, with no class or attrs to differentiate it, how can I go about it?
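One option: find_all returns the p tags in document order, so you can index the second one. The sample HTML is made up:

from bs4 import BeautifulSoup

html = "<div><p>first</p><p>second</p><p>third</p></div>"
soup = BeautifulSoup(html, "html.parser")
second_p = soup.find_all("p")[1]   # index 1 = the second <p>
print(second_p.get_text())         # -> second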
Thank you! Very clear and useful!
Nice tutorial on scraping multiple pages to CSV with BeautifulSoup! Any tips on reliable proxies for handling large scraping jobs like this? Heard Proxy-Store offers specialized scraping packages, anyone tried them out?
Thanks a lot for the detailed tutorial!!!
I saw what I needed to see. Thank you!!!
Thanks a lot for all these web scraping tutorials! I'll try to do my own scrapes now!
Thank you so much for this. Thank you
You explain really well.. keep it up
title = article.find('img').attrs['alt']
star = article.find('p')['class']
Could you explain why we need .attrs for the title when you can directly access it like you did with the class tag?
title = article.find('img')['alt']
star = article.find('p')['class']
I have tried this and it works the same. Is there a benefit to using attrs?
Thank You
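For reading attributes they behave the same: tag['alt'] is shorthand for tag.attrs['alt'], and both raise KeyError if the attribute is missing, while tag.get('alt') returns None instead. A quick check:

from bs4 import BeautifulSoup

tag = BeautifulSoup('<img src="x.png" alt="A Book">', "html.parser").img
print(tag["alt"])        # 'A Book'
print(tag.attrs["alt"])  # 'A Book' -- same dict lookup underneath
print(tag.get("title"))  # None instead of a KeyError for a missing attribute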
Thank you very much, very good and detailed explanation
You're welcome!
Hello, I am struggling with something. Can you help me? I can't see the ol, but it all started at the section on how to run soup to grab the exact data.
Thanks a lot. It helped solve a problem.
I have a question though.
How do you handle 403 and 503 status code errors when scraping a website?
403 and 503 status code errors indicate that the server is refusing to fulfill the request. To handle these errors, you can use the requests library to make the request and check the status code.
One way to handle these errors is to use try-except blocks to catch the error and handle it appropriately. For example, you could include a sleep function to wait a certain amount of time before trying again, or you could implement a retry loop to keep trying until the request is successful. Another approach is to use a library like requests-html, which has built-in support for handling these errors and retrying failed requests automatically. Also, you can set a User-Agent in the headers to make the request appear as if it were coming from a browser instead of a scraper, as some websites block requests from known scraper IPs and user agents.
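A minimal retry sketch along those lines; the attempt count, delay, and URL are arbitrary placeholder choices:

import time
import requests

url = "https://example.com/"              # placeholder target
for attempt in range(3):
    try:
        response = requests.get(url, timeout=10)
        if response.status_code == 200:
            break                         # success, stop retrying
        print(f"Got {response.status_code}, waiting before retry...")
    except requests.RequestException as exc:
        print(f"Request failed ({exc}), waiting before retry...")
    time.sleep(5)                         # back off before the next attempt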
Super thanks for this video. It is very clear and useful for people who, like me, are starting web scraping. Good job and keep it up! 👏🙂
You pulled the name from the image's alt attribute, but sometimes the alt text can be anything. Pulling the title from the title attribute on the link inside the h3 tag would be better, in my opinion.
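That suggestion as a sketch, assuming the tutorial's example site (books.toscrape.com), where the a inside each h3 carries the full title in its title attribute:

import requests
from bs4 import BeautifulSoup

soup = BeautifulSoup(requests.get("https://books.toscrape.com/").text, "html.parser")
for article in soup.find_all("article", class_="product_pod"):
    print(article.find("h3").find("a")["title"])  # full title, independent of the alt text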
Thanks, but what if I want to follow the subpage of every book and extract the information on those pages? I mean, first I extract the information on the listing page, then go into every book's subpage, and finally grab that page's information.
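A sketch of the two-step approach, with selectors assumed from the tutorial's example site: collect each book's link from the listing page, then request and parse the detail page:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

base = "https://books.toscrape.com/"
listing = BeautifulSoup(requests.get(base).text, "html.parser")
for article in listing.find_all("article", class_="product_pod"):
    detail_url = urljoin(base, article.find("h3").find("a")["href"])  # hrefs are relative
    detail = BeautifulSoup(requests.get(detail_url).text, "html.parser")
    print(detail.find("h1").get_text())  # info that only exists on the book's own page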
How do I web scrape the page and also the content inside the page? E.g., your video extracts the title, price, etc., but let's say I also want to extract each book's own page and the content on it.
Like some e-commerce sites show the product's name, price, etc., but when I click through, the page shows descriptions, reviews, and more pictures of the product. How do I extract that as well?
Thanks man, I like your work!
lovely stuff. I thoroughly enjoyed it.
How can I save a PNG from a page?
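A hedged sketch: find the image URL with BeautifulSoup, then write the raw bytes to a file. The page URL is assumed from the tutorial's example site:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

page_url = "https://books.toscrape.com/"
soup = BeautifulSoup(requests.get(page_url).text, "html.parser")
img_url = urljoin(page_url, soup.find("img")["src"])  # src attributes are often relative
filename = img_url.split("/")[-1]                     # keep the site's own name and extension
with open(filename, "wb") as f:
    f.write(requests.get(img_url).content)            # raw bytes, whatever format the site serves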
This is very cool
Keep it up, bro...
Would you please make a video on how to scrape the data inside of each link
Many thanks for your demonstration! :D
Thank you for a great video. Really, it's the coolest project I've ever seen.
How do we store these data in a database like MongoDB? Kindly make a video on it as well; it would be a great help.
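A minimal pymongo sketch in the meantime (it assumes a MongoDB instance running locally; the database, collection, and sample record are made up):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
books = client["scraping"]["books"]   # database "scraping", collection "books"
books.insert_many([{"title": "A Light in the Attic", "price": 51.77}])  # your scraped rows
print(books.count_documents({}))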
Thank you for the great content.
It was a very helpful video; keep on making such videos.
Exactly what I was looking for! Thanks!
What if I want the parser to click into every book and get some info from each book page?
To handle clicks, it is better to use Selenium or Scrapy than BeautifulSoup.
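A minimal Selenium sketch of the click-through idea (it assumes Chrome plus the selenium package, and the selectors are from the tutorial's example site, so treat them as assumptions):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://books.toscrape.com/")
driver.find_element(By.CSS_SELECTOR, "article.product_pod h3 a").click()  # open the first book
print(driver.find_element(By.TAG_NAME, "h1").text)  # info from the book's own page
driver.quit()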
Why this NameError: "i is not defined"? i is the variable in the link.
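A guess at the cause: i only exists once the for statement has run, so using {i} in a line executed before (or outside) the loop raises NameError. Keeping the URL line inside the loop avoids it (URL pattern assumed from the tutorial):

for i in range(1, 3):
    url = f"https://books.toscrape.com/catalogue/page-{i}.html"  # i is defined here
    print(url)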
Very very good, I learned so much new and interesting stuff.
Could you please show how to web scrape target product reviews?
I have the prices in tags and soup.find ignores them altogether. Any idea how to handle that?
For my project I want to scrape two websites with different index values for various countries and put them into one database, e.g. the Freedom House index and the Index of Economic Freedom for Germany and other countries... I'm not sure how to merge this data into one database.
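One hedged way to combine them: scrape each site into a pandas DataFrame keyed by country, then merge on the country column. The numbers below are made up for illustration:

import pandas as pd

fh = pd.DataFrame({"country": ["Germany", "France"], "freedom_house": [94, 89]})
ef = pd.DataFrame({"country": ["Germany", "France"], "econ_freedom": [73.7, 62.9]})
merged = fh.merge(ef, on="country", how="outer")  # outer join keeps countries missing from one source
merged.to_csv("indices.csv", index=False)         # or write to any database you like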
Thank you 🙏 so easy to understand and helpful
Hi sir, I have a question about the page numbers. If I'm working with, for example, three websites and I don't know how many pages they've got, what should I do to make my code scrape all the products?
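One common pattern: keep incrementing the page number until the site stops returning results, instead of hard-coding the range. The URL pattern and selector below are placeholders:

import requests
from bs4 import BeautifulSoup

page = 1
while True:
    resp = requests.get(f"https://example.com/products/page-{page}")  # placeholder pattern
    if resp.status_code != 200:         # many sites return 404 past the last page
        break
    soup = BeautifulSoup(resp.text, "html.parser")
    products = soup.find_all("article")  # placeholder selector for one product
    if not products:                     # page exists but is empty: also done
        break
    # ... extract the product data here ...
    page += 1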