Introduction to Scrapy API | Practical Python Web Scraping with Flask (Part 2 of 2)

  • Published Dec 3, 2024

Comments • 22

  • @raviranjansharma8953
    @raviranjansharma8953 4 years ago +1

    This is a very interesting and useful video, and the font size is clearly visible. Thank you so much, Sir!

  • @s_sattu_s
    @s_sattu_s 1 year ago

    Can we scrape any website with this, like Amazon, news websites, or blogs?

  • @muhammedsiddiq643
    @muhammedsiddiq643 4 years ago +2

    This is a great tutorial. Please make a tutorial on scraping with Django.

  • @retoschurmann3512
    @retoschurmann3512 4 years ago

    Great tutorial. But if I run this on Heroku I get an HTTP error 401: Unauthorized, and when I run it locally I get this message: do you have any idea what the problem could be? Thanks!

  • @khizarhayat979
    @khizarhayat979 4 years ago

    Your tutorials are just awesome!
    I have a question.
    Can we send the value of pages from the browser as well?
    When we hit the URL for ScrapyRT we have to input the number of pages in the terminal. How can we give that input from the web page or the ScrapyRT URL?
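
    One way to do this (a minimal sketch, not from the video): let the spider take the page count as a normal Scrapy argument, then supply it per request. The spider name, the example site, and the ScrapyRT crawl_args field are assumptions here; crawl_args is only accepted in POST requests on recent ScrapyRT versions, so check the docs for the version you run.

      # books_spider.py -- hypothetical spider that takes the page count as an argument
      import scrapy

      class BooksSpider(scrapy.Spider):
          name = "books"

          def __init__(self, pages=1, *args, **kwargs):
              super().__init__(*args, **kwargs)
              self.pages = int(pages)  # arrives as a string from -a or crawl_args

          def start_requests(self):
              for page in range(1, self.pages + 1):
                  yield scrapy.Request(f"http://books.toscrape.com/catalogue/page-{page}.html")

          def parse(self, response):
              for title in response.css("article.product_pod h3 a::attr(title)").getall():
                  yield {"title": title}

      # From a terminal:  scrapy crawl books -a pages=3
      # Via ScrapyRT, if your version supports crawl_args in a POST body:
      #   curl -X POST http://localhost:9080/crawl.json \
      #     -d '{"spider_name": "books", "start_requests": true, "crawl_args": {"pages": "3"}}'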

  • @srinivasyv1258
    @srinivasyv1258 4 years ago

    Can we run the crawl spider after clicking a submit button on an HTML page? I mean, I want to submit the URL from an input box.

  • @abrarali-go6lu
    @abrarali-go6lu 2 years ago

    I want to deploy the spider to the cloud and use a cron scheduler to run it periodically. On the other side, I want to create a website that shows products, with their prices updated every time the task runs. The question is, how would I establish a connection between the cloud and the website? And if there's a best way to do this, what would it be?

    • @SATSifaction
      @SATSifaction  2 years ago

      The connection between the cloud and the website would need to be agreed to by the website owner; otherwise they can ban your server if you are pinging it too frequently. Assuming that's done, you can load your spider on any cloud framework, establish a firewall for inbound/outbound connections, and have the spider run on a scheduler. Every time the spider runs it would save data to some database. If you care about history then you need persistence. If you just want to stream live data then you can always send it via a JSON object over something like Kafka, or something more structured like Postgres. You would then put a listener on your website to watch for any changes to the data, or if you want to update it every 5-10 minutes, set up a cron job or similar to pull the data. This is only one of dozens of ways to do it.
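
      As a minimal sketch of the "save data to some database on every run" part, here is a Scrapy item pipeline writing to Postgres; the table, columns, credentials, and cron line are placeholders for illustration, not something from the video.

        # pipelines.py -- hypothetical pipeline persisting each scraped item to Postgres
        import psycopg2

        class PostgresPipeline:
            def open_spider(self, spider):
                self.conn = psycopg2.connect(
                    host="localhost", dbname="products", user="scraper", password="secret"
                )
                self.cur = self.conn.cursor()
                self.cur.execute(
                    "CREATE TABLE IF NOT EXISTS prices "
                    "(title TEXT, price TEXT, scraped_at TIMESTAMP DEFAULT now())"
                )

            def process_item(self, item, spider):
                self.cur.execute(
                    "INSERT INTO prices (title, price) VALUES (%s, %s)",
                    (item.get("title"), item.get("price")),
                )
                return item

            def close_spider(self, spider):
                self.conn.commit()
                self.conn.close()

        # Enable it in settings.py:
        #   ITEM_PIPELINES = {"myproject.pipelines.PostgresPipeline": 300}
        # Then schedule the crawl with cron, e.g. hourly:
        #   0 * * * * cd /srv/scraper && scrapy crawl books >> /var/log/scraper.log 2>&1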

  • @nelsongomez8547
    @nelsongomez8547 2 years ago

    Hello, I hope you're very well.
    If you make many requests to the API at the same time, how does it behave?

    • @SATSifaction
      @SATSifaction  2 years ago

      That is dependent on your server and your rate-limiting profile. I wouldn't let several users run it concurrently without any caching in your architectural design. You can always house the data in a cache engine like Redis if you want concurrency in analyzing the data once it's been scraped. However, if you're scraping someone else's site then performance is also dependent on their server policies and protocols.
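
      A minimal sketch of that caching idea, assuming a Flask front end, a local Redis instance, and ScrapyRT on port 9080; the endpoint, cache key, and ten-minute TTL are made up for illustration.

        # app.py -- hypothetical Flask endpoint serving cached scrape results
        import json

        import redis
        import requests
        from flask import Flask, jsonify

        app = Flask(__name__)
        cache = redis.Redis(host="localhost", port=6379, db=0)

        @app.route("/books")
        def books():
            cached = cache.get("books:latest")
            if cached:  # concurrent callers hit the cache instead of re-crawling
                return jsonify(json.loads(cached))
            # Cache miss: ask ScrapyRT for a fresh crawl (slow), then cache for 10 minutes
            resp = requests.get(
                "http://localhost:9080/crawl.json",
                params={"spider_name": "books", "url": "http://books.toscrape.com"},
            )
            items = resp.json().get("items", [])
            cache.setex("books:latest", 600, json.dumps(items))
            return jsonify(items)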

  • @ayushpatel8385
    @ayushpatel8385 4 years ago

    Please help me with hosting a ScrapyRT spider on Heroku. What's the format of the Procfile?

    • @cvallejoec
      @cvallejoec 3 years ago

      Hi! Did you manage to do it? Could you give me a guide on how you did it, please?

  • @MinecrafterFlicker2
    @MinecrafterFlicker2 4 years ago

    I want to do something similar, but in my case I also want to pass in a start_url that needs to be scraped. How can I do that?

    • @SATSifaction
      @SATSifaction  4 years ago +1

      You can use input() in Python to feed it a URL, but it has to be part of the same domain. Web scraping is very website-specific and depends on the website's topology and naming conventions, so you would need to create your own script for every site.
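
      For the URL input part, a minimal sketch assuming the URL is passed as a standard Scrapy -a argument (the spider name and domain are placeholders); as noted above, the parse logic still has to be written for that one site's markup.

        # site_spider.py -- hypothetical spider whose start URL is supplied at run time
        import scrapy

        class SiteSpider(scrapy.Spider):
            name = "site"
            allowed_domains = ["books.toscrape.com"]  # keep requests on the one domain

            def __init__(self, start_url=None, *args, **kwargs):
                super().__init__(*args, **kwargs)
                if not start_url:
                    raise ValueError("pass a URL with -a start_url=...")
                self.start_urls = [start_url]

            def parse(self, response):
                # these selectors only make sense for this particular site's markup
                yield {"title": response.css("title::text").get()}

        # Run it as:  scrapy crawl site -a start_url=http://books.toscrape.com/
        # or collect the URL first with:  start_url = input("URL to scrape: ")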

  • @jordankendall86
    @jordankendall86 5 years ago

    Great tutorial. I watched both part 1 & 2. I would like to implement Scrapy & ScrapyRT on my Azure Ubuntu VM. I have done several successful tests of Scrapy, but I cannot seem to get ScrapyRT to work. I have made sure all the connections/security settings are open for port 9080. I keep getting the error in the browser "This site can't be reached" and "[my public IP] refused to connect." Will you make a video showing an implementation on Azure, AWS, or some type of server with SSH access that is not your local computer?

    • @SATSifaction
      @SATSifaction  5 years ago

      I might do a cloud version of it when I do a Django version. On your cloud server you may want to make sure Twisted is installed; I find that one always gives me issues. Also make sure you create a requirements.txt file and load all those requirements in Azure. Make sure you also have your Procfile or equivalent set up properly (not sure if Azure uses this, I know Heroku does). Something like:
      web: scrapyrt -i IP -p $PORT
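
      For reference, a minimal sketch of the two Heroku files mentioned above; the package list is an assumption, and on Heroku you normally bind to 0.0.0.0 and the $PORT variable Heroku injects.

        # Procfile
        web: scrapyrt -i 0.0.0.0 -p $PORT

        # requirements.txt (pin whatever versions you actually use)
        Scrapy
        scrapyrt
        Twisted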

    • @jordankendall86
      @jordankendall86 5 years ago

      I got it to work. I will try your suggestion of modifying the Procfile. I have also used tmux to run it in a separate daemon session, however the process is killed after some time. Would modifying the Procfile make listening on the port permanent? If not, what would you suggest to keep the ScrapyRT process open and listening on the port permanently?

    • @jordankendall86
      @jordankendall86 5 years ago

      I looked into the Procfile for Ubuntu on Azure and have not found anything comparable. If I did a web app on Azure and used WebJobs, that would be comparable, but I don't want to use a web app. I have looked into using immortal.run but I am not sure if it is the best option. Check out my issue ticket on GitHub: github.com/scrapinghub/scrapyrt/issues/97 Maybe you can post a link to a YouTube video providing an answer. Thanks.
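
      Since Ubuntu has no Procfile equivalent, one common way to keep the listener alive across disconnects and reboots is a systemd unit; this is a sketch with placeholder paths, user, and port, not something from the video or the issue ticket.

        # /etc/systemd/system/scrapyrt.service -- hypothetical unit file
        [Unit]
        Description=ScrapyRT API
        After=network.target

        [Service]
        User=ubuntu
        # WorkingDirectory must be the folder containing scrapy.cfg
        WorkingDirectory=/home/ubuntu/myproject
        ExecStart=/home/ubuntu/venv/bin/scrapyrt -i 0.0.0.0 -p 9080
        Restart=on-failure

        [Install]
        WantedBy=multi-user.target

        # Activate with:
        #   sudo systemctl daemon-reload && sudo systemctl enable --now scrapyrt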

  • @chandraprakashv2044
    @chandraprakashv2044 5 years ago

    Loved it.
    Any video on filtering data?

    • @SATSifaction
      @SATSifaction  5 years ago +1

      What do you mean specifically?

    • @chandraprakashv2044
      @chandraprakashv2044 5 years ago

      @SATSifaction Like removing the book (row) whose title includes "free" or something.

    • @SATSifaction
      @SATSifaction  5 years ago +1

      Oh I see, you can always filter within the dataframe itself. If you watch some of my other videos, I do this a lot, especially in my machine learning videos. Same technique.
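
      A minimal sketch of that dataframe filter, assuming the scraped items have a "title" column; the sample rows are made up.

        import pandas as pd

        # hypothetical rows, e.g. the items returned by the spider / ScrapyRT
        df = pd.DataFrame([
            {"title": "A Light in the Attic", "price": "51.77"},
            {"title": "Free eBook sampler", "price": "0.00"},
        ])

        # keep only rows whose title does NOT contain "free" (case-insensitive)
        df = df[~df["title"].str.contains("free", case=False, na=False)]
        print(df)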