Web Scraping with GPT-4 Vision AI + Puppeteer is Mind-Blowingly EASY!

  • Published Jan 6, 2025

Comments • 55

  • @hxxzxtf
    @hxxzxtf 9 months ago +12

    🎯 Key Takeaways for quick navigation:
    00:00 *🌐 Web scraping has been revolutionized by AI, particularly with the latest Vision AI model, making data extraction more efficient.*
    01:07 *💻 Manually copying HTML and using ChatGPT for extraction is one method, but OpenAI's API offers programmable solutions for scalability.*
    02:16 *🔄 Using Puppeteer with Bright Data's scraping browser helps circumvent website restrictions and rate limiting during scraping.*
    05:33 *🖥️ Puppeteer allows for easy scraping of HTML content, but there's a need to manage and clean up the extracted data before analysis.*
    08:35 *💡 Extracting only necessary data from HTML can optimize costs when using OpenAI's models for analysis.*
    12:17 *💰 Text-based scraping methods can be cost-effective, but they require ongoing maintenance due to HTML structure changes.*
    14:49 *📸 Utilizing OpenAI's GPT-4 Vision API enables data extraction from screenshots, potentially offering a more robust solution for complex web scraping tasks.*
    17:52 *🖼️ Using base64 encoding allows passing images to models, enhancing data processing capabilities.*
    18:49 *💸 Consider cost-effectiveness when choosing between complex HTML-based or text-based approaches for web scraping.*
    19:58 *🎚️ Adjusting image resolution can significantly decrease token usage in web scraping, but it may increase the likelihood of errors.*
    20:53 *🖼️🔄 Balance image resolution and price when utilizing Vision API for web scraping, as higher resolution images incur higher costs.*
    21:19 *🧹 Clean up HTML before web scraping to reduce token usage and ensure accuracy in results.*
    22:57 *🤖 Explore advanced features of AI tools, such as identifying clickable elements, to enhance web scraping automation.*
    Made with HARPA AI
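
The takeaways above outline the video's Vision-based pipeline: screenshot a page with Puppeteer, base64-encode the image (17:52), and hand it to a vision-capable model, trading resolution against token cost (19:58). A minimal sketch of that flow, assuming the `puppeteer` and `openai` npm packages are installed and `OPENAI_API_KEY` is set; the model name, prompt, and `detail: 'low'` setting are illustrative, not the video's exact values:

```javascript
// Pure helper: wrap a PNG buffer as a data URL the Vision API accepts
// (this is the base64 step mentioned at 17:52).
function toDataUrl(pngBuffer) {
  return `data:image/png;base64,${pngBuffer.toString('base64')}`;
}

// Screenshot a page and ask a vision-capable model to extract data from it.
async function scrapeWithVision(url) {
  const puppeteer = require('puppeteer'); // npm install puppeteer
  const OpenAI = require('openai');       // npm install openai

  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle2' });
  // Buffer.from() normalizes the Uint8Array newer Puppeteer versions return.
  const png = Buffer.from(await page.screenshot());
  await browser.close();

  const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
  const res = await openai.chat.completions.create({
    model: 'gpt-4o', // any vision-capable model
    messages: [{
      role: 'user',
      content: [
        { type: 'text', text: 'Extract the product names and prices as JSON.' },
        // detail: 'low' trades image resolution for fewer tokens (see 19:58).
        { type: 'image_url', image_url: { url: toDataUrl(png), detail: 'low' } },
      ],
    }],
  });
  return res.choices[0].message.content;
}
```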

  • @zeeeeeman
    @zeeeeeman 9 months ago +10

    This is such a timely video - I'm doing something similar to resurrect a website from the Wayback Machine.

  • @shineymcshine
    @shineymcshine 1 month ago +1

    Thanks for the JS example, everyone else has scraping tutorials in Python

  • @rakysreplays8259
    @rakysreplays8259 2 months ago +1

    The best video I've seen about web scraping

  • @beemerrox
    @beemerrox 6 months ago +2

    Wow, this video provides GREAT value. Just in time for what I'm doing now. Thanks mate!

  • @reidevanson181
    @reidevanson181 9 months ago +1

    what an amazing video - like it's so niche but so useful

    • @ByteGrad
      @ByteGrad 9 months ago

      Glad you liked it

  • @benhasanaltun
    @benhasanaltun 2 months ago +1

    Thanks for sharing!

  • @SupCortez
    @SupCortez 7 months ago

    Thank you infinitely for sharing this masterclass lesson with the universe for free. Subbed

  • @niclas.pandey
    @niclas.pandey 9 months ago +2

    thank you a lot ♥

  • @juliushernandez9855
    @juliushernandez9855 9 months ago +2

    Can you create a video how to deploy puppeteer and next js to vercel?

  • @imranhrafi
    @imranhrafi 9 months ago +3

    It's interesting, but what if I want pagination?
    I will still need to select the next button the old way.
    Is there any other way of doing pagination?

    • @MrVliegendepater
      @MrVliegendepater 6 months ago +1

      Scrape all URLs from all sitemaps and then define how many levels deep you'd like to go... you will get more info than needed, but it will do the job. If you convert your HTML content to markdown and then embed the markdown content into a vector database, you can query anything on the content.
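
The sitemap-first approach in the reply above starts by collecting every URL a site advertises. A sketch of that first step, assuming a standard sitemap.xml whose pages are listed in `<loc>` entries; the sitemap URL in the usage comment is hypothetical:

```javascript
// Sketch: pull every URL out of a standard sitemap.xml so the pages can
// later be fetched, converted to markdown, and embedded in a vector DB.
function extractSitemapUrls(xml) {
  // The sitemaps protocol wraps each page URL in <loc>...</loc>.
  return [...xml.matchAll(/<loc>\s*(.*?)\s*<\/loc>/g)].map((m) => m[1]);
}

// Usage (hypothetical URL, Node 18+ for global fetch):
// const xml = await (await fetch('https://example.com/sitemap.xml')).text();
// const urls = extractSitemapUrls(xml);
```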

  • @jameskayihura1675
    @jameskayihura1675 4 months ago

    Let's say I want to scrape LinkedIn mentions. Basically LinkedIn will request authentication.
    Can this be applied to my question? Thanks

  • @Lars16
    @Lars16 9 months ago +12

    This is a great video. But the problem with scraping has hardly ever been parsing the HTML or maintaining the parsers.
    The biggest problem is efficiently accessing websites that actively try to block you by gating their content behind a login or captchas. Then comes IP blocking (or worse, data obfuscation) if you scrape their website in large volume.

    • @binhtruongdac2861
      @binhtruongdac2861 9 months ago +3

      That's why you need something like Bright Data; yes, it's not free, unfortunately

    • @karenapatch1952
      @karenapatch1952 7 months ago +1

      Octoparse can deal with this, and it's free. No thanks

    • @beemerrox
      @beemerrox 6 months ago

      @@karenapatch1952 Thanks! Didn't know, looks awesome!

    • @Andrew-qc8jh
      @Andrew-qc8jh 5 months ago

      yeah this is pretty cool to see, but it doesn't look that helpful in comparison to methods using BeautifulSoup.

  • @Garejoor
    @Garejoor 9 months ago

    can crewAI do this as well?

  • @justcars2454
    @justcars2454 4 months ago +2

    When doing web scraping at a large scale, this gets very expensive. It's better to use ChatGPT or a better LLM through its API and have it handle the errors automatically until it finds the perfect code. Better still if it can try finding hidden API endpoints first, then build the script for the website based on that endpoint... And all this automatically: you just need to make ChatGPT able to correct itself, write scripts by itself, run them on your PC, and handle errors until it gets the exact script that successfully scrapes what you want.

    • @yaboy7120
      @yaboy7120 24 days ago

      hidden api endpoints are your best friend 😊
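
One way to surface those hidden endpoints is to listen to the network responses a page makes while it loads. A sketch, assuming the `puppeteer` npm package is installed; `looksLikeJson` is a hypothetical helper that just inspects the Content-Type header:

```javascript
// Pure helper: does a Content-Type header indicate a JSON response?
// Matches application/json and suffixed types like application/vnd.api+json.
function looksLikeJson(contentType) {
  return /\bapplication\/([a-z.+-]*\+)?json\b/i.test(contentType || '');
}

// Sketch: load a page in Puppeteer and record every URL that returned
// JSON -- these are the frontend's API endpoints, callable directly.
async function findJsonEndpoints(url) {
  const puppeteer = require('puppeteer'); // npm install puppeteer
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const endpoints = new Set();
  page.on('response', (res) => {
    if (looksLikeJson(res.headers()['content-type'])) endpoints.add(res.url());
  });
  await page.goto(url, { waitUntil: 'networkidle2' });
  await browser.close();
  return [...endpoints];
}
```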

  • @gregsLyrics
    @gregsLyrics 6 months ago

    and how do you get to the next page to scrape?

  • @felipeblin8616
    @felipeblin8616 7 months ago

    Great video. A question though: what about hallucination? How can we be sure it's not doing it?

  • @dupatrio9305
    @dupatrio9305 7 months ago +1

    Where can I learn basic coding from scratch to be able to do that?

  • @RobShocks
    @RobShocks 8 months ago +2

    Have you thought about or tried using a local model to scrape? It would save all the costs

    • @Zaddy_Woods
      @Zaddy_Woods 5 months ago

      Could you explain a little more please?

  • @dmitriydorogonov7918
    @dmitriydorogonov7918 7 months ago +1

    Perfect video, thanks

  • @amitjangra6454
    @amitjangra6454 8 months ago +3

    I am scraping (dropping HTML) with Python code with Selenium (approx. 60,000 articles) and later creating vector embeddings for Llama 3 and asking it to write articles for me.

    • @richerite
      @richerite 7 months ago

      Do you have a GitHub link? What did you mean by "write articles"?

    • @5minutes106
      @5minutes106 7 months ago

      Were you able to scrape 60,000 articles without getting your IP address blocked? That's impressive if you did

    • @OnlyUseMeEquip
      @OnlyUseMeEquip 7 months ago +1

      @@5minutes106 obviously not, you just rotate proxies

  • @subhranshudas8862
    @subhranshudas8862 9 months ago

    how do you handle paginated data?

    • @binhtruongdac2861
      @binhtruongdac2861 9 months ago

      You just need to use the URL with the page number in query params, then run a for loop to request multiple HTML pages
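
That reply amounts to building page-numbered URLs in a loop. A sketch, assuming the site paginates with a `page` query parameter (the parameter name varies per site) and Node 18+ for the global `fetch`:

```javascript
// Sketch: build a page-numbered URL. The `page` query parameter is an
// assumption; check the target site's actual pagination scheme.
function pageUrl(baseUrl, pageNum, param = 'page') {
  const u = new URL(baseUrl);
  u.searchParams.set(param, String(pageNum));
  return u.toString();
}

// Fetch the HTML of pages 1..lastPage sequentially.
async function fetchAllPages(baseUrl, lastPage) {
  const htmls = [];
  for (let p = 1; p <= lastPage; p++) {
    const res = await fetch(pageUrl(baseUrl, p)); // Node 18+ global fetch
    htmls.push(await res.text());
  }
  return htmls;
}
```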

  • @LifeTrekchannel
    @LifeTrekchannel 8 months ago

    How do you do this using Braina AI? Braina can run GPT-4 Vision.

  • @hishamazmy8189
    @hishamazmy8189 9 months ago +1

    amazing

  • @hellokevin_133
    @hellokevin_133 9 months ago +2

    Hey man, mind if I ask what programming languages you know other than Javascript/TS ?

  • @amadeuszg1491
    @amadeuszg1491 9 months ago +9

    I am interested in creating a price comparison website featuring approximately 10-20 shops, each offering around 10,000 similar products. Unfortunately, these shops do not provide APIs for direct access to their data. What would be the most efficient approach to setting up such a website while keeping maintenance costs reasonable?

    • @Braincompiler
      @Braincompiler 9 months ago +2

      Make it like the other comparison sites and provide an upload for CSV, XML, and so on, or YOU provide the API for them so their shop systems can push the data ;) Crawling by yourself is the last option and could be done with XPath and such.

    • @amadeuszg1491
      @amadeuszg1491 9 months ago +1

      @@Braincompiler Yes, but in this case the store needs to send me the CSV/XML file with their products. What if they don't?

    • @Braincompiler
      @Braincompiler 9 months ago

      @@amadeuszg1491 Yes, of course. If your comparison site has a benefit for them, be sure they will.

    • @abhisycvirat
      @abhisycvirat 9 months ago +1

      I did this 6 years ago: scraped each website and compared the prices using SKUs

  • @laihan4469
    @laihan4469 7 months ago

    How does a full-stack dev work with AI?

  • @水手大力-y8l
    @水手大力-y8l 9 months ago +1

    elegant

  • @Kamil_Aqil
    @Kamil_Aqil 5 months ago +1

    10/10

  • @dmytroocheretianyi7577
    @dmytroocheretianyi7577 8 months ago +1

    Perhaps it will be cheaper on Claude.

  • @ByteGrad
    @ByteGrad 7 months ago +2

    Hi, my latest course is out now (Professional React & Next.js): bytegrad.com/courses/professional-react-nextjs -- I'm very proud of this course, my best work!
    I'm also a brand ambassador for Kinde (paid sponsorship). Check out Kinde for authentication and more bit.ly/3QOe1Bh

  • @ThePriceIsNeverRight
    @ThePriceIsNeverRight 4 months ago

    This is good but costly to maintain 💸

  • @antronx7
    @antronx7 5 months ago

    So is this what modern software engineers do these days? Write scripts to glue paid services together?

    • @Fatman305
      @Fatman305 4 months ago +2

      Yeah. Makes zero sense... Paying for each scraped page is probably one of the worst ways of doing this. I guess it's fine if your total bill is very low, but really, for serious work it would make way more sense to ask the AI how to store these pages locally and analyze that local data... locally...

  • @UserAliyev
    @UserAliyev 9 months ago +5

    First

    • @semyaza555
      @semyaza555 9 months ago +3

      2nd

  • @laughremixsquad
    @laughremixsquad 3 months ago +2

    🫡 Those 90,000 tokens. Thank you for your sacrifice. 😢