This is gold. You have shown your thought process and by following it I can pick up the whole web scraping concept easily. Love your video John.
You are the best teacher to learn scraping
This technique really only works for client-side rendered (CSR) sites, not SSR (server-side rendered) sites.
This analysis is on the client side. It lets you keep checking the APIs for anything exploitable, and it lets you find the connection layer that exchanges client data with the server.
It would struggle with HTMX too, heh.
@@abg44 This won't work even for that, because he'll be blocked by anti-bot protection when hitting non-cached data.
Nice, I ran into the same curl 403 issue while writing a Go scraper and used cf-forbidden to complete my request.
Amazing tutorial as always! Can't wait to try this in production! For any potatoes like me on older python versions here are some changes you have to make:
1. import 'from typing import Optional, List'
2. update 'rating: float | None' to 'rating: Optional[float] = None'
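For anyone who wants to see that change in context, here is a minimal sketch, assuming the model in the video is a pydantic BaseModel (the field names are just illustrative):

```python
from typing import List, Optional

from pydantic import BaseModel


class Product(BaseModel):
    name: str
    # On Python 3.10+ this could be written as: rating: float | None = None
    rating: Optional[float] = None        # pre-3.10 spelling of the same type
    sizes: Optional[List[str]] = None     # same idea for list-valued fields
```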
This technique is really for CSR sites. With more and more sites switching to SSR, it's not always possible to just go straight to the APIs.
In most cases SSR is just for the first page, so crawlers get their mouths filled with the right stuff. Subsequent pages are hydrated on the client side over the API. This is the evolved pattern.
I’m new to data scraping, so please excuse my lack of knowledge, but I wanted to ask: since SSR delivers fully rendered content directly to the client, wouldn’t it be simpler to scrape data from SSR websites compared to CSR?
@@wkoell "In most cases SSR is just for the first page".
Why talk when you have no idea what you're talking about? 😂
That's exactly what I was going to say.
@@pedrolivaresanchez No, CSR pages typically expose endpoints that return clean, structured data in formats like JSON (as demonstrated in the video), in contrast to SSR, where you have to parse through HTML (along with a bunch of unwanted CSS and JavaScript) to extract the data you want.
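To illustrate the point, a small sketch with a hypothetical endpoint and response shape; on the CSR route there is no HTML parsing at all:

```python
import requests

# CSR-style backend endpoint: the response is already clean, structured JSON
resp = requests.get("https://example.com/api/search?q=boots")   # hypothetical URL
data = resp.json()
for item in data.get("products", []):                           # hypothetical key
    print(item.get("name"), item.get("price"))
```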
Best Web Scraping Channel on TH-cam.
Just scraped a complete site with 70 lines of code.
Thank-you for taking the time to "make things a little bit bigger". So many channels have tiny fuzz in the corner of the screen and a huge empty space.
This is a masterpiece. More videos like this, John. The 20-minute videos peppering in the endpoint manipulation explanation are genius.
Actually this is the best way of scraping, and it also makes structuring the data easier for me. I've been using this method for more than a year already.
Yeah, I can't wait to see the TLS fingerprint video 😆
Great content as always, thanks! I'm looking forward to the fingerprint video. If I may make one request, I would love to see a video about decrypting the response when it is encrypted. I’m currently trying to deal with a website like that, and I believe the decryption process must be hidden somewhere in the JavaScript since I can see the data on the website but can’t figure out how to crack it. Thanks again for your videos, man.. I really appreciate them!
You'd need a secret key, and they commonly keep that hidden in .env files on the server, not just floating around in the JavaScript.
@@brendanfusik5654 thanks for your reply. My problem in the end was actually encoding (base64 plus protobuf layers), not encryption, but thanks anyway.
@@brendanfusik5654 isn't that a no-go? Pardon my ignorance
Thank you for this. Really thorough and excellent introduction into web scraping.
100% agree that front-end scraping sucks. I remember having a hard time with Python Selenium because of class names being generated inconsistently (maybe just to discourage scraping). For my last scraping project I used Deno TypeScript. The API was only returning the HTML page for the web app, so I had to install a proxy certificate on my phone and read the mobile requests that actually returned JSON objects. You have to get creative from time to time, but there is no such thing as an unscrapable API 😅. Thanks for sharing your workflow!
Scraping, btw
Very awesome, John. Insightful content, keep it up.
I try to watch almost every one of your videos. They're very helpful.
Sick video man, so easy to understand and execute. Loads of ideas coming to mind.
Thanks a lot for this John, really helpful brother. Best
Thanks for this! I thought this was yet another BeautifulSoup-type scraping video. Such a detailed explanation.
Looks like your video finally made them add some security to their API. Well done Adidas 🎉😄
I also scrape data as a living, particularly job data. This is all great information.
Another really good point is that sometimes you have to loop over tags in the front end to extract an ID for each item. Building robust solutions that can withstand changes is a learned skill.
How can I learn this and do it for a living? Can you make 20k a year?
@ronburgundy1033 If you're working for yourself, it could be difficult and take some time. You can build up a bunch of data that you have scraped and try to sell the data. You can sell your services to a company that wants something scraped. You can work for a company that does its own scraping. Honestly, there are a lot of ways to go about it, but think of it as providing a service and providing data and you can come up with some good solutions.
In regards to learning, find some sites that you want to try to scrape and start there. When you have a problem, ask on Stack Overflow or somewhere similar. There are also no-code options like UiPath.
Thanks for this! This is exactly what I needed!
Very informative, thanks! I did not know about curl cffi but definitely going to check it out now.
And here I was about to start scraping and parsing HTML tags.
I think this could be your last scraping video. Nothing more needs to be said about this topic.
Thank you!
Very interesting. I didn't know about the TLS fingerprinting (but I did know about other kinds of fingerprinting).
I agree that most sites are probably fairly easy to scrape, but some seem straight-up impossible. There was one site that I couldn't get around; its anti-bot protection was super good.
Scraping is such a deep and deceptive topic. It looks simple, but there's so much behind it.
Important to know: this only works as long as the site's backend doesn't use anti-CSRF tokens on the API requests.
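For what it's worth, when the token is simply embedded in the page you can sometimes replay it. A hedged sketch of that one common pattern (the meta-tag name, header name, and endpoint are all assumptions that vary by framework):

```python
import requests
from bs4 import BeautifulSoup

session = requests.Session()                         # keeps the session cookie
page = session.get("https://example.com/search")     # hypothetical page
soup = BeautifulSoup(page.text, "html.parser")
tag = soup.find("meta", attrs={"name": "csrf-token"})  # hypothetical token location
token = tag["content"] if tag else ""

resp = session.get(
    "https://example.com/api/search?q=boots",        # hypothetical endpoint
    headers={"X-CSRF-Token": token},                 # header name varies by framework
)
print(resp.status_code)
```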
New to your channel. I really like your videos. Straight to the point with no fluff.
I've always had a bit of a weird habit of running apps through packet sniffers just to see their API requests. I found it fascinating, although I never really did anything with them. I've noticed that many modern websites like Instagram dynamically load data in a weird way that cannot be seen using the inspector. Do you have a video on this?
John, I learned a ton from this and I had a lot of fun. Thanks
Top top level materials and content as always. Thanks a lot.
Just the cureq tip would have saved me a lot of work on figuring out the right headers and cookies for the fingerprint
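If that tip refers to curl_cffi's browser impersonation (mentioned in other comments here), a minimal sketch with a hypothetical URL:

```python
from curl_cffi import requests

# impersonate mimics a real Chrome TLS/HTTP2 fingerprint, so you don't have
# to hand-tune headers and cookies to get past the fingerprint check
resp = requests.get("https://example.com/api/search?q=boots",  # hypothetical URL
                    impersonate="chrome")
print(resp.status_code)
data = resp.json()   # assuming the endpoint returns JSON
```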
Great vid! Easy to follow, and comprehensive!
This just saved me so much python coding and HTML scraping for financial data on interactive sites with Java. God bless you :D
Yo you are the best youtuber, when it comes to scraping
Great video John, thanks!
You earned a new subscriber!
Nice work mate, cheers for sharing.
Another great video!
Thank you.
Another great video; keep up the great work.
Thanks Alan
Great information and video! I had no idea about TLS fingerprinting.
New to this channel, just wanted to say that your content is so full of quality!!
I like your get-up: the earphones, the light, and the color of your shirt. It goes well with the grey background of the command-line tool.
What do you do when a website consists of hundreds of static HTML pages held together with scotch tape and PHP?
Write something to parse and collect from the HTML, and hope to hell they don't change the format of their site.
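Something like this, as a rough sketch; the page pattern and CSS selectors are hypothetical and would need to match the actual site:

```python
import requests
from bs4 import BeautifulSoup

BASE = "https://example.com/catalogue/page-{}.html"     # hypothetical page pattern

rows = []
for page in range(1, 4):
    html = requests.get(BASE.format(page)).text
    soup = BeautifulSoup(html, "html.parser")
    for card in soup.select("div.product"):              # hypothetical selector
        rows.append({
            "name": card.select_one("h3").get_text(strip=True),
            "price": card.select_one(".price").get_text(strip=True),
        })

print(len(rows), "items scraped")
```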
Maybe build something yourself, and stop consuming other people's work?
@@darz_k.good advice man… why are we consuming this informative video. It’s not our work
@@viIden Even for a logical fallacy, that's weak.
Must do better.
@@darz_k. True, and at least he didn't ask what the use case is for data collected this way, which would actually be a question worth criticising for lacking creativity. His was valid and technical.
what do you do with the data you scrape?
Well, client-side apps with an API are really easy, like you've shown. It's usually server-side pages where you can't grab the data from any API or XHR request, so you really have to scrape whatever data sits between the HTML elements you get.
No hate, I enjoy your content, but "reverse engineering" this API isn't the right term for projects like these.
Well, he used it so clearly he can 😃
@@JakubSobczak 🤡
Fair enough, I see where you're coming from. This example was more just seeing and using rather than anything else.
I'd say you're reverse engineering the usage of the API as a client..
Hi John. Thank you so much for these videos. They enabled me to actually create something without looking at thousands of lines of HTML. One question though: there seem to be some APIs that are invisible in the inspector, even though I know they're there. Is there a way to uncover these hidden APIs?
Hi John, thank you for the great videos. I have a RAG project (an AI assistant for an English article website aimed at English language learners) where I need to use all the articles as a vector database for my RAG agent. How should I automate this for free? Is there a free AI web scraper for building an AI assistant, or is it better to code an AI scraper from scratch instead of using an external platform?
With the websites I try to scrape, I can find interesting "responses" like you mentioned by monitoring network traffic, but when I try to directly access that API request URL in my browser, I get variations of this: "message": "403 Forbidden - Valid API key is required". Does this just mean my target websites are intentionally preventing web scrapers from accessing them this way?
What I'm doing instead is using Playwright to tediously navigate through every page and scrape the content of each one...
I assume this relies on the site being an SPA and sending JSON? I'm looking at a site that seems to respond with HTML :/
I think that would also apply to SSR sites, right?
Yes, that's right, but if it's SSR, look in the page source: there's often a lot of JSON data in there, which saves parsing loads of HTML tags.
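For example, on a Next.js-style page something like this often works; the script id is common on Next.js sites, but the path into the payload is an assumption and differs per site:

```python
import json

import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/search?q=boots").text   # hypothetical URL
soup = BeautifulSoup(html, "html.parser")

script = soup.find("script", id="__NEXT_DATA__")                  # embedded JSON blob
payload = json.loads(script.string)
products = payload["props"]["pageProps"].get("products", [])      # hypothetical path
print(len(products), "products found in page source")
```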
@@JohnWatsonRooney perfect, thanks :)
Your tutorials are a great help! A lot of sites are switching to Cloudflare and they detect scraping a lot of the time. Do you have any tutorials on HLS/DASH segmented video?
Hi, I wanted to follow this tutorial, but it seems the search JSON response is no longer available. Any thoughts on how to fix that?
I really liked the video, and I noticed that a lot of it is reverse engineering of the site or its APIs. But what can I do when I run into blocks because the site uses Cloudflare, for example?
Thank you very much for your contribution!
I am a passionate web scraper as well, with a few years of experience. The hardest thing to scrape, in my view, is online Power BI tables (publicly available data); it's almost impossible to fetch the data as the backend doesn't respond. Have you cracked it? If so, could you make a video on it some day?
I've been trying to scrape some data through an API, but after each hour the cookie needed in the headers expires. How can I extract the cookie automatically instead of manually copying it from the latest curl?
As a backend developer this is honestly unintentionally hilarious. Yeah, you've really got those websites, man. 18:27 is where you make yourself sound like you don't know what you're talking about. Any backend change and it all breaks; IP lockdown and it all breaks; token authentication and it all breaks; OAuth and it all breaks. You are relying on the developer's grace to give open access, not your skill to access it. It's a public API that serves a website; you aren't hacking it by providing a new ID to get different content. This is like a kid thinking they've hacked Google by modifying the URL parameters 😂
Do you have a github with code examples?
Thanks much for this. Now I am getting {"error":"Anti forgery validation failed"} on a particular site. Any thoughts on how to work around it?
Sadly, this has an expiration date. Sites are moving more and more towards SSR and even hydration is sometimes html.
Can you please make a video of how to handle SSR scraping?
Even my grandma can do this.
This is a legit video! 💪💪
The best = John
Wow, so clean. Goodbye, Beautiful Soup.
Can you make a video explaining the waterfall at the bottom of the Fetch/XHR tab? I can see that whenever you click, it comes up as grey.
I have one more question: do we need to get permission from a website or contact them via email before web scraping their content? Sometimes their guidelines and terms of use are vague. Do you get permission for your videos? I ask because I want to use their data to feed a RAG project, as a vector data repository for semantic search with AI.
Unless OpenAI or some other LLM provider loses a lawsuit for scraping publicly available data, I doubt it should be an issue.
Yes you absolutely need to get permission. This is their site, they built it, it's their data not yours.
"How I STEAL data from 99% of sites" is the correct title for this video...
What a scum you are, John.
Build your own app instead of basing it on theft.
Absolute banger! One of the best videos I've seen on the topic. Of course I'm lazy AF and just use AI scraping, and Zyte to unblock, but this is 100% an awesome way to keep costs down to the absolute ground if you have the time to spare. (When did you get a green screen?)
@john do you have a course, and how can I get in touch with you?
How would you get TikTok ads that are in-app? The web doesn't have sponsored vids. Wondering how to scrape these.
Basically you need to run a MITM proxy to intercept the requests made by the app. I've not done it myself though.
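For anyone curious, a minimal mitmproxy addon sketch of what that intercept step could look like: route the phone's traffic through the proxy (with its certificate installed) and log JSON responses. The "api" URL filter is just a placeholder.

```python
# Run with: mitmdump -s sniff_json.py
from mitmproxy import http


class SniffJson:
    def response(self, flow: http.HTTPFlow) -> None:
        content_type = flow.response.headers.get("content-type", "")
        if "api" in flow.request.pretty_url and "json" in content_type:
            print(flow.request.pretty_url)
            print(flow.response.text[:500])   # first chunk of the JSON body


addons = [SniffJson()]
```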
amazing vid. also tell ur dog I said woof
is parsing html the best way to scrape server-rendered pages?
Incredible as always. Going to AI/DB this; a much better process than Scrapy. Cheers - 100z
What if the XHR requests are hidden? When I go to the response, it just says false.
There will be lots of XHR requests. Have a look through them all and see if any have the data you need. It doesn't work for all sites.
@JohnWatsonRooney I am finding some JSONs now, thank you. One issue I am running into is that it's not consistent. I have found about two items with this information loading, but the rest don't have it. Why might this be? I also see a GET with a 404 called "current.jwt?app_client etc." Do you have any videos on possible roadblocks to scraping sites, in the context of the type of scraping you use in the video?
What can we do with this data? Any ideas, please?
Brilliant Video!
From where do you get web scraping work?
I think 99% of people just need the UPC code, price, title, and link.
Great, next make a video on how to scrape YouTube data.
Hey guys, why don't I see search?q=boots in dev tools? I'm a newbie, thanks for helping.
Excellent Work :-)
I have never seen this approach, but it seems a lot easier than faffing about with website designs and puppeteer or selenium.
Do you have a course?
ROOOOOONEY!
Why is scraping the html not going to work at all?
It's good for small websites, but what about LinkedIn and other big data websites? You can't reverse engineer them because there's no XHR file to find. How can we reverse engineer them?
Hey John, still waiting.
Probably best to avoid scraping websites like LinkedIn unless you want to get banned from the platform or sued.
Is there a way to bypass mfa/otp when scraping?
I'm subscribing but show us your dog in the next one! 😅
Haha
Will this work on any website? Like Instagram, LinkedIn?
Nice, but this was like a scraper's dream and a very easy example.
Hey John, very good video! I was wondering if I can help you with higher-quality editing in your videos and make highly engaging thumbnails, which will help your videos get more views and engagement. Please let me know what you think?
How do you deploy a Selenium script? I couldn't do it.
What about sites without JSON, that just serve a document?
The best part of all of this is the scammers' loss aversion being used against them in the same way they use it against victims.
Unlike the normal scambait shenanigans they probably feel an immense sense of loss afterwards since they already feel like the money is theirs. Overall really entertaining
Please design a course for non-coders to dive into and learn ❤. Also, suggest which tech to start learning and where to start from.
Thank you very much ☻
Great content
Can you teach us how to scrape a website with a cart? I've been working on one for months, but I can't add a product to the cart with requests.
Selenium Wire, bro. Just sniff the JSON packets and catch them.
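A rough Selenium Wire sketch of that idea; the URL filter is hypothetical:

```python
from seleniumwire import webdriver   # pip install selenium-wire

driver = webdriver.Chrome()
driver.get("https://example.com/search?q=boots")   # hypothetical page

for request in driver.requests:
    if request.response and "api" in request.url:  # hypothetical filter
        print(request.url, request.response.headers.get("Content-Type"))
        # request.response.body holds the raw (possibly compressed) payload

driver.quit()
```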
He did a video on that
What about graphql?
I've seen it work the same way, but GQL is less common and I've got less experience with it.
I'm scraping data from a shipping line's website, but I need to log in to get the bearer token and enter that into my Python code for all the API calls to work. I need to be able to log in via Python and obtain the access token. Is this possible?
Try submitting a POST request to the auth login endpoint.
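Something along these lines; the endpoint, field names, and token key are all assumptions and need to be copied from the site's own login call in DevTools:

```python
import requests

session = requests.Session()
login = session.post(
    "https://example.com/api/auth/login",            # hypothetical login endpoint
    json={"username": "me@example.com", "password": "..."},
)
token = login.json()["access_token"]                 # hypothetical response key

resp = session.get(
    "https://example.com/api/shipments",             # hypothetical data endpoint
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.status_code)
```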
What Snozcumber said, or you can automate signing in with a headless browser and copy the cookies.
@@Pigeon-envelope Thanks dude
@@hurtado-w9c cheers, very helpful!
Aye man, don’t drop all this knowledge. You’re gonna get my bots clipped lmao
Haha 😝
Facts lol