Start building awesome projects with $15 free credits using BrightData today: brdta.com/conaticus1
no
no
no
no thanks
no
I don't know what this guy said, but I was still mind-blown by all the effort this guy puts in
Thanks so much 🙏 It would not be possible without your support
I’m impressed, can’t wait to see you build a multithreaded web server in assembly
Why do I find it super funny 😅😅😅.
@@da40au40 Me too :D
it's not impressive. Of course querying a few hundred or even a hundred thousand web pages isn't as complicated or slow a task as querying trillions of webpages.
@@DanskeCrimeRiderTV Google also wastes time deciding whether you are allowed to see certain sites or not
@@KibitoAkuya what does that have to do with anything? Google is still faster at querying trillions of results than this.
7:40 flashing those questionable websites in a sponsored video is quite the move
You scared of porn?
This is basically what we learned in my big data class, but we used map-reduce to do the TF-IDF calculations, so it's impressive you figured this out on your own
Love your content. You and your quality have really improved. Keep it up ❤
Thanks so much, your support means a lot ♥
SERBIA MENTIONED 🎉🎉🎉
@europa_the_last_battle
>goes to comments
>sees meme comment
>looks at replies
>only a LARPer replied
lol
that name rings a bell, maybe from some kind of Serbian movie?
@@MAXHASS-ph5ib tell that to the LARPer dawg
@@RealMephres tell that to yourself 😊
@@slimeyar you first
The problem is this approach is susceptible to SEO spamming/invisible SEO keywords
Yeah for sure, realistically it should be moderated based on user interaction as well
@@conaticus How would you do that?
Nice, you re-invented the lucene library
Awesome video! Will help immensely when I eventually make an AI RAG search engine. I wanna see if I can add blacklisted and whitelisted websites. That way things like useless citation sites and spam sites cannot come up, but things like Wikipedia and websites I get good results from show up more.
Let's go another conaticus video
W ad plug, it's 100% relevant and actually necessary to fulfill the premise of this vid.
3:07 Best pronunciation of Euclidean I have ever heard :P
Where?
@@CrazyDiamondo I added a timestamp
filter out JS for another 10x bandwidth savings
alternatively use an adblocker. (can puppeteer do that? It's just chromium right?)
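Puppeteer can do that via request interception — here's a rough sketch; which resource types to block is my own choice for a text-only indexer, not something from the video:

```javascript
// Decide which resource types to skip while crawling.
// Blocking scripts/images/fonts is an assumption about what a
// text-only indexer needs — tune the set for your own use case.
const BLOCKED_TYPES = new Set(['script', 'image', 'font', 'media']);

function shouldBlock(resourceType) {
  return BLOCKED_TYPES.has(resourceType);
}

// Puppeteer wiring (requires `npm install puppeteer`):
// await page.setRequestInterception(true);
// page.on('request', req =>
//   shouldBlock(req.resourceType()) ? req.abort() : req.continue());
```

Blocking at the request level saves the bandwidth entirely, whereas an adblocker still downloads whatever its filter lists miss.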
You could calculate and cache TF values on the fly so you don’t fill up your ram as quickly but still get a decent response time.
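A minimal sketch of that lazy-caching idea (function and variable names are made up for illustration):

```javascript
// Cache term-frequency maps per document so each one is computed
// at most once; later lookups hit the cache instead of RAM-heavy
// precomputation for every document up front.
const tfCache = new Map();

function termFrequencies(docId, tokens) {
  if (tfCache.has(docId)) return tfCache.get(docId); // cache hit
  const counts = new Map();
  for (const t of tokens) counts.set(t, (counts.get(t) || 0) + 1);
  const tf = new Map(
    [...counts].map(([term, n]) => [term, n / tokens.length])
  );
  tfCache.set(docId, tf);
  return tf;
}
```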
Please finish your file explorer in rust fully, because the idea of it is awesome. Love your videos, content is very engaging 🎉
Google also does the same but with distributed computing to reduce the overall time.
Just scale the database horizontally and mimic Google's approach
🔥🔥🔥
Love this dude and his video projects
🙏
Why did you choose TF-IDF instead of word2vec or any context aware model?
+1 Woule like to know
Remember, never return an over-18 site unless there's an over-18 word in the search request
Nice video and nice code, keep up the good work!
Subscribed & notifications on :)
you deserve more recognition bruh
>goes to youtube homepage
>finds this video
>yipeee
>oh
>lets try it
This is very impressive, what was the size of the database when indexing is finished? Seems like it would be quite big
great video, gave me ptsd from my information retrieval class though
Programming 🤝 martincitopants…match made in heaven
thats insane, hows this only at 12k views
very nice, built something similar for my info retrieval class. we had to use the Okapi BM25 formula for the ranking but overall very similar. scrape, tokenize, parse, inverted index, rank
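For anyone curious, Okapi BM25 is only a few lines once you have the term and document stats — a sketch using the common k1/b defaults, just one standard variant of the formula:

```javascript
// BM25 score contribution of a single query term in one document.
// f: term count in the doc, dl: doc length (tokens),
// avgdl: average doc length across the corpus,
// N: total docs, n: docs containing the term.
// k1 and b are the usual default tuning parameters.
function bm25Term(f, dl, avgdl, N, n, k1 = 1.2, b = 0.75) {
  const idf = Math.log((N - n + 0.5) / (n + 0.5) + 1);
  return (idf * (f * (k1 + 1))) / (f + k1 * (1 - b + (b * dl) / avgdl));
}
```

Compared to plain TF-IDF, the k1 term saturates the score for very frequent terms and b normalizes for document length.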
Good! The world needs a new Google Search, one that's more like how it was in the 2000s.
In high school, I could outperform search engines of the time. I don't think I can say the same for today's search engines.
Well of course it is very fast, it only has like 200 websites
This video is so good. Instant hook.
You can use a Chrome-like TLS config to not get blocked by Cloudflare in a lot of cases; using a browser for scraping isn't viable when talking about scanning the internet.
Create your own database engine for shits and giggles
B+Trees 💀
yk what would be funny? making the slowest search engine possible without like halting the program for a set time, just with maths
I believe it's "inverted indexing", as inverse indexing is something else.
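Right — for reference, an inverted index just maps each term to the set of documents containing it (a toy sketch, not the video's actual structure):

```javascript
// Build term -> set of doc ids from already-tokenized documents.
// Looking up a query term is then one map access instead of a
// scan over every document.
function buildInvertedIndex(docs) {
  const index = new Map();
  for (const [docId, tokens] of Object.entries(docs)) {
    for (const term of new Set(tokens)) { // dedupe within a doc
      if (!index.has(term)) index.set(term, new Set());
      index.get(term).add(docId);
    }
  }
  return index;
}
```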
oh my fuck i saw this on your github last night
such a cool video! i love the way how you explain what you are doing :)
random question but what is your editor font?
Appreciate it :) I'm using Jetbrains Mono it's free to download
Awesome effort ✨
Super good editing 🫡🫡🫡🫡
Would not be possible without your breathtaking animations 😄
Bro managed to memleak in js
Rewrite your genetic code in Rust.
i would rather be bug free so i will pass
Supa dope. I would like to use this search engine of yours
🍎 👀
.. Apple being like "when will it be ready?".
ain't see rust there!
🔥🔥🔥
I was looking for that algorithm and didn't know its name.
Impressive, seriously!
what are the things that I should know or learn to create projects like these?
bro thought he could scrape my web and get away with it.
Now make your own email system to go along with it. 😉
If only windows file explorer could do the same
For this we have thing named Everything :)
Why is there Rust in the thumbnail? This was written in Javascript
Used Rust for the API and TF-IDF matching - decided not to keep in much of the footage for that as it was already explained in the animations
"some fucking genius" lmao
Next time use the Common Crawl dataset ;)
How much did the scraping cost if it wasn't free?
is this engine online (or would it be able to be online for other users) so others could also enjoy it?
or was it just a peek or something you made cuz (you were bored or smth)
How did you manage to get a node.js memory leak??
how much did you pay for the web scraping service in total?
discord clone when
I found a worthy opponent
Bro sounds like WilburSoot
6:08 nahhhhhhhhhhh whats bro even searching 💀💀💀💀
Nice job :D
you seem ok
Lol. Got notif after clicking the video.
Auto solve captcha you say🧐
You should host it
@google acquire this man
Cant wait for you to rewrite JS in binary 🎉🎉
whats the link?
first time watching a vid of yours ...
i have one question : why are you vibrating ??
Cause he is vibrator
don't know either
why do disallow and user-agent matter? can't you just scrape everything?
You can but it might be illegal
Can i not use brightdata?
What did u mean by the websites u shouldn’t have searched
then brightdata makes captchas useless
Captcha's effectiveness has been in question for quite some time now.
protects against amateurs but keeps it simple enough that an expert won’t breach/destroy their data to get what they want.
how can i install this search engine?
Instructions are on the Github repos :)
What are the consequences of scrapings sites you aren't allowed to?
Probably not much on its own as long as you're not violating copyright - however it is courteous not to scrape sites forbidden by the robots.txt
wastes their resources and yours
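To make the robots.txt point concrete, here's a deliberately simplified sketch of checking a path against Disallow rules — a real parser must also handle user-agent groups, wildcards, and Allow rules, which this skips:

```javascript
// Extremely simplified robots.txt check: returns true if `path`
// starts with any Disallow prefix. Ignores user-agent groups,
// `*`/`$` wildcards, and Allow rules that a compliant parser
// (per RFC 9309) has to handle.
function isDisallowed(robotsTxt, path) {
  const prefixes = robotsTxt
    .split('\n')
    .map(line => line.trim())
    .filter(line => line.toLowerCase().startsWith('disallow:'))
    .map(line => line.slice('disallow:'.length).trim())
    .filter(prefix => prefix.length > 0);
  return prefixes.some(prefix => path.startsWith(prefix));
}
```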
Bro make a compiler programming language
how do you edit your vids
Allen uses adobe after effects for the amazing animations - I just use Davinci to cut things up 😁
@@conaticus ok thx
1:06 automatically solve captchas? i knew these things exist just to waste our time and energy
good vid
hub 🎉🎉
nice
You made a search engine for porn?! Thats disgusting... is it on GitHub?! 👀
All open source and ready to play around with 😂
Great video 😊
FYI: bright data is an Israeli company 😮
what TF is IDF ?!!
idk man but watching it makes me feel smart
Term frequency (how often a given word shows up in a specific document) times inverse document frequency (which downweights words that show up across lots of documents). The wikipedia article is pretty good: en.wikipedia.org/wiki/Tf-idf
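In code form, one common variant of the weighting looks roughly like this (the log base and the +1 smoothing differ between implementations — this is just a toy version):

```javascript
// tf-idf weight of `term` in one tokenized document, given a corpus
// (an array of tokenized documents). tf is the term's share of the
// doc; idf shrinks toward 0 as more documents contain the term.
function tfIdf(term, docTokens, corpus) {
  const tf =
    docTokens.filter(t => t === term).length / docTokens.length;
  const docsWithTerm = corpus.filter(d => d.includes(term)).length;
  const idf = Math.log(corpus.length / (1 + docsWithTerm)); // smoothed
  return tf * idf;
}
```

Intuitively: a word that appears a lot in one page but rarely elsewhere gets a high weight, while a word that appears everywhere gets a weight near zero.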
MAKE LONGER VIDEOS
Liked and subbed
da goat
Not to be the 🤓☝️ guy, but "Jana Vembunarayanan" is pronounced 'Ja' as in 'Jarvis' and 'na' as usual. Just fyi
Thank you, I'll do this if I ever pronounce it again 😂
Good
shockedd
we had a hackathon where we basically had to implement TF/IDF - also a search engine of a sort, but for files. we did the interface in python and all the mathematical processing in C++. It would have been a fun experience if not for the time limit. we struggled really hard; on test data our solution worked faster by an order of magnitude or two than most other participants, but... we somehow failed on the exam data. we failed fucking IO. and won nothing. I fucking hate hackathons since then. fuck IDF.
also maybe this happened because I had written 75% of the code, while the 4 other members did almost nothing. It was (their) responsibility to handle IO, and mine to handle the mathematics and processing. I hate working in teams. I know no one cares but I might as well just burst out all of the rage I have towards that experience. once again, fuck team work, fuck hackathons, fuck my teammates, fuck everything and everyone
skill issue
@@skorp5677 exactly
So you’re telling me I can access restricted data by telling it to, basically, ignore restrictions???
I Have been calling myself dev, admin, ownr, root in vain for far too long
0:33
🤨
Still not fast and scalable enough. The results aren't even relevant; you made Bing, not Google
wow really? I'm also surprised one single guy didn't manage to make a product rivaling Google
wow Sheldon, you got your Nobel yet?
Make a better version of VSCode.
rust is a real badass❤❤
This is just an ad for BrightData. Compared to previous videos very low effort.