Design a Basic Search Engine (Google or Bing) | System Design Interview Prep

  • Published Dec 22, 2024

Comments • 305

  • @interviewpen
    @interviewpen  1 year ago +26

    Thanks for watching! Visit interviewpen.com for more great Data Structures & Algorithms + System Design content 🧎

    • @SauravSingh-mz9hc
      @SauravSingh-mz9hc 2 months ago

      Want to give an AI interview 😢

  • @prathamshenoy9840
    @prathamshenoy9840 1 year ago +91

    needless to say.... your channel will grow superbly. this video was RECOMMENDED by youtube

    • @interviewpen
      @interviewpen  1 year ago +8

      Thanks for the kind words! Yeah we will be posting a lot more & hope to create more quality stuff.

    • @newman6492
      @newman6492 1 year ago +2

      Yes.

  • @artemvolsh387
    @artemvolsh387 1 year ago +19

    The channel is currently hugely underrated; the material is just delicious, especially for those seeking examples of complex system designs.
    Love it.

    • @interviewpen
      @interviewpen  1 year ago +1

      Thanks! We have more coming - production starts this week!

    • @artemvolsh387
      @artemvolsh387 1 year ago

      @interviewpen Great to hear!

  • @SuperGojeto
    @SuperGojeto 25 days ago

    I had a feeling I would have to crawl through YouTube videos to find a quality one on Google system design, but BRAVO! This is the first video I found, and it has the highest views. Great job!

  • @michaelmaloy6378
    @michaelmaloy6378 19 days ago

    I hope this channel grows exponentially.
    This is amazing content!
    Thank you, and well done! :)

  • @marwanezzat2637
    @marwanezzat2637 1 year ago +32

    Dude, your content quality is superior. Keep going.
    Yesterday you had 145 subscribers and now you have 245. I am so happy for you.

    • @interviewpen
      @interviewpen  1 year ago +3

      Thanks! haha - we'll be posting a lot more so stay tuned!

    • @recursion.
      @recursion. 1 year ago

      He's got 12.5k now lmaooo

    • @scorcism.
      @scorcism. 1 year ago

      18k now

    • @interviewpen
      @interviewpen  1 year ago +1

      24k now!

    • @scorcism.
      @scorcism. 1 year ago

      24.8k -> 25-05-2023

  • @nader2560
    @nader2560 2 months ago +1

    Honestly one of the best videos on the internet for system design!

  • @raghuboyapati7311
    @raghuboyapati7311 1 year ago +4

    This channel is gonna explode. The content is just too good. Thank you.

    • @interviewpen
      @interviewpen  1 year ago

      Thanks for watching - we'll be posting a lot more!

  • @dmitrydmitriev2554
    @dmitrydmitriev2554 1 year ago +5

    Greetings, I came to YouTube to watch a video about SQL optimization and your channel was suggested, so I started watching this video. It is amazing; the way you explain is brilliant and outstanding. Very clear, full of information, not boring from being too obvious, not difficult from being too sophisticated and convoluted: a golden middle.
    Thank you!

    • @interviewpen
      @interviewpen  1 year ago +1

      thanks for the kind words! and thanks for watching - more coming

  • @juanitoMint
    @juanitoMint 9 months ago

    Really appreciate the back-of-the-envelope calculations in between!
    Great work!

    • @interviewpen
      @interviewpen  9 months ago +1

      Thanks, glad you enjoyed it!

  • @lucasoliveira-xs5yh
    @lucasoliveira-xs5yh 1 year ago +31

    Awesome content! I liked seeing some data structures (such as queues and heaps) used in practice; simple examples are good at the beginning, but they don't hold up over time. Continue with this; really a hidden gem, this channel.

    • @interviewpen
      @interviewpen  1 year ago

      thanks for watching - more coming

  • @williefr
    @williefr 1 year ago +15

    I really enjoyed the video! Thank you guys for taking your time and posting it, it was very entertaining and educational. Best regards

    • @interviewpen
      @interviewpen  1 year ago +1

      Thanks! Stay tuned for more content

  • @vladd3172
    @vladd3172 1 year ago +7

    Clean, clear, efficient. ❤
    I’d love to see more videos like this from you!

    • @interviewpen
      @interviewpen  1 year ago

      Will do, thanks for watching!

  • @carlboneri7772
    @carlboneri7772 1 year ago +32

    One of the best walkthroughs I've ever seen, regardless of the topic or technical depth. Superb work man.

    • @interviewpen
      @interviewpen  1 year ago

      thanks for commenting & the nice words - more videos coming soon!

  • @ĀRYAN_GENE
    @ĀRYAN_GENE 1 year ago

    Wow, loved it.
    By 1:48 I was in love, because you laid out everything, every small detail required, plus the planning. This makes understanding a lot easier than jumping directly into code and explaining on the go.

  • @MrRetroboyish
    @MrRetroboyish 1 year ago +3

    Only a third of the way through and already one of the best I've seen. Focused, logical leaps from topic to topic, minimal digressions. Keep it up

    • @interviewpen
      @interviewpen  1 year ago

      Thanks! and thanks for watching

  • @linonator
    @linonator 1 year ago +2

    Wooooah!!!! This is what I needed in my life 😢. I’m now complete

    • @interviewpen
      @interviewpen  1 year ago

      awesome haha - thanks for watching

  • @JM_utube
    @JM_utube 1 year ago +1

    I really appreciate this video! Information was clear and concise. Levels of depth are perfect for the viewer to be able to continue educating themselves about any of the topics mentioned here. Thank you so much

    • @interviewpen
      @interviewpen  1 year ago

      Sure - thanks for watching!

  • @BraisonsCrece
    @BraisonsCrece 1 year ago +2

    keep it going!
    High quality content and a very solid platform! Without a doubt, I will buy the subscription soon and start learning!
    a hug from a new Spanish subscriber

    • @interviewpen
      @interviewpen  1 year ago +1

      cool! Thanks for watching. Let us know in Discord if u need any help.

  • @strawberriesandcream2863
    @strawberriesandcream2863 1 year ago

    Amazing video, thanks 👏👏 I like how you guys dig deep into complex aspects of every system that other content just glosses over

  • @frankguo1748
    @frankguo1748 10 months ago

    Really clear, concise and efficient explanation and narrative. 👍

  • @andydataguy
    @andydataguy 1 year ago +29

    Your course looks great. I love that you have a teaching assistant and the explaining styles are awesome. Will try it out! Only thing is I really wish you supported Rust 🦀🙏🏾

    • @interviewpen
      @interviewpen  1 year ago +5

      Thanks! From an engineering angle, we can add language support in under an hour and push the change in a day. Just let us know in Discord.

    • @timSquash
      @timSquash 1 year ago

      Yeah, I've just started learning Rust. It's such a cool language

  • @marko3808
    @marko3808 1 year ago +2

    This is amazing! I honestly can't wait to look into your other videos!

    • @interviewpen
      @interviewpen  1 year ago +1

      More videos coming! Thanks for watching.

    • @marko3808
      @marko3808 1 year ago +1

      @interviewpen eagerly waiting!

  • @s8x.
    @s8x. 1 year ago +1

    Wow, this is information all for free. Thank you for making this video

    • @interviewpen
      @interviewpen  1 year ago

      thanks for watching - more coming

  • @Roshen_Nair
    @Roshen_Nair 1 year ago +2

    Loved the video! A video I'd love to see in the future is system design for video streaming applications, e.g. YouTube, Netflix.

    • @interviewpen
      @interviewpen  1 year ago +1

      Will do - thanks for watching

  • @pingqiu7318
    @pingqiu7318 1 month ago

    Very good video! Thanks for sharing! One tiny thing: I would prefer NFS over a blob store like S3 to keep the downloaded pages. A webpage keeps references to lots of resources, like JSON/CSS/JavaScript files. Bold/highlighted words are more important than plain text, and that's very important information for ranking. If we don't keep the CSS files, that information is lost, so we need to keep them together with the HTML file. It would be very complex to track that information across a webpage's multiple files in metadata if we use S3, so I suggest just downloading the webpage with everything into a folder on NFS and letting the indexer help itself.

  • @henrythomas7112
    @henrythomas7112 9 months ago

    I extremely like the video, man. Very helpful and informative. Thank you very much. It is presented so well too. Great, positive work.

    • @interviewpen
      @interviewpen  9 months ago

      Thanks, glad it helped you!

  • @johnny_silverhand
    @johnny_silverhand 1 year ago +3

    Exceptional way of explaining things, I'm subscribed to you guys now

  • @bandr-dev
    @bandr-dev 1 year ago +40

    lmao if this was the interview question I'd just not do it.. but I'm not there yet

    • @interviewpen
      @interviewpen  1 year ago +6

      we'll get u there 🧎🧎 be brave

    • @theuniverse2268
      @theuniverse2268 1 year ago

      @interviewpen Only slaves need to do this.
      It's not worth it 🤷‍♂️ If you know how to build a search engine you're already in the top 1% of the human population; just make your own company and forget about a job lol

  • @gmanonDominicana
    @gmanonDominicana 1 year ago +1

    I was looking for something like this for a while. This content is worth the time spent.

  • @sinnloses746
    @sinnloses746 1 year ago

    Second video I've watched from you. It's so good, thank you

  • @yipmong
    @yipmong 7 months ago

    I am impressed, you really deserved my sub❤

  • @dave6012
    @dave6012 1 year ago

    Dang, I never thought I could understand this whole process. I typically wrote off most of the implementation details as a black box, but this seems halfway approachable.
    Has me thinking a lot about single page applications, and how the crawlers handle them. A similar type of video would be awesome if you had it.

    • @interviewpen
      @interviewpen  1 year ago

      Glad you liked it! Yes, SPAs are notoriously hard to optimize for crawlers. However, strategies like static rendering and routing can make SPAs look more like typical websites to a crawler. I'm not an SEO expert though :)

    • @dave6012
      @dave6012 1 year ago

      @interviewpen haha, I appreciate the legal disclaimer

  • @ekanshmishra4517
    @ekanshmishra4517 1 year ago +4

    Never saw such a difficult problem explained so easily❤️ subscribed instantly
    Love from India❤

  • @syn3rman65
    @syn3rman65 1 year ago +1

    Holy shit I'm glad I found this before you've blown up 🙌

    • @interviewpen
      @interviewpen  1 year ago

      Thanks for watching! More coming!

  • @sperpflerperberg8147
    @sperpflerperberg8147 1 year ago +3

    This channel is amazing

    • @interviewpen
      @interviewpen  1 year ago +1

      thanks - we have a lot more coming!

  • @tofahub
    @tofahub 1 year ago +7

    How does sorting by frequency give us the most popular results? The frequency is the number of times the word occurs on that specific page. A word appearing many times on a page, like a common word does, doesn't make the page the most popular search result.

    • @interviewpen
      @interviewpen  1 year ago +6

      You're completely right! Google uses the PageRank algorithm in addition to a more advanced index to handle that--we glossed over this for our "basic" search engine since it's more of an algorithms problem than a system design one. Regardless, there's some cool infrastructure that goes into calculating PageRank at scale so that's certainly something to look into if you're curious. Thanks for watching!

    • @esm2000
      @esm2000 1 year ago +5

      ironically sorting by frequency was the original implementation of the page rank algorithm, long before it became more advanced

    • @H3llsHero
      @H3llsHero 1 year ago

      You can look up tf-idf (term frequency–inverse document frequency) to learn more about how common "filler" words are filtered out in a basic search engine.
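
To make the tf-idf idea above concrete, here's a minimal sketch (illustrative code only, not from the video; the toy corpus is made up):

```python
import math

def tf_idf(term, doc, corpus):
    # Term frequency: how often the term appears in this document
    tf = doc.count(term) / len(doc)
    # Inverse document frequency: penalize terms common across the corpus
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / (1 + df))
    return tf * idf

docs = [
    ["the", "cat", "sat"],
    ["the", "dog", "ran"],
    ["the", "the", "the"],
]
```

Even though "the" is at least as frequent as "cat" in the first document, it appears in every document, so its idf drags its score below that of "cat".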

  • @Pankaj.Pilkhwal
    @Pankaj.Pilkhwal 1 year ago +1

    really wow!!!!!!! amazing content.

    • @interviewpen
      @interviewpen  1 year ago +1

      thanks, more coming soon!

  • @Sgene9
    @Sgene9 1 year ago +1

    This was amazing. Now I want to try to build a search engine!

    • @interviewpen
      @interviewpen  1 year ago

      lets gooooo - thanks for watching

  • @notenlish
    @notenlish 7 months ago

    Great video man, wish I had found this before

  • @MuscleTeamOfficial
    @MuscleTeamOfficial 1 year ago +2

    This is high quality content.

    • @interviewpen
      @interviewpen  1 year ago

      Thanks, glad you enjoyed it!

  • @chandrasekharmandapalli9181
    @chandrasekharmandapalli9181 1 year ago +1

    Great work buddy....very detailed explanation... cheers

  • @govardhannarayan3907
    @govardhannarayan3907 1 year ago

    Great video..
    Keep it up folks.

  • @basharatwani3948
    @basharatwani3948 1 year ago

    Thank you for sharing; good content and good work. I'd suggest starting with core functional and non-functional requirements, then capacity-planning numbers and the reads/writes per second needed to support the core functionality. Otherwise it seems we go straight into the solution, which is OK, but some may want to see how we think through ambiguity and the problem space and have a conversation about it with the interviewer. Maybe also consider handling copyright issues when extracting and rendering HTML, a dedupe service and Bloom filter, how nested cyclic links within a site are handled, caching strategy, etc.

    • @interviewpen
      @interviewpen  1 year ago

      Thanks for watching. You're right, addressing the requirements ahead of time is very important in this process, and our more recent videos tend to be better about that :)

  • @khuntasaurus88
    @khuntasaurus88 1 year ago +1

    Well that's an instant sub!!

  • @maharshiguin7813
    @maharshiguin7813 1 year ago +2

    Great video, really like your way of explaining stuff.

    • @interviewpen
      @interviewpen  1 year ago

      thanks - more videos on the way!

  • @jeromeeusebius
    @jeromeeusebius 8 months ago

    Thank you for sharing the great design prep video. What tool or combination of tools/software is used to create the figures (with the black background)? Thanks

    • @interviewpen
      @interviewpen  8 months ago

      Thanks for watching! We use GoodNotes on an iPad.

  • @BrianStDenis-pj1tq
    @BrianStDenis-pj1tq 1 year ago

    This is great content. Regarding shingles, that takes a LOT to implement - lots of space and lots of CPU to compare them. The idea of the personalized recommendations is a huge success Google has and is surely difficult to implement considering the entire search, rank (personalize) and retrieve has to be done in a second.

    • @interviewpen
      @interviewpen  1 year ago

      Thanks! You're exactly right--Google has built an incredibly impressive system :)

  • @theprovego2934
    @theprovego2934 1 year ago

    2:00 This is how to make an ad, good job!

  • @rockosaji9400
    @rockosaji9400 1 year ago +2

    Wow...Super impressed

    • @interviewpen
      @interviewpen  1 year ago

      Thanks! A lot more coming! We will be posting consistently.

  • @dibll
    @dibll 1 year ago +3

    Could someone pls explain what text and hash indexes are? Are they separate DBs storing partial information compared to the main DB, or something else? Thanks!

    • @interviewpen
      @interviewpen  1 year ago

      You're exactly right. You can think of global indexes as a copy of the database but organized onto nodes differently, and the records generally only include enough data to be able to look up the corresponding record in the primary.

  • @christhornham
    @christhornham 1 year ago

    Outstanding! Thank you!

  • @chenhaofeng4842
    @chenhaofeng4842 9 months ago

    Really appreciate it. I have several questions about the politeness part. If there are 10k hosts, are we supposed to have 10k queues for politeness? Say one host has only 3 URLs; after all 3 URLs are visited, are we supposed to delete the idle queue? And each time we see a new host, are we supposed to create a new queue?

    • @interviewpen
      @interviewpen  9 months ago +1

      Yep, we'd need one queue for each host. There'd probably be far more than 10k in fact! Of course, these would simply be logical partitions residing on a far smaller set of physical machines. We would need to add a queue when a host is visited for the first time (this would be trivial since a queue is just a logical abstraction), but we probably wouldn't need to worry about deleting since we'll keep re-crawling hosts. Hope that helps!
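
The lazily-created per-host queues described above can be sketched like this (a toy illustration; `PolitenessRouter` and its methods are invented for this example):

```python
from collections import defaultdict, deque
from urllib.parse import urlparse

class PolitenessRouter:
    """One logical FIFO queue per host, created lazily the first time a host is seen."""
    def __init__(self):
        # defaultdict makes queue creation trivial: a new host gets a fresh deque
        self.queues = defaultdict(deque)  # host -> queue of URLs

    def add(self, url):
        host = urlparse(url).netloc
        self.queues[host].append(url)

router = PolitenessRouter()
router.add("https://example.com/a")
router.add("https://example.com/b")
router.add("https://other.org/x")
```

As the reply notes, these queues are logical partitions; idle ones don't need deleting since hosts keep getting re-crawled.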

  • @PouyanNosrati
    @PouyanNosrati 1 year ago

    It was an incredibly detailed explanation

  • @premparihar
    @premparihar 1 year ago +1

    The video is really awesome and helpful ❤.

  • @yourlogarithm8607
    @yourlogarithm8607 1 year ago

    Could you explain something I'm confused about here, 13:35: when the router selects an element from the priority queue, it adds it to the politeness queue. By doing that, wouldn't we lose the initial prioritization, given that the politeness queues are organized just by domain?

    • @interviewpen
      @interviewpen  1 year ago

      Sure. The router uses a weighted random algorithm to select a priority queue, so the higher priority queues are more likely to be selected. This ensures that higher priority pages are crawled more frequently, regardless of what politeness queue they end up in. Thanks!
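
The weighted random selection described in that reply might look roughly like this (a sketch under the assumption that each priority level has a fixed weight; the names and weights are illustrative):

```python
import random

def pick_priority_queue(num_queues, weights, rng=random):
    # Weighted random choice: queue 0 (highest priority) has the largest
    # weight, so it is selected most often, but every queue still gets turns,
    # so low-priority pages are never starved.
    return rng.choices(range(num_queues), weights=weights, k=1)[0]

rng = random.Random(42)  # seeded for reproducibility
picks = [pick_priority_queue(3, [8, 3, 1], rng) for _ in range(1000)]
```

Over many draws, queue 0 is chosen roughly 8x as often as queue 2, which is what keeps high-priority pages crawled more frequently regardless of which politeness queue they land in.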

  • @rembautimes8808
    @rembautimes8808 9 months ago

    One application of this solution is for horizon risk scanning. The use case is that a large multinational corporation wants to have an idea of new risks which are emerging and adopting this approach allows them to have a traceability back to the web source. Of course they won’t be crawling 100M pages but maybe 100k pages.

    • @interviewpen
      @interviewpen  9 months ago

      Interesting! Thanks for watching :)

  • @yuganderkrishansingh3733
    @yuganderkrishansingh3733 1 year ago +2

    I don't think the schema design for the "search for a word" query pattern is included. The video says there is a text index, but I don't see "word" or "frequency" at th-cam.com/video/0LTXCcVRQi0/w-d-xo.html
    I think the schema needs to include these so the index can automatically build a table on top of them.
    Also, about the router routing URLs to the correct queue: it's mentioned that if there is no queue corresponding to a domain, the URL is added to an "empty" queue. But then what about updating the heap and selector?
    Also, the mapping of a domain to a queue has to be stored somewhere, most likely in a Redis cache, since it seems to change a lot when queues become empty.

    • @interviewpen
      @interviewpen  1 year ago +2

      1. The "site content" field in the schema should hold the full text of the site, so words and their associated frequencies can be calculated when records are added/updated, and this data is what propagates to the text index.
      2. Yep, when a new host is added to the second set of queues, the router is responsible for adding that host to the heap so the selector knows about it.
      3. The host-to-queue mapping would be stored in the router, that way the router is able to quickly check which queue the next URL should be added to. It's worth noting that the router is low-traffic enough (

    • @yuganderkrishansingh3733
      @yuganderkrishansingh3733 1 year ago

      @interviewpen For point 1, you mentioned that the word and frequency are calculated when a record is added or updated. But then the schema also needs the corresponding attributes so they can be added to the database when a record is added or updated.
      As per timestamp 3:32, the schema doesn't contain word or frequency. Am I missing something? Might be something dumb, apologies.

  • @maksym7703
    @maksym7703 1 year ago +2

    Man, this is such good content. Who are you personally, btw? :)

    • @interviewpen
      @interviewpen  1 year ago +1

      The instructor is named Bobby - I am Benyam, I do our Data Structures & Algorithms. Thanks for watching.

  • @langtuyetvuanh1999
    @langtuyetvuanh1999 1 year ago +1

    Great video, but can I ask: can we use Elasticsearch instead? I'm not an expert, but I see a lot of systems using Elasticsearch to optimize their query performance.

    • @interviewpen
      @interviewpen  1 year ago +1

      Glad you liked it! ElasticSearch actually uses a very similar data structure to the "text index" we described, and this could certainly be swapped out for our database in this system. It's just about tradeoffs between ease of use in a managed service and flexibility.
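
For reference, a toy version of the "text index" data structure that reply compares to Elasticsearch (an inverted index mapping each word to per-URL frequencies; purely illustrative, not the video's schema):

```python
from collections import defaultdict, Counter

class InvertedIndex:
    """Toy inverted index: word -> {url: frequency}."""
    def __init__(self):
        self.index = defaultdict(Counter)

    def add(self, url, text):
        # Naive tokenization; real systems also stem, normalize, and filter
        for word in text.lower().split():
            self.index[word][url] += 1

    def search(self, word):
        # Return URLs sorted by descending frequency of the word
        return [url for url, _ in self.index[word.lower()].most_common()]
```

Lucene (which Elasticsearch builds on) maintains essentially this mapping, plus positions, scoring, and on-disk segment structures.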

  • @FeyroozeCode
    @FeyroozeCode 1 year ago

    Very Simple and Good

  • @moacir8663
    @moacir8663 1 year ago +2

    I'd like to watch a deeper explanation of how to search for data in a sharded database like you explained.

    • @interviewpen
      @interviewpen  1 year ago +2

      we'll cover sharding in-depth soon! thanks for watching!

    • @moacir8663
      @moacir8663 1 year ago

      @interviewpen I'm looking forward to watching it.

  • @SaveCount-bh8tp
    @SaveCount-bh8tp 6 months ago

    Your Channel is very good

  • @FranciscoGomez-tw1ii
    @FranciscoGomez-tw1ii 1 year ago +2

    Amazing!!!

  • @scottthornton4220
    @scottthornton4220 10 months ago

    Love the video, but I'm perplexed as to why you'd want to store the site contents. I figure you would just scrape them for word frequencies to match against queries later?

    • @interviewpen
      @interviewpen  10 months ago +1

      Good question--we store the site contents so we don't have to scrape them again later if we want to change our algorithms. Google does this too! Thanks for watching.

  • @nikitaluparev6478
    @nikitaluparev6478 10 months ago

    While explaining the schema, you mentioned a hash as a way to make sure something is unique. Can you explain in detail how a hash helps with that?

    • @interviewpen
      @interviewpen  10 months ago

      Sure--hashing a large piece of data (such as a webpage) yields a far shorter, fixed-length string that uniquely represents that data and can be stored in a database. By checking if this hash already exists in our database, we can effectively check if the webpage has already been seen without having to compare the page content against petabytes of other pages.
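
A minimal sketch of that hash-based duplicate check (illustrative only; a real crawler would store fingerprints in a database or hash index, not an in-memory set):

```python
import hashlib

def page_fingerprint(content: str) -> str:
    # SHA-256 yields a fixed-length 64-hex-char digest that uniquely
    # (for practical purposes) identifies the page content
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

seen = set()  # stand-in for the hash index in the video's design

def is_duplicate(content: str) -> bool:
    h = page_fingerprint(content)
    if h in seen:
        return True
    seen.add(h)
    return False
```

Comparing 64-character digests is what lets the system skip comparing raw page content against petabytes of stored pages.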

  • @VermeilChan
    @VermeilChan 1 year ago +1

    The amount of time u put in this video is crazy 😭
    Keep it up 😼😼

    • @interviewpen
      @interviewpen  1 year ago

      Thanks, more is on the way :)

  • @Vinod_Kumar827
    @Vinod_Kumar827 1 year ago

    Very nicely explained

  • @dombat44
    @dombat44 1 year ago

    Great content, yours are the best system design interview mocks I've seen on here. Could you do one on a RSS feed website?

    • @interviewpen
      @interviewpen  1 year ago

      Thanks! Sure, we'll add it to the backlog :)

  • @CertificationTerminal
    @CertificationTerminal 1 year ago

    Awesome!

  • @wayneisthebestable
    @wayneisthebestable 1 year ago

    Great video, but I'm curious: is it really necessary to sort by frequency of a word in a URL?
    I think most well-designed URLs won't have a keyword like "cat" appear more than once.
    Also, if there's "cat" and "dog" in a URL, should there be two records for that URL?

    • @interviewpen
      @interviewpen  1 year ago

      No, we're searching the content of the pages here, not the url. Thanks for watching!

  • @amigos786
    @amigos786 1 year ago +1

    Hey, awesome video. Just subbed. What app are you using on the iPad for this?

    • @interviewpen
      @interviewpen  1 year ago +1

      GoodNotes - thanks for watching!

  • @andrewkamoha4666
    @andrewkamoha4666 1 year ago +1

    Piece of cake !!!

    • @interviewpen
      @interviewpen  1 year ago +1

      ye

    • @andrewkamoha4666
      @andrewkamoha4666 1 year ago +1

      @interviewpen I'm gonna build one now to smash Google!!! kkk :-D

  • @shs4293
    @shs4293 1 year ago +1

    Instead of sharding right off the bat, you could use partitioning. Sharding should be the last resort.

    • @interviewpen
      @interviewpen  1 year ago +1

      Good point, but 31TB of metadata is a lot to store on one node so it's necessary in this case to scale horizontally. Our query patterns work very nicely here (always single-record reads/writes by a unique key), so it shouldn't be a problem. Thanks for watching!

  • @danielghani3903
    @danielghani3903 1 year ago +1

    thank you, ma'am

    • @interviewpen
      @interviewpen  1 year ago

      sure - thanks for watching!

  • @cankuter
    @cankuter 11 months ago

    Very nice walkthrough, appreciate the effort. I have a question though, maybe a stupid one. I didn't quite get whether "heap" means the heap data structure or heap as a general memory space, as it's called in Java. If it's the data structure, wouldn't it be very inefficient to search it for the pointer to the politeness queue you're looking for? From your explanation I'm inferring that this heap is more like a memory space and works more like a hash map. Is that correct?

    • @interviewpen
      @interviewpen  11 months ago

      We did mean the heap data structure--this works very efficiently here since the earliest timestamp will always be at the top of the heap. The heap just tells us which politeness queue to look at next; no searching necessary. Thanks!
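
A tiny illustration of the point in that reply, using Python's `heapq`: the entry with the earliest timestamp is always at the root, so no searching is needed (timestamps and hostnames here are made up):

```python
import heapq

# Each entry: (next_allowed_crawl_time, host). heapq keeps the smallest
# timestamp at index 0, so the selector pops the next eligible host in
# O(log n) with no scanning.
ready = []
heapq.heappush(ready, (105.0, "b.com"))
heapq.heappush(ready, (100.0, "a.com"))
heapq.heappush(ready, (110.0, "c.com"))

next_time, next_host = heapq.heappop(ready)  # always the earliest entry
```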

  • @EntertainerOnline
    @EntertainerOnline 5 months ago

    I'm not sure I understood correctly, but why are we not using an ES cluster to speed up our search? No DB can be as efficient as ES when it comes to search.

    • @interviewpen
      @interviewpen  5 months ago

      ElasticSearch is essentially a sharded DB with full-text search at its core, so a properly architected database will do the same thing. But you're absolutely right: ES is certainly a viable option if we want a pre-built solution.

  • @dzuchun
    @dzuchun 1 year ago +1

    I'm having trouble finding the shingles technique the author mentioned near the end. Can anyone give some sort of reference?

    • @interviewpen
      @interviewpen  1 year ago

      Thanks for watching! It's a bit math heavy but here's a reference for shingling: nlp.stanford.edu/IR-book/html/htmledition/near-duplicates-and-shingling-1.html
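
In the spirit of that reference, a minimal sketch of word-level shingling with Jaccard similarity for near-duplicate detection (illustrative only; production systems typically hash the shingles and use MinHash to avoid comparing full sets):

```python
def shingles(text, k=3):
    # The set of all k-word shingles (k-grams of words) in the text
    words = text.split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    # Similarity of two shingle sets; near-duplicate pages score close to 1
    return len(a & b) / len(a | b)
```

For example, "the quick brown fox" and "the quick brown cat" share one of three distinct shingles, so their similarity is 1/3, while identical texts score exactly 1.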

  • @Rockyzach88
    @Rockyzach88 1 year ago +2

    *ChatGPT giving me a "rudimentary" outline of some python code for this explanation based on the youtube transcript. What do you think:*
    *python*
    class API:
        def __init__(self):
            self.load_balancer = LoadBalancer()
            self.text_index = TextIndex()
            self.metadata_db = MetadataDatabase()
            self.blob_store = BlobStore()

        def search(self, query):
            urls = self.text_index.search(query)
            results = []
            for url in urls:
                metadata = self.metadata_db.get_metadata(url)
                page_content = self.blob_store.get_page_content(url)
                results.append((metadata, page_content))
            return results

    class LoadBalancer:
        def __init__(self):
            self.api_servers = []

        def distribute(self, query):
            api_server = self.select_api_server()
            return api_server.search(query)

        def select_api_server(self):
            # Logic to select API server based on load balancing
            pass

    class TextIndex:
        def search(self, query):
            # Implement search logic to return URLs based on query
            pass

    class MetadataDatabase:
        def get_metadata(self, url):
            # Retrieve metadata for the given URL
            pass

    class BlobStore:
        def get_page_content(self, url):
            # Retrieve page content for the given URL
            pass

    class Crawler:
        def __init__(self, url_frontier):
            self.url_frontier = url_frontier
            self.hash_index = HashIndex()
            self.metadata_db = MetadataDatabase()
            self.blob_store = BlobStore()

        def crawl(self):
            while True:
                url = self.url_frontier.get_next_url()
                if self.check_robots_txt(url):
                    page = self.fetch_page(url)
                    if not self.is_duplicate(page):
                        self.save_page(page)
                        new_urls = self.extract_urls(page)
                        self.url_frontier.add_urls(new_urls)

        def check_robots_txt(self, url):
            # Check robots.txt for the given URL
            pass

        def fetch_page(self, url):
            # Download the page content for the given URL
            pass

        def is_duplicate(self, page):
            # Check if the page is a duplicate using HashIndex
            pass

        def save_page(self, page):
            # Save the page content and metadata
            pass

        def extract_urls(self, page):
            # Extract new URLs from the page content
            pass

    class HashIndex:
        def check_duplicate(self, page):
            # Check if the page content is a duplicate
            pass

    class URLFrontier:
        def __init__(self):
            self.priority_queues = []
            self.host_queues = []
            self.heap = []

        def get_next_url(self):
            # Get the next URL based on priority and politeness
            pass

        def add_urls(self, urls):
            # Add new URLs to the appropriate queues
            pass

    # Initialize components
    url_frontier = URLFrontier()
    api = API()
    crawler = Crawler(url_frontier)

    # Start crawling and serving API requests
    crawler.crawl()

    • @interviewpen
      @interviewpen  1 year ago

      Might be a little more complex in practice :D Thanks for watching!

  • @eazypeazy8559
    @eazypeazy8559 1 year ago +1

    cool guide, thanks

    • @interviewpen
      @interviewpen  1 year ago

      sure - thanks for watching, more videos coming

  • @pieter5466
    @pieter5466 1 year ago

    4:22: does anyone know how the database storage size estimates (in bytes) were determined?

    • @interviewpen
      @interviewpen  1 year ago

      The sizes for each field are simply estimates based on the data being stored.

    • @pieter5466
      @pieter5466 1 year ago

      @interviewpen Thanks!

  • @TarrenHassman
    @TarrenHassman 1 year ago +1

    Also important to remember that search engines are moving to vector databases with machine-learning matrices

  • @dibll
    @dibll 1 year ago +1

    Informative video! Very nicely explained. Could you pls do one on distributed key/value stores?

    • @interviewpen
      @interviewpen  1 year ago

      thanks for watching - yes that's in our backlog

  • @tirthdoshi1337
    @tirthdoshi1337 1 year ago

    Can someone explain how the priority queue really works for choosing the next element? Is it a min priority queue where the top element has the earliest time, and we compare the current time against that minimum, process the element, then multiply the rendering time by 10 and put it back into the priority queue? In that case, if two elements have the same time in the priority queue, how do we choose which one to pick?

    • @interviewpen
      @interviewpen  1 year ago

      Yep you got it right, we’re looking for the earliest timestamp. If two elements have the same timestamp, it doesn’t matter which one we pick. Thanks!
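
The pop-process-requeue cycle described in that exchange can be sketched like this (illustrative; `crawl_next` and the 10x politeness factor follow the question's description, not any real crawler's API):

```python
import heapq
import time

def crawl_next(heap, fetch, politeness_factor=10):
    # Pop the host with the earliest allowed time; if it's due, crawl it and
    # push it back with a delay proportional to how long the fetch took.
    next_time, host = heapq.heappop(heap)
    now = time.monotonic()
    if next_time > now:
        heapq.heappush(heap, (next_time, host))  # not due yet, put it back
        return None
    start = time.monotonic()
    fetch(host)
    elapsed = time.monotonic() - start
    # Slow hosts get revisited less often: wait 10x the fetch time
    heapq.heappush(heap, (now + politeness_factor * elapsed, host))
    return host
```

Ties in `next_time` genuinely don't matter here, which matches the reply: either entry is an equally valid next pick.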

  • @ahmad-ali14
    @ahmad-ali14 1 year ago +1

    Thanks

  • @andrecorreia8568
    @andrecorreia8568 1 year ago

    Thanks, great video, but I have one comment. You say you're going to cache the robots.txt file. How does Google's system then know that robots.txt was updated? From what you mentioned, you always take it from cache as long as it's there, but you didn't mention cache invalidation.

    • @interviewpen
      @interviewpen  1 year ago

      Thanks for watching! Really good point. In this system it's not critical for robots.txt to be constantly up to date, but there should definitely be some TTL set in the cache to make sure the data is re-fetched periodically.
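
A TTL cache along those lines might look like this (a sketch; `RobotsCache`, its parameters, and the injectable clock are invented for illustration and testability):

```python
import time

class RobotsCache:
    """Cache robots.txt per host with a TTL so stale entries are re-fetched."""
    def __init__(self, fetch, ttl_seconds=86400, clock=time.monotonic):
        self.fetch = fetch          # callable: host -> robots.txt contents
        self.ttl = ttl_seconds
        self.clock = clock
        self.entries = {}           # host -> (fetched_at, robots_txt)

    def get(self, host):
        now = self.clock()
        hit = self.entries.get(host)
        if hit and now - hit[0] < self.ttl:
            return hit[1]           # fresh cache hit
        robots = self.fetch(host)   # miss or expired: re-fetch and store
        self.entries[host] = (now, robots)
        return robots
```

Within the TTL, repeated lookups never touch the network; once an entry expires, the next lookup transparently re-fetches it.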

  • @El_Remolino19
    @El_Remolino19 ปีที่แล้ว

    what are you using to draw on and the software to make this? i find it super helpful and would like to make my own videos using it, thank you

    • @interviewpen
      @interviewpen  ปีที่แล้ว

      Cool, we're using GoodNotes on an iPad. Thanks!

  • @savanpatel4938
    @savanpatel4938 ปีที่แล้ว +1

    awesome

    • @interviewpen
      @interviewpen  ปีที่แล้ว

      thanks for watching - more videos coming soon!

  • @NitinVarmaManthena
    @NitinVarmaManthena 9 หลายเดือนก่อน

    What software do you use for the workflow UI and the highlighter pen?

    • @interviewpen
      @interviewpen  9 หลายเดือนก่อน

      We use GoodNotes on an iPad. Thanks!

  • @satyamkumaryadav1560
    @satyamkumaryadav1560 ปีที่แล้ว +3

    Which app you are using for writing?
    BTW quality content 👌🏿

    • @interviewpen
      @interviewpen  ปีที่แล้ว +1

      We use GoodNotes on an iPad 👍

  • @mus_g117
    @mus_g117 ปีที่แล้ว

    nice content thank you

  • @CanRau
    @CanRau 11 หลายเดือนก่อน

    Is there some kind of open dataset to get the database going without having to crawl the whole web from 0?

    • @interviewpen
      @interviewpen  11 หลายเดือนก่อน +1

      There is! Check out www.commoncrawl.org/ (just one example)

    • @CanRau
      @CanRau 11 หลายเดือนก่อน

      @interviewpen ooooh that's incredible thank you so much 🙏🥰

  • @darkwoodmovies
    @darkwoodmovies ปีที่แล้ว +1

    The fact that when you crunch the numbers, the metadata is only

  • @ShueFig
    @ShueFig ปีที่แล้ว +1

    recognised the B2B SWE voice :)

  • @Tony-dp1rl
    @Tony-dp1rl ปีที่แล้ว

    Seems like a huge amount of complexity to avoid crawlers hitting the same URL. I would take the approach that they will rarely select the same URL anyway, so just have at it and wear the occasional double-up for the massive speed increase it gives you in the 99% case - especially given the huge number of URLs something like Google must be crawling.

    • @interviewpen
      @interviewpen  ปีที่แล้ว

      When the crawler discovers a new site, it's pretty likely that several pages on that site would line up close together in the URL frontier. At the scale of thousands of crawlers, we'd basically be DDOSing every new site! But you're absolutely right that it's an important tradeoff to consider. Thanks!

  • @congminhluu5068
    @congminhluu5068 ปีที่แล้ว

    For resolving the politeness issue, why a heap?

    • @interviewpen
      @interviewpen  ปีที่แล้ว

      Good question; the heap data structure enables us to efficiently look up the host with the smallest timestamp, i.e. the host that we crawled the longest ago. With a significant number of host queues, this operation could add notable latency without using a heap. Thanks!
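    The heap-over-host-queues idea in this reply can be sketched as follows. This is a simplified illustration under assumed names — the real frontier would also enforce a minimum delay per host, which is omitted here:

    ```python
    import heapq
    from collections import deque

    class HostScheduler:
        """A heap of (last_crawled_ts, host) over per-host FIFO queues.
        Popping the heap yields the host crawled longest ago in O(log n),
        instead of a linear scan over all host queues."""

        def __init__(self):
            self._heap = []      # (last_crawled_ts, host)
            self._queues = {}    # host -> deque of pending URLs

        def add_url(self, host, url, now):
            if host not in self._queues:
                self._queues[host] = deque()
                heapq.heappush(self._heap, (now, host))
            self._queues[host].append(url)

        def next_url(self, now):
            """Take one URL from the least-recently-crawled host."""
            _, host = heapq.heappop(self._heap)
            url = self._queues[host].popleft()
            if self._queues[host]:
                # Host still has work: re-insert with the current timestamp,
                # sending it to the back of the rotation.
                heapq.heappush(self._heap, (now, host))
            else:
                del self._queues[host]
            return url
    ```

    Each pop touches only the heap root, so selecting the next host stays cheap even with millions of host queues, which is the latency point made in the reply above.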

    • @congminhluu5068
      @congminhluu5068 ปีที่แล้ว

      @interviewpen oh I see. I hadn't encountered heaps used this way and was surprised to see one act as a kind of ordered map.
      Why are you thanking me lol, I should be thanking you for the video

  • @qingrex
    @qingrex ปีที่แล้ว +1

    🎉Great video🎉 May I try adding Chinese closed captions (CC)? It would be useful to people who follow me ❤

    • @interviewpen
      @interviewpen  ปีที่แล้ว

      sure - what is an email we can use to add you as a CC moderator?

  • @pieter5466
    @pieter5466 ปีที่แล้ว

    Does anyone know a list of storage requirements for things like text (diff sizes), images, etc?

    • @interviewpen
      @interviewpen  ปีที่แล้ว

      This is factored into the 2MB per page that we're storing in the blob store.

  • @TungLe-mm7eo
    @TungLe-mm7eo ปีที่แล้ว

    what is the tool you are using for presentation? thank you

    • @interviewpen
      @interviewpen  ปีที่แล้ว

      We're using GoodNotes on an iPad. Thanks!

  • @kayeshparvez
    @kayeshparvez ปีที่แล้ว

    are we going to remove the blob after creating the hash index and word index?

    • @interviewpen
      @interviewpen  ปีที่แล้ว

      It depends on the requirements of the system, but in this case we'll keep the BLOB around. This is helpful since there's so much overhead involved in scraping sites--for example if we decided to change our indexing algorithm, we could do so from the saved BLOBs without having to re-crawl every page. Google does this too--in fact you can view Google's copy of a page by clicking the "cached" link on a search result. Thanks for watching!