AlgorithmsThread 2: RMQ Tricks

  • Published Jan 2, 2025

Comments • 39

  • @SecondThread
    @SecondThread  4 years ago +13

    If you have any questions, feel free to post them here: codeforces.com/blog/entry/79058 and I'll see if I can help :)

    • @srikkanthr8190
      @srikkanthr8190 4 years ago

      Really cool method! I wouldn't have thought it even possible to do RMQ in O(n) / O(1), and you explained the idea very well. Some small points:
      1. I thought RMQ stands for "range minimum query" and is a generic term for any data structure that solves this problem. This specific one is often called a "sparse table", I believe.
      2. Why do you take log n = 32? Typically the maximum allowed is around 6e7 ints, for which log n ~ 26, and usually RMQ problems have log n ~ 20 to allow n log n memory.
      I think most people would be familiar with the "sliding window minimum" algorithm; that seems to be the idea for the small-block case, except here the blocks are so small that you can represent each of them with a single integer, which helps with the memory.
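      For reference on point 1, a minimal sparse-table sketch of the kind this comment describes (O(n log n) build, O(1) min query); the code and names here are illustrative, not the video's implementation:

      ```cpp
      #include <bits/stdc++.h>
      using namespace std;

      // table[k][i] = min of a[i .. i + 2^k - 1]
      struct SparseTable {
          vector<vector<int>> table;
          SparseTable(const vector<int>& a) {
              int n = a.size(), levels = 32 - __builtin_clz(max(n, 1));
              table.assign(levels, a); // level 0 is the array itself
              for (int k = 1; k < levels; k++)
                  for (int i = 0; i + (1 << k) <= n; i++)
                      table[k][i] = min(table[k - 1][i], table[k - 1][i + (1 << (k - 1))]);
          }
          // min of a[l..r]: cover the range with two overlapping power-of-two windows
          int query(int l, int r) const {
              int k = 31 - __builtin_clz(r - l + 1);
              return min(table[k][l], table[k][r - (1 << k) + 1]);
          }
      };

      int main() {
          SparseTable st({1, 2, 7, 4, 9, 8, 6, 7, 10, 6});
          cout << st.query(2, 6) << "\n"; // prints 4
      }
      ```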

    • @stv3qbhxjnmmqbw835
      @stv3qbhxjnmmqbw835 3 years ago

      Counting trailing zeroes takes log(n) time [or say O(32), but that's a high constant], I guess. Am I right?
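      For context on the cost being asked about here: in practice trailing zeros are computed with a compiler builtin that maps to a single CPU instruction on most modern hardware, so it is usually treated as O(1). A small sketch (assumes GCC/Clang builtins; C++20 alternatively provides std::countr_zero):

      ```cpp
      #include <cstdint>
      #include <cstdio>

      int main() {
          uint32_t mask = 0b101000;            // bits set at positions 3 and 5
          printf("%d\n", __builtin_ctz(mask)); // index of lowest set bit -> prints 3
          // Note: __builtin_ctz(0) is undefined, so the mask must be nonzero.
          // __builtin_ctzll does the same for 64-bit masks.
      }
      ```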

  • @vishnusingh4118
    @vishnusingh4118 4 years ago +7

    Couldn't have had a better start to my morning! 30 minutes of pure high-quality stuff! Thank you so much for teaching these! Highly appreciated!

  • @kc137
    @kc137 4 years ago +7

    Please continue doing this series. Very helpful!

    • @RudraSingh-pb5ls
      @RudraSingh-pb5ls 3 years ago

      Yup, this is something I haven't found anywhere else on YouTube.

  • @NaveenKumar-os8dv
    @NaveenKumar-os8dv 2 years ago

    I don't know why, but I feel you'd be a very good actor.

  • @akshaydutta4516
    @akshaydutta4516 4 years ago +12

    Would really like to see the next Algorithms Dead on greedy techniques: how do you think of a greedy solution, how do you prove it is correct, etc.

  • @leetcodesolutions2035
    @leetcodesolutions2035 1 year ago

    smooth second thread bhai...smooth

  • @nonamehere9658
    @nonamehere9658 4 years ago +1

    > 4:54 - "I'll call this structure an RMQ."
    AFAIK that approach/structure (O(1) time per query, O(n log n) time and memory for building) always goes by the name "sparse table".

  • @areebwadood6273
    @areebwadood6273 4 years ago +1

    I think you can also use a sparse table for RMQs with the same build and query complexity. Also, this will work only with offline queries. Great explanation :) Thanks

  • @nicktohzyu
    @nicktohzyu 3 years ago

    27:12 I don't understand why we can't instead process it right to left without using a stack (only keeping track of the smallest number seen so far)? E.g.:
    min = inf
    6 -> update and indicate, now min = 6
    10 -> skip
    7 -> skip
    6 -> skip
    8 -> skip
    9 -> skip
    4 -> update and indicate, now min = 4
    7 -> skip
    2 -> update and indicate, now min = 2
    1 -> update and indicate, now min = 1
    If this doesn't work, I would appreciate an explanation of why it doesn't!
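    A literal sketch of the right-to-left pass proposed in this comment, with the array reconstructed from the trace above; it only records the positions where the running suffix minimum improves, and whether that is enough to replace the stack is exactly the question being asked:

    ```cpp
    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        // Array read off the trace above (the trace walks it right to left).
        vector<int> a = {1, 2, 7, 4, 9, 8, 6, 7, 10, 6};
        int runningMin = INT_MAX;
        vector<int> marked; // positions where the suffix minimum changes
        for (int i = (int)a.size() - 1; i >= 0; i--) {
            if (a[i] < runningMin) {  // "update and indicate"
                runningMin = a[i];
                marked.push_back(i);
            }                         // otherwise "skip"
        }
        for (int i : marked) printf("a[%d] = %d\n", i, a[i]);
        // Marks the same four values the trace does: 6, 4, 2, 1.
    }
    ```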

  • @jithinbabu6902
    @jithinbabu6902 4 years ago +1

    Thanks for the video. The logic is similar to binary lifting.
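    To make the comparison concrete, a minimal binary-lifting sketch (k-th ancestor via precomputed power-of-two jumps); the doubling step is the same idea as the sparse table's power-of-two ranges. The tree and names here are illustrative:

    ```cpp
    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        // up[k][v] = the 2^k-th ancestor of v (the root, node 0, is its own parent here).
        int n = 7, LOG = 3;
        vector<int> parent = {0, 0, 0, 1, 1, 2, 5};
        vector<vector<int>> up(LOG, parent);
        for (int k = 1; k < LOG; k++)
            for (int v = 0; v < n; v++)
                up[k][v] = up[k - 1][up[k - 1][v]]; // two 2^(k-1) jumps = one 2^k jump

        // Decompose k into powers of two, just like a sparse-table query
        // decomposes a range into power-of-two pieces.
        auto kthAncestor = [&](int v, int k) {
            for (int j = 0; j < LOG; j++)
                if (k >> j & 1) v = up[j][v];
            return v;
        };
        printf("%d\n", kthAncestor(6, 2)); // two steps up from node 6: 5 -> 2, prints 2
    }
    ```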

  • @brunomont-yt
    @brunomont-yt 4 years ago +6

    Hey! Cool video. Where did you learn about this RMQ technique (the O(n) / O(1) one)?

    • @SecondThread
      @SecondThread  4 years ago +7

      It was first explained to me by Andrew He at a ByteDance camp in Beijing last year. I got some inspiration for implementation details from jcg in this blog post: codeforces.com/blog/entry/71706
      Also, there is a really nice blog post written recently that gives good benchmarks for this version of the RMQ, written by brunomont, here: codeforces.com/blog/entry/78931

  • @nishankdas1895
    @nishankdas1895 4 years ago +1

    Nice explanation dude

  • @ziedbrahmi4812
    @ziedbrahmi4812 2 years ago

    a legend.

  • @mohnishsatidasani7552
    @mohnishsatidasani7552 4 years ago +1

    Awesome explanation of RMQ. Your explanation of the O(1) query beats techniques like binary lifting, segment trees, and Fenwick trees.

  • @UdayKumar-cb3ig
    @UdayKumar-cb3ig 4 years ago

    Please start a DP series.

  • @laxmandesai573
    @laxmandesai573 4 years ago +1

    Damn helpful!

  • @RudraSingh-pb5ls
    @RudraSingh-pb5ls 3 years ago

    Please do two AlgorithmsThread videos, on greedy and DP techniques.
    For greedy especially: how do you build a solution and prove that it will work?
    Please 🙏🙏🙏

  • @roberttrifan7269
    @roberttrifan7269 4 years ago

    Regarding those small parts at the beginning and the end of a query (those smaller than log n), can't you simply use partial minimums for each bucket from left to right, and from right to left?

    • @SecondThread
      @SecondThread  4 years ago

      If you did that, how would you handle queries where the size of the query is smaller than log?

    • @SecondThread
      @SecondThread  4 years ago

      Partial mins are only helpful if you know your range will always extend past the end of the block
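      To make this limitation concrete, a sketch of the per-block prefix/suffix-minimum idea from this exchange; together with a table over whole blocks (omitted here) it answers a query in O(1) whenever l and r fall in different blocks, but it has nothing to say about a query contained in a single block. Names and the block size are illustrative:

      ```cpp
      #include <bits/stdc++.h>
      using namespace std;

      int main() {
          const int B = 4; // illustrative block size (the video's blocks are word-sized)
          vector<int> a = {5, 3, 8, 6, 2, 9, 7, 4, 1, 6, 5, 8};
          int n = a.size();
          // preMin[i] = min of a[start of i's block .. i], sufMin[i] = min of a[i .. end of i's block]
          vector<int> preMin(a), sufMin(a);
          for (int i = 1; i < n; i++)
              if (i % B) preMin[i] = min(preMin[i - 1], a[i]);
          for (int i = n - 2; i >= 0; i--)
              if (i % B != B - 1) sufMin[i] = min(sufMin[i + 1], a[i]);

          // Query crossing a block boundary: suffix of l's block + prefix of r's block
          // (+ whole blocks strictly in between, handled by a block-level table).
          int l = 2, r = 6;
          printf("%d\n", min(sufMin[l], preMin[r])); // prints 2, the true min of a[2..6]
          // For a query inside one block, e.g. l = 5, r = 6, neither sufMin[5] nor
          // preMin[6] is restricted to [5, 6], so this structure alone can't answer it.
      }
      ```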

  • @CarrotCakeMake
    @CarrotCakeMake 4 years ago +1

    It is still an n log n build time, not linear, because (among other things) you are storing log n bits per entry. You just hide that log factor inside the word size, so you get fake linear time.

    • @SecondThread
      @SecondThread  4 years ago

      Well yeah kinda. You can do it in real O(n) with some 4-russians nonsense, but that is WAY harder to explain and runs slower anyway, which is why I went with this.
      For actual contests, this is very useful because it uses 3*n integers, so you can use it for an array of size 10^7, which is what matters most to me

    • @fr5229
      @fr5229 4 years ago

      I don't follow. There are n/log(n) entries

    • @CarrotCakeMake
      @CarrotCakeMake 4 years ago

      @@fr5229 I am talking about 22:10. I knew the assumption that word size is greater than log(n) was going to somehow make the local runtime different from the asymptotic runtime. But I think I was wrong about where; like you said, only 1 bit is being stored per element of the original sequence. However, the assumption that log(n) bits can be checked in constant time is where the difference will come up. But maybe there is a way to do it with Four Russians, making what I said unimportant anyway. It is a neat optimization and a good presentation, and I shouldn't complain so much, but I do care when people make claims about runtimes without making it clear that they aren't talking about asymptotic runtimes.

    • @fr5229
      @fr5229 4 years ago

      @@CarrotCakeMake Right, this will only work if the word size >= log(n). If you use 64-bit ints (which you can on a lot of platforms, since they can find the high/low set bit with a single instruction), then your array size can be up to 2^64 and you would still have linear total runtime. As you said, not asymptotic, but it could be made to work with ridiculously large inputs.

    • @jeroenodb
      @jeroenodb 2 years ago

      @@CarrotCakeMake In the word RAM model, it is assumed that word size >= log(n) and that operations on words are O(1) (like +, -, *, /, and bit operations), although which ops are allowed does vary, and that is maybe the weak point (whether multiplication of words in O(1) is allowed or not makes a big difference). This is a decent model for a computer, because if the word size were smaller you couldn't even index your input, since pointers and memory locations also have to fit in a word. So in the word RAM model the runtime is O(n log(n) / w), but since w >= log(n), this is at least as good as O(n). So this is not fake linear time, only linear time in a different (but frequently used) model.
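      In symbols, the word-RAM accounting described in this reply, with word size w and the standard assumption w >= log2(n):

      ```latex
      T_{\text{build}}(n) = O\!\left(\frac{n \log n}{w}\right)
                          \subseteq O\!\left(\frac{n \log n}{\log n}\right) = O(n)
      \quad\text{because } w \ge \log_2 n .
      ```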

  • @jiashuzou8999
    @jiashuzou8999 4 years ago

    What software are you using? Are you using a pen or just a mouse?

  • @RifatulIslam__
    @RifatulIslam__ 4 years ago +1

    Why the name Dead?

    • @SecondThread
      @SecondThread  4 years ago +8

      It's a play off of the channel Algorithms Live

    • @RifatulIslam__
      @RifatulIslam__ 4 years ago +1

      @@SecondThread 😮😮😮

  • @nikhiljha2998
    @nikhiljha2998 4 years ago

    Correct me if I'm wrong, but isn't this approach lg(M), where M is the length of the query?

  • @МаксимЮрченков-ы5ь
    @МаксимЮрченков-ы5ь 4 years ago +1

    Timestamps would be nice, so I don't have to listen to sparse tables explained for the millionth time.

    • @SecondThread
      @SecondThread  4 years ago

      lol fair enough. Potentially nontrivial LCA stuff starts at 7:36. I'll add the rest to the description