Efficient Exponentiation

  • Published 19 Jan 2025

Comments • 274

  • @maroonshaded
    @maroonshaded 3 ปีที่แล้ว +293

    I've never seen this channel and it appeared in my recommendations within 43 minutes of publishing... So this is the start of everything...

    • @alastairlocke4621
      @alastairlocke4621 3 ปีที่แล้ว +28

      Yeah this bloke is good.

    • @mCoding
      @mCoding  3 ปีที่แล้ว +24

      Thanks!

    • @damndebonairdan
      @damndebonairdan 3 ปีที่แล้ว +2

      Welcome to the grind vibe.

    • @keent
      @keent 3 ปีที่แล้ว +1

      YouTube knows our weaknesses

    • @Meego_007
      @Meego_007 3 ปีที่แล้ว

      Same

  • @gardnmi
    @gardnmi 3 ปีที่แล้ว +465

    You should rename this channel to "reality check". You've left my programming knowledge in shambles.

    • @dijkstra4678
      @dijkstra4678 3 ปีที่แล้ว +6

      Same

    • @mCoding
      @mCoding  3 ปีที่แล้ว +122

      I love the feeling of talking to someone who knows much more than I do about something. The greater their knowledge is in comparison to my own, the more I stand to learn from the conversation!

    • @dijkstra4678
      @dijkstra4678 3 ปีที่แล้ว +12

      @@mCoding then please do your best to teach us, esteemed mCoding.

  • @WysockiD
    @WysockiD 3 ปีที่แล้ว +150

    I laughed when he said "and I'm not even going to get into that" as he is giving me the deepest look into such code that I've ever seen... and to know this man is just scratching the surface humored me. Well done man, I see the channel is climbing significantly with every video, you deserve it.

    • @mCoding
      @mCoding  3 ปีที่แล้ว +42

      Thanks! It's short for "this video would be 3 hours long, and neither of us wants that". Simple questions often have very deep answers in mathematics.

    • @jakemayer2113
      @jakemayer2113 3 ปีที่แล้ว +2

      @@mCoding am i missing something? i'm thinking, naively, that the reason we can assume we're using the rightmost number is that if we weren't, a smaller (or at least not-larger) previously generated chain would have already calculated that and cached it

    • @Seb135-e1i
      @Seb135-e1i 3 ปีที่แล้ว +1

      @@jakemayer2113 The video mentioned that you can't assume this for powers larger than ~12 thousand.
      Not using the largest power for a given multiplication doesn't mean you don't ever use it. You might need a very large number, and it might happen to be easier to sum 2 very different large numbers (abandoning the largest for a few calculations to get a different number), than to continuously work off the largest.

    • @griffoneron7924
      @griffoneron7924 3 ปีที่แล้ว

      @@mCoding I'd love to listen to you explaining something for 3 hours, your voice is soothing.

    • @shawon265
      @shawon265 3 ปีที่แล้ว

      @@jakemayer2113 It's not always true that the optimal chain uses the largest number. @mCoding already gave an example. Take 18: we already know (1 2 3 6 12 15). Now for the next step, we can build 18 as 6+12 or 3+15, so you have two equally efficient answers. As you can see, it's possible to be efficient without taking the largest number 15. You might imagine a scenario where taking the largest number is slightly less efficient than taking smaller numbers. It's rare though, and the first time it happens is at 12509. I suggest you look up Addition Chains on Wikipedia, which summarizes the paper @mCoding referenced.

  • @DatNguyen-vj1ro
    @DatNguyen-vj1ro 3 ปีที่แล้ว +166

    8:52 I just finished my Discrete Math Homework on Modular Arithmetics, my last question was 600 ^ 251 mod 1457, so jokes on you

    • @mCoding
      @mCoding  3 ปีที่แล้ว +15

      Happy day!

    • @sardonyx001
      @sardonyx001 3 ปีที่แล้ว +4

      Modular arithmetic was my favorite math subject! It's amazing how freaky number theory can get compared to other theories, when people think it's just multiplying large numbers together.

    • @stanleydodds9
      @stanleydodds9 3 ปีที่แล้ว +4

      Well, joke's on you, that 600 isn't an integer with normal multiplication but rather the name for an element of that finite ring

    • @gregoryfenn1462
      @gregoryfenn1462 3 ปีที่แล้ว +3

      @@stanleydodds9 we all know that. But you can still call 600 an integer even in this context, informally.

    • @SaHaRaSquad
      @SaHaRaSquad 3 ปีที่แล้ว

      Cryptography uses modular exponentiation for this. It's less efficient than exponentiation by squaring for smaller numbers, but (in comparison) stupidly fast when the base isn't 600 but more like 600-6000 digits.
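
      For concreteness, a minimal sketch of modular square-and-multiply (essentially what Python's built-in three-argument pow(base, exp, mod) already does), purely illustrative:

      def mod_pow(base, exp, mod):
          result = 1
          base %= mod
          while exp > 0:
              if exp & 1:                      # current bit of the exponent is set
                  result = result * base % mod
              base = base * base % mod         # square, reducing mod m each step
              exp >>= 1
          return result

      assert mod_pow(600, 251, 1457) == pow(600, 251, 1457)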

  • @HackerTypeZ
    @HackerTypeZ 3 ปีที่แล้ว +44

    "slap the sub button an odd number of times" got me good 😂

    • @daniel_77.
      @daniel_77. 6 หลายเดือนก่อน

      It doesn't work if you are already subscribed

  • @mustafamotiwala2335
    @mustafamotiwala2335 3 ปีที่แล้ว +10

    Seeing a new mCoding video has come out is always a delight. Thank you for what you do!

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      Glad you enjoy it!

  • @MatheusAugustoGames
    @MatheusAugustoGames 3 ปีที่แล้ว +47

    Dynamic Programming on steroids

  • @aryanparekh9314
    @aryanparekh9314 3 ปีที่แล้ว +8

    This channel WILL blow up, your videos are top notch

    • @dielaughing73
      @dielaughing73 3 ปีที่แล้ว

      Yeah we're in on the ground floor of this one

  • @AristAristA
    @AristAristA 3 ปีที่แล้ว +5

    Loved the conclusion on how this trick applies to matrix exponentiation. Simulated markov chains or random walks must benefit so much from this.

    • @mCoding
      @mCoding  3 ปีที่แล้ว +1

      Indeed! Thank you for noticing! I never even said the word Markov :)

  • @ulilulable
    @ulilulable 3 ปีที่แล้ว +6

    Thanks for the "compiler explorer" tip! Now I finally understand Duff's device!

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      You bet!

    • @MattGodbolt
      @MattGodbolt 3 ปีที่แล้ว +1

      @@mCoding Excited to see you using the site to teach folks :)

    • @mCoding
      @mCoding  3 ปีที่แล้ว +1

      @@MattGodbolt I'm star struck! It's Matt Godbolt himself!

    • @MattGodbolt
      @MattGodbolt 3 ปีที่แล้ว +1

      @@mCoding oh don't be silly :) I'm delighted to watch quality content and then go "ooh, I recognise that site!!" :)

  • @cat-.-
    @cat-.- 3 ปีที่แล้ว +10

    This channel is a gold mine for me!!!

  • @sardonyx001
    @sardonyx001 3 ปีที่แล้ว +1

    I got this in my recommended out of nowhere and I gotta say the algorithm is doing its work! I love this thank you! Subscribed.

  • @RobbieHatley
    @RobbieHatley 2 ปีที่แล้ว +1

    Right. But now, calculate e**π. Or worse, 1.17e**(3.7 - 0.014πi). 😀 Unless a and b are both integers greater than 1, a**b is no longer quite so simple. (What if they're: Irrational? Complex? Matrices? Tensors? Quaternions? Octonions? The horror. The horror.)

  • @calculateddegenerate5685
    @calculateddegenerate5685 3 ปีที่แล้ว +7

    I consider myself an expert Python developer. Still, I always learn something new about the beautiful language from your videos.

    • @mCoding
      @mCoding  3 ปีที่แล้ว +2

      Glad to hear that!

    • @spell105
      @spell105 ปีที่แล้ว

      beautiful is debatable but Python has its charms

  • @xbarbz233
    @xbarbz233 3 ปีที่แล้ว

    Thanks YouTube algorithm for recommending this, I am hooked. Subbed and looking forward to going through all your other videos

  • @Ensivion
    @Ensivion 3 ปีที่แล้ว +1

    I ran into this problem when I did my implementation of the Newton fractal that 3Blue1Brown made two videos on. My naive approach made it take like 20 seconds to compute ONE FRAME of an animation I was doing, with not that great error bounds. I quickly found out that the solution was more along the lines of what you finished this video on: you kinda have to take a different approach to exponentiation if one is available, and for computing polynomials (even of non-integer powers) there are some already-done methods that speed this up significantly. So really this complex problem generally has solutions in many different ways. Also, you don't go into complex numbers; for those fractals I had to take powers of complex numbers, which in its own right is also a big problem to be optimized.

  • @glob2493
    @glob2493 3 ปีที่แล้ว +2

    There are of course ways to make a general exponentiation algorithm even more efficient by checking whether the base is a power of two. Then you just use a bitwise shift left instead of multiplication.
    For example 2

    • @glob2493
      @glob2493 3 ปีที่แล้ว +1

      @Gary Williams Let's say we want to calculate 16^5, you can just interpret that as 16*16*16*16*16. Multiplying by 16 is the same as a bitwise shift left by four.
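
      For concreteness, a minimal sketch of the shift idea above (the function name and the bit_length trick are illustrative assumptions, not from the video):

      def pow_of_power_of_two(base, exponent):
          # only valid when base is a positive power of two, e.g. 16 == 1 << 4
          assert base > 0 and base & (base - 1) == 0
          shift = base.bit_length() - 1       # 16 -> 4
          return 1 << (shift * exponent)      # base**exponent via a single shift

      assert pow_of_power_of_two(16, 5) == 16 ** 5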

  • @erumabo
    @erumabo 3 ปีที่แล้ว +2

    When I need efficient powers I go with a recursive strategy: if the exponent is even, return x^floor(exponent/2) * x^floor(exponent/2); if odd, multiply that by x (see the sketch after this thread).

    • @mCoding
      @mCoding  3 ปีที่แล้ว +1

      Yeah, this is the strategy taken by most libraries (for good reason). The "optimal" way can sometimes require much more memory than the way you suggested.
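
      A minimal sketch of the recursive even/odd strategy described in this thread (illustrative only; libraries usually do the equivalent iteratively over the bits of the exponent):

      def fast_pow(x, exponent):
          if exponent == 0:
              return 1
          half = fast_pow(x, exponent // 2)   # x ** floor(exponent / 2)
          result = half * half
          if exponent % 2:                    # odd exponent: one extra factor of x
              result *= x
          return result

      assert fast_pow(3, 15) == 3 ** 15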

  • @mainogimenez7598
    @mainogimenez7598 3 ปีที่แล้ว +15

    Thumbnails getting better tho

  • @XetXetable
    @XetXetable 3 ปีที่แล้ว +1

    The exponent list isn't necessarily unique. In the case of 15, there are 4 minimal chains;
    {1, 2, 3, 5, 10, 15}
    {1, 2, 3, 6, 9, 15}
    {1, 2, 3, 6, 12, 15}
    {1, 2, 4, 5, 10, 15}
    While working this out, I came up with a much simpler recursive solution. Before I state it, consider the following:
    Note that 15 = 5 + 10 and there are two minimal chains there that decompose 15 that way. There are 4 minimal chains for 10;
    {1, 2, 3, 5, 10}, {1, 2, 4, 5, 10}, {1, 2, 4, 6, 10}, {1, 2, 4, 8, 10}
    and two for 5
    {1, 2, 3, 5}, {1, 2, 4, 5}
    By taking the outer product of these two lists of chains using set union as multiplication we have (Note: This code is in Mathematica);
    MinimalBy[Flatten[Outer[Union,
    {{1, 2, 3, 5, 10}, {1, 2, 4, 5, 10}, {1, 2, 4, 6, 10}, {1, 2, 4, 8, 10}},
    {{1, 2, 3, 5}, {1, 2, 4, 5}},
    1], 1], Length]
    which returns {{1, 2, 3, 5, 10}, {1, 2, 4, 5, 10}}, the two entries we had before merely missing our 15. Note that Outer is doing something similar to a list comprehension; use that if you want to port this code to other languages. Using this intuition we can define the following function which iterates over all decompositions of our goal, finding the minimal length chains
    MinChains[1] := {{1}}
    MinChains[x_] :=
    Set[MinChains[x],
    Append[#, x] & /@
    MinimalBy[Flatten[Table[
    Flatten[Outer[Union, MinChains[k], MinChains[x - k], 1], 1],
    {k, 1, Floor[x/2]}], 1], Length]]
    that "Set" tells the program to store the return value and use it whenever that same argument is called so it doesn't need to be recalculated. The "Append" line adds our argument to the end of all our chains. "Table" enumerates the outer product of the MinChains for every number decomposition for x between 1 and Floor[x/2].
    Using this, I was able to calculate in about two seconds that the minimal length chain for 100 is length 9 (example: {1, 2, 4, 8, 16, 32, 64, 96, 100}) and there are 62 such chains. Of course, that outer product is quite costly; I'm sure there are many optimizations that could be made to this, but I think it's a nice starting point.

    • @An-ht8so
      @An-ht8so 3 ปีที่แล้ว +1

      I'm not sure that I understand correctly what the outer product is, but I'm not convinced that the algorithm is correct. I think that there could be a number n=a+b, that has a minimal chain that uses x^a and x^b, but that isn't based on the minimal chains for a and b.

    • @XetXetable
      @XetXetable 3 ปีที่แล้ว

      @@An-ht8so Indeed, there are some missing cases. An explicit example of what you're talking about is the minimal chain for 13;
      {1, 2, 3, 5, 8, 13}
      which is missed by my algorithm. In this case, the chain for 5 is {1, 2, 3, 5} and the chain for 8 is {1, 2, 3, 5, 8}. In this case, the chain for 8 isn't minimal. I wrote the following much less efficient algorithm to cover these cases;
      subsetFil[{}] := {}
      subsetFil[{x_, y___}] :=
      Flatten[{{x}, Select[{y}, ! SubsetQ[#, x] &]}, 1]
      PreMinChains[1] := {{1}}
      PreMinChains[x_] :=
      Set[PreMinChains[x],
      subsetFil@
      SortBy[Append[#, x] & /@
      Flatten[Table[
      Flatten[Outer[Union, PreMinChains[k], PreMinChains[x - k], 1],
      1], {k, 1, Floor[x/2]}], 1], Length]]
      MinChains2[x_] := MinimalBy[PreMinChains[x], Length]
      Unfortunately, it's too memory inefficient to go beyond about 30. None of my tests found anything actually smaller than what my original implementation was able to find, but the number of minimal chains increased by between 0% and 35%, with most not changing at all.

    • @An-ht8so
      @An-ht8so 3 ปีที่แล้ว

      @@XetXetable thank you for the follow up. I don't have a good intuition about whether or not there are optimal values missed for greater n.
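
      For anyone who wants a Python starting point for this thread, a rough brute-force sketch using iterative deepening. The caveat discussed above applies: it only ever extends the current largest element (a so-called star chain), which gives the optimal length for small n but can miss shorter general chains (the first exception mentioned above is 12509).

      def shortest_star_chain(n):
          def dfs(chain, additions_left):
              last = chain[-1]
              if last == n:
                  return chain
              # prune: even doubling at every remaining step cannot reach n
              if additions_left == 0 or last << additions_left < n:
                  return None
              for a in reversed(chain):       # try larger summands first
                  if last + a <= n:
                      found = dfs(chain + [last + a], additions_left - 1)
                      if found:
                          return found
              return None

          additions = 0
          while True:
              found = dfs([1], additions)
              if found:
                  return found
              additions += 1

      print(shortest_star_chain(15))   # [1, 2, 4, 5, 10, 15] -> 5 multiplications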

  • @MCLooyverse
    @MCLooyverse 3 ปีที่แล้ว

    I'm bad at using other people's stuff; I either write everything but (or sometimes even including) the standard stuff myself, or I abandon the project. As such, I've written `pow` functions a few times, and this optimisation is just one of these little things that I found myself, and am kinda proud of. This is to naive exponentiation as the way people normally multiply by hand is to repeated addition.

  • @pezus553
    @pezus553 3 ปีที่แล้ว

    this channel will definitely grow a lot

    • @mCoding
      @mCoding  3 ปีที่แล้ว +1

      Thank you so much 😀

  • @driesceuppens7623
    @driesceuppens7623 3 ปีที่แล้ว +3

    How would you compute a power with a non-integer exponent then? And are there similar tricks or optimisations you can do for that?

    • @mCoding
      @mCoding  3 ปีที่แล้ว +5

      Non-integer exponents are handled in a completely different way, typically using a Taylor expansion approximation for exp(x).

    • @sammaksimovich2654
      @sammaksimovich2654 3 ปีที่แล้ว +3

      One way to do it is to split the non-integer exponent into its integer and fractional parts and compute the fractional part using a Taylor series expansion. Then you can multiply both results (e.g. x^12.345 = x^(12 + 0.345) = x^12 * x^0.345)
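
      A small illustration of that split, assuming x > 0 and using math.exp/math.log in place of a hand-rolled Taylor series:

      import math

      def pow_split(x, y):
          int_part = math.floor(y)   # integer part: multiplication-chain territory
          frac_part = y - int_part   # fractional part: exp/log (or a Taylor series)
          return x ** int_part * math.exp(frac_part * math.log(x))

      print(pow_split(2.0, 12.345), 2.0 ** 12.345)   # agree up to floating point error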

    • @dekippiesip
      @dekippiesip 3 ปีที่แล้ว +2

      @@mCoding to add to this, before moving to Taylor series there's a more intuitive interpretation that works for rational numbers.
      First take a step back: how do we even define something like 10*(1/2)? Obviously we can't add 10 to itself 1/2 times, that doesn't make sense. We know that for integers a, b and c, a*(b+c) = a*b + a*c, and we want the same rule to hold for fractional multiplication. So we see that 10 = 10*1 = 10*(1/2 + 1/2) = 10*(1/2) + 10*(1/2). Therefore 10*(1/2) is the unique number x such that x*2 = x + x = 10, which means it makes sense to assign the value 10/2 = 5 to that operation.
      The same logic holds for exponentiation: 9^(1/2) is the unique positive number x such that x^2 = x*x = 9, or in other words, x = 3. So x^(1/2) is just the square root of x; similarly x^(1/n) is just the nth root of x. Generalizing basic rules of exponentiation then implies that x^(m/n) = (x^(1/n))^m, or the mth power of the nth root of x. Now any rational exponent makes sense!
      And since the rational numbers are a dense subset of the reals, there is only 1 way to extend the function f(x) = a^x to the reals such that f is continuous. Now we can make sense of a^x even for irrational numbers! To evaluate a^x for irrational x, just consider a sequence of rational numbers xn converging to x and take the limit of a^xn as n goes to infinity. The number that limit converges to is defined as a^x for irrational x, and this is the only assignment that makes said function continuous.
      And there you have it: now you have a continuous (even infinitely differentiable) function of which you can derive a Taylor series and prove that it is exactly the one you would expect, and that the radius of convergence of the series is infinite!
      Invoking Taylor series before establishing this is circular reasoning btw; in order to derive the Taylor series expansion we need to know what is meant by non-integer exponents in the first place!
      For matrices similar logic holds, though it could happen that the matrix equation A*A = B for given B has more than one solution, making B^(1/2) not well defined. B must have certain properties before this even makes sense (I'm guessing being invertible, but not sure if that's enough).

  • @ChongFrisbee
    @ChongFrisbee 3 ปีที่แล้ว +13

    I want to say this method works exactly the same whenever * is an associative operation, regardless of what * means. But calling its repetition exponentiation in all of those cases seems quite ok

    • @mCoding
      @mCoding  3 ปีที่แล้ว +1

      Indeed, this works for any associative operation!

    • @SimonBuchanNz
      @SimonBuchanNz 3 ปีที่แล้ว +2

      Well it's a bit odd to call it exponentiation when * is addition!
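
      A minimal sketch of repeated squaring written against an arbitrary associative operation (parameter names are illustrative). Amusingly, with + as the operation, "exponentiation" turns into plain multiplication, as noted above:

      def repeated_op(value, n, op, identity):
          result = identity
          while n > 0:
              if n & 1:
                  result = op(result, value)
              value = op(value, value)        # "squaring" under op
              n >>= 1
          return result

      assert repeated_op(2, 15, lambda a, b: a * b, 1) == 2 ** 15   # power
      assert repeated_op(2, 15, lambda a, b: a + b, 0) == 2 * 15    # repeated addition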

  • @benhbr
    @benhbr 3 ปีที่แล้ว +2

    I wonder how to extend this to allow divisions as well. E.g. x**15 can be computed as x**16 / x in 4 multiplications + 1 division.

    • @bjmcculloch
      @bjmcculloch 3 ปีที่แล้ว +1

      Interesting idea. It’s unlikely to lead to better performance, though - division is slow.

  • @gregorymorse8423
    @gregorymorse8423 3 ปีที่แล้ว +1

    x^15 = x^(16-1) = x^16 × x^(-1), so 4 squarings and a division, or a multiplication by the modular inverse if doing modular exponentiation. Finding optimal addition chains has been proven NP-hard, so it's not trivial with large exponents.

    • @hetsmiecht1029
      @hetsmiecht1029 3 ปีที่แล้ว +1

      I believe division isn't as fast as multiplication, so I'd be careful with that.
      Also, if 15*[number] is close to overflowing, but 16*[number] already overflows, you can get vastly incorrect results if you first calculate 16*[number] and then divide (although I believe Python integers can't overflow, they just keep adding bits).

    • @gregorymorse8423
      @gregorymorse8423 3 ปีที่แล้ว

      @@hetsmiecht1029 Complexity-wise, multiplication and division are the same. You can always divide with a constant-factor increase over multiplication, e.g. about 3 multiplications via Montgomery reduction. The 16 here is in the exponent, so there are no overflow concerns. Of course, a big-integer library should be used in languages like C if the numbers will be beyond the basic language integer sizes based on machine word sizes

  • @izzanzahrial7345
    @izzanzahrial7345 3 ปีที่แล้ว +24

    Can you create content about all design patterns?

    • @mCoding
      @mCoding  3 ปีที่แล้ว +12

      Design & program structure are difficult topics to teach, it may take a while.

    • @izzanzahrial7345
      @izzanzahrial7345 3 ปีที่แล้ว +1

      @@mCoding it's ok, I'll be happy to wait for your content

    • @alexismandelias
      @alexismandelias 3 ปีที่แล้ว +3

      @@izzanzahrial7345 d&p can't be taught in a couple of videos. The book "Design Patterns: Elements of Reusable Object-Oriented Software" (more or less) is a good starting point.

  • @amaarquadri
    @amaarquadri 3 ปีที่แล้ว +2

    I thought this video was going to end with binary numbers and repeated squaring... but that's where it started 😂. Great video!

  • @brandongroth4569
    @brandongroth4569 3 ปีที่แล้ว +2

    Does there end up being a cutoff time where if power < cutoff, then the binary power algorithm is faster? The general approach seems to use quite a lot of object storage, which for math computations is usually not the best.

    • @mCoding
      @mCoding  3 ปีที่แล้ว +4

      Yeah, storage is most likely the reason the compilers use 6 multiplications rather than 5 for pow_15.

  • @Yupppi
    @Yupppi ปีที่แล้ว

    For a one million by one million matrix you'd maybe use the 2x2 block split that keeps the values in cache, and transpose one of them. I forget and didn't understand the rest, but there was a neat video about it.

  • @technowey
    @technowey 3 ปีที่แล้ว +1

    I wrote this function years ago. I just rewrote it from memory to post it here.
    Note, for some reason, YouTube removes the first and last underscore character in '__name__' below.
    There should be two underscores at the start and end of 'name'.
    #!/usr/bin/env python
    def pow(base, exponent):
        """Raise the base to the integer exponent 'exponent'.
        The value of 'base' must be greater than zero and
        the value of 'exponent' must be an integer.
        """
        value = 1
        pwr = base
        # Loop over each bit of the exponent.
        while exponent > 0:
            if (exponent & 1) != 0:
                value *= pwr
            pwr = pwr * pwr
            exponent >>= 1
        return value

    if __name__ == "__main__":
        print(pow(2, 15))
        print(pow(3, 9))

    • @Ashebrethafe
      @Ashebrethafe 3 ปีที่แล้ว +1

      It's because YouTube uses a markup language called Textile in which putting an underscore after a whitespace character, and another underscore before a whitespace character, indicates that everything between the underscores should be italicized. Similarly, asterisks make text bold, and dashes cross it out:
      '_some text_' => _some text_
      '*some text*' => *some text*
      '-some text-' => -some text-

    • @technowey
      @technowey 3 ปีที่แล้ว

      @@Ashebrethafe - Thank you.

    • @dielaughing73
      @dielaughing73 3 ปีที่แล้ว

      @@Ashebrethafe Does it escape with '\'? Let's see:
      "if \__name__ == '__main__'" => if \__name__ == "__main__"
      nope

  • @klimenkor
    @klimenkor 3 ปีที่แล้ว +1

    nice!
    a good example that shows that the most efficient code doesn't mean the most elegant code :)

    • @mCoding
      @mCoding  3 ปีที่แล้ว +1

      It rarely is, see e.g. most C++ code.

  • @rcatyvr
    @rcatyvr 3 ปีที่แล้ว +1

    A perhaps simple question: how do you get that nicely formatted exit code at the bottom/end of the module when you run it? I use VSCode for Mac

  • @suou7938
    @suou7938 3 ปีที่แล้ว

    i like the last example about powering a matrix.
    except it brings in another question of should i be diagonalizing that matrix first and if so by which method…

    • @mCoding
      @mCoding  3 ปีที่แล้ว +1

      It depends on the application, similar to "should i sort my data before searching it?" If you only want to compute a few powers of the matrix, just computing the powers is faster. If you want to compute many many powers, diagonalizing first will be faster.

  • @b4ttlemast0r
    @b4ttlemast0r 2 ปีที่แล้ว +1

    I think in many cases in actual code where such an "x ^ n" problem comes up, n is actually a known constant (though I guess in those common cases you probably wouldn't have such a big n most of the time), so couldn't the compiler just precompute the minimal multiplication chain for those n's that occur in the code and then use that? Only up to a certain limit for n or time limit of course. For the cases with unknown n it would still use the other algorithm.

    • @API-Beast
      @API-Beast ปีที่แล้ว

      Yes, the code he has written is basically for a compiler. You don't do this at runtime, since just doing the inefficient number of multiplications is faster for small exponents and the binary representation trick is faster for big exponents.

  • @Wecoc1
    @Wecoc1 3 ปีที่แล้ว +1

    It's an interesting problem indeed. Is there an OEIS sequence for the minimum number of multiplications required for n?

    • @tissuepaper9962
      @tissuepaper9962 3 ปีที่แล้ว

      Probably, but if there's not one you could submit it for consideration.

    • @tissuepaper9962
      @tissuepaper9962 3 ปีที่แล้ว +3

      Found it, it's A003313, "Length of shortest addition chain for n"

    • @Wecoc1
      @Wecoc1 3 ปีที่แล้ว

      @@tissuepaper9962 Thanks!

  • @JuliusAlphonso
    @JuliusAlphonso 3 ปีที่แล้ว +4

    That's a neat solution. Is it usually better to encapsulate stuff into classes like you have? I usually have a bunch of functions when solving Project Euler's questions

    • @mCoding
      @mCoding  3 ปีที่แล้ว +4

      I used a class here just to avoid having global variables everywhere. All the cached stuff and dictionaries would need to be stored somewhere, so it made sense to use a class here. Most problems I would say don't need a class though.

    • @MagicGonads
      @MagicGonads 3 ปีที่แล้ว +1

      @@mCoding because classes and functions are first class citizens in python, I consider them being accessible to the global scope as being the same as them being themselves global variables.
      (the global variable "EfficientExponentiation" stores the )
      So what difference does it really make if it's just a static holder like any global collection?

  • @Victor_Marius
    @Victor_Marius 2 ปีที่แล้ว

    Get the pair of divisors of 15 with the minimum sum then subtract 1 from each:
    15 = { 1 * 15 | 3 * 5 }
    sum = min { 16 | 8} = 8
    Then do the same for 3 and 5. But these are primes so they are final. Then you get 2 + 4 multiplications

  • @jhawar-ji
    @jhawar-ji ปีที่แล้ว

    Prime number partitioning can be helpful to calculate this? Isn't that the first obvious solution to move ahead with?

  • @EternalFlameofHeaven
    @EternalFlameofHeaven 2 ปีที่แล้ว

    I'm interested in the semantics of using division, as it is merely multiplying by the reciprocal. Seems there may be more efficient steps then, but no clue whether that changes computational complexity.

    • @mCoding
      @mCoding  2 ปีที่แล้ว

      You're definitely on to something! If you are allowed to use divisions as well you can sometimes do even fewer total operations. However, divisions are often much more expensive than multiplications, so the problem using only multiplications is the main problem of interest.

  • @dudono1744
    @dudono1744 3 ปีที่แล้ว

    Since I don't understand anything after you start defining the first class (I understand what a class is, I just don't understand this language), here is my solution for high exponents (approximate though):
    Let's say you want n**p
    Step 1: find log_2(n)
    Step 2: r = p * log_2(n)
    Step 3: multiply r by a constant (ln 2) so it becomes p * ln(n)
    Step 4: e**r (can be estimated with sum_(i=0)((r**i)/i!) )

  • @skipfred
    @skipfred 3 ปีที่แล้ว

    You look like a ghost and you always seem vaguely pissed off.
    Subbed.

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      I am glad my pale complexion pleases you. I have zero experience in setting up lights to make me look more lively :)

  • @niconeuman
    @niconeuman 3 ปีที่แล้ว +2

    Very nice video!
    In an interpreted language, how much sense do these kinds of optimizations make, versus calling a built-in function? I have tried to optimize stuff like this in Octave, and sometimes I found that what appears to be a rational choice ends up being slower because defining new variables takes more time due to memory allocation.

    • @mCoding
      @mCoding  3 ปีที่แล้ว +6

      In interpreted languages, you will almost never be able to beat a builtin function in any situation, and I nearly always recommend using a builtin over writing it yourself. The fact that my pow_15_minimal beats x ** 15 is totally unexpected.

    • @niconeuman
      @niconeuman 3 ปีที่แล้ว +3

      @@mCoding does python have something like pow(X,15)? Maybe ** is working at the interpreted level?

  • @Gorlung
    @Gorlung 3 ปีที่แล้ว

    # is that really faster and/or memory-effective than this:
    def pow(digit, power):
        n = 1
        p = digit
        while n < power:
            digit *= p
            n += 1
        return digit

    pow(2, 15)

  • @paklenizmaj
    @paklenizmaj 3 ปีที่แล้ว

    I wrote a method for power in ln(n) complexity and hyperpower (I don't remember the exact complexity for hpow). The method was direct and I did not use memory (an array) to remember the intermediate results. Yes, this method was invented for a faculty project. I did not find any similar results at that time.
    As you have proposed an algorithm for multiplying matrices, here I see a potential problem with unnecessary memory usage and longer execution time. You can calculate the required values directly (almost in place)

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      I think you are mixing up the memory used to compute the addition chains with the memory used at runtime to compute the exponentiation.

    • @paklenizmaj
      @paklenizmaj 3 ปีที่แล้ว

      @@mCoding You don't need an efficient chain to calculate pow(x, n), and you also don't need precalculated x2 x4 x8 x16...

  • @bathtubed
    @bathtubed 3 ปีที่แล้ว

    One fun application of this is calculating Fibonacci numbers for large n. If you formulate the state as a vector of the two previous numbers then the nth Fibonacci number can be found with [1, 1; 1, 0]^n * [1; 0]. Pretty nifty (see the sketch at the end of this thread).

    • @dekippiesip
      @dekippiesip 3 ปีที่แล้ว

      There's already a closed-form expression for that that's even faster, and it involves the golden ratio!
      Find the eigenvalues and eigenvectors of that matrix you refer to and diagonalize it in the form A = CDC^(-1). Now A^n = CDC^(-1)CDC...DC^(-1) = CD^(n)C^(-1). Because D is diagonal you can just take the nth power of each component, multiply this thing with the vector (1,0) and you will actually have a closed-form formula for the nth Fibonacci number, pretty crazy!

    • @bathtubed
      @bathtubed 3 ปีที่แล้ว

      @@dekippiesip where's the golden ratio in that algorithm? I am aware of a closed form solution which raises the golden ratio to the nth power or something I'm pretty sure but that doesn't involve matrices.
      I just wanted to highlight a fun usage of the exponent chaining shown here

    • @dekippiesip
      @dekippiesip 3 ปีที่แล้ว

      @@bathtubed the eigenvalues of the matrix are some linear combination of the golden ratio (forgot the exact form). Just compute it and you will see what I mean, it's fun.
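
      A small sketch of the matrix idea from this thread, using plain tuples and repeated squaring (no numpy, purely illustrative):

      def mat_mul(a, b):
          return ((a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]),
                  (a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]))

      def fib(n):
          result = ((1, 0), (0, 1))           # 2x2 identity
          base = ((1, 1), (1, 0))
          while n > 0:                        # square-and-multiply on matrices
              if n & 1:
                  result = mat_mul(result, base)
              base = mat_mul(base, base)
              n >>= 1
          return result[0][1]                 # [[F(n+1), F(n)], [F(n), F(n-1)]]

      assert [fib(k) for k in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]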

  • @GuRuGeorge03
    @GuRuGeorge03 3 ปีที่แล้ว +1

    This is the kind of stuff that makes people think that programming requires math and that programming is really hard. As a fulltime web dev myself this stuff baffles me and even after watching it I don't get it :D

  • @himothaniel
    @himothaniel 2 ปีที่แล้ว

    I didn't completely follow the process, but basically they have a logarithm table in the computer but for exponentiation?

  • @eliassaf9192
    @eliassaf9192 3 ปีที่แล้ว +1

    Awesome! Subbed. Just one question though... If a good compiler already takes care of this itself, why do we have to do it?

    • @hb-robo
      @hb-robo 3 ปีที่แล้ว +1

      I don't want to speak for James, but I believe he mentions that this was a ProjectEuler question so it's mostly just for practice and deepening your understanding of how computers actually derive complex mathematical equations. if you aren't writing a compiler for a language this probably isn't something that will come up in your career.

    • @mCoding
      @mCoding  3 ปีที่แล้ว +3

      Besides the fact that deepening your understanding is always a good thing, compilers do this for integers, but none will do it for matrices or many other objects that you exponentiate. This may very well be an optimization you do to speed up one of your own simulations.

    • @eliassaf9192
      @eliassaf9192 3 ปีที่แล้ว

      @@mCoding Thank you for the answer!

  • @eggmeister6641
    @eggmeister6641 3 ปีที่แล้ว

    I've been professionally programming in Python for 8 years, but I was today years old when I learned that you can do list[-n] to sample the rear of a list.

    • @CuulX
      @CuulX 3 ปีที่แล้ว

      You can also do [::-1] to get a reversed list. (Might require numpy arrays, not sure)

  • @muha0644
    @muha0644 3 ปีที่แล้ว +1

    What about using logs?
    For example, log(10^15) would be just 15*log(10). You only have one multiplication there. Though you would need a good way to calculate logs.... Otherwise it would be the same as the first example.

    • @bertchristiaens6355
      @bertchristiaens6355 3 ปีที่แล้ว +1

      and then you still need to use exp(15*log(10)) to get the final result, which again is an exponentiation.

    • @spacelem
      @spacelem 2 ปีที่แล้ว

      @John House no, but if you diagonalise the matrix first, you can convert it into a form where you can just log-multiply-exp each element along the diagonal, then undo the diagonalisation to get the final matrix.

  • @tristunalekzander5608
    @tristunalekzander5608 3 ปีที่แล้ว +1

    Compiler writers are geniuses.

  • @1Poiuytgfdsa1
    @1Poiuytgfdsa1 3 ปีที่แล้ว +1

    Youre gonna blow up. Cant believe u only have 25k subs..

    • @mCoding
      @mCoding  3 ปีที่แล้ว +1

      Maybe one day!

    • @1Poiuytgfdsa1
      @1Poiuytgfdsa1 3 ปีที่แล้ว

      @@mCoding you have a very attractive and digestible (and ADDICTING) format to your videos. I also learn a lot. My CS degree aint got shit on the videos you release lol.
      Keep it up man. Thanks for the hard work!!

  • @Sonyim414
    @Sonyim414 3 ปีที่แล้ว

    9:00 "Imagine for a moment that x is not an integer but a one million by one million matrix"
    Me, knowing about the Jordan normal form: "Okay, so almost exactly like the integer exponent then"
    Jokes aside, this was a very educational video! Thank you!

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      This is a difficult topic, glad you enjoyed it!

  •  3 ปีที่แล้ว +1

    Hey, love your work in LCD Soundsystem

    • @mCoding
      @mCoding  3 ปีที่แล้ว +1

      Hey, thanks! ....

    • @hb-robo
      @hb-robo 3 ปีที่แล้ว

      lol I had the same thought.

  • @mikono2022
    @mikono2022 3 ปีที่แล้ว +1

    shhhh, He's staring into your soul

  • @msreeram8825
    @msreeram8825 3 ปีที่แล้ว +1

    Tapped on the video
    Thought what it could be
    What it is. None of them are the same anymore.

  • @hamzasayyid8152
    @hamzasayyid8152 3 ปีที่แล้ว +1

    Amazing. Can you make a video on how to time programs? Some people say use perf_counter or default_timer and it's all very confusing to me

    • @DrCodie
      @DrCodie 3 ปีที่แล้ว

      If you want to know how to use time, here is a simple video - th-cam.com/video/By3bfIpTZuo/w-d-xo.html

  • @m4rt_
    @m4rt_ ปีที่แล้ว

    why not this?
    def pow(x, y):
        acc = 1
        for i in range(y):
            acc *= x
        return acc

  • @RektYuan
    @RektYuan 3 ปีที่แล้ว

    This is your best video yet

  • @SeresHotes25
    @SeresHotes25 3 ปีที่แล้ว +1

    You can't get any faster than log2(n). And the binary algorithm already takes at most 2*log2(n), which means you can just wait up to 2x longer rather than calculate the most efficient exponentiation. It's important, because your algorithm has e^n complexity. And that means for bigger numbers it's faster just to calculate with the binary algorithm

    • @mCoding
      @mCoding  3 ปีที่แล้ว +3

      I think that you are mixing up the runtime of the exponentiation (which is log2(n) for the optimal chain as well as for the binary method) with the runtime it takes to compute the order in which to do the multiplications (which is > exp(n)). You are not meant to calculate the numbers for efficient exponentiation at the runtime of a program that needs fast exponentiation, this is an exponential time algorithm as you mentioned. You only need to calculate the optimal chain for each power ONCE for all time, it never changes, regardless of the input numbers or even datatypes. If we were writing a program where we knew that raising things to the 15th power was extremely common, we could calculate that (1 2 3 6 12 15) is optimal for 15 once, before runtime, and then implement the pow_15_minimal function and replace it for any places we had ** 15. When we run our program we will then see the performance gain of pow_15_minimal over ** 15, but we will not see the cost of computing (1 2 3 6 12 15) as this was precomputed. In general, you can download a table of these optimal chains so you don't ever have to compute them yourself, they are just available for you to use if you wish. In Python this is not something you would probably end up doing unless you were using big matrices.
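
      For reference, a sketch of what such a precomputed pow_15_minimal could look like for the chain (1 2 3 6 12 15); the exact function from the video may differ:

      def pow_15_minimal(x):
          x2 = x * x        # 1 -> 2
          x3 = x2 * x       # 2 -> 3
          x6 = x3 * x3      # 3 -> 6
          x12 = x6 * x6     # 6 -> 12
          return x12 * x3   # 12 + 3 -> 15, five multiplications total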

  • @idof1994
    @idof1994 3 ปีที่แล้ว +1

    Why not:
    def pow_15(x):
        x2 = x * x
        x4 = x2 * x2
        x8 = x4 * x4
        x16 = x8 * x8
        return x16 / x
    And then you could do a smart while loop that knows to divide when needed

    • @mCoding
      @mCoding  3 ปีที่แล้ว +4

      Assuming you mean x16 // x of course (int division), this is certainly an option. It's still 5 total math operations, so on par with the minimal multiplication method. Typically division is slower than multiplication at a hardware level so this is almost never done in practice, but there may yet be a niche use case for it in the future!
      Here are my timing tests:
      timeit.timeit("pow_15_minimal(100000)", globals=globals())
      Out[7]: 0.6788639999999901
      timeit.timeit("pow_15_minimal(100000)", globals=globals())
      Out[8]: 0.6870734000000027
      timeit.timeit("pow_15(100000)", globals=globals())
      Out[9]: 0.8700359000000049
      timeit.timeit("pow_15(100000)", globals=globals())
      Out[10]: 0.809845100000004
      timeit.timeit("100000 ** 15", globals=globals())
      Out[11]: 0.6612468999999948
      timeit.timeit("100000 ** 15", globals=globals())
      Out[12]: 0.7574889000000127

    • @idof1994
      @idof1994 3 ปีที่แล้ว

      @@mCoding Yeah that makes sense.
      Thanks!

  • @phuctran25277
    @phuctran25277 3 ปีที่แล้ว

    I checked your video in Python by creating a function that optimized the multiplication. The idea was to generate the optimized function once and use it for many calculations to save time. However, I quickly found out that Python's power operator performs better than my own generated function, so it was a huge disappointment for me. But it was fun writing the code!

  • @MasterQuestMaster
    @MasterQuestMaster 3 ปีที่แล้ว

    I have never ventured into the code optimization part of programming, but this was interesting.

  • @InZiDes
    @InZiDes 3 ปีที่แล้ว

    The problem is more complicated because the processor can multiply independent things in parallel. Knowing that, is there a parallel multiplication scheme more effective than these chains?

  • @scottmiller2591
    @scottmiller2591 3 ปีที่แล้ว

    I get an error using the code you have here typed in manually when I try to execute a cell containing the class EfficientExponentiation (and nothing else). I get the same error if I use the code from your github in a cell. I'm using Python 3.7.10 in Jupyter notebook. The output from your code is
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    in
          1 from typing import Optional
          2
    ----> 3 class EfficientExponentiation:
          4
          5     def __init__(self, step_limit: Optional[int] = None):

    in EfficientExponentiation()
         30         self._compute_next_step()
         31
    ---> 32     def minimal_multiplication_chain(self, n: int) -> tuple[int, ...]:
         33         self._step_until_n_computed(n)
         34         return self._min_mult_chain[n]

    TypeError: 'type' object is not subscriptable
    ---------------------------------------------------------------------------
    If I change
    def minimal_multiplication_chain(self, n: int) -> tuple[int, ...]:
    to
    def minimal_multiplication_chain(self, n: int) -> None:
    it runs and returns the same value as if I had typed
    x = EfficientExponentiation(step_limit=50)
    x._step_until_n_computed(15)
    x._min_mult_chain
    It acts like the type hint module is broken. Note that tuple works fine on the earlier statements where tuple is not the output type, but an input type.
    There is a long rant here that seems to be the same issue: stackoverflow.com/questions/62871524/is-pep-585-unusable-at-runtime-under-python-3-7-and-3-8
    By adding at the top
    from typing import *
    and then changing
    def minimal_multiplication_chain(self, n: int) -> tuple[int, ...]
    to
    def minimal_multiplication_chain(self, n: int) -> Tuple[int, ...]
    that is, capitalizing Tuple, it works right.
    It's certainly not your fault that the Python typing format you used is broken for 3.7 and 3.8, but this was annoying to find and fix, and probably discouraging for noobs trying to run your code. The form of typing you are using originates in PEP 585, which only says Python 3.9. I don't know if it's in future or not, nor how to drag it into 3.7/3.8 if even if it is.
    Oh, PS: I don't want to discourage _you_. You're doing a great job teaching in general - good examples, clean code, etc. Keep up the good work. I learned a lot about typing tracking this down; unfortunately, I'm stuck at 3.7 until Anaconda catches up.

  • @pianochannel100
    @pianochannel100 3 ปีที่แล้ว

    There's some seriously interesting number theory research to be done here. I'm sure there are some mathematical dark arts yet undiscovered that will unlock this problem.

  • @taktarak3869
    @taktarak3869 3 ปีที่แล้ว +1

    what about using exp(ln()) for exponentiation? for instance, you want to find the value of x^y then you can use exp(ln(x)*y)
    note: i don't use python, so i don't know if it has inbult exp() and ln() function.

  • @ritiksahu1844
    @ritiksahu1844 3 ปีที่แล้ว

    what will be the output @mcoding
    def foo(a):
        a.extend([4, 5, 6])

    a = [1, 2, 3]
    foo(a)
    print(a)
    and why is that

    • @ritiksahu1844
      @ritiksahu1844 3 ปีที่แล้ว

      other example:
      def foo(a):
          a['foo'] = "bar"
          return a

      a = {}
      foo(a)
      print(a)

    • @ritiksahu1844
      @ritiksahu1844 3 ปีที่แล้ว

      def foo(a):
          a = 5
          return a

      a = 0
      foo(a)
      print(a)

  • @kidsforcode3052
    @kidsforcode3052 3 ปีที่แล้ว

    NIIICE! Kudos for attacking a difficult problem with Python!

    • @mCoding
      @mCoding  3 ปีที่แล้ว +1

      Thanks!

  • @sodiboo
    @sodiboo 3 ปีที่แล้ว

    4:52 w-what are you doing, step-variable?

  • @ineverchangemyplayericon3016
    @ineverchangemyplayericon3016 3 ปีที่แล้ว

    You said the best time complexity was exponential; this is false unless n^(1/2) counts as exponential, which I think it doesn't. I know of a log N solution but it's not the exact least multiplications, and it's very easy to program. I think I can get the answer in the exact least steps in sqrt(n) + sqrt(n/2), but it's a little complicated and I'm too lazy to program and verify if it is true.

    • @reuben2011
      @reuben2011 3 ปีที่แล้ว

      I think the claim is that current algorithms for finding the minimum length addition chain for a number k takes exponential time. See the Wikipedia article on Addition Chain, in particular, the "Methods for computing addition chains" section.

  • @lukerichards22
    @lukerichards22 2 ปีที่แล้ว

    6:40 "if a key already exists it does nothing"... not actually true, it returns the value for that key.

  • @chonchjohnch
    @chonchjohnch 3 ปีที่แล้ว

    You should look into Lucas numbers and partitioning

  • @fabianandrespachecozelada4208
    @fabianandrespachecozelada4208 3 ปีที่แล้ว

    hi, I have a doubt: what does [x, ...] mean? min 4:29

    • @mCoding
      @mCoding  3 ปีที่แล้ว +2

      The notation tuple[int, ...] is the way to type hint that this variable will be a tuple of variable length, but that it will contain only integers. For tuples of fixed length, say 3, I would just write tuple[int, int, int], but in this problem we consider tuples of longer and longer length, so this is why I use the variable length notation. In many situations, when the length is not known you could just use list[int] instead, but here we are storing the elements in a set _cached_chains, so tuples must be used instead of lists because lists are not hashable. Putting it all together, dict[int, tuple[int, ...]] is a dictionary whose keys are ints and whose values are variable length tuples of ints.

    • @fabianandrespachecozelada4208
      @fabianandrespachecozelada4208 3 ปีที่แล้ว

      @@mCoding ooh thanks :)
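
      A tiny illustration of those hints in use (Python 3.9+ builtin generics; the function and data here are made up for the example):

      def chain_lengths(chains: dict[int, tuple[int, ...]]) -> list[int]:
          # keys are exponents, values are variable-length tuples of ints (the chains)
          return [len(chain) - 1 for chain in chains.values()]

      print(chain_lengths({15: (1, 2, 3, 6, 12, 15)}))   # [5] multiplications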

  • @shegua
    @shegua 3 ปีที่แล้ว

    Just wondering, is it wrong to do x15 by getting x5 and multiplying it by x3? Getting to x15 in only 4 multiplications

    • @catgamer4263
      @catgamer4263 3 ปีที่แล้ว +4

      Multiplying 2 powers of x will add their exponents.
      x5 * x3 = x8, because it's basically x*x*x*x*x * x*x*x.

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      Thanks CatGamer for explaining!

  • @jakistam1000
    @jakistam1000 3 ปีที่แล้ว

    Copied the code directly and ran it:
    Traceback (most recent call last):
      File "exponentiation_efficient.py", line 9, in
        class EfficientExponentiation:
      File "exponentiation_efficient.py", line 38, in EfficientExponentiation
        def minimal_multiplication_chain(self, n: int) -> tuple[int, ...]:
    TypeError: 'type' object is not subscriptable
    Using Python 3.6.9

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      The code requires 3.9 because I used subscripting of builtin types.

    • @jakistam1000
      @jakistam1000 3 ปีที่แล้ว

      @@mCoding Thanks! Works now.
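
      For anyone stuck below 3.9, two possible workarounds (assuming Python 3.7+): defer evaluation of annotations, or fall back to typing.Tuple as another comment here also found.

      from __future__ import annotations    # PEP 563: annotations are not evaluated at runtime,
                                            # so tuple[int, ...] parses fine on 3.7/3.8
      from typing import Tuple              # ...or swap the builtin hints for typing.Tuple

      def minimal_multiplication_chain(n: int) -> Tuple[int, ...]:
          ...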

  • @AutomatedChaos
    @AutomatedChaos 3 ปีที่แล้ว

    It's cool that for numbers that are in the Fibonacci sequence, the chain is equal to the Fibonacci sequence (see 8:20 for number 13).

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      Is this a general pattern? Very interesting!

    • @reuben2011
      @reuben2011 3 ปีที่แล้ว

      @@mCoding This doesn't seem to hold all the time (e.g. for 8, the best chain is (1,2,4,8) whereas using the fib sequence would yield a chain that's one longer: (1,2,3,5,8)). I suspect that's true for many Fibonacci numbers though, since you are always adding the largest two numbers in the chain and the only way to possibly get to your target number faster is to add the last number of the chain to itself (which is what is happening in the case of 8).

  • @montheclyde
    @montheclyde 3 ปีที่แล้ว

    Could you get away with 4 multiplications for the X**15 example?
    X*X=X**2,
    X**2 * X = X**3,
    X**3 *X**2= X**5,
    X**3*X**5 = X**15

    • @mCoding
      @mCoding  3 ปีที่แล้ว +1

      It makes me happy to see that viewers are trying these things for themselves! Nice try but small mistake! X**3 * x**5 == x**8, not x**15!

    • @montheclyde
      @montheclyde 3 ปีที่แล้ว

      @@mCoding I forgot how this works, cheers

  • @PeterZaitcev
    @PeterZaitcev ปีที่แล้ว

    Dear James.
    I really appreciate your approach of asking to slap the Like button odd number of times, but this algorithm has one major issue - if the like is already present, doing so will remove it. Please, change the algorithm to take into account that behaviour.

    • @mCoding
      @mCoding  ปีที่แล้ว

      Hi, could you provide a traceback that led to this issue? I'll create a ticket right away and we'll get this fixed by the next release.

  • @cottawalla
    @cottawalla 3 ปีที่แล้ว

    If I was concerned about this level of optimisation in an interpreted scripting language (as python is) I'd use a compiled language instead. Scripting is great for managing a process and for human-speed interfaces but not for the compute intensive stages.

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      I think you are missing the point here. Python is only used to compute the optimal addition chains, which only have to be computed once and never again. Once they are computed, you can use them in whatever language you want. You could use them in your implementation of a C compiler to produce faster C code.

    • @cottawalla
      @cottawalla 3 ปีที่แล้ว

      @@mCoding I watch your videos because you actually do what I was trained to do, which is analyse and optimise every line of code of every algorithm. I missed the point that this was a one-off precomputing of a lookup table.
      But consider that the table will be stored in memory and it then becomes a question of whether the speed of memory access, even for compiled languages, can compete with the compute speed of a dedicated math unit on chip.
      Dedicated, tuned and compiled math libraries are essential for all languages, especially scripting languages like python.
      My reaction to your video came from the perspective of seeing, for example, professional java or python or PL/SQL developers implement every aspect of a process within the language they specialise in, often at great cost to the process and the enterprise when they should be selecting the tools that are appropriate. Their reasoning will often be they are just prototyping but the temporary always becomes the permanent as no-one wants to redo work from scratch. They want to move on to more interesting work.
      I've worked in environments riddled with processes built this way and have seen them, not just clog servers but clog entire IT departments with constant resource issues with efforts to find workarounds. Because fixing the root cause at that stage means major reworking of projects that are now in production.

  • @Caedin8
    @Caedin8 3 ปีที่แล้ว

    I love this channel

  • @gideonmaxmerling204
    @gideonmaxmerling204 3 ปีที่แล้ว

    It's significantly simpler to use the binary form with repeated squaring and division by 2, also, I don't see a case where it would be slower.

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      See the other comment about this, division is significantly slower than multiplication on most hardware, and speed testing confirms that pow_15_minimal is faster than the builtin ** 15, which is faster than repeated squaring then division.

    • @gideonmaxmerling204
      @gideonmaxmerling204 3 ปีที่แล้ว

      @@mCoding well, since it's a divide by 2, you can just shift right.

    • @dielaughing73
      @dielaughing73 3 ปีที่แล้ว

      @@gideonmaxmerling204 nuh-uh, you're dividing by the base

    • @gideonmaxmerling204
      @gideonmaxmerling204 3 ปีที่แล้ว

      @@dielaughing73 what do you mean?
      dividing by two and shifting right are equivalent.

  • @scientisthere
    @scientisthere 3 ปีที่แล้ว +3

    I was hoping for some "evil bit hacks" 😂

  • @linear9185
    @linear9185 3 ปีที่แล้ว

    loving these videos keep em coming :D

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      Glad you like them!

  • @nicolasturek7563
    @nicolasturek7563 3 ปีที่แล้ว

    Another solution:
    x2 = x * x
    x3 = x2 * x
    x5 = x3 * x2
    x10 = x5 * x5
    x15 = x10 * x5
    5 multiplications... The problem is that Python is not assembly and this function you made will allocate a lot of memory, which takes time; also it's only the power of 15. You'd need a lot of memory just for a power function with any variable. This program is efficient only when you do it in assembly and if you only need the power of 15.

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      It will allocate more memory, but I definitely would not call it "a lot" of memory. If you are worried about all the temporaries, that was just for ease of the viewer, you can compute it using only a single temporary integer as follows:
      def pow_15_minimal_fewest_temp_vars(x):
          # (1 2 3 6 12 15)
          y = x * x  # x2
          x *= y     # x3
          y = x * x  # x6
          y *= y     # x12
          x *= y     # x15
          return x
      Now the extra memory is at most the memory of a single Python int. (Note the binary assembly method also requires a temp int.)

  • @AnyFactor
    @AnyFactor 3 ปีที่แล้ว

    I always wondered who are those people who have those weird hacky solutions in codewars. Now, I am looking at one of them.

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      If it works, it works!

  • @JohnZakaria
    @JohnZakaria 3 ปีที่แล้ว

    Any chance of making AVX2 Videos :D ?

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      While I certainly talk about vectorized ops a lot, avx2 specifically is a bit too esoteric for my audience right now at least. Maybe it will come up in tensorflow video or something like that.

    • @JohnZakaria
      @JohnZakaria 3 ปีที่แล้ว

      Can you recommend any good Avx2 resources?

  • @colonelgraff9198
    @colonelgraff9198 3 ปีที่แล้ว

    I can do it in one multiplication:
    1) Take the log of your number
    2) Multiply that log times the number you are exponentiating by
    3) Take the inverse log of the final number
    Example:
    X^15 = inverse log(15 * log(X))

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      But how many multiplications does computing the log of your number take?

    • @colonelgraff9198
      @colonelgraff9198 3 ปีที่แล้ว

      @@mCoding it would require a lookup table, or possibly some conversion to a log(2) using binary expansion. I haven’t gone that far but I’ll admit the gains could be lost in the implementation.
      I also have a hunch that exponentiation libraries might use something like this trick under the hood… but this method is how humans would do this before computers, using lookups into log tables

  • @boringpsychology9737
    @boringpsychology9737 3 ปีที่แล้ว

    great video! many thanks

  • @kaziaburousan166
    @kaziaburousan166 3 ปีที่แล้ว

    These videos are blowing my mind 🔥🔥🔥

  • @livb4139
    @livb4139 3 ปีที่แล้ว

    make a data structures and algorithms series pls

  • @aadityabhetuwal5990
    @aadityabhetuwal5990 3 ปีที่แล้ว +1

    I thought doing Fibonacci using memoization was a great feat until then.

  • @erikgiesenloo1871
    @erikgiesenloo1871 3 ปีที่แล้ว

    This seems like an instance of Dynamic Programming

    • @NeillClift
      @NeillClift 3 ปีที่แล้ว

      Generally this is disputed by noting that an addition chain can be composed of chains for smaller numbers that are not optimal. The way to correct this would be to have subproblems consisting of sets of numbers that have to all be computed. Problem is the state to memoize then explodes.

  • @jakemuff9407
    @jakemuff9407 3 ปีที่แล้ว

    Great video! But would love to learn/hear more about the little bit at the end with matrix exponentials! Also, what about real number exponentiation and the different results python gives (sometimes complex etc) vs C++ giving different results entirely. For example -0.25 raised to the power of 0.125. Python gives (0.7768869870150186+0.3217971264527913j), whereas C++ gives -0.00390625 ((-0.25)*(0.125)*(0.125);) or nan (pow(-0.25, 0.125)). Clearly the compilers are doing something different here and why is real number exponentiation so different from integer exponentiation?

    • @chaddaifouche536
      @chaddaifouche536 3 ปีที่แล้ว +2

      First (-0.25)*(0.125)*(0.125) is NOT (-0.25) to the power of 0.125, it has nothing to do with it.
      Generally, raising to a non-integer power is related to the exponential functions, but here your power can also be expressed as a ratio (0.125 == 1/8) thus you can interpret it as taking the 8th root (taking the 8th root of x is finding a number y such that y^8 == x). Then the difference between Python answer ((-0.25)**(0.125) == 0.7…+0.3…j) and the C/C++ one (pow(-0.25, 0.125) -> NaN that is Not a Number) lies in where you search the 8th root :
      - the C/C++ function searches for a "real number" which raised to the power of 8 gives (-0.25), but there are no such reals since whether x is positive or negative, x^8 will be positive (think of the sign rules). This is why pow(-0.25, 0.125) returns NaN, Not a Number.
      - Python core modules implement "complex numbers", that is the set of number obtained when you add the number "j" (more often noted "i" in Maths) such that "j² = -1" to the set of real numbers and allows all normal operations with this new number so that all complex numbers can be noted "a+b*j" where a and b are reals. This new set of numbers is very very useful in many contexts but here we're just interested in the fact that in complex numbers, all negative numbers have an 8th root (in fact they have 8 of them, the one you're getting here is just one of them).
      It is of course possible to implement complex numbers in C/C++ and there are libraries that do that and offer a power function that would gives the same result as Python. This is just a result of the default choices being different between Python and C/C++.
      For matrices, there is nothing really special about exponentiation of matrices here compared to exponentiation of floats or integer. If you want to better understand matrices in general, I'll recommend 3b1b (3 blue 1 brown) series on linear algebra as an excellent introduction, nicely animated.
      Note that I speak of "real numbers" and "complex numbers" with quotes because neither Python nor C/C++ can represent those sets completely (it is just impossible on a computer) and thus they use a standardized approximation that is often called floats (floating point numbers). This leads to some unpleasant consequences if you're not a bit cautious in your manipulations.

    • @alxjones
      @alxjones 3 ปีที่แล้ว +3

      Right away, to answer your last question, the difference is that integer exponents are just repeated multiplication (with a reciprocal for negative powers), while non-integer exponents are not. They are actually defined in terms of the exp function, which is in turn defined by a power series.
      Negative real numbers (or even integers) tend to not have real values when raised to real (non-integer) powers. The reason for this is the same reason that there is no real number which squares to -1.
      To solve the latter problem, we introduce the constant i (or j if you like) such that (-1)^(1/2) = i, and call all numbers a + ib the complex numbers. It turns out that this solves the former problem as well; all complex numbers have complex values when raised to complex powers.
      Another problem that non-integer powers have is that there is often more than one of them. For example, 1^2 = (-1)^2 = 1, so we might say 1^(1/2) = 1 or -1, or maybe we say 1^(1/2) = {-1, 1} as a set containing all possibilities.
      When we decide to implement the behavior of non-integer exponentiation in code, we have to decide how to approach these problems. (1) Do we return complex values when they exist, or just say there is no real value? (2) When there is more than one value, do we return all of them or just one, and how do we pick which one?
      Both python and C++ (and basically all standard implementations) choose the same answer for (2), which is that they return a single value called the principal root. There are actually 8 values of (-1/4)^(1/8), but python returns the principal root, which is just a canonical value decided by mathematicians to solve this multi-valued function problem.
      The discrepancy you see is that python and C++ have chosen different answers for (1); python chose to return the complex (principal) value, while C++ decided to fail out (nan). Neither of these is better than the other, it just depends on your use-case. If you are working in a domain where complex numbers don't make sense, then getting a complex number back is wholly unhelpful. On the other hand, if complex numbers are important to your software, you don't want them thrown out automatically.
      And one last thing: (-0.25)*(0.125)*(0.125) is an entirely different expression than the one being considered here, so it's absolutely no surprise that this gives an unrelated value.
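
      For a quick check of the principal-value point above, Python's cmath gives the same branch that ** picks for a negative base (illustrative only):

      import cmath

      print((-0.25) ** 0.125)                      # complex result, matching the value quoted above
      print(cmath.exp(0.125 * cmath.log(-0.25)))   # same principal branch via exp and log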

    • @user-hk3ej4hk7m
      @user-hk3ej4hk7m 3 ปีที่แล้ว

      It might have to do with the precision used in each case; exponential functions with a negative base are not continuous for exponents less than 1. Solutions "pop in and out" from the real to the complex plane from one infinitesimal to another.

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      Very nice and in-depth response, thanks!

    • @mx_mazunki
      @mx_mazunki 3 ปีที่แล้ว

      Take a look into Robotic Kinematics. Matrix multiplication is used a lot in 3D animations and similar stuff.

  • @therealplnts8533
    @therealplnts8533 3 ปีที่แล้ว

    Next time im in an interview im turning the tables on em

    • @mCoding
      @mCoding  3 ปีที่แล้ว

      Woah there, be nice to your interviewers!

  • @mathematicalninja2756
    @mathematicalninja2756 3 ปีที่แล้ว +1

    Just like dynamic programming

  • @ritiksahu1844
    @ritiksahu1844 3 ปีที่แล้ว

    Hi, in a previous video I asked you about not using global vars and instead making a main func. However, will it make a difference if I use a global var just to read its value (like a constant), knowing its value will never be changed? Or should I just avoid global vars at all? :)

  • @rokkuform
    @rokkuform 3 ปีที่แล้ว

    can't find this code on your github

    • @mCoding
      @mCoding  3 ปีที่แล้ว +2

      Fixed!

  • @slava6105
    @slava6105 3 ปีที่แล้ว +1

    Cool! but don't apply it to instances with overridden operators

    • @mCoding
      @mCoding  3 ปีที่แล้ว +3

      It's fine as long as your * is associative!