Awesome style of explanation! So peaceful to listen and learn
Glad to hear that you enjoyed the video and thanks for the compliment!
keep 'em coming Andre!
Thank you so much for this video. By far the best explanation I’ve seen of this problem and it’s solutions.
I have this problem in code wars, 2 weeks trying to figure it out and now I understand it, thank you
Glad to hear it!
Well explained and proper implementation. Thank you for this video. Hope to see some more optimized problems in the future.
Glad it was helpful!
great explanation and presentation.
thank you for making such videos
Thanks! I am glad you enjoyed it.
Learnt something new, Thank you. Subbed to your channel
Awesome, thank you!
Great explanation, everything was very clear and interesting, thank you!
Thanks for the good words!
Excellent explaination sir
Thank you! I hope you found it interesting, especially Tomasz Idziaszek's solution, which is just so very clever.
Awesome, keep posting your videos !!
Thanks! I have a couple of new topics in development but I am always looking for new video subjects. Please let me know if you have any suggestions!
@@stablesort Algorithms on Graphs and problem solving techniques
@@DHRUVNARAYANSINGH Ok, thanks, I do appreciate it.
great video and method of presentation, subbed!
Cheers and thanks for the compliment!
4:58 "Think of it this way, the worst that could happen is that you leave the 2nd largest coin available to be picked up by your opponent..."
I'm not convinced by this rationalization that this move is optimal. How can you turn that rationalization into a proof that such a move is always optimal?
Good question. I tried to build some intuition behind the approach, but if you want the full proof, let me direct you to the paper by Tomasz Idziaszek: www.mimuw.edu.pl/~idziaszek/termity/termity.pdf
Sir, your video is very helpful! Thank you very much, but it would be better to add subtitles for us non-native English speakers.
Thanks for the suggestion. I'll look into it.
I added English subtitles and I may be able to also add Russian and Armenian versions sometime in the future. My Spanish is rather rusty and that's pretty much where my language skills end. Thanks again for the suggestion. I hope this helps.
How can I print the sequence in which the coins are picked, i.e., each time a coin is picked, output the index of that coin in the original array?
Great one🔥
Glad you liked it!
How is the time complexity for the recursive solution n^2? I think it would be 2^n. Please explain if anyone understood that part.
You can cache the solutions as the algorithm runs. This technique is called memoization. As explained in the video, the number of entries in the cache is at most n squared. Looking up a value in the cache is a constant time operation (hashmap). Thus, the algorithm runs in n squared time.
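To make the memoization concrete, here is a small sketch of the memoized recursion (my own illustration, not code from the video; the function names are mine):

```python
from functools import lru_cache

def max_coins(coins):
    """Maximum total the first player can guarantee from a row of coins."""
    coins = tuple(coins)  # make it hashable so subproblem results can be cached

    @lru_cache(maxsize=None)  # memoization: each (i, j) subproblem is solved once
    def best(i, j):
        if i > j:
            return 0
        # After we take an end coin, the opponent moves so as to leave us
        # the worse of the two remaining subgames (hence the min).
        take_left = coins[i] + min(best(i + 2, j), best(i + 1, j - 1))
        take_right = coins[j] + min(best(i + 1, j - 1), best(i, j - 2))
        return max(take_left, take_right)

    return best(0, len(coins) - 1)

print(max_coins([5, 3, 7, 10]))              # 15
print(max_coins([1, 1, 9, 3, 2, 1, 7, 8, 4]))  # 16
```

There are O(n^2) distinct (i, j) pairs, each computed once in constant time (plus constant-time cache lookups), so the memoized version runs in O(n^2) instead of the O(2^n) of the plain recursion.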
Why don't we need Tomasz's "fruitless move"? Anyway, thanks for giving an illustration using a stack rather than a graph like Tomasz did; I was somewhat confused.
Glad to hear that it helped =)
Can someone explain why the first solution, summing the even-indexed coins and the odd-indexed coins, is not optimal?
The first solution does not work in all possible cases. Namely, it does not work if the number of coins is odd.
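To illustrate (my own sketch, not code from the video): with an even number of coins, the first player can always keep exposing coins of one parity class to the opponent, so they can collect all even-indexed or all odd-indexed coins, whichever sums higher. With an odd count, that guarantee disappears:

```python
def parity_sum_strategy(coins):
    # Amount claimed by the even/odd strategy: take whichever of the
    # even-indexed or odd-indexed coins sums higher. This guarantee
    # only holds when len(coins) is even.
    return max(sum(coins[0::2]), sum(coins[1::2]))

# Even count: the guarantee holds (the first player can force at least 13).
print(parity_sum_strategy([5, 3, 7, 10]))  # 13

# Odd count: the claimed 9 is impossible. Whichever end the first player
# takes (a 1), the opponent grabs the 9, so the first player gets only 2.
print(parity_sum_strategy([1, 9, 1]))  # 9, but the true optimum is 2
```

Note also that even with an even count this strategy is only a lower bound, not necessarily the maximum: for [5,3,7,10] it guarantees 13, while optimal play yields 15.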
I see !!!
It's an optimal gameplay problem.
=) I hope you enjoyed solving it!
@@stablesort 😁
Awesome explanation sir. I loved how you broke it down and the pace you took.
However, you don't seem to give any details on how to obtain the final result after compressing the array using Tomasz Idziaszek's algorithm.
In the example you gave, the coins were arranged as [1,1,9,3,2,1,7,8,4] and the resulting compressed array was [1,-6,3]. How exactly do we get the final optimal solution from the compressed array?
By solving it using the recursive algorithm, I know the optimal solution for [1,1,9,3,2,1,7,8,4] is 16, but I can't tell how to get it from [1,-6,3].
Would really appreciate a response. Thank you
Thanks for the compliment. To answer your question, as you pointed out, compressing the array tells you only the next optimal move. However, the key here is that the algorithm does not make any assumptions about how the opponent is going to play next. So, for example, the opponent may make a bad (suboptimal from her perspective) move, and then rerunning the algorithm for the following move may yield a different result. The nice thing is that re-running the algorithm for the next move takes linear time.
The recursive solution has the same weakness. It calculates the "final optimal solution", as you put it, in one pass, but it assumes that the opponent is going to play optimally. But if the opponent makes an unexpected (suboptimal) move, well, then the whole solution would need to be thrown out =) In order to make the next optimal move, you'd need to rerun the recursive algorithm again, which takes n-squared time.
Of course, if you want to get really sophisticated, you could cache previously found solutions, so that instead of n-squared running time, the recursive solution would perform better. But still, it's hard to beat Tomasz Idziaszek's linear time algorithm.
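For readers following along, here is a sketch of the merging step as I understand it from Idziaszek's paper (my own code, ignoring the boundary "fruitless move" cases): repeatedly replace any coin b that is greater than or equal to both its neighbors a and c with the single value a - b + c. The survivors form a valley-shaped sequence, and the optimal next pick is the end with the larger value.

```python
def compress(coins):
    # Stack-based merge: whenever the top three values a, b, c satisfy
    # b >= a and b >= c, replace the triple with the single value a - b + c.
    stack = []
    for x in coins:
        stack.append(x)
        while len(stack) >= 3 and stack[-3] <= stack[-2] >= stack[-1]:
            c, b, a = stack.pop(), stack.pop(), stack.pop()
            stack.append(a - b + c)
    return stack

print(compress([1, 1, 9, 3, 2, 1, 7, 8, 4]))  # [1, -6, 3]
# The larger end is 3 (the right side), so the optimal first move is to
# take the rightmost coin (the 4), which agrees with the recursive
# solution's optimal total of 16 for this row.
```

Each element is pushed once and can be popped only as part of a merge that shrinks the stack, so the whole pass runs in linear time.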
@@stablesort Oh I get it now. The compressed array guides you on what side to pick from. Thanks a lot :-)
@@johnbabatola23 Hey did you implement this idea.I am still confused as how this compressed array will be used to find the answer.
@@preetichand8053 What you need to understand is that the algorithm does not give you the final answer. It only tells you which direction to pick your next coin from. So basically you pick from the left or the right, depending on whether the compressed array has the larger value at its left or right end.
The recursive algorithm, on the other hand, gives you a final answer, BUT it makes the assumption that your opponent is also picking optimally.
If you want to get the final answer from Tomasz's algorithm, it should act like a helper function that directs you where to pick from next (right or left). Apply the algorithm at each turn (yours and the opponent's) and make your pick. (However, this would also assume that your opponent is picking optimally.)
@@johnbabatola23 Okay, now it makes sense. Thanks, John, your answer really helped.
The first rule doesn't seem to apply to [5,3,7,10]...
I am sorry, I don't quite understand your train of thought. Could you elaborate please?
I guess it's called memoization
en.wikipedia.org/wiki/Memoization
Who else is coming here from interviewbit??