Your videos are wonderful! Please keep providing new content - your explanations are incredibly easy to understand~
especially the visualization! easy to understand
I appreciate the multiple examples in this video. Thanks.
I don't know why this channel doesn't have more subscribers!!!
For the sum recursion problem, I implemented it using .pop() instead of slicing.
def sum(arr):
    if len(arr) == 0:
        return 0
    return arr.pop() + sum(arr)
I initially thought your approach was abstract and confusing, but on further thought it illuminates how the recursion executes. Perhaps that's why we struggle with recursion in the first place: we look at the output rather than at the steps to get there.
Thank you so much, this is wonderful! But could you please organize the videos so that we can follow them in the right order? Thanks again for your great effort.
Hi Alvin, one question: isn't the space complexity O(n^2) too? Because a new subarray is created in each of the n recursive calls on the stack, so it's n*n memory overall.
No, it depends on whether the recursion traversal is "preorder", "inorder", "postorder", etc. The temporary variables are all cleared if the recursion occurs preorder, which it often is with arrays, meaning the big-O space complexity could be as low as linear; that arises as a limitation of a computer having only one Turing-esque pointer, but is not a factor in actual multicore computing, I think.
If we think about what Alvin wrote here: each stack frame holds O(n) space, so it should be O(n^2).
Great tutorial, however I'm having trouble understanding the space complexity used here. How can the space be O(n) in the sum algorithm from 09:30? We make n recursive calls, and each of them keeps a local variable 'rest' (which is roughly of size n) on the stack. By my understanding the space complexity should be O(n^2). Am I missing something?
True. Each recursive call gets a new array to work with, and its size decreases by 1 on each call, so the space used by all those arrays together is the sum n + (n-1) + ... + 1 = n(n+1)/2, which simplifies by big-O rules to O(n^2).
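That arithmetic can be checked with a small instrumented sketch (the helper and counter names here are made up for illustration, not from the video): it tallies how many elements get copied across all the slicing calls.

```python
def sum_slicing(nums, stats):
    # Each call receives (and keeps alive) its own copy of the list,
    # so we tally len(nums) elements of extra storage per call.
    stats["copied"] += len(nums)
    if not nums:
        return 0
    return nums[0] + sum_slicing(nums[1:], stats)

stats = {"copied": 0}
total = sum_slicing([1, 2, 3, 4], stats)
print(total, stats["copied"])  # 10 10 -- for n = 4, copies total 4 + 3 + 2 + 1 = n(n+1)/2
```

For n elements the tally is always n(n+1)/2, which is why the slicing version is quadratic in space.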
I love the way of explaining.
Really great videos!!! Thank you
all of your videos are amazing
wow your videos are super amazing! Tnx I really needed them
First like. Superb Video. Awesome explanation
Thanks for watching our videos!
Great video
Great work!
You can set a default value for *idx* and skip passing the 0 in the first call of the *_sum* function.
Linear Time Solution (note: this needs import time for the sleep that slows down the printout)

import time

def sum_array(nums):
    size = len(nums)
    if not size:
        return 0
    print(nums)  # show the shrinking list at each call
    num = nums.pop(size - 1)
    time.sleep(2)  # pause so each step is visible
    return num + sum_array(nums)
Divide into two halves... divide each half into two halves... the recursion depth is log2(N), much better than linear.
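A sketch of that divide-and-conquer idea, passing index bounds instead of copying halves (names are illustrative). One caveat: every element is still visited once, so the running time stays O(N); it's only the recursion depth that drops to log2(N):

```python
def sum_dc(nums, lo=0, hi=None):
    # Sum nums[lo:hi] by splitting the index range in half.
    if hi is None:
        hi = len(nums)
    if hi - lo == 0:
        return 0
    if hi - lo == 1:
        return nums[lo]
    mid = (lo + hi) // 2
    return sum_dc(nums, lo, mid) + sum_dc(nums, mid, hi)

print(sum_dc([1, 2, 3, 4, 5]))  # 15
```

The shallower call stack does mean O(log N) auxiliary space, which is a real win over the linear-depth versions for very long lists.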
I came up with >> return arr.shift() + sum(arr) -- anything wrong with that?
Awesome 🤍
Thank you so much!!!
Thank yoooooou!
💎
Unsubscribe? I'd rather leave development.
Divide and conquer, O(log2(N)) depth. Now I'll write it, then I'll watch the video. Then I'll roll my eyes and rewrite it in ARMv8 assembler just to be a chode.