One issue with this approach (on an edge case): if the tree is nearly vertical (width 1 each level, randomly left or right), then .index would take O(n) time on average and O(n^2) total. This can be avoided in an iterative method w/ hashmap.
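To see how bad that edge case gets, here is a small sketch (the helper `index_cost` is hypothetical, written just for counting): it tallies how many elements `inorder.index()` scans across the whole recursion of the slicing solution. A fully left-skewed tree is the worst direction, since `.index` scans from the front:

```python
# Count the elements scanned by inorder.index() over the whole recursion
# of the slicing solution (index(v) scans index_of(v) + 1 elements).
def index_cost(preorder, inorder):
    if not preorder:
        return 0
    mid = inorder.index(preorder[0])
    return (mid + 1) \
        + index_cost(preorder[1:mid + 1], inorder[:mid]) \
        + index_cost(preorder[mid + 1:], inorder[mid + 1:])

n = 200
# Left-skewed tree: every node has only a left child, so the root is the
# largest value and sits at the far end of inorder at every level.
skew_pre = list(range(n - 1, -1, -1))
skew_in = list(range(n))
print(index_cost(skew_pre, skew_in))  # n*(n+1)//2 = 20100 scans, i.e. O(n^2)
```

A balanced tree keeps the total near O(n log n), which is why the slicing version often feels fine until it hits a skewed input.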
The solution is concise enough, but the time and space complexity can be much better. Instead of creating new lists for each subproblem we can use pointers representing boundaries (left, right) into the main preorder array. And instead of iterating through inorder in each subproblem to find the current number's index, we can build a map that stores the index of each number. That gives us O(n) space and O(n) time. On LeetCode, the solution with these improvements runs at least 3 times faster.
I want to thank you soooo much! The visualizations & level of analysis is exactly what I needed to understand the algorithm-level solution. Your videos are the best!
Good explanation! I couldn't come up with the idea to partition the pre-order array. Is the reason why you're partitioning the pre-order array with the left and right sub-array sizes of the in-order array because in pre-order left sub-tree comes before right sub-tree?
Ah, I see. Because `mid` from the inorder array tells us how many items will be in the left subtree. So we count from that.
preorder = [400,9,1,2,20,15,7]
inorder = [1,9,2,400,15,20,7]
mid = 3 (the index of 400 in inorder)
left subtree will contain [1,9,2], right subtree will contain [15,20,7]
preorder[1:mid+1] = [9,1,2]
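The slice arithmetic can be checked directly; a tiny sketch using a consistent version of the example values above (with 7 as the last node in both arrays):

```python
preorder = [400, 9, 1, 2, 20, 15, 7]
inorder = [1, 9, 2, 400, 15, 20, 7]

root = preorder[0]           # 400
mid = inorder.index(root)    # 3, also the size of the left subtree

# Left subtree: the mid values right after the root in preorder,
# and the mid values before the root in inorder.
print(preorder[1:mid + 1])   # [9, 1, 2]
print(inorder[:mid])         # [1, 9, 2]

# Right subtree: everything past the left block / past the root.
print(preorder[mid + 1:])    # [20, 15, 7]
print(inorder[mid + 1:])     # [15, 20, 7]
```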
I love the structure of your videos! You do such a good job at explaining the approach and how to go about the problem, that I often am able to figure out the code before you even get to that part. Thanks so much!
Thanks for the explanation! The part that still confuses me is that you're using the same "mid" index for both the preorder and inorder arrays, and I can't quite see why that works :)
@@leah7291, yeah, but the neurons in my brain stubbornly resisted making the necessary connections. Now I finally seem to understand. But it's not something I could ever have figured out on my own
Thanks @NeetCode for everything you are doing. I know your solution is awesome, but I just tried adapting your subsequent solution (build binary tree from inorder and postorder) to this problem, and it works as expected and is efficient too:
def buildTree_nc(self, preorder: list[int], inorder: list[int]) -> Optional[TreeNode]:
    map_inorder = {v: i for i, v in enumerate(inorder)}
    def helper(l, r):
        if l > r:
            return None
        root = TreeNode(preorder.pop(0))
        idx = map_inorder[root.val]
        if idx - l > 0:
            root.left = helper(l, idx - 1)
        if r - idx > 0:
            root.right = helper(idx + 1, r)
        return root
    return helper(0, len(preorder) - 1)
You should also consider the time complexity of this solution, which is O(n log n) on average and O(n^2) in the worst case. It's easy to use a value-to-index map for inorder to get the index position in constant time, making the overall time complexity O(n).
Python Code | much more efficient solution in time and space | improvised from the NeetCode solution
Improvements:
1. Create a hashmap to retrieve the index
2. Pass the current interval of preorder and inorder instead of slicing
class Solution:
    def buildTree(self, preorder: List[int], inorder: List[int]) -> Optional[TreeNode]:
        def r_build_tree(preorder_left, preorder_right, inorder_left):
            if preorder_left == preorder_right:
                return None
Thanks, it's great. Personally, it makes more sense for me to use L, R pointers for inorder rather than preorder as per NeetCode's suggestion. Here is mine; I don't like nonlocal stuff so I made it a separate function.
class Solution:
    def buildTree(self, preorder: List[int], inorder: List[int]) -> Optional[TreeNode]:
        self.index_map = {val: i for i, val in enumerate(inorder)}
        return self.build_tree_recur(preorder, inorder, 0, 0, len(inorder) - 1)

    def build_tree_recur(self, preorder, inorder, preorder_start, inorder_start, inorder_end):
        if preorder_start >= len(preorder) or inorder_start > inorder_end:
            return None
        root = TreeNode(val=preorder[preorder_start])
        mid = self.index_map[root.val]
        root.left = self.build_tree_recur(preorder, inorder, preorder_start + 1, inorder_start, mid - 1)
        root.right = self.build_tree_recur(preorder, inorder, preorder_start + mid - inorder_start + 1, mid + 1, inorder_end)
        return root
I really liked this solution, but I have one question about dividing the preorder list: how did you arrive at choosing the left half and right half based on a point you found in the inorder list? Did my question make any sense?
mid is equal to the number of nodes in the left subtree, so we use mid to slice the preorder array: we know that the 'mid' nodes immediately after the root belong to the left subtree.
The preorder and inorder variable names make sense only for the inputs to the top-level function, where they hold the full traversals. In later calls the names preorder and inorder are a bit misleading, because they no longer represent all nodes of the tree. Do I understand this correctly?
LOVE IT! I thought it would be a hard one but you make it so easy!!! I figure out the code just from your drawing explanation, because you explain the concept so clearly!!!
if not preorder or not inorder:
    return None
# Take the root value of the subtree from the front of preorder
root_val = preorder.pop(0)
root = TreeNode(root_val)
# Find that root val's index in the inorder list to split LEFT and RIGHT
ind = inorder.index(root_val)
root.left = self.buildTree(preorder, inorder[:ind])
root.right = self.buildTree(preorder, inorder[ind+1:])
return root
The only thing missing from NeetCode's otherwise almost perfect video is the time and space complexity analysis. So is his solution O(n^2) time (n recursive calls, each with O(n) slicing) and also O(n^2) space (n recursive calls, each requiring O(n) space for the slices)?
Nice solution, but you probably want to create a dictionary for looking up the root index in the inorder list. Otherwise you're doing an O(n) search for each mid, which makes the total O(n log n) in the average case but O(n^2) in the worst case.
HOW IS THIS A FUN PROBLEM. Also, I think we can use a recursive helper function and construct a hash table in the parent function storing the indices of the nodes in the inorder sequence, reducing the O(n) operation to O(1). (The O(n) operation is: mid = inorder.index(preorder[0]).)
I got a variation of this in a real interview where the interviewer swapped the L/R order in the preorder array. I couldn't get the code working in JS but was hired anyway. Now I'm the CEO of the company.
// For Java lovers
// We basically need to find the pieces we see in the example picture from the two arrays, given the value of mid.
int[] leftPreorder = Arrays.copyOfRange(preorder, 1, mid + 1); // second parameter is exclusive, just like Python
int[] leftInorder = Arrays.copyOfRange(inorder, 0, mid);
int[] rightPreorder = Arrays.copyOfRange(preorder, mid + 1, preorder.length);
int[] rightInorder = Arrays.copyOfRange(inorder, mid + 1, inorder.length);
root.left = helper(leftPreorder, leftInorder);
root.right = helper(rightPreorder, rightInorder);
Noticed there wasn't one. Could you explain what the runtime and memory usage are? I was having a hard time trying to factor in the runtime and memory cost of the array slices.
Great solution vids. Just a heads up on the base case here: you do not have to check both "preorder or inorder". The two arrays shrink by exactly the same amount in each call, so you only need to check whether one of them is empty.
Your solution has ~90MB memory usage, while this one has only 19MB (because we do not copy inorder/preorder sublists).
class Solution:
    def buildTree(self, preorder: List[int], inorder: List[int]) -> Optional[TreeNode]:
        inorder_value_to_index = {value: x for x, value in enumerate(inorder)}
        node_index = 0

        def build(left, right):
            nonlocal node_index
            if left > right:
                return None
            root = TreeNode(preorder[node_index])
            node_index += 1
            mid = inorder_value_to_index[root.val]
            root.left = build(left, mid - 1)
            root.right = build(mid + 1, right)
            return root

        return build(0, len(inorder) - 1)
I don't think so. There are three key insights, imo. If you didn't know at least one of them, you'd need to discover all three _and_ realize how they complement each other within 15 minutes each: 1. Discover that a tree's `root` partitions its serial inorder traversal. 2. Discover that a tree's serial preorder traversal's `left_subtree` nodes all precede its `right_subtree` nodes. 3. Discover that the length of an inorder traversal's left partition (i.e. `left_subtree`) can be used to select the `left_subtree`'s nodes from the preorder traversal (because they are the same length, `n_nodes`, and all left_subtree nodes are consecutive in preorder traversal). If you had experience (by working problems) with the first two properties of preorder and inorder traversals, you might be able to connect them and discover the third (which is where we should all be after practicing this problem). However, working through the Blind 75 problems, this is the first time I've worked with inorder traversal, even though I knew what inorder traversal was. I guess perspective is worth 20 IQ points.
I got this in VS with the code shown in the video:
3 , 9 , None , None , 20 , 15 , None , None , 7 , None , None
Am I doing something wrong?
# Definition for a binary tree node.
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

    def __str__(self):
        return f"{self.val} , {self.left} , {self.right}"

class Solution:
    def buildTree(self, preorder, inorder):
        if not preorder or not inorder:
            return None
I'm getting a wrong answer (output = []) when running the code, I've checked several times and it is identical :( Edit: found the issue, was missing a 'not' in front of inorder in the base case!
🌲Tree Playlist: th-cam.com/video/OnSn2XEQ4MY/w-d-xo.html
Easily the hardest 'Medium' I have ever seen. If you didn't get this one, don't be discouraged. Just get really good at recursive thinking and come back to it later.
This is the type of problem you give someone you don't want to hire...
ahahah right? It's doable but very tricky
It took me 2 days to solve 😂😂
Lol 😂
Totally
@@ayushpatel5463 You're supposed to cheat and learn, not spend 2 days working on pre-solved problems
Thanks for all your help NeetCode and all the effort you put into teaching concepts thoroughly!!
Yo man this is the easiest explanation I found on the internet
you gained a sub
Hi, this is stated MEDIUM but I think it's quite HARD.
Anyway, I have an improvement:
The lookup of the "pivot" in the inorder array makes this an order of magnitude more complex: the worst case is around O(n^2).
I took the approach of keeping a stack whose top tells me whether I should close the current subtree. It is O(n).
The code as it is isn't pleasing to look at, but it works:
import java.util.Stack

fun buildTree(preorder: IntArray, inorder: IntArray): TreeNode? {
    if (preorder.isEmpty()) return null
    var curI = 0
    val root = TreeNode(preorder[0])
    val stack = Stack<Int>().apply { this.add(preorder[0]) }
    var curP = 1
    fun hasNext() = curI < inorder.size && curP < preorder.size
    fun nextInorder() = if (curI >= inorder.size) null else inorder[curI]
    fun stackPeekOrNull() = if (stack.isEmpty()) null else stack.peek()
    fun dfs(curNode: TreeNode) {
        if (hasNext() && nextInorder() != curNode.`val`) {
            curNode.left = TreeNode(preorder[curP++])
            stack.push(curNode.left!!.`val`)
            dfs(curNode.left!!)
        }
        if (nextInorder() == curNode.`val`) {
            curI++
            stack.pop()
            if (curI >= inorder.size) return
        }
        if (nextInorder() == stackPeekOrNull()) {
            return
        }
        if (nextInorder() != curNode.`val` && nextInorder() != stackPeekOrNull()) {
            curNode.right = TreeNode(preorder[curP++])
            stack.push(curNode.right!!.`val`)
            dfs(curNode.right!!)
        }
    }
    dfs(root)
    return root
}
Was thinking the same on the difficulty level, this felt like a massive ramp-up compared to the mediums I was doing
Agree with you, this one is definitely a Hard that requires some tricks and DFS configuration. That's why I don't trust LeetCode's Medium and Hard labels after a while of grinding.
Storing the index for mid in a hash map would be more efficient IMO. That would lead to a time complexity of O(n); otherwise it's O(n^2). A time complexity section is what's missing in most videos. If possible, please create time complexity videos for the Neet 75 and add the pertinent links in the description. That would be super helpful for people who are only using these videos to learn the right approach to solving these problems.
```
# Definition for a binary tree node.
# class TreeNode:
#     def __init__(self, val=0, left=None, right=None):
#         self.val = val
#         self.left = left
#         self.right = right
class Solution:
    def buildTree(self, preorder: List[int], inorder: List[int]) -> Optional[TreeNode]:
        # takes the left and right bound of inorder; logic --> any given inorder index bisects the tree into left and right subtrees
        def localBuildTree(leftBound, rightBound):
            nonlocal preOrderListIndex
            if leftBound > rightBound:
                return None
            newRootVal = preorder[preOrderListIndex]
            newRoot = TreeNode(newRootVal)
            preOrderListIndex += 1
            newRoot.left = localBuildTree(leftBound, inorderIndexFor[newRootVal]-1)
            newRoot.right = localBuildTree(inorderIndexFor[newRootVal]+1, rightBound)
            return newRoot

        inorderIndexFor = dict()
        for index, element in enumerate(inorder):
            inorderIndexFor[element] = index
        preOrderListIndex = 0
        return localBuildTree(0, len(preorder)-1)
```
Yes, I like your implementation much better. Slicing, as in his example, will also cost O(n) each time it occurs, I think.
But the time and space complexity are both O(n^2) because of the inorder.index() function and passing subarrays of preorder/inorder in each stack of the recursion.
We can create a hash map and make it a constant time operation
@@gouthamr8214 We are passing the sublist at each call, creating hashmap requires O(n) time only right?
@@shriharikulkarni3986 creating hashmap will be O(n) but accessing will be a constant time operation
@@gouthamr8214 why should we create a hashmap at each step if I am only traversing once? After I traverse once, I return at the first hit itself; I never use that same hashmap again anywhere in the code.
@@shriharikulkarni3986 u just have to create hashmap once
4:45. The 2nd value in preorder is not guaranteed to be the left node because it might not have a left node. What is guaranteed is in preorder = [root, [leftSubTreeValues], [rightSubTreeValues]]. A node's left subtree values come before its right subtree values in preorder traversal if looking at it in array form.
I also noticed this when he said that. In the example tree, if we take out the 9, the root of 3 has just a right sub tree. What we do know is that any value to the right of a node in preorder is a child. We just do not know which one.
that wasn't relevant to his solution so I guess he just misspoke there. Good catch though I wondered the same thing.
If we don't have the 9, then after one recursion the left-subtree slices will be empty, and hence we will get a null value for the left subtree.
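The point in this thread is easy to confirm by generating the preorder of the "take out the 9" tree; the `TreeNode`/`preorder_of` helpers below exist just for this demo:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def preorder_of(node):
    # [root] + left-subtree values + right-subtree values
    if not node:
        return []
    return [node.val] + preorder_of(node.left) + preorder_of(node.right)

# The video's example tree with the 9 removed: 3 has only a right subtree.
right_only = TreeNode(3, None, TreeNode(20, TreeNode(15), TreeNode(7)))
print(preorder_of(right_only))  # [3, 20, 15, 7] -> the 2nd value is a *right* child
```

So preorder guarantees the block structure [root, leftSubTreeValues, rightSubTreeValues], but says nothing about which child the second value is.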
Thank you. This is very easy to understand. You saved me from sitting at the computer for 5 hours more.
To optimize this further from O(N^2) to O(N):
- Create a hash map with the keys being all inorder numbers and the values their indexes. (Ask your interviewer to confirm all inorder values are UNIQUE.)
Now instead of having to use inorder.index you can do inorderHash[preorder[0]].
It's still O(N^2), though, because we are slicing the array every time we recurse. Let's get rid of that.
To handle this we simply reverse our preorder array: normally we need to access the first element every time, but now we can just pop the end of the array off instead of slicing to get the correct next element.
We've basically turned our preorder array into something we consume like a postorder array.
class Solution:
    def buildTree(self, preorder: List[int], inorder: List[int]) -> Optional[TreeNode]:
        inorderHash = {}
        for i in range(len(inorder)):
            inorderHash[inorder[i]] = i
        preorder.reverse()
        return self.build(preorder, inorderHash, 0, len(inorder) - 1)

    def build(self, postorder, inorderHash, start, end):
        if start > end:
            return None
        postorderNum = postorder.pop()
        curIdx = inorderHash[postorderNum]
        root = TreeNode(postorderNum)
        root.left = self.build(postorder, inorderHash, start, curIdx - 1)
        root.right = self.build(postorder, inorderHash, curIdx + 1, end)
        return root
Good optimization technique
Reverse and pop is a great technique. But like spaceoddity1567 said, it's really not postorder.
nice
Reversing a preorder array is not equivalent to a postorder array. Other than that, pretty good optimization.
Don't you also need to adjust the start and end idx since the preorder is reversed?
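No adjustment should be needed, since start/end index into inorder (which is not reversed), while the reversed preorder is only ever consumed by pop(). A standalone sketch of the same idea (assuming the usual TreeNode class, and reversing a copy rather than the input) that can be run end to end:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def build_tree(preorder, inorder):
    index_of = {v: i for i, v in enumerate(inorder)}
    rev = preorder[::-1]            # reversed copy; pop() now yields preorder order

    def build(start, end):          # start/end index into inorder, so they stay as-is
        if start > end:
            return None
        val = rev.pop()
        node = TreeNode(val)
        i = index_of[val]
        node.left = build(start, i - 1)   # build left first: it consumes its values first
        node.right = build(i + 1, end)
        return node

    return build(0, len(inorder) - 1)

t = build_tree([3, 9, 20, 15, 7], [9, 3, 15, 20, 7])
assert t.val == 3 and t.left.val == 9 and t.right.val == 20
assert t.right.left.val == 15 and t.right.right.val == 7
```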
I was also just going through this problem, I really like watching your videos, please keep posting!
Thanks, much appreciated 😃
I don't think it's explicitly stated here (apologies if it is), but not only is the root the first element of the preorder, all the left subtree items also come before the right subtree items, which is why taking the first mid items gets only left subtree values. And buildTree recursing in preorder (root, left, right) order is necessary. This is my mod, using a hashmap and indexing to get linear time.
class Solution:
    def buildTree(self, preorder: List[int], inorder: List[int]) -> Optional[TreeNode]:
        inorder_idx_by_val = {inorder[i]: i for i in range(len(inorder))}

        def _buildTree(pi, pj, ii, ij):
            if (pi > pj) or (ii > ij):
                return None
            node = TreeNode(val=preorder[pi])
            mid = inorder_idx_by_val[node.val]
            node.left = _buildTree(pi+1, pi+(mid-ii), ii, mid-1)
            node.right = _buildTree(pi+(mid-ii)+1, pj, mid+1, ij)
            return node

        return _buildTree(0, len(preorder)-1, 0, len(inorder)-1)
Your channel is awesome and thanks for putting all this out there.
this is the best code i've seen for this problem, way to go!
How are you using mid from the inorder subarray to slice the preorder?
Notice that mid is equal to the number of nodes on the left tree
A few things to improve speed or in general:
1) As several people have mentioned, using the index function is inefficient: it searches through inorder in linear time until the corresponding value is found. It would be better to build a dictionary once and keep using that (e.g. with a helper function). Alternatively, at least in Java, using an array (list in Python) as a map is even more efficient given the limited input domain.
2) Slicing is pretty inefficient as it takes extra time and memory to create the lists. It would be better to create a helper function and use indices.
Building the dictionary takes linear time too. Also note that index function does less work every recursive call due to the divide and conquer nature of this problem
@@abdullahshahid9051 That is true. However, as you mentioned, the work happens every recursive call. In the best and average case time complexities, the in-order search will be O(n lg(n)), considering all the recursive calls, and worst case O(n^2).
If you modify the method to only have the dictionary built once, then it will be O(n) for best, average, and worst cases, considering all the recursive calls.
@@bob_jones That's a good point, I didn't think of it that way
if you use a deque you can get O(n)
from collections import deque
class Solution:
    def buildTree(self, preorder: List[int], inorder: List[int]) -> Optional[TreeNode]:
        def helper(bound=None):
            if not inorder or inorder[0] == bound:
                return None
            root = TreeNode(preorder.popleft())
            root.left = helper(root.val)
            inorder.popleft()
            root.right = helper(bound)
            return root

        inorder = deque(inorder)
        preorder = deque(preorder)
        return helper()
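A quick sanity check of this deque trick (the TreeNode class is assumed, as in the snippet above; wrapped in a plain function here so it runs standalone):

```python
from collections import deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def build_tree(preorder, inorder):
    preorder, inorder = deque(preorder), deque(inorder)

    def helper(bound=None):
        # Stop descending left once the front of inorder hits the bound,
        # i.e. the nearest ancestor whose left subtree we just finished.
        if not inorder or inorder[0] == bound:
            return None
        root = TreeNode(preorder.popleft())
        root.left = helper(root.val)
        inorder.popleft()
        root.right = helper(bound)
        return root

    return helper()

t = build_tree([3, 9, 20, 15, 7], [9, 3, 15, 20, 7])
assert t.val == 3 and t.left.val == 9
assert t.right.val == 20 and t.right.left.val == 15 and t.right.right.val == 7
```

Since both deques are only ever peeked at and popped from the front, every node costs O(1), giving the claimed O(n) overall.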
Instead of mid, If we rename it to leftTreeLength then we can understand the partitions very easily
God damn. I was about to give up, but I solved it.
I was trying to come up with a brute force solution and the key moments that helped me during my thought process were:
1. Noticing that the first element in preorder is always a root node.
2. Noticing that everything to the left of the root value in inorder list is a left subtree and everything to the right is a right subtree.
3. Then you just need to figure out how to apply a recursion to above 2 statements to build the left and right subtrees.
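Those three observations translate almost line for line into the slicing solution; a minimal sketch (with a throwaway TreeNode and traversal helpers to round-trip an example):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def build_tree(preorder, inorder):
    if not preorder:
        return None
    root = TreeNode(preorder[0])              # 1. first preorder element is the root
    mid = inorder.index(root.val)             # 2. inorder splits into left | root | right
    root.left = build_tree(preorder[1:mid + 1], inorder[:mid])       # 3. recurse
    root.right = build_tree(preorder[mid + 1:], inorder[mid + 1:])
    return root

def pre(n): return [n.val] + pre(n.left) + pre(n.right) if n else []
def ino(n): return ino(n.left) + [n.val] + ino(n.right) if n else []

p, i = [3, 9, 20, 15, 7], [9, 3, 15, 20, 7]
t = build_tree(p, i)
assert pre(t) == p and ino(t) == i            # the traversals round-trip
```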
for the preorder indexing, I prefer to say left_size = mid
and then use left_size instead of mid.
It makes more sense for my mind - since I feel like mid was used more in the context of inorder (as an index of inorder) rather than preorder.
For the inorder indexing, I use mid though, cause it makes sense not to include the midpoint.
Ex.
```
root = TreeNode(preorder[0])
mid = inorder.index(root.val)
left_size = mid
root.left = self.buildTree(preorder[1:1+left_size], inorder[:mid])
root.right = self.buildTree(preorder[1+left_size:], inorder[mid+1:])
```
I was not able to understand the preorder indexing part, but this comment got me there. Thank you!
If the interviewer gives you this question, he doesn't want you.
Since the first value in preorder is always the root, isn't it also possible to use preorder[1:] as inputs for both left and right instead of using mid to split it?
No. The right-subtree call needs preorder to start at the right subtree's root, which sits mid elements (the left-subtree size) past the current root. If you passed preorder[1:] to both calls, the right call would read a left-subtree value as its root.
This is a really good explanation. Perhaps the best one I’ve seen.
Also, an unpopular opinion: it is quite a good problem too as in it ties both of the in-order and pre-order traversal techniques.
Thanks for the explanation, it was really helpful. You are the Mr.Miyagi of Competitive Coding. P.S: Please keep posting !
leetcode*, not competitive programming
Looking for the video explanation for LeetCode 106 and found this explanation for 105 is very useful too. Thank you so much! :)
You mentioned that with preorder traversal, the value to the right of another is always the left subtree's root. This is not true. In the example tree, if you take away the 9, the root node 3 has just a right subtree. What we do know with preorder is that any node to the right of another is a child node; we just don't know which one.
The solution still works because when we partition the tree we get an empty list in the left recursive call and return nullptr, since the inorder traversal would have the root at the far left of the list, indicating no left children. Just want to clarify this.
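That empty-left-partition case can be seen directly (a small sketch; the tree is the video's example with the 9 removed):

```python
# A root with no left child: the root sits at index 0 of its inorder
# list, so the left partition is empty and the recursive call on it
# returns None.
preorder = [3, 20, 15, 7]
inorder = [3, 15, 20, 7]

mid = inorder.index(preorder[0])    # 0: nothing precedes the root in inorder
left_inorder = inorder[:mid]        # [] -> left child becomes None
left_preorder = preorder[1:mid+1]   # [] as well, so the slices stay in sync
```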
Yes, agreed. I pretty much came down to the comments because I was thinking the exact same thing.
My solution beats 98.95% in runtime and 91.32% in memory. It uses a stack: preorder to go down (adding left/right nodes) and inorder to go up:
```
function buildTree(preorder, inorder) {
    let result = new TreeNode(preorder[0]);
    let stack = [result];
    let current = result;
    let j = 0; // inorder pointer
    let addSide = 0; // 0 = add left, 1 = add right
    for (let i = 1; i < preorder.length; i++) {
        // going up the tree
        while (stack.length > 0 && inorder[j] === stack[stack.length - 1].val) {
            current = stack.pop();
            j++;
            addSide = 1; // after going up, the next node is added on the right
        }
        // going down the tree after adding
        if (addSide === 0) {
            current.left = new TreeNode(preorder[i]);
            current = current.left;
        } else {
            current.right = new TreeNode(preorder[i]);
            current = current.right;
        }
        stack.push(current);
        addSide = 0;
    }
    return result;
}
```
mid is found from inorder but why can you use it to index elements in preorder? Will the lengths of inorder and preorder be identical throughout the recursions and array splitting?
I'm sure his version works, but I just followed the logic from his explanation: slice preorder by the lengths of the inorder subarrays.
Finding the index is O(n). I would allocate a map pointing every inorder value to its index so you can look up indexes in O(1), at the cost of some space complexity.
Nice point:
```
io = {}
for j, v in enumerate(inorder):
    io[v] = j
root = Node(preorder[0])
mid = io[preorder[0]]
```
But creating the mapping takes O(n), which is the same as .index(), so I don't think this will help.
@JohnnyMetz it definitely helps: the map is built once in O(n) as preprocessing, whereas .index() costs O(n) inside every recursive call.
One issue with this approach (on an edge case): if the tree is nearly vertical (width 1 at each level, randomly left or right), then .index takes O(n) time on average and O(n^2) total. This can be avoided with a hashmap.
the very best explanation for this problem. thank you!!
Thanks!
The solution is concise, but the time and space complexity could be much better. Instead of creating new lists for each subproblem we can pass pointers representing boundaries (left, right) into the main preorder array, and instead of scanning inorder in each subproblem to find the current number's index we can create a map storing the index of each number.
Then we have space O(n) and time O(n).
On LeetCode, the solution with these improvements runs at least 3 times faster.
Saved my day. Many thanks, genius.
This should be labelled hard🤯
I want to thank you soooo much! The visualizations & level of analysis is exactly what I needed to understand the algorithm-level solution. Your videos are the best!
Good explanation! I couldn't come up with the idea to partition the pre-order array. Is the reason why you're partitioning the pre-order array with the left and right sub-array sizes of the in-order array because in pre-order left sub-tree comes before right sub-tree?
Yes thats exactly correct.
You don't need to partition it; you could just pop(0) the value and then pass preorder to both recursions.
I don't really get why we pass `preorder[1:mid+1]` for building the left subtree 🤔
Ah, I see. `mid`, the root's index in the inorder array, also tells us how many nodes are in the left subtree, so we count off that many values after the root in preorder.
preorder = [400, 9, 1, 2, 20, 15, 7]
inorder = [1, 9, 2, 400, 15, 20, 7]
mid = 3 (the index of 400)
left subtree will contain [1, 9, 2]
right subtree will contain [15, 20, 7]
preorder[1:mid+1] = [9, 1, 2]
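All four of those slices can be checked directly (a small sketch over the same example arrays):

```python
preorder = [400, 9, 1, 2, 20, 15, 7]
inorder = [1, 9, 2, 400, 15, 20, 7]

mid = inorder.index(preorder[0])      # 3, the index of the root 400
left_inorder = inorder[:mid]          # left subtree, inorder order
left_preorder = preorder[1:mid+1]     # left subtree: the next `mid` preorder values
right_inorder = inorder[mid+1:]       # right subtree, inorder order
right_preorder = preorder[mid+1:]     # right subtree: everything after those
```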
I love the structure of your videos! You do such a good job at explaining the approach and how to go about the problem, that I often am able to figure out the code before you even get to that part. Thanks so much!
loves from India and thank you sir
Thanks for the explanation! The confusing part for me is that you're using the same "mid" index for both the preorder and inorder arrays, and I can't see why that works :)
That's explained around 12:05
"mid" is the index in the inorder array and also the length of the left subtree after 1 in the preorder array
@leah7291, yeah, but the neurons in my brain stubbornly resisted making the necessary connections. Now I finally seem to understand. But it's not something I could ever have figured out on my own.
Thanks @NeetCode for everything you are doing. I know your solution is awesome, but I adapted your subsequent solution (build binary tree from inorder and postorder) to this problem, and it works as expected and is efficient too:
```
def buildTree_nc(self, preorder: list[int], inorder: list[int]) -> Optional[TreeNode]:
    map_inorder = {v: i for i, v in enumerate(inorder)}

    def helper(l, r):
        if l > r:
            return None
        root = TreeNode(preorder.pop(0))
        idx = map_inorder[root.val]
        if idx - l > 0:
            root.left = helper(l, idx - 1)
        if r - idx > 0:
            root.right = helper(idx + 1, r)
        return root

    return helper(0, len(preorder) - 1)
```
There's no way for me to think of this solution. Good explanation, thanks!
You should also consider the time complexity of this solution, which is O(n log n) on average and O(n^2) in the worst case. It's easy to use a value-to-index map for inorder to get the index position in constant time, making the overall time complexity O(n).
Python code | much more efficient in time and space | improved from the NeetCode solution | must read.
Improvements:
1. Create a hashmap to retrieve the index
2. Pass the current interval of preorder and inorder instead of slicing
```
class Solution:
    def buildTree(self, preorder: List[int], inorder: List[int]) -> Optional[TreeNode]:
        def r_build_tree(preorder_left, preorder_right, inorder_left):
            if preorder_left == preorder_right:
                return None
            inorder_root_index = inorder_hash_map[preorder[preorder_left]] - inorder_left
            root = TreeNode(preorder[preorder_left])
            root.left = r_build_tree(preorder_left + 1, preorder_left + inorder_root_index + 1, inorder_left)
            root.right = r_build_tree(preorder_left + inorder_root_index + 1, preorder_right, inorder_left + inorder_root_index + 1)
            return root

        inorder_hash_map = {}
        for index, node in enumerate(inorder):
            inorder_hash_map[node] = index
        return r_build_tree(0, len(preorder), 0)
```
Your code is longer. How is it more efficient? It seems more complex.
Thanks its great.
Personally, it makes more sense to me to use L, R pointers for inorder rather than preorder, as per NeetCode's suggestion.
Here is mine; I don't like nonlocal stuff, so I made it a separate function.
```
class Solution:
    def buildTree(self, preorder: List[int], inorder: List[int]) -> Optional[TreeNode]:
        self.index_map = {val: i for i, val in enumerate(inorder)}
        return self.build_tree_recur(preorder, inorder, 0, 0, len(inorder) - 1)

    def build_tree_recur(self, preorder, inorder, preorder_start, inorder_start, inorder_end):
        if preorder_start >= len(preorder) or inorder_start > inorder_end:
            return None
        root = TreeNode(val=preorder[preorder_start])
        mid = self.index_map[root.val]
        root.left = self.build_tree_recur(preorder, inorder, preorder_start + 1, inorder_start, mid - 1)
        root.right = self.build_tree_recur(preorder, inorder, preorder_start + mid - inorder_start + 1, mid + 1, inorder_end)
        return root
```
Finding the algorithm is not that hard for this (or at least the pattern). It's writing code for this that is hard. Thanks man!
I really liked this solution, but I have one question about dividing the preorder list: how did you arrive at choosing the left half and right half based on a point you found in the inorder list? Does my question make sense?
Since mid is equal to the number of nodes in the left subtree, we use it to slice the preorder array: we know that the 'mid' nodes right after the root all belong to the left subtree.
Thanks@@chenpr , that was very helpful
Thank you very much for the videos. They helped me a lot!
I wrote down the code for Inorder and Postorder Traversal, based on this video 😀
```
if len(inorder) == 0 or len(postorder) == 0:
    return None
tree_len = len(postorder)
root = TreeNode(postorder[tree_len - 1])
mid = inorder.index(postorder[tree_len - 1])
root.left = self.buildTree(inorder[:mid], postorder[0:mid])
root.right = self.buildTree(inorder[mid+1:], postorder[mid:tree_len - 1])
return root
```
Does the subarray slicing in Python add space complexity, as opposed to using start and end pointers?
@14:49 Why should we include the mid index? The left part only includes index 1 to mid-1, right? The mid is the root node itself.
My question is if they replace one of the arrays with the postorder array, will it still be possible to build the tree?
thank you so much.
Just used this for my final exam!!
The names preorder and inorder make full sense only as inputs to the top-level call, where each contains every node of the tree.
In the recursive calls the names are a bit misleading, because the slices no longer represent all nodes of the tree and differ in length from the originals. Do I understand this correctly?
Woah!
The clearest explanation I have ever heard.
Even if we pass the same preorder list excluding the 0th element, it should work?
Thank you NeetCode so much
Great explanation, my friend. Keep up the good work!
LOVE IT! I thought it would be a hard one but you make it so easy!!! I figure out the code just from your drawing explanation, because you explain the concept so clearly!!!
```
if not preorder or not inorder:
    return None
# Take the root value of the current subtree from the front of preorder
root_val = preorder.pop(0)
root = TreeNode(root_val)
# Find that root value's index in the inorder list to split into LEFT and RIGHT
ind = inorder.index(root_val)
root.left = self.buildTree(preorder, inorder[:ind])
root.right = self.buildTree(preorder, inorder[ind+1:])
return root
```
Best solution explanation for this problem on internet :D
The only thing missing from NeetCode's otherwise almost perfect video is the time and space complexity analysis.
So is his solution O(n^2) time (n recursive calls, each doing O(n) slicing and .index) and also O(n^2) space (n calls, each allocating O(n) slices)?
Damn, Python's slicing is very powerful. It makes Python superior to other programming languages when it comes to coding interviews.
true!! struggling with AddressSanitizer: heap-buffer-overflow in c++ from past few hours!!
Nice solution, but you probably want to create a dictionary for looking up the root index in the inorder list; otherwise you're doing an O(n) lookup for each mid, which gives O(n log n) total for the average case but O(n^2) in the worst case.
LETS GO! This is the first time I completed a problem by myself where it looks identical to yours, that felt good
There is NO WAY you solved it unless it took you like days
@@brawlboy1382 I think it took like an hour and a lot of pen and paper work haha
thanks for making it simple
HOW IS THIS A FUN PROBLEM
Also, I think we can use a recursive helper function and construct a hash table in the parent function storing the indices of the nodes in the inorder sequence, reducing the O(n) lookup to O(1).
(This is the O(n) part: mid = inorder.index(preorder[0]).)
I code in Java...but I watch your videos for better explanation...and code it myself...how cool
you just made a complicated problem seem f**n easy. Thank you!!
Elegant logic and code implementation. You always surprise me. 💥
I got a variation of this in a real interview where the interviewer swapped the LR order in the preorder array. I couldn't get the code working in JS but was hired anyway. Now I'm the CEO of the company.
I got asked this question in my bloomberg interview, and i blew it!
For Java lovers:
```
// We basically need to find the four pieces from the example picture,
// given the two arrays and the value of mid.
int[] leftPreorder = Arrays.copyOfRange(preorder, 1, mid + 1); // second parameter is exclusive, just like Python
int[] leftInorder = Arrays.copyOfRange(inorder, 0, mid);
int[] rightPreorder = Arrays.copyOfRange(preorder, mid + 1, preorder.length);
int[] rightInorder = Arrays.copyOfRange(inorder, mid + 1, inorder.length);

root.left = helper(leftPreorder, leftInorder);
root.right = helper(rightPreorder, rightInorder);
```
The solution is very clear and precise thanks.
Great videos, man. Could you do time and space complexity at the end of each video? That would literally complete the whole cycle.
what if values are not unique?
One of the harder medium questions IMO
Noticed there wasn't one. Could you explain the runtime and memory usage? I was having a hard time factoring in the runtime and memory cost of the array slices.
Great solution vids. Just a heads up on the base case here: you don't have to check both "preorder or inorder". They shrink by exactly the same amount, so checking whether one of them is empty is enough.
Thanks, G.O.A.T . all time savior
What is the time/space complexity of this solution?
Great Explanation, was elated to have found such good help.
Is buildTree called for every node, and an index lookup is O(n), so is the solution here O(n^2)?
Your solution has ~90MB memory usage, while this one has only 19MB (because we do not copy inorder/preorder sublists):
```
class Solution:
    def buildTree(self, preorder: List[int], inorder: List[int]) -> Optional[TreeNode]:
        inorder_value_to_index = {value: x for x, value in enumerate(inorder)}
        node_index = 0

        def build(left, right):
            nonlocal node_index
            if left > right:
                return None
            node = TreeNode(preorder[node_index])
            split_point = inorder_value_to_index[node.val]
            node_index += 1
            node.left = build(left, split_point - 1)
            node.right = build(split_point + 1, right)
            return node

        return build(0, len(preorder) - 1)
```
Really nice explanation. Thanks 👍
Nice approach. Btw, do you guys think a normal human being can come up with this approach in under 45 minutes?
I don't think so.
There are three key insights, imo.
If you didn't know at least one of them, you'd need to discover all three _and_ realize how they complement each other within 15 minutes each:
1. Discover that a tree's `root` partitions its serial inorder traversal.
2. Discover that in a tree's serial preorder traversal, the `left_subtree` nodes all precede the `right_subtree` nodes.
3. Discover that the length of an inorder traversal's left partition (i.e. `left_subtree`) can be used to select the `left_subtree`'s nodes from the preorder traversal (because they are the same length, `n_nodes`, and all left_subtree nodes are consecutive in preorder traversal).
If you had experience (from working problems) with the first two properties of preorder and inorder traversals, you might be able to connect them and discover the third (which is where we should all be after practicing this problem). However, working through the Blind 75 problems, this is the first time I've worked with inorder traversal, even though I knew what inorder traversal was.
I guess perspective is worth 20 IQ points.
The code can't be more efficient❤❤
Neatest coding channel out there 😎
I got this in VS with the code shown in the video: 3, 9, None, None, 20, 15, None, None, 7, None, None
Am I doing something wrong?
```
# Definition for a binary tree node.
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

    def __str__(self):
        return f"{self.val} , {self.left} , {self.right}"


class Solution:
    def buildTree(self, preorder, inorder):
        if not preorder or not inorder:
            return None
        root = TreeNode(preorder[0])
        mid = inorder.index(preorder[0])
        root.left = self.buildTree(preorder[1:mid+1], inorder[:mid])
        root.right = self.buildTree(preorder[mid+1:], inorder[mid+1:])
        return root


s = Solution()
preorder = [3, 9, 20, 15, 7]
inorder = [9, 3, 15, 20, 7]
print(s.buildTree(preorder, inorder))
```
Great explanation thank you so much.
Very nice explanation! Thank you!
Really helpful video. The explanation was very thorough and helpful
simple, clear and short!
Please make a video on binary tree construction from preorder and postorder traversals!
I'm getting a wrong answer (output = []) when running the code, I've checked several times and it is identical :(
Edit: found the issue, was missing a 'not' in front of inorder in the base case!
That was beautiful
Awesome explanation thank you
wow, such a neat solution. following the same thoughts, I was able to crack LC106
The moment i start the video i like it coz i already know the explanation is gonna be awesome
I get how the inorder list can be split with the mid index, but the preorder one is still a bit magic to me.
love the solution as always 😩
Excellent vid
Thanks for the explanation, helps a lot!
Glad it helped!