Master Data Structures & Algorithms For FREE at AlgoMap.io!
That's what I'm talking about man
Good and interesting videos, but how do we see the code better? it is frustrating because there is no way to FF and rewind the video's to view them better.
@@michaelakin766 Rewrite the URL like this -- then it becomes a standard YT video. And you use keyboard shortcuts to FF and rewind.
th-cam.com/video/Ck1sHkIijJQ/w-d-xo.html
Good job! I can't believe Python accepts i == j == 0. There goes my respect for the language...
@@davidespinosa1910 it reads well to me, what's your issue with it?
This feels like a simpler version of Dijkstra's right?
The general single-source shortest-path algorithm for weighted graphs is the Bellman-Ford algorithm. Dijkstra's algorithm requires non-negative weights. Fun fact: the video doesn't really discuss that restriction.
@@joergsonnenberger6836 Yes, but one could reasonably argue that negative edge weights would change the board, and since the board is fixed in this problem, maybe we can exclude them? Just brainstorming.
I thought the same about Dijkstra, even though it wouldn't be an efficient way to solve it, especially as the n-by-n grid gets bigger and bigger. But that's why A* exists: it's simple enough and a very effective pathfinding algorithm.
@@user-hj2um5sz3h A* is only more efficient if the weights satisfy the triangle inequality. As such, it is even more restrictive than the assumption that Dijkstra can be used.
@@user-hj2um5sz3h Yes, Dijkstra's is pretty much never used in actual services. For example, if a service like Google Maps used it, it would probably take hours or days, maybe even weeks, to find a route if it analysed every possible path around the entire world.
… O(n), where n is the number of cells. Your algorithm is not optimal at all: on big boards you have the option to skip cells whose values are higher than others that may be on the path to the target point.
Using the A* algorithm here would be optimal, with a fitness function of the total sum plus the number of jumps needed to reach the target cell (calculated for each move option, starting from the first cell). That would be considered optimal.
I don't think this channel is for you if you want the most optimal solutions, since he always gives the simplest solution to the problem. It may be helpful for novices like me.
How would A* be optimal? The complexity of A* would be O(2^(m+n)), while the complexity of this algorithm is O(m*n).
@@bono95zg No, you are wrong, that's only the worst case. A* will be O(3^p) (3 because you have 3 move options from each cell, excluding the first cell, which has 4; p is the length of the optimal path).
The nice thing is that this does not necessarily grow as the board dimensions grow, which is why on very big boards it will win, although if your optimal path is super long it will take a lot of time in the worst case.
@@yanivka the optimal path has length (m+n) because you start at (1,1) and end at (m,n) in every case. And it is 2^(m+n) because you can only go down or right. And 2^(m+n) will always be much larger than m*n...
Save that for the interview bud
If all numbers are positive, you can even use the sign of each entry to store where you came from, to restore the path.
You don't actually need to store it in this case: you can start at the bottom right and walk back, each time taking whichever of the top or left squares has the lower value, until you reach the top-left square.
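The walk-back described here can be sketched like this (a sketch under the assumption that the grid has already been overwritten with cumulative minimum costs, as in the video; the function name is mine):

```python
def reconstruct_path(dp):
    # dp[i][j] is assumed to hold the minimum cost of reaching cell (i, j).
    # Walk back from the bottom-right corner, always stepping to the cheaper
    # of the upper/left neighbors, then reverse to get the forward path.
    i, j = len(dp) - 1, len(dp[0]) - 1
    path = [(i, j)]
    while (i, j) != (0, 0):
        if i == 0:                          # top row: only the left neighbor exists
            j -= 1
        elif j == 0:                        # first column: only the upper neighbor exists
            i -= 1
        elif dp[i - 1][j] <= dp[i][j - 1]:  # upper neighbor is cheaper (or tied)
            i -= 1
        else:                               # left neighbor is cheaper
            j -= 1
        path.append((i, j))
    path.reverse()
    return path
```

For the video's grid [[1,3,1],[1,5,1],[4,2,1]], the filled cost grid is [[1,4,5],[2,7,6],[6,8,7]], and the walk-back recovers the 1 → 3 → 1 → 1 → 1 path.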
Your videos always make my day. Keep shining!
Awe so glad to hear it 😊
They love this because that's basically how Google Maps' navigation algorithm works
not at all, but alr
@@snared_ Then how does it work?
@@nclsDesignit probably uses the a* algorithm but you might have to look that up cuz I'm not sure
@@Krokodil986 Yeah, they use A* and Dijkstra, but I said "basically" because Google Maps uses graphs instead of matrices; still, the basic concept behind how these algorithms work is the same.
No
I have never understood why these kinds of problems are part of an interview. If the problem is "very common", then the solution can be looked up, and all you are testing for is whether the applicant can memorize the solution.
Of course I am extremely bad at these kinds of tests, so it might all be a case of sour grapes 😄
Why do you write your twos like mirrored 6-es 😭 What did 2 ever do to you, sir?
It’s hugely faster if you start from the end. And hugely faster if you add on an estimate of the distance to the endpoint and pick the best estimated path next.
Dynamic programming is used in sequence alignments like RNA aligned to (homologous) DNA.
Indeed!
Exactly the same problem can be found in the Unified State Exam, the main exam for schoolchildren in Russia entering university, and it can easily be solved in Excel. But we also have walls, or even teleportation.
"We're only allowed to go right or down" Oh, I see, I'd initially missed this assumption. The algorithm doesn't work if we could also move left/up, as you normally would when looking for a minimum path.
I guess it works when you can rely on starting in the top-left corner with the goal in the bottom right, but it tracks the cost, not the path you took. You could simply store the (i, j) values in the if clauses, but how about making it more versatile by checking for visited and reachable coordinates and only processing those?
I submitted just after listening to this approach and got accepted at once ❤
Graph problem. It can be solved with Dijkstra's algorithm, since it doesn't have negative values.
Correct! :)
But isn’t this returning the weight of the minimum path possible rather than the actual minimum path?
Yes that is what the question asks for :)
@@GregHogg The way you worded it made me think they were asking for the path itself, great vid regardless! I really enjoy your explanations
@@jorgeimoyeah I thought the same until he got to the center value, and I had to restart the video because I realized I missed something
Minimum path, what if we have to actually return an array of steps to the destination, how would that be solve
Also, I offer private 1 on 1 tutoring sessions for coding interviews / data structures and algorithms. Send me an email at greg.hogg1@outlook.com if you're interested. Cheers!
plz no
Basically an oversimplified version of Google Maps' route-recommendation algorithm
Reminds me of the needleman-wunsch/smith-waterman algorithm
Dijkstra's algorithm counts as dynamic programming now? Who keeps making these up?
No Dijkstra's here, there is some dynamic programming though
Nah bro, it is the DP bottom-up approach, also called tabulation. But Dijkstra's would be the worst solution for larger data, as it takes 2^N * log N time.
This misses solutions where going left or up for a bit is faster/shorter.
I believe the constraints are that all numbers are positive and hence that situation cannot happen
In the constraints, it's specified that you can only go right or down. It's just a typical dynamic programming exercise
What I don't get is that the result returns a grid. How is a grid telling me the shortest path?
Like if someone gives me the solution for a 100-by-100 grid and says that's the answer, wouldn't I still have to write an algorithm to decipher the path from that shortest-path grid?
No sorry we return the sum in the bottom right corner
I’m new to programming. What do you mean by “costs”, and could you give a better explanation of why you’re “moving here”?
How is this constant space O(1) if you iterate through each element? O(n) no?
Great video, Sir
I learned this in my first semester at ETH and I forgot how painful DP is
class Solution:
    def minPathSum(self, grid: List[List[int]]) -> int:
        m, n = len(grid), len(grid[0])
        dp = [[0] * n for _ in range(m)]
        dp[0][0] = grid[0][0]  # initialize first cell
        for i in range(1, m):
            dp[i][0] = dp[i-1][0] + grid[i][0]  # initialize first column
        for j in range(1, n):
            dp[0][j] = dp[0][j-1] + grid[0][j]  # initialize first row
        for i in range(1, m):  # fill the rest of the dp table
            for j in range(1, n):
                dp[i][j] = min(dp[i-1][j], dp[i][j-1]) + grid[i][j]
        return dp[m-1][n-1]
I find this easier; it's O(m*n) time complexity.
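If the full dp table bothers you, the same tabulation can be done with a single row, dropping the extra space from O(m*n) to O(n) without touching the input. A sketch (the function name is mine):

```python
def min_path_sum_one_row(grid):
    # Keep only the current dp row; after processing row i, row[j] holds
    # the minimum cost of reaching cell (i, j).
    n = len(grid[0])
    row = [float("inf")] * n
    row[0] = 0
    for r in grid:
        row[0] += r[0]                               # first column: only reachable from above
        for j in range(1, n):
            row[j] = min(row[j], row[j - 1]) + r[j]  # from above vs. from the left
    return row[-1]
```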
The russian state computer science exam has an exact same task. Sometimes though, the grid has "walls," which prevent us from going through them
Its better to use Excel
@@giron_6682 yeah, but no matter what software you use, the algorithm remains the same
Real applications can include large-scale task management using graphs. Or just graphs in general. Isn't that Bellman-Ford?
DP is probably the most challenging concept I've tackled in programming. I can't wrap my head around this shit, any advice?
Yeah it's definitely tricky. My dp playlist will be coming out in the next few months, until then, I'm a pretty big fan of neetcode's stuff
Kia Ora; this is from my university lecture from 2014. Dynamic programming is as simple as:
begin
    if at base case:
        return answer
    else:
        for each possible subsolution:
            solve the subproblem recursively
        return the best of those solutions
This is a possible solution to the problem; I haven't tested it in an IDE though.
My solution takes a greedy approach: I stop searching the list once I've reached search_list[find_row][find_cell].
# our list
List = [[1, 3, 1], [1, 5, 1], [4, 2, 1]]
# begin
def FindCostToPath(search_list, find_row, find_cell=None, cost=0, price=1):
    # The row is None
    if find_row is None:
        return 0
    # The row doesn't exist
    if find_row > len(search_list):
        return 0
    # for each possible solution
    for x in range(len(search_list)):
        cost = cost + price  # increase the cost for searching the row
        # if we are at the base case (e.g. we only want the row)
        if x == find_row and find_cell is None:
            return cost
        # there is no need to look for cells
        if find_cell is None:
            continue
        # we want a cell too
        row = search_list[x]
        # check if the row has a cell at "find_cell"
        if find_cell not in row:
            continue
        # each possible subsolution (this could be done recursively)
        subcost = FindCostToPath(row, find_cell, None, cost, price)
        # check if this has been solved
        if subcost > 0:
            return subcost
    # nothing has been solved
    return 0
# run the function
print(FindCostToPath(List, len(List), len(List[0]), 0))
Bro is casually throwing gang signs
This algorithm doesn't work though. Take [[1,1,1],[1,5,1],[4,2,1]]: by changing one element and countering one assumption (that left and up being closer means a lower-cost path, i.e. that you don't need to consider bottom or right neighbors), you can see your algorithm fails. Don't reinvent the wheel: form a connected graph with ALL edges into a node and run a lowest-cost-path algorithm like Dijkstra's. You have also modified the underlying data; what happens when the cost changes for a node? You have made it unrecoverable.
The algorithm does work, and yes Dijkstra's would work too
The code is nice, though your variable names violate the PEP8 naming conventions. As does the method name, but as far as I know that was done by the LeetCode developers.
Hahaha love it
I literally solved the question after seeing this short
That's amazing!
It’s optimized for early weight bias !
What's that mean?
@@GregHogg We as humans place higher value on initially optimal choices, but that path may not be the best when costing the entire collection. Machines are able to take broader costing into account. It's just an observation about how we're wired.
I had something similar on a graduation school exam. This can be solved with Excel.
The top-down DP approach is easier to come up with in an interview anytime.
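For reference, a top-down sketch of the same recurrence (the names are mine; memoized with lru_cache so each cell is computed only once, giving the same O(m*n) work as tabulation):

```python
from functools import lru_cache

def min_path_sum_top_down(grid):
    m, n = len(grid), len(grid[0])

    @lru_cache(maxsize=None)
    def cost(i, j):
        # Minimum cost of reaching (i, j) from the top-left,
        # moving only right or down.
        if i == 0 and j == 0:
            return grid[0][0]
        best = float("inf")
        if i > 0:
            best = min(best, cost(i - 1, j))  # came from above
        if j > 0:
            best = min(best, cost(i, j - 1))  # came from the left
        return best + grid[i][j]

    return cost(m - 1, n - 1)
```

One caveat: the recursion depth grows with m + n, so very large grids may need the bottom-up version instead.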
What is that reversed, almost-'at' sign?
Don't lie, it's O(n*m), and this was developed long before Google's founders were born.
could a summed area table be used here?
How exactly is this O(1) solution? Clearly for a square of n*n elements you had to do a comparison for each one so this does scale with the size of the input. Whether you're mutating the input or copying data doesn't change the number of operations you have to do.
He says it's a constant space solution, not constant time.
@@ababey1644 but it's not; as the space geometrically increases (size M), the space of the problem increases geometrically
@@conundrum2u That doesn't matter, what matters is the space of the solution.
I don't get it, but I'll try to learn. Thank you for your videos; they help me understand programming concepts as a beginner and a self-learner.
I'm about to do my long form series on dynamic programming and you can check that out which will explain in further detail :)
Thank you again. I spent my morning learning it; now I understand and am able to use this concept.
Best laugh I had today... O(1) :D :D :D You really should do standup comedy :D
O(1) space solution since he's over writing in place.
@@YuzuAndGin O(1) space means that it would be independent of the input size, which it is not. If you are analysing an algorithm you have to consider the general case and not a particular example. Space complexity here is O(n*m), and runtime complexity as well.
Yuzu is correct here
@@GregHogg No he is not and neither are you. Please check wikipedia here about Space complexity: en.wikipedia.org/wiki/Space_complexity
Even there it is stated, that the memory occupied by the input is part of the space complexity of your algorithm. This is a fundamental concept that you would also find in any textbook. You are teaching something wrong here and the fact that you consider O(1) as true makes me sad, because I really thought you make a joke here.
@@metinEstur It's a little bit ambiguous. It isn't clear if you are allowed to clobber the input memory, or if the input is passed by value or by reference, or if it is mutable.
I want to make applications but I am realizing I hate the process of making them. Why can't I just like, eat an apple and then my app just appears in front of the company I want to buy it?
That's incredibly easy
fuck it i go greedy
May i know how?
Coz when we do greedy there is a possibility of missing the best path i guess
Greedy will be 8
Not efficient
Isn't this something like Dijkstra?
it *is* Dijkstra
It isn't Dijkstra; it's a dynamic programming approach to a multi-stage graph problem, as it makes a decision at each stage.
@@Shubhampatil-sx3km It looks like DP, but really it's greedy. You are simply taking the sum and using it as a measure. This is essentially a simplified graph for Dijkstra's, and you simply ignore the actual path in this implementation; however, you could easily store the path.
What about multiple paths being equal at the beginning how do you choose?
The min function will already take care of that if both paths are equal
Why does that matter?
Does this solution also find the shortest path if the path goes to the left or up?
I don't think so. This solution doesn't work for all cases. Imagine a maze with walls of high-value, that won't work.
Yes. Google Dijkstra's algorithm. All you do is adjust the rules accordingly as you construct your grid of path costs. The only extra step is to record which indexes you have already explored, so you don't run into a loop condition.
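A sketch of that idea, treating each cell as a node with edges to all four neighbors; this works here because cell costs are non-negative (the function name is mine). The `dist` grid doubles as the "already explored" record mentioned above, since a cell is never re-expanded once popped with its final cost:

```python
import heapq

def min_path_sum_dijkstra(grid):
    # Dijkstra on the grid graph, allowing moves in all four directions.
    m, n = len(grid), len(grid[0])
    dist = [[float("inf")] * n for _ in range(m)]
    dist[0][0] = grid[0][0]
    pq = [(grid[0][0], 0, 0)]               # (cost so far, row, col)
    while pq:
        d, i, j = heapq.heappop(pq)
        if (i, j) == (m - 1, n - 1):
            return d
        if d > dist[i][j]:                  # stale queue entry, already improved
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = i + di, j + dj
            if 0 <= r < m and 0 <= c < n and d + grid[r][c] < dist[r][c]:
                dist[r][c] = d + grid[r][c]
                heapq.heappush(pq, (dist[r][c], r, c))
    return dist[m - 1][n - 1]
```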
What do you mean by a constant space solution?
No extra space used! So, like, no stack, extra array, recursion, etc.
Awesome 👏
A*
arr[-1][-1], there you go. One step, no lines
Ez
That is a basic task from the Russian national exam
Haha okay!
Google doesn't give a f**k when your job is done xD
Why do you say that?
@@GregHogg big corps don’t care about people
Isn't that BFS ?
Not really
The guys who took the computer science Unified State Exam are already waiting for their offers from Google
I don't 😅 get it, what is the problem 🤔, am I alone here?
Smallest sum of path from the top left to the bottom right, that's all
Why not Dijkstra?
Because dp ;)
Because he’s only allowed to go right and down so Dijkstra would be an over complication.
You should have used recursion and memoization. This is the worst possible solution to this problem.
BFS DFS???
You probably could. We just iterate the grid for this one
And this is the reason I'm not a programmer. This made absolutely no sense to me
That's totally okay. It likely takes more than a quick video to fully grasp these things.
path finding algorithm ?
Sorta, yeah! Not like Dijkstra's though
This is just pathfinding...
Dijkstra's algorithm, I believe
Similar problem it would solve, but we didn't really use Dijkstra's
@@GregHogg I know, I was trying to say Dijkstra's algorithm would solve it.
a*
Isn't that just Needleman-Wunsch?
Not familiar with that term!
@@GregHogg nvm, it's for computing alignments of strings and uses recursive backtracking
Backtracking always works
Just use any greedy or dynamic programming algorithm, e.g. Prim's or Kruskal's, or travelling salesman, or any of them
Google? I tried using this since the owner was in the White House with Trump; it worked out better for me, or for Trump, at that time (2016), but Microsoft got the JEDI contract: 10 years at 1 billion per year.
Fcuk google
Right after I went to an event at my local google office, which is the top-most of a 30-story building, and the first short I see - About google.
25 floors in 7 seconds... That elevator exploded my ear canals 😬
Okay I quit
Isn't this just breadth-first search?
Hmm, not really actually!
What a bad name: dynamic programming
I completely agree honestly
Just ask GPT
Dijkstra's bby
Yes you could :)
this is hard
Yeah, mediums are usually pretty difficult haha
WRONG
"google loves this problem" based on what experience, exactly? Also, you have NO idea what constant space is if you think this geometrically increasing problem set isn't solved linearly. Also, using nested loops... really, dude?
I did not understand a thing; can you suggest something so that I can understand what you are really talking about?
I'm working on long form solutions to all of these :)
Who would bother passing hard interviews at Google to end up being micromanaged by a random woman or DEI hire? And forget about promotions
Get your sexist bs off my channel
That's a really nice problem! I have one question about this solution though.
Why do you check:
if i != 0:
and
if j != 0:
when you already checked it at:
if i == j == 0:
continue
Isn't the base case already handled?
The line i == j == 0 only exists to skip the very first iteration of the double loop.
The individual i != 0 check says that as long as i is a number other than zero, you have a row above from which to calculate the "up" path.
Doesn't that only check if both j and i are equal to 0? I don't write Python, so I'm not 100% sure.
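To make the thread concrete, here is how those checks fit together in an in-place sketch of the approach (not necessarily the video's exact code; the function name is mine). Note that `i == j == 0` is Python's chained comparison, i.e. `i == 0 and j == 0`:

```python
def min_path_sum_in_place(grid):
    # Mutates grid so each cell holds the minimum cost of reaching it,
    # then the answer sits in the bottom-right corner.
    m, n = len(grid), len(grid[0])
    for i in range(m):
        for j in range(n):
            if i == j == 0:
                continue                    # start cell: nothing to add
            # The i == j == 0 check only skips the start; the i != 0 and
            # j != 0 checks guard the top row / first column, where only
            # one neighbor exists.
            up = grid[i - 1][j] if i != 0 else float("inf")
            left = grid[i][j - 1] if j != 0 else float("inf")
            grid[i][j] += min(up, left)
    return grid[-1][-1]
```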