Thank you so much for this explanation!
Thank you for uploading this material. Really helpful !!
Thank you for the video! It's clear and easy to understand.
best explanation on yt, thank you!
spectacular video, great graphics and very easy to understand! easily the best I could find!
You are the best professor...🎉
Best explanation found... This video should be appreciated more... It's better than those Indian YouTubers
I learned a ton from your videos, thanks.
Thanks for the great video!
First of all, I'm highly grateful for the recursive solutions of the Bellman-Ford and Floyd-Warshall algorithms.
I did not see any other YouTube video cover this recursive aspect, which in my opinion is fundamental for deriving the bottom-up DP solution.
Why can we not think about the Bellman-Ford algorithm the same way we do about Floyd-Warshall?
minCost(source, destination) can include almost all V vertices.
In the recursive solution, if we just strip out the first loop of Floyd-Warshall (the one that loops through all nodes as the source), are we not calculating minCost(source, destination, V)?
Basically if we write the Bellman-Ford algorithm like this:
# Excluding caching and all, for simplicity
from typing import List


def get_shortest_path_from_source(graph: Graph, source: int) -> List[int]:
    """
    param graph: Graph -> Graph is some custom class
    param source: int -> Source node
    """
    # TC -> O(V ** 2)
    shortest_paths: List[int] = []
    for max_nodes_to_consider_in_shortest_path in range(graph.get_node_count()):
        for destination in range(graph.get_node_count()):
            min_cost: int = get_min_cost(source=source,
                                         destination=destination,
                                         k=max_nodes_to_consider_in_shortest_path)
            if max_nodes_to_consider_in_shortest_path == graph.get_node_count() - 1:
                shortest_paths.append(min_cost)
    return shortest_paths


def get_min_cost(source: int, destination: int, k: int) -> int:
    """Min cost from source to destination using at most the first k nodes as intermediates."""
    if source == destination:
        return 0
    if k == -1:
        # edge_weight is the graph's V x V weight matrix
        return edge_weight[source][destination]
    # Below operations are constant time, as we'll cache these
    min_cost_including_kth_node: int = get_min_cost(source, k, k - 1) + get_min_cost(k, destination, k - 1)
    min_cost_excluding_kth_node: int = get_min_cost(source, destination, k - 1)
    return min(min_cost_including_kth_node, min_cost_excluding_kth_node)
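The caching the snippet leaves out can be sketched with functools.lru_cache. Below is a minimal, self-contained version of get_min_cost under the assumption that edge_weight is a dense V x V weight matrix with math.inf for missing edges; the 3-node matrix is made up purely for illustration:

from functools import lru_cache
import math

# Hypothetical 3-node weight matrix: edge_weight[u][v] is the cost of edge u -> v,
# math.inf where no edge exists, 0 on the diagonal.
edge_weight = [
    [0, 4, math.inf],
    [math.inf, 0, 1],
    [2, math.inf, 0],
]


@lru_cache(maxsize=None)
def get_min_cost(source: int, destination: int, k: int) -> float:
    """Min cost from source to destination using only nodes 0..k as intermediates."""
    if source == destination:
        return 0
    if k == -1:
        return edge_weight[source][destination]
    # Each (source, destination, k) triple is computed only once thanks to the cache.
    including_k = get_min_cost(source, k, k - 1) + get_min_cost(k, destination, k - 1)
    excluding_k = get_min_cost(source, destination, k - 1)
    return min(including_k, excluding_k)


print(get_min_cost(0, 2, 2))  # 5, via the path 0 -> 1 -> 2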
This may reduce Bellman-Ford to O(V**2) from O(VE).
Am I missing something here? If not, why do we not think about nodes instead of edges in Bellman-Ford, given that iterating over the edges is O(V**2) in a complete graph?
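For comparison, here is a minimal sketch of the textbook edge-relaxation Bellman-Ford that the O(VE) bound refers to; the edge-list input format is an assumption for illustration, not the Graph class from the snippet above:

import math
from typing import List, Tuple


def bellman_ford(num_nodes: int,
                 edges: List[Tuple[int, int, float]],
                 source: int) -> List[float]:
    """Textbook Bellman-Ford: relax every edge V - 1 times, hence O(V * E)."""
    dist = [math.inf] * num_nodes
    dist[source] = 0
    for _ in range(num_nodes - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist


# Example: 3 nodes, edges given as (u, v, weight)
print(bellman_ford(3, [(0, 1, 4), (1, 2, 1), (0, 2, 7)], source=0))  # [0, 4, 5]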
Keep up with your amazing videos, sir.
It's very clear, thank you!
Thanks a lot
thanks! finally i understand this fucking algorithm
thanks
Why k < 0 and not k
It's not a directed graph... so it's harder in this example.
Your proof is not correct. Why can you be sure that you can update n intermediate nodes with only n operations? Why is no further update required afterwards?
Please point me to the part of the video you are referring to.
It does actually get multiple updates (some of them). If you look at the matrix, you can see that some of the elements are updated again after we increase "k", meaning that if a number didn't change after "k" was increased, that element didn't need any updates.
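For context, here is a minimal bottom-up Floyd-Warshall sketch showing the behaviour being described: an entry of the distance matrix can be lowered again for a later "k" (the 4-node weight matrix is made up for illustration):

import math

# Hypothetical 4-node weight matrix; math.inf marks a missing edge.
INF = math.inf
dist = [
    [0, 3, INF, 7],
    [8, 0, 2, INF],
    [5, INF, 0, 1],
    [2, INF, INF, 0],
]

n = len(dist)
for k in range(n):                      # allow node k as an intermediate
    for i in range(n):
        for j in range(n):
            if dist[i][k] + dist[k][j] < dist[i][j]:
                # dist[i][j] may be lowered again for a later k,
                # which is the "multiple updates" visible in the matrix.
                dist[i][j] = dist[i][k] + dist[k][j]

for row in dist:
    print(row)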