You're rocking it, Srinivas 🎉
Max-flow min-cut theorem and the proof starting from 17:00
THANK YOU
This whole series is awesome and the instructors are amazing. So full of energy and with amazing teaching skills. Thank you so much all of you :)
31:25 professor enjoying his frisbee skill
outstanding
Good explanation
I think it would be better if the camera always focused on the blackboard.
No thanks! I like that it focuses on the professor. You can use the course slides from the website.
I also think it would be better if the camera always focused on the blackboard.
No! It is so much fun watching this professor. He is entertaining and can explain things really well.
in the proof from 17:00-35:00
Isn't he just proving this for the special cut he defined? But the first part of the theorem states |f| = c(S,T) for ANY (S,T) cut.
Max flow |f| = c(S,T) for some cut, not any cut, I think. Some (S,T) cut may have a larger capacity, which is fine, since all we need is |f| ≤ c(S,T) for any (S,T) cut.
Here, he just showed that there is always some (S,T) cut out there such that |f| equals the capacity of that cut.
Did I make sense?
This helped a lot
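To make the quantifiers explicit, here is the relationship written out (my own summary of the standard statement, not a quote from the lecture): the inequality holds for every cut, and equality holds for at least one cut, namely a minimum one.

```latex
% Weak duality: any flow f and any (S,T) cut satisfy
|f| \le c(S,T).
% Max-flow min-cut: for a maximum flow f^* there exists a cut (S^*,T^*) with
|f^*| = c(S^*,T^*) = \min_{(S,T)} c(S,T).
```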
At 42:49, I didn't understand why there are 2 billion iterations. Doesn't it just keep oscillating between the two paths?
There is no error in the lecture. In this pathological case, the residual capacity of the augmenting path is 1 on each iteration (because the edge with minimum capacity is either a->b or b->a). At the same time, the maximum flow of this graph is 2 billion: there are 2 billion units going out of the source (1 billion on s->a and 1 billion on s->b); you can also see it in terms of the sink, where 2 billion units come in. So, to achieve the maximum flow, we need 2 billion add-1-unit iterations.
@@dimakuv I don't understand why you can't do s-a-t and skip the a-b or b-a bottleneck. Thanks in advance!
@@FunnyMartix For some path-search strategies, the algorithm may select the path you mentioned (s->a->t), but with other strategies it may always select s->a->b->t and s->b->a->t alternately, which leads to the 2 billion iterations. After all, this is just a demonstration of a bad case for the Ford-Fulkerson algorithm. Hope this makes sense to you.
Try manually running the Ford-Fulkerson algorithm, but always choose the bad paths (s-a-b-t) and (s-b-a-t), and see what happens. You'll easily see why it runs 2 billion times :)
@@FunnyMartix The reason is that Ford-Fulkerson does not specify which augmenting path to pick when there are multiple choices. In the worst case, if we are unlucky, it could always pick a bad path, and the algorithm then runs for 2 billion iterations.
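In case it helps to see the alternation concretely, here is a small sketch (my own code, not from the lecture) that forces exactly this bad path choice on the pathological graph. C is kept small so it finishes instantly; with C = 10^9 it would be the 2 billion iterations discussed above.

```python
# Ford-Fulkerson with a deliberately bad path choice on the pathological graph:
# s->a, s->b, a->t, b->t have capacity C, and a->b has capacity 1.  Alternating
# between s->a->b->t and s->b->a->t pushes only 1 unit per iteration, so it
# takes 2*C iterations to reach the max flow of 2*C.

C = 5  # set to 10**9 to match the lecture's numbers (don't actually run that)

# residual capacities, cap[u][v]
cap = {
    's': {'a': C, 'b': C},
    'a': {'b': 1, 't': C, 's': 0},
    'b': {'a': 0, 't': C, 's': 0},
    't': {'a': 0, 'b': 0},
}

def augment(path):
    """Push the bottleneck of `path` and update the residual graph."""
    bottleneck = min(cap[u][v] for u, v in zip(path, path[1:]))
    for u, v in zip(path, path[1:]):
        cap[u][v] -= bottleneck
        cap[v][u] += bottleneck
    return bottleneck

bad_paths = [['s', 'a', 'b', 't'], ['s', 'b', 'a', 't']]
flow = iterations = 0
while True:
    path = bad_paths[iterations % 2]
    if min(cap[u][v] for u, v in zip(path, path[1:])) == 0:
        break  # the chosen path is no longer augmenting
    flow += augment(path)
    iterations += 1

print(f"max flow = {flow}, iterations = {iterations}")  # both equal 2*C
```

Each iteration pushes just 1 unit because the a-b edge (or its residual reverse edge) is always the bottleneck, which is exactly why the lecture's example needs 2 billion iterations.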
Some other example would have helped students outside MIT. The league structure and scores seem a bit obfuscating.
From what I understood, the last example allows team NY to win only one game among the games it plays against the other 5 teams. But could there be a case where NY wins a game against a team we have not considered, and hence team 5 still doesn't make it?
Please correct me if I am wrong.
1:21:00 Can someone help me out? Srini said that edge has infinite capacity, and it makes sense not to count it in the min-cut value because it is limited by the source (s). But when we compute the cut we count flow going from S to T, and here the infinite-capacity edge goes from T to S (where S and T are the partition we obtained after the cut), so shouldn't it count as -infinity? Why don't we consider the flow through the cut here, as we did in (th-cam.com/video/VYZGlgzr_As/w-d-xo.html)? I do understand that the capacity of the middle region can be infinite, because those edges are limited by the incoming flow from the source (so the flow adjusts accordingly).
The trick is that in a directed graph, we only have to cut (and count) the edges that go from the resulting S to T. So, intuition-wise, we are separating t so that flow can't get from s to t. Think of s as a battery and t as a light bulb. en.wikipedia.org/wiki/Minimum_cut#With_terminal_nodes may also help.
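If it helps, the standard definitions make this concrete (my own summary, not a quote from the lecture): the *capacity* of a cut sums only the edges leaving S, while the *flow across* the cut subtracts the actual (finite) flow coming back from T to S, never the capacities of those backward edges.

```latex
% Cut capacity counts only edges directed from S to T:
c(S,T) \;=\; \sum_{u \in S}\,\sum_{v \in T} c(u,v)
% Flow value across the cut subtracts the finite backward flow, so it is still
% bounded by the (forward-only) capacity:
|f| \;=\; \sum_{u \in S}\,\sum_{v \in T} f(u,v) \;-\; \sum_{u \in S}\,\sum_{v \in T} f(v,u) \;\le\; c(S,T)
```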
How would you apply the same concept to cricket tournaments, where each match has 3 possible outcomes, plus net run rate in case of equal points?
42:10 I thought it was really 2 billion iterations
It is from CLRS.
It is, because each edge has capacity 10^9 = 1 billion, not 109 (which is what I personally read at first).
🚀❤
At 1:08:31, why are the capacities of those edges infinite?
The capacities of inner edges (between the "team-pairs" layer and "teams" layer) are irrelevant and so are set to infinity. Each of these edges indicates the number of games a particular team won when playing against some other team (for example, flow through edge "1-2" -> "1" shows how many times Team 1 won while playing against Team 2). Clearly, the flow through each of these edges is bounded by the number of games each team-pair has to play. Since these numbers are already specified as capacities of source-edges and sink-edges, they effectively restrict max flow that can go through inner edges. Thus, there is simply no need to put additional restrictions on inner-edge capacities.
@@dimakuv shit im copying parts of your comment for my hw
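For anyone who wants to play with the construction, here is a sketch of the reduction described above with made-up league numbers (my own code, not the lecture's actual data). Source -> pair edges carry the games still to be played by that pair, pair -> team edges are "infinite" as explained in the comment above, and team -> sink edges carry how many more wins that team may collect without overtaking the team we are testing.

```python
from collections import deque
from itertools import combinations

INF = float('inf')

def max_flow(cap, s, t):
    """Edmonds-Karp on a dict-of-dicts residual graph (modified in place)."""
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:          # BFS for an augmenting path
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t                            # recover the s -> t path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)     # bottleneck capacity
        for u, v in path:
            cap[u][v] -= push
            cap[v][u] = cap[v].get(u, 0) + push    # residual (reverse) edge
        flow += push

def eliminated(team, wins, left):
    """True if `team` cannot possibly end the season with the most wins."""
    others = [x for x in wins if x != team]
    best = wins[team] + sum(left[team].values())   # team wins everything left
    cap = {'s': {}, 't': {}}
    for x in others:
        if best - wins[x] < 0:
            return True                            # trivially eliminated
        cap[x] = {'t': best - wins[x]}             # x may win this many more
    total = 0
    for x, y in combinations(others, 2):
        g = left[x][y]                             # games x and y still play
        total += g
        cap['s'][(x, y)] = g
        cap[(x, y)] = {x: INF, y: INF}             # either team can take a win
    return max_flow(cap, 's', 't') < total         # some win can't be absorbed

# Toy league (hypothetical numbers, not the lecture's): wins so far, and 3
# games left between every pair of teams.
wins = {'NY': 75, 'BAL': 74, 'BOS': 73, 'TOR': 72, 'DET': 65}
left = {a: {b: 3 for b in wins if b != a} for a in wins}
print(eliminated('DET', wins, left))               # True: DET is eliminated
```

The elimination test at the end is the same criterion the lecture uses: the team is out exactly when the max flow is smaller than the total number of games the other teams still have to play among themselves.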
I got lost in the baseball part; I don't understand it at all.
Did the professor give a wrong answer in his baseball example, or am I the one thinking about it wrong?
If the number of games remaining between the leading teams (which is 30 in the example) is greater than the capacity into the drain (26), then Detroit is out, because some games will exceed the drain capacity.
Only when the total capacity out of the source, say Cs, is less than or equal to the total capacity into the drain (26) does one need to run the network-flow algorithm to decide. In that case, Detroit is out only if the maximum flow is smaller than Cs, because that means some game would have to add a win to a team whose capacity to the drain is already saturated, eliminating Detroit.
Don't know if you'll ever see this reply lol, but if I understood you correctly, neither you nor the prof seems to be wrong.
The latter half of your argument is the same argument he made; the difference is that in the first part of your comment you additionally found a faster verification method, but it relies on the specific conditions you laid out. The max-flow approach applies to every scenario, which is the benefit of that analysis.
If you notice, your argument is that if the capacity into the drain (which I'm assuming means the cut where S includes everything but t, and T = {t}) is less than the total number of games required to be played, you don't need to run the algorithm. But if you run the algorithm, you know by the max-flow min-cut theorem that the value of the returned flow is upper-bounded by the capacity into the drain anyway, so the flow approach works here too, though it's seemingly unnecessary. I guess we mostly care about worst-case analysis, since it will take the computer the same amount of time regardless of whether the problem is structured nicely enough to make assumptions. Hope that clarified things!
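For what it's worth, the shortcut described above is literally the weak-duality bound applied to one particular cut. A minimal sketch with the numbers quoted in this thread (my own code, not from the lecture):

```python
# Quick necessary condition before running max flow, using this thread's
# numbers: 30 remaining games among the leaders vs. 26 units of drain capacity.
remaining_games = 30          # total capacity out of the source s
drain_capacity = 26           # total capacity into the sink t

# Every unit of flow must cross the cut (everything-but-t, {t}), so the max
# flow can never exceed 26.  If 30 games must be absorbed but at most 26 wins
# can be, the team under test is eliminated without running any algorithm.
if remaining_games > drain_capacity:
    print("eliminated: not every remaining game can be absorbed")
else:
    print("inconclusive: run max flow and check whether all source edges saturate")
```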
Alexaaaaaander
who the f** is holding the camera!?
Please go back to the previous algorithm for the camera's motion. This video has too much camera movement.
Dude, these are pre-recorded and uploaded in batches; it's not like they're going to adjust the filming based on viewers' feedback.