12. Bellman-Ford

  • Published: 30 Jan 2025

Comments • 38

  • @sheeevum
    @sheeevum 2 years ago +38

    I initially found this lecture frustratingly opaque, but after thinking about it for a while I realized it was just a lot of material that Jason had to go through in a small amount of time. I found it valuable to refactor the lecture in this way:
    Bellman-Ford works to solve two problems:
    1. How to handle cycles?
    - We've already identified algorithms that can find fast single source shortest paths on directed acyclic graphs, so let's recast the graph as a DAG
    - To do this, make a V x W grid of nodes (W levels of the V vertices) and rewire the edges so that each edge traversal forces you down one level. Jason explains this late in the lecture at 35:04, but I thought it more intuitive to think of it up front to frame the problem. But how many levels W should we go out to?
    - We definitely want to cover all simple paths. ***The longest simple path takes at most V-1 edges to traverse, as explained at 13:35, so start by thinking of that for now.
    - In fact, we'll just take the distances to the V-1th traversal as the minimum distances for now, since they integrate information for all simple paths.
    2. How to handle negative weights? (And in combination with #1, how to handle negative weight cycles?)
    - Now we need to think about cycle lengths. The most adversarial cycle you could construct would be through every node of the graph in a big circle, and you'd need V edge traversals to complete that and get back to the source.
    - ^That's a key point! If you go out V edges, you will complete not only every simple path, but also every cycle! ***Comparing V with V-1 distinguishes between simple paths and cycles.
    - So if we compare the V-1'th level with the V'th level and find the shortest path at the V'th level to be smaller for some node, there must be a negative weight cycle present. Jason talks about this at 18:00 theoretically and in an example at 55:20. We'll call that node a witness, and return to it a little later. That's pretty nifty, though, damn.
    - But this is only relevant if you can prove the converse: every negative cycle must have at least one witness node whose distance dropped between the V-1'th and V'th levels. That way, we just need to look out for witness nodes to signal that there's a negative cycle present. Jason proves this at 22:57, but I got better intuition by thinking about what a witness actually is on the graph:
    - We already know from above that I'll complete every cycle by V edge traversals. So if there's a negative cycle, I'll have either *just* traversed it if it has V elements in a big circle, or I'll have already traversed it some time ago & the shortest distance is circling the cycle again and again. Either way, by completing the negative weight cycle, I'll have ended more negative than when I started. If I keep traversing more edges along the cycle, it'll get more and more negative! ***And importantly, as long as I keep circling, each newly visited node along the negative cycle will be the witness. So every negative cycle must have a witness, and the witness is just the last node visited when I've already circled the cycle.
    - Now that I have a witness, I know there must be a negative weight cycle. So just find everything reachable from the witness nodes, which is everything that would be ratcheted down to minus-infinity distance. Then just go back and replace those nodes' V-1'th level distances with minus infinity!
    I hope this explanation is helpful for others in the future! Maybe it'll give a little more intuition about why the proofs are important & how they stitch together. (A Python sketch of this recipe follows below.)
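
    For anyone who wants to see the recipe above end to end, here is a minimal Python sketch of it (the function and variable names are my own; this is only an illustration of the idea in this comment, not the lecture's code):

        from collections import deque

        def bellman_ford_layered(V, E, s):
            # V: list of vertices; E: dict mapping (u, v) -> weight; s: source.
            # Returns shortest-path weights from s; vertices affected by a
            # negative cycle get -infinity.
            INF = float('inf')
            # d[k][v] = shortest-path weight from s to v using at most k edges,
            # i.e. the distance to v's copy at level k of the duplicated graph.
            d = [{v: (0 if v == s else INF) for v in V}]
            for k in range(1, len(V) + 1):      # go out to V levels, not just V-1
                prev, cur = d[-1], dict(d[-1])
                for (u, v), w in E.items():
                    if prev[u] + w < cur[v]:
                        cur[v] = prev[u] + w
                d.append(cur)

            dist = dict(d[len(V) - 1])          # tentative answer: the V-1'th level
            # Witnesses: nodes whose distance still drops from level V-1 to level V.
            witnesses = [v for v in V if d[len(V)][v] < d[len(V) - 1][v]]

            # Everything reachable from a witness is ratcheted down to minus infinity.
            adj = {u: [] for u in V}
            for (u, v) in E:
                adj[u].append(v)
            queue, seen = deque(witnesses), set(witnesses)
            while queue:
                u = queue.popleft()
                dist[u] = -INF
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        queue.append(v)
            return dist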

    • @jialunluo769
      @jialunluo769 2 years ago +1

      hidden gem! Thanks

    • @ecpgieicg
      @ecpgieicg 9 months ago

      The number of comments saying the lecture presented a "modified" Bellman-Ford baffles me. It makes me wonder how many of these complaints come from people who understand (or have even thought about) why Bellman-Ford works.
      The transformed graph is a very simple way to see what each pass of DAG relaxation does. That is, (as long as you remove those 0-weight edges) each pass considers an additional set of paths that are one edge longer.
      It's probably better to first discover for oneself why DAG relaxation doesn't work with cycles and cannot be modified to work with cycles containing negative edges. That makes the graph transformation -- which maps paths 1-to-1 and gets rid of cycles -- much more motivated.
      Prof Ku's lectures tend not to be ones where the learning wraps up within the lecture.
      It is also true that these issues are better shown first with examples. Some inductive steps are more elaborate -- although in my opinion the graph transformation can already prove correctness without induction, which is why it's good -- but the packaging found in textbook proofs is still generally worth stressing over, and it is best included in the main lecture. More details can be saved for the lecture notes, since listening to them is not better than reading them.
      Still, the complaints make zero sense when taken literally.
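
      In symbols (using \delta_k(s, v) for the minimum weight of a path from s to v with at most k edges -- which, I believe, is what level k of the transformed graph computes), each pass is just the recurrence

          \delta_0(s, s) = 0, \quad \delta_0(s, v) = +\infty \text{ for } v \neq s,
          \delta_k(s, v) = \min\bigl( \delta_{k-1}(s, v),\ \min_{(u, v) \in E} ( \delta_{k-1}(s, u) + w(u, v) ) \bigr),

      where the first term of the outer min corresponds to those 0-weight edges between levels.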

    • @4Znak0000
      @4Znak0000 8 months ago

      You may not see it, but I salute you

    • @othersherbet
      @othersherbet 2 months ago

      you're awesome.

  • @abcde9421
    @abcde9421 3 years ago +11

    I found this slightly more difficult than the vanilla Bellman-Ford, but the method of graph duplication is eye-opening to me. Thanks, Prof. Jason, for the nice introduction.

  • @kyrylogryshko8572
    @kyrylogryshko8572 2 years ago +10

    This instructor is a true confusion master. He uses the same letter to mean different things in one formula, writes the same letter quite differently within one sentence/claim, and has many other tricks to spread deeper confusion (they work so well sometimes that you do not even notice the trick but just feel confused about a topic that seems simple). This is really a new level of confusion spreading. Still, it is good that another version of the same lecture (from 2011, Lec 17) is available on YouTube/OCW. If what you are looking for is understanding and not confusion, you can check that out instead.

    • @JeffreyConcerto
      @JeffreyConcerto 2 years ago +1

      Lol, so true. The antithesis of a good teacher, unfortunately. The 2011 lecture is light years better.

    • @netoskin
      @netoskin 1 year ago

      Dr. Srini is just a better lecturer.

  • @VishalSingh-wn4ul
    @VishalSingh-wn4ul 5 months ago

    I see people find this harder to understand than the first 11 lectures, but please trust me (a stranger on the internet) that this lecture is truly worth it. The first half was a bit unclear to me, but after reading an explanation of it in the comment section I got the intuition. After the first half you will learn something truly amazing. I had some previous knowledge of Bellman-Ford (I watched the 2011 version of the 6.006 Bellman-Ford lecture, Lec 17, on a suggestion from fellow self-learners in the comments), but this explanation is amazing, new, and intuitive. So if you find it hard, bear with it, ask for help, and don't give up.

  • @user-yg8ss3ey6r
    @user-yg8ss3ey6r 1 year ago +3

    I thought this lecture was brilliant.
    The way he relates the concept of DAG relaxation through graph duplication is amazing.
    I think there's a point as to why he chose to explain a modified version of Bellman-Ford.
    I loved it.

  • @sungjuyea4627
    @sungjuyea4627 2 years ago +2

    I guess the heart of this lecture is to understand:
    1. the definition of a witness
    2. a witness implies a negative cycle
    3. every negative cycle (reachable from the source) contains a witness
    So by 2 and 3, we may think of a witness as "sort of" corresponding 1-to-1 to a negative cycle. Knowing all the witnesses can show us all the negative cycles in the graph, and vice versa.
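
    In symbols (assuming, as in the lecture, that \delta_k(s, v) denotes the minimum weight of a path from s to v using at most k edges), a vertex v is a witness when

        \delta_{|V|}(s, v) < \delta_{|V|-1}(s, v),

    i.e. its distance estimate still improves on the extra |V|-th round of relaxation.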

  • @nullentrophy
    @nullentrophy 3 years ago +3

    I understood this one after watching the old version. Thanks, MIT and Prof. Jason!

  • @raymondwang8334
    @raymondwang8334 3 years ago +4

    Bellman-Ford is actually dynamic programming, and also the DAG relaxation from the last lecture! Bingo!

  • @alfredwindslow1894
    @alfredwindslow1894 3 months ago

    I agree with the comments that this lecture is certainly more confusing than any of the previous ones. I had also watched the 2011 Bellman-Ford lecture before this and followed it with little confusion. However, I think this lecture explains Bellman-Ford more concretely. I would definitely recommend spending the extra time to understand each of the concepts introduced here.

  • @bigchadryans230
    @bigchadryans230 2 years ago

    Your proofs are elegant. I was hooked from 28:29 to the end, like binge-watching Netflix. Thank you, sir.

  • @Adiosumi
    @Adiosumi 3 years ago +9

    This is a slightly enhanced Bellman-Ford,
    but I think the vanilla one is easier to learn and could help in understanding the presented one.
    Erik's 6.046J of 2005 is a good start: ruclips.net/video/Ttezuzs39nk/видео.html.
    The presented Bellman-Ford transforms the graph into a DAG by parameterizing "state" (the k-level), and its equivalence/correctness is proved. A DAG has no cycles (it is acyclic), so relaxation (or any existing DAG algorithm) can run happily to the end.
    The vanilla Bellman-Ford runs faster in practice, but it stops when it detects a negative cycle. (simple shortest paths)
    The presented Bellman-Ford is able to handle shortest paths that pass through a negative cycle, reporting them as -∞. (non-simple shortest paths)
    It is also easier to analyze mathematically, but harder to understand.
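
    For reference, a minimal sketch of the vanilla Bellman-Ford this comment compares against (the standard textbook version, not the lecture's algorithm; names are my own):

        def bellman_ford_vanilla(V, E, s):
            # V: list of vertices; E: dict mapping (u, v) -> weight; s: source.
            INF = float('inf')
            d = {v: INF for v in V}
            d[s] = 0
            # |V| - 1 passes: after pass k, d is correct for every shortest path
            # that uses at most k edges.
            for _ in range(len(V) - 1):
                for (u, v), w in E.items():
                    if d[u] + w < d[v]:
                        d[v] = d[u] + w
            # One extra pass: any further improvement means a negative-weight
            # cycle is reachable from s.
            for (u, v), w in E.items():
                if d[u] + w < d[v]:
                    raise ValueError("negative-weight cycle reachable from source")
            return d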

  • @mohammadsalehdehghanpour9856
    @mohammadsalehdehghanpour9856 8 months ago

    18:47 how can a simple path contain cycles? Because by definition it shouldn't visit a vertex twice...

  • @violetashopova3586
    @violetashopova3586 8 months ago

    This poses an interesting question - which style is better for TEACHING: from discrete observation to generalization of ideas, or from general ideas to supporting proofs 🤔

  • @Karim-nq1be
    @Karim-nq1be 1 year ago +1

    When we reached the witness part, I completely forgot WT... we were supposed to show.

  • @蘇偉華-p4b
    @蘇偉華-p4b 3 years ago +14

    I hope Jason can improve his teaching style. Thinking his sentences over before speaking them out loud would help a lot. Also, providing some examples is necessary. Thanks anyway!

  • @pariveshplayson
    @pariveshplayson 3 years ago +6

    Some examples would have helped.

  • @bobongleizi5179
    @bobongleizi5179 6 months ago +1

    Am I stupid, or is this topic harder than the others??? I had no problem following lectures 1 to 11, but here I have spent like an hour and am only halfway through the video.

  • @umardaaud4
    @umardaaud4 25 days ago

    I wish all lectures were taught by Prof. Erik Demaine.

  • @thinhnguyenvan7003
    @thinhnguyenvan7003 3 years ago +3

    Claim 1: If δ(s, v) is finite, there exists a shortest path to v that is SIMPLE. Can anyone explain to me what the word SIMPLE means in this case?

    • @nishantdhamija2880
      @nishantdhamija2880 3 years ago +3

      Simple means not going through any vertex more than once; just watch 11:35-11:50.

    • @thinhnguyenvan7003
      @thinhnguyenvan7003 3 years ago

      @@nishantdhamija2880 thank you so much

    • @pariveshplayson
      @pariveshplayson 3 years ago

      The key word is finite ... otherwise one could keep going around a negative-weight cycle to continue decreasing the distance.

  • @arsyadkamili6628
    @arsyadkamili6628 3 years ago +16

    Gotta say this lecture is the worst one of the lot. Jason stumbled over his words so much that the explanation became incoherent, and it got really hard to understand what he was trying to convey.

  • @asfdasdf-lm7gz
    @asfdasdf-lm7gz 2 years ago

    What is a witness?

  • @vidakkazic3289
    @vidakkazic3289 2 years ago +1

    At 28:53, I don't see how the instructor's statement holds. In the earlier claim he references, we had distances between s and v on both sides of the inequality, but here we have (s, v) on the left side and (s, pred(v)) on the right, not (s, v), so I cannot see how that claim can be applied here.

  • @hanyanglee9018
    @hanyanglee9018 2 years ago

    If you have a negative sub-cycle of 3 vertices in a graph with 1e6 vertices in total, it may take only about 1e4 passes to reach the stable state everywhere except on the negative cycle. And then you have to keep updating everything, something like 3e5 more times, just to get some node to witness a path length greater than the total number of vertices.
    Also, I'm confused by the witness. Does it compare the number of steps and the sum of weights at the same time and try to figure something out?

  • @haomintian6815
    @haomintian6815 2 years ago

    Why can there be vertices with -∞ shortest-path weight that are not witnesses? An example would be really helpful.

    • @sungjuyea4627
      @sungjuyea4627 2 years ago

      A "triangle" graph - |V| = |E| = 3 - might help(not sure if it helps though)? Every edge has w = -1, and we simply pick a vertex s out of those three vertices. Then for another distinct vertex v, we have \delta_{2}(s, v) = -2 = \delta_{3}(s, v) (obviously there is no path of edge number 3 in this case). I'm afraid this might be a little bit vacuous example, but I think we may find other pathological examples in similar ways.

  • @jackzeinder1705
    @jackzeinder1705 2 years ago +5

    I am pretty sure most of you were a witness or a victim of a negative cycle while learning about the Negative Cycle Witness proof. What the flying fox is the use of the Negative Cycle Witness? Jason, this is the part that needs to be addressed for this lecture.

  • @amansinghal5908
    @amansinghal5908 2 years ago +2

    Much better explanation here - ruclips.net/video/ozsuci5pIso/видео.html&ab_channel=MITOpenCourseWare (MIT OCW 2013)

  • @coding99
    @coding99 11 months ago

    20:55