I watched all these graph videos elsewhere over the past 2 years. Nice to see all those names in your lecture. I'll be able to learn more deeply from you :)
Karger-Stein algo
Hi Pavel,
Is this the correct CORE ALGO (i.e. the algo that we run in each of those log(n) * log(1/ε) iterations)? :
init gmc = ∞

algo(n):
    loop
        merge the 2 nodes of the selected random edge
        if graph has only 2 nodes
            mincut = # of edges between the 2 nodes
            gmc = min(gmc, mincut)
            return
        if graph has n/√2 nodes
            break
    endloop
    algo(n/√2)
    algo(n/√2)
I'm not feeling confident because my core algo may compute gmc = min(gmc, mincut) more than once per call, and hence quite a lot of times across those log(n) * log(1/ε) runs of the core algo.
Could you please clarify? Thanks! :)
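(For later readers: here is one minimal, runnable way I'd realize that contraction loop in Python for an unweighted multigraph. The edge-list representation, the union-find helper, and all the names are my own choices, not from the lecture; it returns the cut size instead of updating a global gmc.)

```python
import math
import random

def contract(edges, nodes, target):
    """Randomly contract edges until `target` super-nodes remain.
    `edges` is a list of (u, v) pairs (a multigraph), `nodes` a set of labels.
    Returns (surviving_edges, surviving_nodes) with self-loops removed."""
    label = {v: v for v in nodes}          # union-find: super-node of each label

    def find(x):
        while label[x] != x:
            label[x] = label[label[x]]     # path halving
            x = label[x]
        return x

    count = len(nodes)
    while count > target and edges:
        u, v = random.choice(edges)        # pick a uniformly random edge
        ru, rv = find(u), find(v)
        if ru != rv:                       # always true: self-loops are filtered out
            label[ru] = rv                 # merge the two endpoints
            count -= 1
            # drop edges whose endpoints now lie in the same super-node
            edges = [(a, b) for a, b in edges if find(a) != find(b)]
    return [(find(a), find(b)) for a, b in edges], {find(v) for v in nodes}

def karger_stein(edges, nodes):
    """One run of the Karger-Stein recursion: contract down to ~n/sqrt(2)
    nodes, recurse twice, keep the better answer.  Returns the size of some
    cut, which equals the global min cut with probability >= 1/log n."""
    n = len(nodes)
    if n <= 6:                             # small base case: contract straight to 2 nodes
        final_edges, _ = contract(edges, nodes, 2)
        return len(final_edges)
    t = math.ceil(1 + n / math.sqrt(2))
    return min(karger_stein(*contract(list(edges), nodes, t)) for _ in range(2))
```

Note that in this sketch the min over the two recursive calls plays the role of gmc = min(gmc, mincut): taking the minimum repeatedly is harmless, since every value returned is the size of a real cut and therefore never smaller than the global min cut.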
@1:27:00 -- Karger-Stein algo
Hi Pavel,
In your proof by induction, by assuming P(n) ≥ 1/log n, you arrived at log n ≥ 2.
In order to compute the GMC, the graph should have at least 2 nodes, i.e. n ≥ 2.
When n ≥ 2, log n ≥ log (√2)² ≥ 2
Is this what you intended to prove? Please clarify.
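For anyone else puzzling over this step, the induction in the standard Karger-Stein analysis (as I understand it; logs base 2; this is a sketch of my reading, not necessarily Pavel's exact derivation) runs like this:

```latex
% One call contracts to n/\sqrt{2} nodes, which preserves the min cut with
% probability at least 1/2, and then makes two independent recursive calls.
% Writing p = P(n/\sqrt{2}):
P(n) \;\ge\; 1 - \Bigl(1 - \tfrac{1}{2}\,p\Bigr)^{2} \;=\; p - \frac{p^{2}}{4}
% Induction hypothesis:
p \;\ge\; \frac{1}{\log\bigl(n/\sqrt{2}\bigr)} \;=\; \frac{1}{\log n - \tfrac{1}{2}}
% Substituting and clearing denominators shows P(n) \ge 1/\log n whenever
% \log n \ge 2, which (as far as I can tell) is where that condition enters;
% the very small cases are handled directly by the base case.
```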
Thanks from India. Great content. resolved my doubts
@49:08 -- Algo based on mincut(s,t) = ΣWsv ∀v as s's neighbor (Stoer-Wagner algo)
Hi Pavel,
I understood why mincut(s,t) = ΣWsv
But I didn't understand how W(v, Av) ≤ Cv!
I didn't understand because in the proof you used W(u, Au) ≤ Cu, which is the very thing we're trying to prove, so it looks like you used it before proving it!
Could you please clarify? Thanks.
Karger's algo :: O(n⁴log(1/ε))
Hi Pavel,
(1) Does this mean we run the algo (i.e. merge the nodes of a random edge by assuming that the edge doesn't belong to GMC) on the unweighted graph kn² = n²log(1/ε) times, and update GMCmin every time? (where mincut during every algo run is the number of edges between the 2 nodes we would be finally left with after multiple node-merges). Is my understanding correct?
(2) And, in the case of a weighted graph, the mincut during every algo run is the sum of the weights of the edges between the 2 nodes we would finally be left with after multiple node-merges. The rest of the algo's behavior is the same as with an unweighted graph. Is Karger's algo applicable to weighted graphs also?
1) Yes. 2) To make it work for a weighted graph, we need to modify the procedure for selecting the edge to contract: the probability of an edge being contracted must be proportional to its weight.
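(A minimal sketch of that weighted selection step, using Python's standard library; the helper name and the edge-list shape are my own assumptions, not from the lecture:)

```python
import random

def pick_edge_weighted(edges, weights):
    """Choose the edge to contract with probability proportional to its
    weight.  An edge of weight w then behaves like w parallel unit edges,
    which is exactly how the weighted case reduces to unweighted Karger."""
    # random.choices draws index i with probability weights[i] / sum(weights)
    return random.choices(edges, weights=weights, k=1)[0]
```

For example, with edges [("a", "b"), ("b", "c")] and weights [99, 1], roughly 99% of the draws return ("a", "b"), matching the "w parallel unit edges" view of the unweighted algorithm.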
@1:09:00 -- Karger's algo
Hi Pavel,
When Cmin is the global min cut, and e ∈ Cmin means edge e belongs to the min cut, then shouldn't the probability that e ∈ Cmin = 2/n represent the probability of success? (You said that's the probability of failure.)
No. If we contract two nodes that belong to different sides of the optimal cut, we have no chance of finding that cut anymore, so it's a failure.
The 2nd algo :: O(nmlogn)
Hi Pavel,
(1) Could you please write the algo's name? I couldn't make out the name from your lecture!
(2) How is it O(nmlogn)? I'm getting something else!
We have to generate the ordered sequence of nodes using a binary heap (BH) (implemented as a max heap) only once: B = 1 3 2 5 4
-- Starting with set A=∅
and start with all nodes in BH, with 0 as the "initial total weight of all edges connecting the node to nodes in set A"
-- all nodes have same weight, so one of them is removed from BH. Let this be 1 -- A = {1}
-- Now, for every node in BH, if it has an edge to the latest addition to set A (1 here), add the edge weight to its current value, else add nothing
-- remove node with max value from heap
-- and so on
-- O(logn) time for each remove() operation
O(n) time to update the total weight values for all nodes in BH
And we do this O(n) times
Hence O(n * (n + logn)) to generate the ordered sequence of nodes B
-- init mincost = ∞
We start with (s,t) = (final node, penultimate node) in B
-- cost = sum of weights of all edges connected to node s
this runs in O(m) time, as there can be more than 1 edge between s and some other node
mincost = min(mincost, cost)
-- Now we merge s and t
s = merge(s, t)
this runs in O(n) time
-- now t again becomes the penultimate node in B (s is the final node in B, and is a set of nodes)
-- And we continue this O(n) times until we are left with only 1 entry in B = s, containing all the nodes of the graph
-- Hence we run this OP in O(n * (m + n))
-- Total run time T(algo#2) = T(build B) + T(compute mincost)
= O(n * (n + logn)) + O(n * (n + m))
= O(n * n) + O(n * m)
= either O(n²+nm) or O(nm)
Could you please let me know where I went wrong in my understanding? Thank you!
If you spend O(n) time to update weights, the total time for a single phase will be O(n²), and the total time will be O(n³). Using a BH we can do a single phase in O(m log n), so the total time will be O(nm log n). (Basically the same as in Dijkstra's algorithm.)
@pavelmavrin wow! Super... understood my mistake... O(nm log n) it is :)
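(For anyone following along: here is one way to realize the full loop with the heap-based phase in Python, assuming a connected graph stored as nested dicts. This is my own sketch of Stoer-Wagner under those assumptions, not Pavel's code; the lazy-deletion trick with heapq is the Dijkstra-style part his reply refers to.)

```python
import heapq

def stoer_wagner(adj):
    """Global min cut of a connected, undirected, weighted graph.
    `adj`: dict node -> dict neighbor -> weight (symmetric, no self-loops,
    node labels mutually comparable).  Runs n-1 phases; each phase is a
    Dijkstra-like O(m log n) heap loop, giving O(nm log n) overall."""
    adj = {v: dict(nbrs) for v, nbrs in adj.items()}   # work on a copy
    best = float("inf")
    while len(adj) > 1:
        # one phase: build the maximum-adjacency order from an arbitrary start
        start = next(iter(adj))
        weight = {v: 0 for v in adj}       # w(v, A) for every v not yet in A
        heap = [(0, start)]                # heapq is a min-heap: negate weights
        in_A, order = set(), []
        while heap:
            _, v = heapq.heappop(heap)
            if v in in_A:
                continue                   # stale entry (lazy deletion)
            in_A.add(v)
            order.append(v)
            for u, w in adj[v].items():
                if u not in in_A:
                    weight[u] += w
                    heapq.heappush(heap, (-weight[u], u))  # new best key for u
        s, t = order[-2], order[-1]
        best = min(best, weight[t])        # cut-of-the-phase = w(t, A_t)
        # merge t into s, summing the weights of parallel edges
        for u, w in adj[t].items():
            if u != s:
                adj[s][u] = adj[s].get(u, 0) + w
                adj[u][s] = adj[s][u]
            adj[u].pop(t, None)
        del adj[t]
    return best
```

The phase never rebuilds B from scratch with O(n) updates; each edge causes at most one O(log n) push, which is where the O(m log n) per phase (and O(nm log n) total) comes from.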