Algorithms Lab
  • 118 videos
  • 363,384 views
Applications of LP Duality: Matchings, Flows and Shortest Path
In this video we put duality to use. Specifically, we will
1.) prove Hall's theorem, which characterizes when a bipartite graph has a perfect matching, using the duality of max-matching and min-vertex-cover in bipartite graphs (König's theorem),
2.) see two LP formulations of Max-Flow, which lead to different formulations of Min-Cut (and prove the Max-Flow Min-Cut theorem through LP duality),
3.) dualize the flow-based formulation of the shortest-path problem.
One tool used in these applications and discussed in the video is totally unimodular matrices.
00:00 Matchings
03:12 Hall's theorem
04:44 König's theorem
05:47 proof of Hall's theorem
10:00 totally unimodular matrices
12:46 LPs with tot...
511 views
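As a small, hedged illustration of point 3 above (my own sketch, not code from the video): the flow-based LP for the shortest s-t path, solved with scipy.optimize.linprog. Because the constraint matrix is a node-arc incidence matrix and hence totally unimodular, the LP optimum is already integral and equals the shortest-path length; its dual assigns a potential to every node.

```python
import numpy as np
from scipy.optimize import linprog

# directed edges as (u, v, length); nodes 0..3, source s = 0, sink t = 3
edges = [(0, 1, 1.0), (0, 2, 4.0), (1, 2, 1.0), (1, 3, 5.0), (2, 3, 1.0)]
n, m = 4, len(edges)
s, t = 0, 3

# node-arc incidence matrix: +1 where an edge leaves a node, -1 where it enters
A_eq = np.zeros((n, m))
for j, (u, v, _) in enumerate(edges):
    A_eq[u, j], A_eq[v, j] = 1.0, -1.0

b_eq = np.zeros(n)
b_eq[s], b_eq[t] = 1.0, -1.0          # ship one unit of flow from s to t

c = np.array([length for _, _, length in edges])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * m, method="highs")
print(res.fun)                        # 3.0, the length of the path 0 -> 1 -> 2 -> 3
```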

Videos

Steiner Forest via Primal-Dual
1.1K views · 11 months ago
In this video, we look at the 2-approximation for the Minimum Steiner Forest Problem using the primal-dual scheme with synchronized increases. In a previous video, I showed you how to use the primal-dual scheme for the set cover problem. But this is not just yet another application: what is new here are the "synchronized increases", i.e., we are increasing several dual variables at the same tim...
MaxSat by LP Rounding
656 views · 11 months ago
We take a look at 2 randomized algorithms for Maximum Satisfiability (MaxSat): first simply setting the variables at random, and then solving the relaxed LP for MaxSat and setting the variables by "randomized rounding". We analyze both algorithms, and show that the combination of both gives us a 0.75-approximation. Further, we get to know conditional expectations as a technique for derandomizing r...
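As a quick, hedged illustration of the first algorithm (a sketch of mine, not the video's code): set every variable to True/False uniformly at random; a clause with k distinct literals is then satisfied with probability 1 - 2^(-k) >= 1/2.

```python
import random

def random_assignment_maxsat(n_vars, clauses):
    """clauses: lists of literals, where +i means x_i and -i means not x_i."""
    assignment = {i: random.random() < 0.5 for i in range(1, n_vars + 1)}
    satisfied = sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )
    return assignment, satisfied

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(random_assignment_maxsat(3, [[1, 2], [-1, 3], [-2, -3]]))
```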
LP-based Approximation Algorithms for Set Cover: LP Rounding, Primal-Dual and Dual fitting
1.4K views · 1 year ago
An introduction to approximation algorithms based on linear programming (LP) by the example of the set cover problem. The techniques that we cover are: - LP Rounding - Primal-Dual - Dual fitting (for the analysis of the greedy set cover algorithm) 00:00 Lower bounds from Linear Programming 04:17 Set Cover ILP 07:59 LP Rounding 09:50 Set Cover: Optimum vs Optimum of relaxed LP 11:13 First Attemp...
Linear Programming (LP) Duality, part 1: Introduction and Physical Interpretation
397 views · 1 year ago
I introduce duality by an example; we take a look at weak and strong duality, go through a recipe for duality, and conclude with a physical interpretation of duality: 00:00 Introduction to LP Duality 08:05 Weak Duality Theorem 10:53 Feasibility vs Optimality 15:15 Duality recipe 19:57 Physical Interpretation of LP Duality
Linear Programming (LP) Duality, part 2: Farkas Lemma
640 views · 1 year ago
Farkas Lemma is the key tool for proving the Strong Duality Theorem. In this video, I provide a geometric interpretation of Farkas Lemma and show how to use it to prove the Strong Duality Theorem. 00:00 Farkas Lemma 05:02 Variant of Farkas Lemma 10:16 Strong Duality Theorem, proof 13:53 proof, part 2
Simplex Algorithm, part 4: Efficiency/Pivot Rules
306 views · 1 year ago
The focus of this part lies on common options for pivot rules, and the number of pivot steps needed. 00:00 Warm-Up Example 02:43 Efficiently implementing a pivot step 05:42 Pivot Rules 08:25 Exponential running time/Klee-Minty Cubes 10:52 Fewer Pivot Steps? 14:22 Summary of Simplex Method
Simplex Algorithm, part 2: Exceptions
190 views · 1 year ago
00:00 Unboundedness 04:30 Degeneracy 08:30 Infeasibility, or how to find the initial basic feasible solution 10:20 Auxiliary linear program with auxiliary variables 14:23 Back to the original problem
Simplex Algorithm, part 3: the simplex algorithm in general
169 views · 1 year ago
After introducing linear programming by examples in the previous 2 videos, we here take a look at a general description of the simplex algorithm. 00:00 Simplex tableaus in general 04:49 Proof of Lemma 07:15 Decisions in the algorithm 09:15 Correctness statement (without proof)
Simplex Algorithm, part 1: Introductory Example
345 views · 1 year ago
Here I introduce the simplex algorithm by an example. This example follows the book "Understanding and Using Linear Programming" by Jirka Matoušek and Bernd Gärtner. Make sure to know the concepts of equational (aka standard) form and basic feasible solutions, before watching this (I have a video on these concepts.) 00:00 Deriving the equational form and the first basis 02:52 A simplex tableau ...
Theory of Linear Programming: convex polytopes, equational form and basic feasible solutions
298 views · 1 year ago
In preparation for the simplex algorithm we are taking a look at some algebraic and geometric concepts underlying linear programming. 00:00 linear programming vs linear algebra 02:05 convex polytopes 06:21 cubes and cross-polytopes 10:55 Writing LPs in the form Ax ≤ b 13:00 Equational form 18:14 basic feasible solutions: geometric intuition 22:38 What is a basic feasible solution (bfs)? 2...
Integer Linear Programming
1.1K views · 1 year ago
Introduction to Integer Linear Programming (ILP). We are going to take a look at ILPs for three problems: - maximum weight perfect matching - minimum vertex cover - maximum independent set 00:00 Integer Linear Programming 04:13 Maximum Weight Perfect Matching 09:54 Integer solution to the LP relaxation 17:49 Minimum Vertex Cover 19:38 Rounding 24:34 Maximum Independent Set 25:58 LP relaxation n...
Integer Linear Programming (ILP), part 2: More Examples + techniques for solving ILPs
341 views · 1 year ago
This is a "bonus" video on ILPs, where I show you, for two algorithmic problems (shortest path and TSP), how to model them as an ILP. At the end I briefly discuss techniques for solving ILPs, such as branch and bound. 00:00 Shortest path as ILP 03:30 Shortest path as LP 05:40 TSP 07:58 MTZ formulation 10:05 DFJ formulation 12:24 Solving ILPs 12:55 Branch and bound 15:33 Cutting Planes
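For reference, here is the MTZ formulation in its standard textbook form (a hedged sketch; the notation in the video may differ slightly): binary variables x_ij select the tour edges, and the order variables u_i rule out subtours.

```latex
% MTZ formulation of the TSP on nodes 1..n (standard textbook form)
\begin{aligned}
\min\ & \sum_{i \neq j} c_{ij}\, x_{ij} \\
\text{s.t.}\ & \sum_{j \neq i} x_{ij} = 1 \quad \forall i, \qquad
               \sum_{i \neq j} x_{ij} = 1 \quad \forall j, \\
             & u_i - u_j + n\, x_{ij} \le n - 1 \quad \forall\, i,j \in \{2,\dots,n\},\ i \neq j, \\
             & x_{ij} \in \{0,1\}, \qquad 1 \le u_i \le n - 1 \quad \forall\, i \in \{2,\dots,n\}.
\end{aligned}
```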
Linear Programming: Introduction and Examples
1K views · 1 year ago
An introduction to linear programming, largely consisting of examples. It follows the book "Understanding and Using Linear Programming" by Jiří Matoušek and Bernd Gärtner (Chapters 1 and 2). 00:00 Introduction 01:53 A linear program 06:26 Example in Python 07:73 Cases: 1, several, no and unbounded solution 09:44 Efficiency/Algorithms 11:40 Diet problem 16:16 Network Flow 20:11 Fitting a line 24:49 S...
Fully Polynomial-Time Approximation Scheme for the Knapsack Problem
4.7K views · 1 year ago
We first present a pseudo-polynomial time algorithm for the knapsack problem, which we then use as a basis for a fully polynomial-time approximation scheme. There is also some complexity theory in this video, since we need concepts like pseudo-polynomial (vs. polynomial) time algorithms, weakly (vs. strongly) NP-hard problems, and of course (fully) polynomial time approximation schemes. 00:00 K...
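A rough sketch (mine, not the video's code) of the profit-indexed dynamic program the description refers to: A[p] is the minimum weight needed to reach profit exactly p; the FPTAS would run the same DP on profits scaled down by a factor of roughly eps * max_profit / n.

```python
def knapsack_min_weight_dp(profits, weights, capacity):
    """0/1 knapsack in O(n * sum(profits)) time: pseudo-polynomial."""
    total = sum(profits)
    INF = float("inf")
    A = [0] + [INF] * total                   # A[p]: minimum weight achieving profit p
    for p_i, w_i in zip(profits, weights):
        for p in range(total, p_i - 1, -1):   # downwards, so each item is used once
            A[p] = min(A[p], A[p - p_i] + w_i)
    return max(p for p in range(total + 1) if A[p] <= capacity)

print(knapsack_min_weight_dp([60, 100, 120], [10, 20, 30], 50))  # 220
```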
Approximation Algorithm for Metric k-Center using Parametric Pruning
942 views · 1 year ago
Approximation Algorithm for Multiway Cut
992 views · 1 year ago
Approximation algorithms for the Steiner Tree Problem and the Traveling Salesperson Problem (TSP)
2.1K views · 1 year ago
From Set Cover to Shortest Superstring
720 views · 1 year ago
Approximation algorithm for vertex cover using local ratio (aka layering)
774 views · 1 year ago
Greedy Approximation Algorithm for Set Cover
8K views · 1 year ago
Approximation Algorithms: Introduction by the Example of Vertex Cover
4.1K views · 1 year ago
Practical Efficiency of Fibonacci Heaps
158 views · 1 year ago
Fibonacci Heaps
559 views · 1 year ago
Binomial heaps (part 3/3): Lazy Union
442 views · 1 year ago
Binomial Heaps (part 2/3): Amortized Analysis of Insert
1.4K views · 1 year ago
Binomial heaps (part 1/3): Introduction and worst-case analysis
1.3K views · 1 year ago
Traveling Salesperson Problem and the Held-Karp Dynamic Programming Algorithm
518 views · 1 year ago
Dijkstra's algorithm and priority queues (recap)
220 views · 1 year ago
Dynamic Program for Rod Cutting
370 views · 1 year ago

Comments

  • @Gladieth
    @Gladieth 6 days ago

    Shoutout to the Discrete Mathematics and Its Applications textbook by Rosen. Really helps provide a solid foundation for this lecture. Also, great video.

  • @ishanvi2585
    @ishanvi2585 9 days ago

    great video and explanation!

  • @kennysus6174
    @kennysus6174 10 days ago

    You are an amazing teacher. I hope you are happy in your life.... Sending you all the positive energy i can. You are a nice person

  • @mahshidsoltani4034
    @mahshidsoltani4034 11 days ago

    loved this, way to go

  • @MatthewRichardJamison
    @MatthewRichardJamison 14 days ago

    Well explained. Thanks!

  • @miauuuauLP
    @miauuuauLP 21 days ago

    I'm struggling to understand why m is limited to n, since it does not seem to make a difference in functionality or (as far as I understood) running time. (Same number of partitions for m=n as for m>n, and the algorithm terminates after finding at most n hull points anyway.) Could you provide an example of how m > n could bring drawbacks?

    • @algorithmslab
      @algorithmslab 16 days ago

      You are right: strictly speaking, limiting m to n is not necessary. For m>n the algorithm would do exactly the same as for m=n. But the algorithm is formulated as it is because there is also no point in using an m>n. I'd say the exact formulation boils down to a matter of taste. The original formulation by Chan bounds m by n, and on the slides we follow this formulation.

    • @miauuuauLP
      @miauuuauLP 15 days ago

      @ Thank you for the confirmation :)

  • @donkeyKongKong
    @donkeyKongKong 22 days ago

    Note to self: In fractional cascading we first find the canonical nodes based on the x-coordinate (in the main tree). Then we only traverse and report the nodes in the auxiliary arrays for which the main node is a canonical node, i.e., for all the canonical nodes we traverse the auxiliary array up to a certain point where the y-coordinate falls in the range.

  • @sanidhyas3s
    @sanidhyas3s 25 days ago

    This semester this channel has been very helpful for me, luckily it has some material on both Advanced Algorithm and Computational Geometry which were my only 2 courses for this semester.

  • @arevalinchy
    @arevalinchy 1 month ago

    Great lecture! I have a question: it's not quite clear what goes in the nodes of the 2D range tree. Does it save the associated tree of points in y-coordinate order? If so, then what is stored in the nodes of the associated tree? I'm leaving a timestamp for reference [16:28 - 18:44]

    • @algorithmslab
      @algorithmslab 1 month ago

      Summary of reply: At the inner nodes of the associated trees, we store (dummy) keys to guide the search on the y-coordinates, just as we have previously seen in the 1D case. For a visualization see here: zhoujoseph.github.io/Orthogonal-range-tree-visualization/

      Longer answer: First let's have a look at the 1D search trees again (see 4:58). We use a balanced binary search tree with the points stored at the leaves. The inner nodes only store information to guide the search. Now in 2D we use the same type of binary search tree on the x-coordinates as the primary data structure. Thus, again, the inner nodes store information to guide the search on the x-coordinate. Additionally, we store at each node a 1D range tree of all points that fall into the corresponding x-range, but now this range tree is on the y-coordinates, see 16:45.

      And now to answer your question: Since these are 1D range trees on the y-coordinates, their inner nodes store information to guide the search in y. Thus for the example at 16:45, we could store a 4 at the root, which tells me that if the y-coordinate I am searching for is <=4, then I go into the left (lower) subtree, otherwise into the right (upper) subtree. And so on.

      Here is a nice demo: zhoujoseph.github.io/Orthogonal-range-tree-visualization/ Just click to add a couple of points, e.g. 4 of them, and then "Build range tree", and it will nicely visualize the tree below, including the information stored at the inner nodes. Hope this clarifies it.
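      A compact sketch of this node layout (my own, not the video's code): points live at the leaves, inner nodes store only a split key that guides the search, and in 2D each inner node of the x-tree would additionally carry such a y-tree over the points in its subtree.

      ```python
      class Node:
          def __init__(self, key, left=None, right=None, assoc=None):
              self.key = key        # split key; at a leaf this is the point itself
              self.left = left
              self.right = right
              self.assoc = assoc    # in 2D: associated y-tree over this subtree's points

      def build_1d(points):
          """points must be sorted by the coordinate this tree searches on."""
          if len(points) == 1:
              return Node(points[0])                       # leaf stores the point
          mid = (len(points) - 1) // 2                     # split key = max of left half
          return Node(points[mid],
                      build_1d(points[:mid + 1]),
                      build_1d(points[mid + 1:]))

      # build_1d([1, 3, 4, 7]) stores 3 at the root: a query <= 3 continues left.
      ```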

    • @aman-rp5fj
      @aman-rp5fj 27 days ago

      each node stores the subtree with the root as that node and they are ordered using the y-coordinate

  • @ahbarahad3203
    @ahbarahad3203 1 month ago

    I guess you are using a voice-activated mic for recording and it's missing words every time you pause

    • @algorithmslab
      @algorithmslab 1 month ago

      Thanks for pointing this out. Yes, I have noticed that some of the videos in this series unfortunately have audio issues. I have since fixed my setup for future videos.

  • @axlrose5082
    @axlrose5082 1 month ago

    You're amazing! I finally understood everything involving this interesting DS. Thank you for explaining the intuition behind every property and behaviour!

  • @abhijeet19403
    @abhijeet19403 1 month ago

    where can i find the ppt used here?

    • @algorithmslab
      @algorithmslab 1 month ago

      The slides are here: github.com/kbuchin/slide-archive/blob/main/advanced-algorithms-2024/07-approx-03-steiner-tree_multiway-cut.pdf They were created in ipe (ipe.otfried.org/), so there is no ppt. You can edit the pdf in ipe.

  • @learntocode6205
    @learntocode6205 1 month ago

    Why don't we just have an amortized cost equal to 1 for every single operation?

    • @algorithmslab
      @algorithmslab 1 month ago

      I assume this question is about the multi-pop stack. Let's say the actual costs per operation are: push: 1, pop: 1, multi-pop(k): k. Now, let's try to use amortized costs 1, 1, and 1. The sum of amortized costs should always be at least as high as the sum of actual costs. But if my sequence is 8*push(), then 1*multi-pop(8), then the sum of amortized costs is 9*1=9, but the sum of actual costs is 8*1 + 1*8 = 16. So the condition above is violated.

      If we want to assign the multi-pop a constant amortized cost, the only way to solve this is to increase the amortized cost of push to (at least) 2. And if we do so, then we actually save enough to set the amortized cost of pop and multi-pop to 0. Of course, if we wanted to, we could give pop and multi-pop amortized costs of 1 (or 2 or even higher), but this would simply be unnecessary.

      For a deeper understanding of amortized costs in the accounting method it can be helpful to look at the potential method; see the follow-up video. After defining the potential (which in this case is simply the number of elements currently in the stack), the amortized costs can simply be calculated, and the result is push: 2, pop: 0, multi-pop: 0. And to summarize: increasing the amortized cost of pop and multi-pop is of course possible (but useless); decreasing the amortized cost for push is not possible. I hope this clarifies why the costs are set as they are.
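      A small sketch (my own, not from the video) that tracks both cost measures for the accounting scheme above, so you can check that the amortized total never drops below the actual total:

      ```python
      class MultiPopStack:
          def __init__(self):
              self.items = []
              self.actual = 0       # total actual cost so far
              self.amortized = 0    # total amortized cost so far

          def push(self, x):
              self.items.append(x)
              self.actual += 1      # actual cost 1
              self.amortized += 2   # amortized cost 2: one pays the push, one is banked

          def pop(self):
              self.actual += 1      # actual cost 1, paid from the banked credit
              return self.items.pop()

          def multipop(self, k):
              k = min(k, len(self.items))
              self.actual += k      # actual cost k, all paid from banked credits
              return [self.items.pop() for _ in range(k)]

      s = MultiPopStack()
      for i in range(8):
          s.push(i)
      s.multipop(8)
      print(s.actual, s.amortized)  # 16 16 -- amortized total >= actual total
      ```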

  • @zokpls8712
    @zokpls8712 1 month ago

    Super helpful, awesome explanation. Thank you!

  • @abp18
    @abp18 1 month ago

    thank you so much

  • @KyleGuy-rp9tg
    @KyleGuy-rp9tg 1 month ago

    It’s 1am nearly for me, picked up games programming again, I’m doing procedurally generated dungeon at the minute and this is perfect for creating paths between rooms, so glad of your video

    • @algorithmslab
      @algorithmslab 1 month ago

      Thanks for the comment, it is always great to hear about applications of computational geometry!

  • @rudygon426
    @rudygon426 1 month ago

    Your videos are amazing, my professor is new and he often will write too much of the mathematical definitions rather than focusing more on concepts , thank you for creating these

  • @luca-ik2bo
    @luca-ik2bo 1 month ago

    amazing

  • @johnluo7295
    @johnluo7295 1 month ago

    This is a really nice lp rounding video, thanks.

  • @BigKevSexyMan
    @BigKevSexyMan 2 months ago

    I've read that RBTrees are supposed to be more efficient for insertions and deletions over an AVL tree. Why is that? It seems like we're doing almost as many if not more rotations with the addition of having more data points to track in our node.

  • @NaixinZong
    @NaixinZong 2 months ago

    Thank you very much

  • @parsarahbari3747
    @parsarahbari3747 2 months ago

    one of the greatest explanation for this topic. Nice job!

  • @ayo2084
    @ayo2084 2 months ago

    For slab decomposition, I do not understand why the space complexity is n squared. Since we only build a binary search tree for the slab where the query's x-coordinate was found, wouldn't the complexity simply be O(n)? Or why do we have to build binary search trees for every single slab and not just the slab where the query was found?

    • @algorithmslab
      @algorithmslab 2 months ago

      The corresponding figure and explanation is at 11:10. We need to build the data structure in advance, without knowing the query. Therefore the query might be in any of the Theta(n) slabs, and therefore we need to build a binary search tree for each of the slabs. In the example, every binary search tree has size Theta(n), so that ends up giving us Theta(n^2) overall. Does that make sense?
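      A tiny sketch of that counting argument (hypothetical Segment class with x1, x2 and a y_at(x) method; this is not the video's code, and it assumes the segments form a planar subdivision, i.e. do not cross):

      ```python
      def build_slab_structures(segments, xs):
          """xs: sorted x-coordinates of all endpoints -> Theta(n) slabs."""
          slabs = []
          for left, right in zip(xs, xs[1:]):
              mid = (left + right) / 2
              # every segment spanning the slab, sorted bottom-to-top inside it
              crossing = sorted(
                  (seg for seg in segments if seg.x1 <= left and right <= seg.x2),
                  key=lambda seg: seg.y_at(mid),
              )
              slabs.append((left, right, crossing))  # worst case: Theta(n) per slab
          return slabs                               # hence Theta(n^2) space overall
      ```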

  • @sanketpanda5363
    @sanketpanda5363 2 months ago

    You really saved me from my exam. Great explanation.

    • @algorithmslab
      @algorithmslab 2 months ago

      Great to hear! Thanks for the kind words!

  • @penguinmonk7661
    @penguinmonk7661 3 months ago

    I am currently writing a piece on approximation and I had a few things I needed to have cleared up about it. You have given an absolutely splendid explanation complete with all the important nuances and I thank you so, oh so deeply. For anyone reading this: this is in my opinion the best video on approximation algorithms you can find (with a short runtime) on this platform.

    • @algorithmslab
      @algorithmslab 3 months ago

      Thanks for the kind words! I am really glad to hear that the videos are helpful.

  • @umutkalman7356
    @umutkalman7356 3 months ago

    Thank you so much, immaculate explanation! I watched this video before the lecture and when my professor was trying to explain the same topic, i said to him "Du gehst mir auf den Sack!", and sent him away and played your video in the class, sir.

    • @algorithmslab
      @algorithmslab 3 months ago

      Happy to hear that it was helpful! Videos are a great tool for flexible learning. On the other hand, lectures provide a great opportunity for real-time interaction. I'd generally encourage students to take advantage of both formats.

  • @raghavbalaji6795
    @raghavbalaji6795 3 months ago

    long time no see. please continue posting. this is a great channel where i have learnt so much about various domains of math and CS. your explanations are super neat. hoping for your comeback

    • @algorithmslab
      @algorithmslab 3 months ago

      Thanks, great to hear! I usually post videos when I teach a new topic. Luckily for me, I'll be teaching the course on approximation algorithms and linear programming again next semester. But there will surely be new topics in the not-too-distant future, and with them, new videos.

  • @amanraj478
    @amanraj478 3 months ago

    not all heroes wear capes, some of them teach on youtube

  • @Hurdersh
    @Hurdersh 3 months ago

    In the first algorithm it is not clear how to recombine the selected vertices to obtain a vertex cover. The lemma is true, but the size of the obtained vertex cover can be lower than the minimum :(. I can show you a counterexample on the size of an optimal vertex cover with just 5 vertices. The first algorithm does not approximate at all, since it does not return a correct vertex cover. However, you gave me the correct intuition about layering ❤

    • @algorithmslab
      @algorithmslab 3 months ago

      Thanks for the comment! I expect there is simply a misunderstanding about the base case of the recursion. In short, the algorithm chooses as vertex cover the vertices v which have w(v)=0 in the base case (i.e. when the recursion stops). I think the problem is that I say this very late in the video (starting at 10:44) and never write down the full algorithm. (Yes, reordering the presentation and adding pseudo-code seems like a good idea.) Here is a simple, non-recursive formulation of the algorithm in pseudo-code:

      1. While there exists an edge (u, v) such that min{w(u), w(v)} > 0:
      2.     Let eps = min{w(u), w(v)}.
      3.     w(u) ← w(u) − eps.
      4.     w(v) ← w(v) − eps.
      5. Return the set C = {v | w(v) = 0}

      Line 5 here corresponds to the base case of the recursion. I hope this clarifies the algorithm. This algorithm is by Bar-Yehuda and Even ["A local ratio theorem for approximating the weighted vertex cover problem". Ann. Discr. Math. 25, 27-46.], and a nice article about it is here: www.wisdom.weizmann.ac.il/~oded/bbfr.pdf [Bar-Yehuda, R., Bendel, K., Freund, A., & Rawitz, D. (2004). Local ratio: A unified framework for approximation algorithms. In memoriam: Shimon Even 1935-2004. ACM Computing Surveys (CSUR), 36(4), 422-463.]
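      And a runnable Python sketch of the same algorithm (my wording, not the video's code); a single pass over the edge list already empties the while-condition, since weights only ever decrease:

      ```python
      def local_ratio_vertex_cover(edges, w):
          w = dict(w)                      # copy, since we modify the weights
          for (u, v) in edges:
              eps = min(w[u], w[v])        # the most we can subtract from both ends
              w[u] -= eps
              w[v] -= eps
          # every edge now has an endpoint of weight 0, so these vertices form a cover
          return {v for v, weight in w.items() if weight == 0}

      # path a - b - c with weights 1, 3, 1: returns {'a', 'c'} of weight 2
      print(local_ratio_vertex_cover([("a", "b"), ("b", "c")], {"a": 1, "b": 3, "c": 1}))
      ```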

  • @ibrahimcetin153
    @ibrahimcetin153 3 months ago

    the best..

  • @Manaffacts
    @Manaffacts 3 months ago

    Amazing! Thank you

  • @willywonka9698
    @willywonka9698 3 months ago

    Thank you very much for your clarification but could you please provide me with any reference or notes or book to further boost my understanding?

    • @algorithmslab
      @algorithmslab 3 months ago

      There are certainly many excellent books and notes on linear programming. My lectures/videos are based on the book "Understanding and using linear programming" by Jiří Matoušek and Bernd Gärtner (2007). It is a great book, because it strikes a good balance between the underlying maths and how LPs are actually used.

  • @mrleenudler
    @mrleenudler 3 months ago

    Nice and enlightening video, thanks! If I may dare some criticism, the reason for the rotation in case 2 could be made clearer. It seems almost like the criterion is whether it's the right or left child of the parent, but that depends again on which child the parent is of the grandparent. Anyway, it's the most informative video I found on the topic, so good job!

    • @algorithmslab
      @algorithmslab 3 months ago

      Thanks, good point. From the perspective of the algorithm it makes sense to order the cases as they are (i.e. discuss the corresponding slide from left to right). But for explaining why we have the cases, it might be better to go from right to left (i.e. what we want to achieve, how we get it from case 3, and then how we get to case 3 in case we have case 2). Anyway, you figured it out, so I achieved my goal!

  • @amuga_1
    @amuga_1 4 months ago

    Good decent video. better definition of convex here if you didn't understand her mathematical expressiveness : ruclips.net/video/KdKjXRwEa_0/видео.html

  • @romangavrilovich8453
    @romangavrilovich8453 4 months ago

    Hi, thanks for the explanation, but I didn't get the difference between the perfect and the randomized skip list for deletion. What is the difference between deletion of 12 in the first implementation and deletion of 10 in the second? Thanks in advance

    • @algorithmslab
      @algorithmslab 4 months ago

      In the first implementation, if we wanted to delete the 12, the whole second half of the list would need to change. The 13 would need to take the role of the 12 with 4 levels, while the 15 would now only have 1 level instead of 2, and so on. But if it is a randomized skip list, deleting the 12 is easy: all the references into the 12 now need to continue to the next nodes as seen from the 12; however, none of the nodes needs to change its number of levels.
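      A rough sketch of that deletion (hypothetical Node class with a key and a forward list; not the video's code): only the references pointing into the deleted node are rerouted, and no other node changes its number of levels.

      ```python
      def skiplist_delete(head, key):
          node = head
          predecessors = []                       # last node before `key` on each level
          for lvl in reversed(range(len(head.forward))):
              while node.forward[lvl] is not None and node.forward[lvl].key < key:
                  node = node.forward[lvl]
              predecessors.append((lvl, node))
          target = node.forward[0]
          if target is not None and target.key == key:
              for lvl, pred in predecessors:
                  if lvl < len(target.forward) and pred.forward[lvl] is target:
                      pred.forward[lvl] = target.forward[lvl]   # bypass the deleted node
      ```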

    • @romangavrilovich8453
      @romangavrilovich8453 4 months ago

      @@algorithmslab oh thanks, changing of level is a key to understand the difference :)

  • @vctorroferz
    @vctorroferz 4 months ago

    Amazing video really good explanation ! Thanks so much

  • @jakoon_the_manic
    @jakoon_the_manic 5 months ago

    Cheers

  • @mayuinc
    @mayuinc 6 months ago

    Hey can you guys do a trajectory on an ideal pathway to learning computational geometry, beginning with high school algebra?

    • @algorithmslab
      @algorithmslab 6 months ago

      The main prerequisite is basic knowledge of “algorithms and data structures”, and the prerequisites for that. Prerequisites for “algorithms and data structures” are basic programming skills (the language does not matter, it is more about the concepts) and “high-school maths”++. So the pathway would be:

      A. Introduction to programming
      B. Mathematics for Computer Scientists (must include proof techniques & basic probability theory)
      C. Introduction to Algorithms and Data Structures

      And now you should be set. The more exposure you had to algorithms & basic maths, the easier it should be, but in principle the above should be sufficient. A and B are independent of each other, and may be known partially from school. “Mathematics for Computer Scientists” is of course a bit vague, but a solid mathematical foundation is crucial for algorithms. And sometimes a basic maths course for computer scientists does not yet include probability theory, so it might be necessary to learn that separately.

      All of the above are standard first-year courses in a computer science curriculum, so you will find plenty of excellent online resources. For Algorithms and Data Structures I also have videos, but there are also excellent MOOCs, e.g. from Princeton and Stanford. Finally, I do have a video where I list the prerequisites for my advanced algorithms course: ruclips.net/video/omsr-55nG7s/видео.htmlsi=bwRs7wCMCA-8ZWgb I think this comes quite close to what I would expect for computational geometry.

    • @mayuinc
      @mayuinc 6 months ago

      @@algorithmslab thank you so much!

  • @SupanthaPandit-xs6lf
    @SupanthaPandit-xs6lf 6 months ago

    This is a great playlist. I love all your videos. Please share the playlist slides so I can adopt them in my course.

    • @algorithmslab
      @algorithmslab 6 months ago

      I switched to a different set of slides a while ago, and don't have these lying around anymore. If you send me an e-mail (to my TU Dortmund address), I could see whether I can help you with a different slide set (also based on Cormen et al.)

  • @SupanthaPandit-xs6lf
    @SupanthaPandit-xs6lf 6 months ago

    Can you please provide the slides?

    • @algorithmslab
      @algorithmslab 6 months ago

      The slides are the second half of the following: github.com/kbuchin/slide-archive/blob/main/advanced-algorithms-2024/02-Amortized-Analysis.pdf

  • @VidushJindal
    @VidushJindal 6 months ago

    Professor, I was not able to understand how you wrote the horizontal line segment comparison in the status comparator. In the book (Computational Geometry: Algorithms and Applications) on p. 26 it is written that horizontal line segments must be placed last, and I am not sure how your code handles this. But it works correctly, as I have seen in my test cases. Thanks and Regards

    • @algorithmslab
      @algorithmslab 6 months ago

      In the video I just gloss over this, so I assume you are referring to the Jupyter notebook. The comparator class there unfortunately turned out rather complicated, since it has to handle quite a few edge cases to make the code reasonably robust. The horizontal line segments are handled in the last two cases in the method "_check_special_cases". There one can see that a segment comes later in the comparison when the y-coordinates of its two endpoints are (nearly) the same and the other segment is not to the right of the event point. The latter is necessary because (as is written in the book) "If there is a horizontal segment, it comes last among all segments containing p".

    • @VidushJindal
      @VidushJindal 6 months ago

      @@algorithmslab Yes professor, I understood and wrote the code, and it works. I wrote the code in cpp and it is really fast: it takes about 1 second for brute force on 10000 segments. It is a lovely code you wrote. Thanks and Regards

    • @algorithmslab
      @algorithmslab 6 months ago

      @@VidushJindal great to hear, thanks! (And the code was written by the other author of the notebook, so I will pass on the compliment)

  • @ShiyuLi-fh2ss
    @ShiyuLi-fh2ss 6 months ago

    Very amazing! Before seeing the video, I could not get the physical meaning of the dual variables of the LP. The visualization is fantastic! The whole lecture is clear and inspired.

    • @algorithmslab
      @algorithmslab 6 months ago

      Thank you! Happy to hear this.

  • @baddie3982
    @baddie3982 6 months ago

    to the point and concise, really helpful. thanks!!

  • @martablank8514
    @martablank8514 6 months ago

    Thank you so much for this video!

  • @DanielWood
    @DanielWood 7 months ago

    Excellent video, really well explained, with clear examples. Thank you.

  • @WildGrowthJ
    @WildGrowthJ 7 months ago

    SOO helpful! thanks!

  • @evitaemort
    @evitaemort 7 months ago

    Servus

  • @ShivangSharma-ih9yg
    @ShivangSharma-ih9yg 8 months ago

    sir I very very love u 😘😘

    • @algorithmslab
      @algorithmslab 8 months ago

      Happy to hear that the videos are helpful!

  • @fatinsadab3821
    @fatinsadab3821 8 months ago

    very good explanation. thank u sir

  • @fndTenorio
    @fndTenorio 8 months ago

    Very good presentation, thanks!