Lecture 23: Computational Complexity

  • Published: 6 Aug 2024
  • MIT 6.006 Introduction to Algorithms, Fall 2011
    View the complete course: ocw.mit.edu/6-006F11
    Instructor: Erik Demaine
    License: Creative Commons BY-NC-SA
    More information at ocw.mit.edu/terms
    More courses at ocw.mit.edu

Comments • 358

  • @ximbabwe0228 8 years ago +590

    this classroom has a nondeterministic amount of chalkboards

    • @nikhilpandey2364 7 years ago +2

      good one!

    • @mr_easy 7 years ago +2

      Really!?? :p

    • @GaDominion 6 years ago +8

      NP-eed!

    • @nibblerdoo 5 years ago +2

      The classroom has an uncountably infinite number of chalkboards

    • @slavkochepasov8134 4 years ago +1

      Funny enough, in the USSR some universities used exactly the same board setup. Was it the ancient Greeks or the Dutch who came up with these moving boards?

  • @sergeykholkhunov1888 3 years ago +61

    01:10 three complexity classes P, EXP, R
    11:02 most decision problems are uncomputable
    19:12 NP
    19:43 NP as a problem solvable in P time via lucky algorithm
    26:07 NP as a problem whose positive result can be checked in P time
    31:00 P = NP?
    37:50 NP-complete, EXP-complete
    40:35 reductions

  • @sudharsansaravanan33 9 years ago +147

    "you cant engineer luck". Absolutely loved it. Thanks a lot :)

  • @jerridan2003 10 years ago +151

    This lecturer is the man!

    • @callMeEvs 7 years ago +4

      Professor Srinivas Devadas, who taught the other lectures of this same course, is great too.

  • @PhilippeCarphin 8 years ago +58

    I love that they still use the chalk board. All of my lectures are with power point presentations and very little use of the chalk board.

    • @gyaseddintanrkulu 5 years ago +8

      I'd prefer chalkboard over boring power point slides.

    • @nihilisticboi3520 3 years ago

      The best is to use the chalkboard for teaching and explanation, and PowerPoint presentation for just visual information.

  • @elitescooby 10 years ago +2

    Very nice lecture! Clear explanations of everything and very easy to understand

  • @gursimarmiglani9143 8 years ago +8

    I just love MIT video lectures

  •  9 years ago +120

    Wow look at that hand held self powered calcium deposition printer, it's like the future.

    • @TroyWhorten 8 years ago

      +Seán O'Nilbud this made me lol XD

    • @hektor6766 5 years ago +2

      All the cool kids have them.

    • @AlexandrBorschchev 4 years ago

      most people use whiteboard nowadays..

  • @spasticpeach 8 years ago +13

    Way more in depth than I needed for my exam, but I couldn't switch off.

  • @Jorvanius 2 years ago +2

    Finally I'm able to understand NP complete. Awesome lecture, thank you so much 🙂

  • @prakharprateek1643 7 years ago +7

    i dunno why but the chalk thumping the board is actually kinda soothing....still thanks for the lecture

  • @HieuNguyen-ty7vw 6 years ago

    You are a great teacher Erik!!!

  • @lucamantova3070 6 years ago +25

    Interesting to see that an MIT guy still takes the time to write things on the blackboard. My uni lecturers do not even know what chalk is.

    • @guywithaname5408 5 years ago +7

      *clicks through powerpoint reading out the slides*

  • @gaulindidier5995 4 years ago +3

    Just one thing I've noticed: the halting problem is undecidable, and running the program you're trying to decide halts or not doesn't really solve the halting problem itself. Great intro regardless of that hiccup.

  • @ady234 11 years ago +4

    I could watch Erik Demaine talking about math all day long. I don't understand anything but it's like music to my ears.

  • @shubhamkakkar6365 4 years ago +1

    Thank You so much sir for such a great explanation .

  • @shuvammanna3640 8 years ago +4

    helped me get an A+ in tst.. so my verdict is that it's actually awesome

  • @Adam_42_01 8 years ago +9

    When he put the decimal in front of the decision problem table to turn it into a real number, my mind was blown. When he said that one can view programs as natural numbers, I expected him to somehow represent decision problems as real numbers, but I didn't know how. Goddamn.

    • @Mark-kt5mh 3 years ago +1

      Algorithms are usually functions. Functions have domain and range. Some decision problems can be such that they have finite domain, others don't.

  • @jackbar2476 1 year ago +1

    This was so brilliant! Thank you so much 😊

  • @yetanotherchannelyac1434 3 years ago +1

    This was a great lecture !

  • @casperdewith 2 years ago +3

    25:38 I’m pretty sure that you can force a death every time by just … doing nothing and let all the pieces stack up in the middle. No way to clear lines, because no piece spans the entire width. Therefore, it will halt in linear time w.r.t. the height of the board.

  • @dylanh7226 3 years ago +1

    "You can't engineer luck" - wow this is the best explanation I have heard regarding NP

  • @jnwatts 9 years ago +2

    I never understood this topic in school....will watch later to gain some insight. Hope it's good! :)

  • @xxlucasxx19 7 years ago

    enjoyed the lecture very much

  • @MS-ib8xu 7 years ago

    Great lecture! Thank you

  • @Neueregel 11 years ago +2

    thanks for posting

  • @31337flamer 4 years ago +1

    this episode is the most important for understanding the basics of complexity theory :O seriously.. why is this not 1.

  • @carlosabrilruiz423 2 years ago

    Thank you! Great lecture!

  • @kvelez 1 year ago

    Excellent video.

  • @DFBSDAN11 5 years ago

    Wow man I think I love this guy

  • @quosswimblik4489 4 years ago

    You know the number of different sudokus out there: wouldn't it be the number of ways you can lay out 1 to 9 without repeating, multiplied by the number of permutations possible when the first column is filled 1 to 9 from smallest to largest?

  • @esc120 7 years ago

    You cannot solve the longest path problem by negating the weights and running the Bellman-Ford algorithm, because the negated graph may contain negative cycles, which Bellman-Ford cannot handle. The longest path problem is NP-hard.
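    To see concretely why the negate-the-weights trick breaks, here is a small hedged sketch in Python (hypothetical example graph, not from the lecture): any cycle with positive total weight turns into a negative cycle after negation, so Bellman-Ford can only report failure rather than return a longest path.

```python
# Sketch: why "negate weights + Bellman-Ford" fails for longest path.
# Hypothetical graph given as a list of directed edges (u, v, weight).

def bellman_ford(n, edges, source):
    """Standard Bellman-Ford: returns (distances, has_negative_cycle)."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                     # relax all edges n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra pass: any further improvement means a reachable negative cycle.
    has_negative_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_negative_cycle

# A graph with a positive-weight cycle 1 -> 2 -> 1.
edges = [(0, 1, 2), (1, 2, 3), (2, 1, 4), (2, 3, 1)]

# "Longest path" attempt: negate every weight and look for shortest paths.
negated = [(u, v, -w) for u, v, w in edges]
dist, cycle = bellman_ford(4, negated, source=0)
print(cycle)  # True: the positive cycle became a negative one, so no answer comes back.
```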

  • @timperry6948 6 years ago +1

    So, The Meaning of Life, the Universe and Everything is in which set?

  • @yuanbowang4852 11 years ago +1

    Very well explained, 100 times better than my prof

  • @generativeresearch 4 years ago +135

    Fun fact: He is the youngest professor ever hired by MIT.

    • @Gulag00 3 years ago

      Age?

    • @generativeresearch 3 years ago +9

      @@Gulag00 20

    • @Gulag00 3 years ago +2

      Sakeeb Rahman jeez

    • @srn306x 3 years ago +19

      he literally enrolled at college at the age of 12

    • @Gulag00 3 years ago +22

      @@srn306x I think he might be smart

  • @tirosc 8 years ago +4

    I LOVE YOUR VOICE

  • @chxnge2873 2 years ago

    Great lecture.

  • @SivaKumarKumaravelu 6 years ago +2

    Eric you beauty... Superb teaching... :)

  • @ArtOfTheProblem 8 years ago +19

    love it

    • @FatihErdemKzlkaya 8 years ago

      I knew I would find you here. When does the next episode of computer science come out?

    • @ArtOfTheProblem 8 years ago

      +Fatih Erdem Kızılkaya funny you ask, i'm just about to render the next video - will post tomorrow!

    • @FatihErdemKzlkaya 8 years ago

      +Art of the Problem Great, looking forward to it. By the way I love how you make complicated things simple and beautiful. Please keep doing what you are doing.

  • @Linaiz 4 years ago

    Amazing lecture

  • @sugarfrosted2005 9 years ago

    I do know a problem that's worse than EXP: quantifier elimination in the theory of real numbers is the classical example. This is EXP_2, double-exponential time.

  • @botelhorui 9 years ago +26

    "I am NP-Complete" :D

  • @deeplearningpartnership 3 years ago

    Great talk.

  • @monicadelpilar23 7 years ago +1

    Very GOOD! Thanks a LOT!!...

  • @ESEJERITO 10 years ago

    Could it be measured with vector mathematics, like that of polyhedra in another dimension, multiplied by the number of possible fractal faces, divided by the possible time raised to the mass of the object?

  • @Doctor_monk 8 years ago

    19:19 "Out here Nothing happens"? Isn't it where everything happens and we have no knowledge of how they happen?

  • @beback_ 8 years ago +5

    Oh great Erik has become a confident lecturer now!

    • @dawnkumar5669 8 years ago

      +Arya Pourtabatabaie His name is spelled Erik. Just being a grammar troll. :)

    • @beback_ 8 years ago +1

      +Dawn Lassen Fixed :D

  • @cookiecan10 5 years ago +1

    I have a question (at 18:25)
    Given infinite space, you can write a program for every problem by hardcoding the solutions
    Just going
    if input = 0, output = 0
    if input = 1, output = 0
    if input = 2, output = 1
    if input = 3, output = 0
    if input = 4, output = 1
    etc.
    Just do this for every problem.
    Wouldn't that mean every problem in R is computable?
    (I'm obviously wrong, but what am I missing here?)

    • @imrealrage 8 months ago +1

      Great question dude, I thought the same. But when I looked into it more deeply, I think R is a range where we'd probably say the problem is impossible to solve, so maybe that's what it means. This is a much deeper topic to dive into, though, and I'm probably wrong here too.
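      For what it's worth, a minimal sketch of the gap in the hardcoding idea (hypothetical names, my own illustration): a written-out program is a finite string, so it can only hardcode finitely many cases, while a decision problem has to answer every one of the infinitely many inputs. The countability argument in the lecture is about exactly that mismatch.

```python
# Hypothetical illustration: a "hardcoded" decider is just a finite lookup table.
HARDCODED = {0: 0, 1: 0, 2: 1, 3: 0, 4: 1}  # a finite program can only list finitely many rows

def hardcoded_decider(n: int) -> int:
    # A decision problem must answer *every* natural number n,
    # but a finite table eventually runs out of rows.
    return HARDCODED[n]

print(hardcoded_decider(3))  # 0: covered by the table
try:
    print(hardcoded_decider(10))
except KeyError:
    print("no hardcoded answer for 10 -- a finite table cannot cover all inputs")
```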

  • @bassett_green 7 years ago +1

    what was the exception @2:00? Had they done a sudoku solver?

  • @senshtatulo 6 years ago

    While it is theoretically true that most (almost all, in some sense) problems are not computable, all of the problems that people actually encounter are in some way relatively small, in the sense that they can be understood or contemplated. Even recognizing that a problem exists depends somewhat on human intelligence, whereas most (again, almost all) of the noncomputable problems are larger or more difficult than anything people can understand or conceive. So saying that most problems are noncomputable is a red herring. How many, and which, of the problems that people can recognize or understand are computable?

  • @BlazeCyndaquil 11 years ago +4

    Also, might I add that words can also be represented by integers, therefore most decision problems cannot be written.

  • @dar1e 10 years ago

    Can anyone explain to me please why since there is a 1 to 1 relation between problems and decisions he said that there are way more problems ? I just did not get that... :(

  • @debidattagouda9374 7 years ago +1

    which lecture is the implementation of this lecture

  • @danielescotece7144 8 years ago

    is the numbering of computer programs related to Gödel numbers?

  • @thomasmurray856 8 years ago +3

    I don't know why I'm here but I'm only in pre-calculus and this seems interesting

  • @joefagan9335 2 years ago

    The proof around 16:00 is riddled with flaws. For example, it's true that 0.00110010111… is just one number in R, but to then say the whole of R is much bigger than N is irrelevant.

  • @austinisi 11 years ago

    why not?

  • @Wemdiculous 8 years ago

    Wouldn't the shortest path in 3D between 2 points be super easy? All you would have to do is put those 2 points on the same plane. Or is there some magical path you can follow, outside any 2D plane, which is shorter?

    • @Eoaiyer21987rhei 8 years ago

      Exactly. You could have your destination be completely inaccessible on that plane.

  • @Carnifrex 11 years ago +1

    Good teacher!

    • @josikie 1 year ago

      hi! it has been 9 years, are u still there?

  • @ryancookparagliding 9 years ago +6

    Wow, 17:15 blew my mind :D

  • @stepansigut1949 5 years ago +6

    39:48 - he says we know EXP != P and then he proceeds to say that proving NP = EXP isn't as famous as P = NP problem and it won't get you a million dollars. My question is: Doesn't proving NP = EXP prove P != NP as well? On the other hand I know proving NP != EXP does not prove P = NP, however I still find his wording a bit weird.

    • @liamwhite3522 4 years ago +2

      He doesn't say we have proved NP = EXP; in fact he explicitly says we haven't. And there are problems that were once only known to be in NP that have since been shown to be in P (see primality testing), so saying *NP = EXP != P, therefore NP != P* is just giving up.

    • @MauriceMauser 1 month ago

      yes, would answer both questions and get you the prize

  • @rb385354 8 years ago +3

    Why is Go not EXP-complete? Go has way more possible moves than chess, no?

    • @saltyman7888 8 years ago +1

      When you're dealing with the infinite, every single finite thing is the same size: nothing.

  • @siprus 9 years ago

    Hmm. One question. why can we assume that each program is only capable of solving 1 problem?

  • @miharu3188 5 years ago

    Just awesome:).

  • @colinmaharaj 3 years ago

    4:20 Wow, someone really did clean that chalk board.

  • @MrPuff1026 7 years ago +1

    so is this really a question sort of about proving the transcendence from one state to another if the first is not a subset/lacks properties for existing in the second like if there is a state between finite and infinite existence where functions of time can cross between? or like a "tunnel" to travel through both?

  • @igorborovkov7011 10 months ago

    what is an example of a problem that is in (EXP - NP)

  • @atishyagupta5396 3 years ago

    Is it possible to get a better quality of the videos, I really want to watch it in like 720p or 480p

  • @amerkiller1995 10 years ago

    Fantastic

  • @nopantsnoproblem1 11 years ago

    can someone explain to me what studying is going towards?

  • @jakolu 5 years ago +1

    39:30: Proving exp is at the same spot on the line as NP would prove P=NP, right? If we know that P is not equal to EXP that is. Then you would get $1M

  • @CentralParkish 3 years ago

    Game solvable in exponential time is best!

  • @PerfectPotion 7 years ago +2

    On the subject of solving the P versus NP problem, instead of trying to solve an NP-complete problem in polynomial time, why does no one take the approach of trying to prove that a problem we already know is in P is in fact NP-complete? Of course, I'm assuming that P = NP.

  • @isbestlizard 4 years ago

    hmm is the idea that lucky algorithms aren't realistic still valid given quantum computing algorithms?

  • @JamesOfKS 9 years ago

    I interpret that at 33:29 you just said QC is easier than development.

  • @Krissam2k 9 years ago

    at 25:40 he says "can I die" should not be in NP, but that's solvable in O(n) (worst case), which makes it a P problem; shouldn't that make it an NP problem as well?

    • @CharlesMacKay88 9 years ago +3

      Andreas Kristoffersen Yes, you are right; the professor did not state the answer correctly. "Can I survive [this series of Tetris pieces]?" is in NP: we can guess a series of moves, and if we survive, we stop execution and return true. "Is there no way to survive?" is the complement of "Can I survive?", and it is NOT in NP, since you would have to check all possible combinations of moves and prove that none of them lets you survive; that class is called co-NP.
      "Is it not possible to survive?" is not the same as "can I die?". "Can I die?" is in NP, since you can guess a losing sequence of moves and verify it in polynomial time. For "is it not possible to survive?" you cannot guess a single witness and must check all possible solutions, so that class of problem is in co-NP.
      Hope this helps.
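      To make the guess-then-check framing concrete, here is a hedged toy sketch (a made-up, drastically simplified Tetris where the board is just column heights and each "move" drops a unit block into one column; not the real rules from the lecture). The point is only the shape of an NP verifier: the certificate is the whole move sequence, and checking it takes polynomial time.

```python
# Hypothetical mini-Tetris verifier: certificate = one column index per piece.

def survives(num_columns, max_height, pieces, moves):
    """Check in polynomial time whether the guessed move sequence survives."""
    heights = [0] * num_columns
    for _piece, col in zip(pieces, moves):   # one pass over the certificate
        heights[col] += 1                    # drop a unit block in that column
        if min(heights) > 0:                 # every column occupied: clear one "row"
            heights = [h - 1 for h in heights]
        if heights[col] > max_height:        # overflowed the board: we died
            return False
    return True

# A nondeterministic machine would "luckily guess" the moves; a deterministic one
# can only brute-force all num_columns ** len(pieces) certificates -- exponential.
print(survives(num_columns=3, max_height=4, pieces="ABCD", moves=[0, 1, 2, 0]))  # True
```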

  • @slavkochepasov8134 4 years ago

    This lecture states the theorem "most decision problems are not solvable by any program". The funny guy Erik forgot to clarify one condition of the proof: "solve with infinite precision". That is what makes the statement "depressing". In my view most practical problems involve a bargain about the precision of the solution; that is how a program (from N) solves a real decision problem (from R) => life is full of practical optimism! It is all a point of view on the practical need for "precision". But that is a topic for a philosophy class. Chin up, Canada! ;)

  • @vishnukl 8 years ago +3

    brilliant lecture

    • @anoophallur5914 8 years ago

      +vishnu karthik hi dude :D

    • @vishnukl 8 years ago

      Haha hi man. It's a small world

  • @lordfabri 8 years ago

    very interesting

  • @heiderjeffer7833 6 years ago +3

    Hello Prof. Erik Demaine,
    is the RSA algorithm in P or in NP?
    Thank you in advance.

    • @thijsgelton 4 years ago +1

      (I know this is very late.) Well, from my understanding, if it were in P, almost all of our online communication would be insecure. RSA relies on modular arithmetic problems that are believed not to be solvable in P, so guessing the right key takes EXP time as the modulus N gets bigger and bigger.

    • @toebel 3 years ago +2

      The act of encrypting/decrypting a number using RSA can be done in polynomial time if you have the private key.
      The security of RSA encryption rests on the belief that integer factorization is a computationally hard problem (integer factorization is inefficient if time complexity is measured in number of bits, e.g. it can be inefficient to find the factors of a 1000-bit number).
      However, integer factorization is actually believed to be in NP-Intermediate (a problem that's in NP \ P, but not NP-Complete). In the context of the diagram, it's believed this problem comes strictly after the notch where P ends but strictly before the notch where NP ends.
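      A hedged toy sketch of the asymmetry described above (textbook RSA with tiny made-up numbers, nowhere near secure parameters): modular exponentiation with a known key is cheap, while recovering the key by factoring the modulus is what is believed to be hard.

```python
# Toy textbook-RSA numbers (hypothetical and far too small to be secure).
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

m = 42
c = pow(m, e, n)               # encrypt: polynomial time in the bit length of n
assert pow(c, d, n) == m       # decrypt with the private key: also polynomial time

def trial_division_factor(n):
    """Naive factoring: exponential in the *number of bits* of n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

print(trial_division_factor(n))  # trivial here, hopeless for a 2048-bit modulus
```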

  • @footballCartoon91 3 years ago

    What are they trying to find?

  • @thesunnydk 4 years ago +1

    this is really interesting, but does anyone know where i can find the paper to the proof that most decision problems are not solvable. Have some queries in my head that perhaps reading the complete proof will help!

    • @sriyansh1729 3 years ago

      Check out the notes from the mitocw website they write it down

    • @thesunnydk 3 years ago

      @@sriyansh1729 thank you for the reply! but it has been 11 months since i asked the question so I'm not even sure what I was asking about haha but will have a look at the notes

    • @sriyansh1729 3 years ago

      LOL

  • @sachkofretef 11 years ago

    give an example ?

  • @javaz6538 8 years ago

    so solving for pi is not in R.

  • @akanegally 8 years ago

    Compared to the French courses,
    it's very different.
    It's less abstract: no formal proofs, no maths...
    I find it really interesting and really cool.
    But I wonder whether it's accurate enough to give a full picture of the topic.

    • @utkarsh_108 2 years ago

      Can you tell me the name of the French Course? How to get it?

  • @mixcocam 9 years ago +22

    If you prove that NP = EXP, and we know that EXP != P then we also prove that NP != P. So you would get the money! :)

    • @elliotwaite 9 years ago +9

      +Rodrigo Camacho True, but perhaps he's betting that if a proof regarding NP and EXP is discovered, it will only prove that NP ≠ EXP, which wouldn't prove or disprove that NP ≠ P.

    • @broccoloodle 4 years ago

      Proving NP = EXP is at least as hard as P != NP, well, by reduction.

    • @Mono_Autophobic 4 years ago

      Lol u tried simple logic
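      For completeness, the reasoning in this thread written out, using the fact from the lecture (via the time hierarchy theorem) that P ≠ EXP:

```latex
% If NP were shown to equal EXP, then since P differs from EXP,
% P would have to differ from NP as well.
\[
\mathrm{NP} = \mathrm{EXP} \;\wedge\; \mathrm{P} \neq \mathrm{EXP}
\;\Longrightarrow\; \mathrm{P} \neq \mathrm{NP}.
\]
```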

  • @YuriRadavchuk 8 years ago

    Why does one reduce non-determinism to guesses, or is it just an explanatory trick?

    • @kennyrogers9834 8 years ago +1

      Because if an algorithm is non-deterministic, there is no way to know the answer before you run the algorithm; that's kind of like a guess. There is no way to know what the answer to a guess will be before a person makes the guess. With a deterministic algorithm you always know the answer before you run it: for a given input, you get the same answer every time you run the algorithm.
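      A small hedged sketch of what the "guessing" buys you, using subset sum as a stand-in NP problem (my example, not one from the lecture): the nondeterministic machine would guess the right subset in one lucky step, a deterministic machine can only simulate that by trying every guess, and checking any single guess is cheap.

```python
from itertools import product

def check(guess, nums, target):
    """Verifier: given a guessed subset (as a 0/1 mask), check it in polynomial time."""
    return sum(x for bit, x in zip(guess, nums) if bit) == target

def subset_sum(nums, target):
    # Deterministic simulation of the "lucky" machine: enumerate all 2^n guesses.
    for guess in product((0, 1), repeat=len(nums)):
        if check(guess, nums, target):   # each individual check is fast ...
            return guess                 # ... the pain is the number of guesses
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # (0, 0, 1, 0, 1, 0), i.e. 4 + 5 = 9
```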

  • @retrodevremastered6613 4 years ago +2

    If we could consider relativistic effects time during algorithms execution (E.g. observer is very very close to a black hole's event horizon, but computer is far from it), might we say that NP has its limit to P? In other words is it possible to find such reference of frame in space-time?

    • @aidanokeeffe7928 4 years ago

      Whoa that's a crazy idea. I don't have a clue

  • @NoxuzBlog 11 years ago +1

    nice!!!

  • @anonviewerciv 2 years ago

    Job searching is a decent example of a lucky algorithm.

  • @amaraojiji 7 years ago

    I couldn't agree with the proof for the number of decision problems. In the first step it composes a real number out of all the decisions (fine); in the next step it starts talking about the cardinality of the real numbers. For an uncountable number of decision problems you need to show that there is more than one such 'real number' string of decisions. And it looks to me like he depleted all his infinities on writing down all possible decisions.

  • @gr4cetube 8 years ago

    How can you solve the problem "will I survive playing Tetris?" with a lucky algorithm?
    To me this looks like one of the problems that is not in R, because to know whether you survive you have to play for an infinite time.

    • @epbmetal7399 8 years ago +1

      When Tetris is introduced, Demaine stated that the list of blocks you are going to play with are given. So with that (finite) list you have to check somehow (lucky guess, for instance) if there is a surviving strategy.

  • @brendawilliams8062 3 years ago

    Thanky

  • @pruthvi7798 3 years ago +1

    I am NP complete but with result no.

  • @Qubrof 10 years ago

    Is there such a thing as R-completeness? If so, what are some examples of problems that are R-complete? And what would that mean, exactly?

    • @NootanGhimire 10 years ago

      Qubrof I don't think there is such a thing as R-completeness. Put simply, as shown in the graph, each class has a boundary point: for the P class, there is a point beyond which you get the next class, NP, and anything at that point is in the intersection of P and P-hard. But since the class R extends to infinity, i.e., there is no class beyond R, R-completeness shouldn't exist.
      Put another way, we define X-hard (X being any class) as the set of problems that are at least as hard as every problem in the class X.
      Now, assume R-completeness exists: then R-complete = R (intersection) R-hard.
      R-hard is the set of problems that are at least as hard as every problem in R, i.e., at least as hard as the hardest problem in R. But the hardest problem in R is not defined, since R goes on to infinity. That's why R-completeness shouldn't exist.
      PS: That was just the way I thought about it! I don't know whether there is some notion of R-completeness, but from the logic and intuition I have, it seems it is not possible.

    • @tommyrjensen 9 years ago

      Nootan Ghimire But there is actually a class beyond R, it is the class of R-hard problems, and this class is not empty since it contains the halting problem, which is not in R.
      So it seems quite reasonable to define a class of R-complete problems. Such a problem B belongs to R and has the property that for each problem A, if there exists any algorithm to solve A, then A reduces to B in the appropriate sense. Vaguely now, this may be related to the concept of a universal Turing machine.
      Edit: actually I suspect tetris is an example of an R-complete problem. It certainly belongs to R. And it is complete for R for much the same reason it is NP-complete: if you have any problem in R, then it reduces to tetris. It does not always P-reduce. But it reduces in the appropriate sense.

  • @MonuYadav594 7 years ago

    What about other classes? I want to know about #P-complete; can you please provide something on this?

    • @zeronothinghere9334 4 years ago +1

      Maybe check out the course he mentioned. I believe it was 6045? Somewhere in the video he mentioned that this is just a 1 hour taste of what these people do. Or you could also youtube it with your keyword

    • @sriyansh1729 3 years ago

      Check out complexity classes on wikipedia for all of them

  • @itsvollx9684 7 years ago +3

    why is @Jeb_ in video xD

  • @ruisen2000 3 years ago

    Is anyone else a bit confused at the proof for most problems being non-computable?
    The proof relies on the fact that programs need to be finite, making the set of all programs less than the set of all functions. That implies that there's a theoretical maximum sized program, because lets say the set of all programs, which is finite, has size N. Now, if we take the largest program in N, and add 1 bit to it, that is not a different program, and contradicts the statement that N includes every possible program.
    But for any program of an arbitrarily large size S, you can make a program of size S+1. This means there can't be a maximum size program.

    • @mgregory22 3 years ago

      So what? There's a practical limit to the size of a program a human or even all humans can make.
      And sadly, it's a lot smaller than you think.

  • @ajr993 9 years ago +3

    I've always been confused why P has to equal or not equal NP. It's possible that P sometimes = NP. It seems like a fundamentally false dilemma; learning and adaptive algorithms, for example. A person can learn to play Tetris more efficiently. A person can also play chess better. Sometimes patterns emerge that allow people to arrive at a valid solution with much better odds than simple "luck". For instance, stacking all Tetris blocks on the left-hand side will make you lose faster, and it's obvious that it's one decision tree you can ignore. If you can limit the number of possibilities for a given problem by removing unintelligent choices, you can reduce it to be solvable in polynomial time even though it's technically NP.

    • @ajr993 9 years ago

      ***** Right, I forgot that it applies to the worst case, because that's really the only reference point we can use to categorize these things. Thanks for the clarification. However, the brain seems to solve these types of problems, generally, in vastly better ways than the worst case. The question therefore becomes: can a heuristic algorithm be so good that it eventually makes some NP problems take a polynomial amount of time? How would you calculate the maximum efficiency of a learning algorithm?

  • @notoriouswhitemoth 7 years ago +7

    Okay, but what does "polynomial time" mean - and why does everywhere I look to try to learn the answer to that question assume that I already know the answer?

    • @MartinCharles 7 years ago +4

      Polynomial time means the time complexity (time it takes to solve the problem) can be written as a polynomial function.
      f(x) = x^10 + x^5 + 3
      where parameters to f are the sizes of the inputs.
      Here are some examples of time complexities that are not polynomial (typical of brute-force algorithms for NP-hard problems):
      f(x) = x!
      f(x) = e^x

    • @notoriouswhitemoth 7 years ago +1

      I appreciate the effort; without contextualizing x, though, your examples don't mean much. I have since come across a source that explained it in a way I could understand, specifically because it gives the single most important piece of information in understanding this model, that I could not find anywhere else.
      I knew what a polynomial is. What I didn't understand was what was meant by time as it relates to complexity: specifically, that complexity measures time in calculations.

    • @code-dredd 7 years ago +5

      notoriouswhitemoth
      Martin already contextualized the "x" variable: it's the size of the inputs. I'll give it a quick try and then point you to the link at the end which should be helpful.
      Since you already know what a polynomial and a function are, when we say that an algorithm has a "time complexity" of some polynomial function, say n^2 + 5n + 7, where "n" is the "size" of the problem instance, this describes the "order of growth", or how quickly the computational effort to solve the problem grows in relation to the size of the input.
      In plain English, if the function above represented the time necessary to get a list sorted in alphabetical order, then "n" would represent the number of items in the list you want to sort, and the result of evaluating the function would be the number of computational steps that would be necessary to sort the list using said algorithm.
      Clearly, you can see, even without running the algorithm, that a list with 100 items will require fewer computational steps than it would if you wanted to sort a list of 1,000 items. But with a polynomial function that tells you how an algorithm will 'behave' based on the size of the input, you can get a more specific idea of what this difference actually looks like. And this is what would allow you to compare this algorithm to another algorithm that is also meant to sort lists and decide which one should require less time to complete as the size of the input keeps growing.
      Lastly, in Big-O notation, the number of terms is reduced to only the most significant/dominant term for simplicity, because this is meant to be an approximation for when the size of the input, N, becomes extremely large (i.e. tends to positive infinity). This means the function above would be expressed as O(n^2) in Big-O notation.
      In any case, hope that helped. I still recommend you read this post:
      stackoverflow.com/questions/487258/what-is-a-plain-english-explanation-of-big-o-notation
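      A small sketch of the growth gap being described (made-up step-count functions, purely illustrative): evaluating a polynomial cost like n^2 + 5n + 7 next to an exponential cost like 2^n shows why only the dominant term and the order of growth matter.

```python
# Hypothetical step-count functions, only to compare polynomial vs exponential growth.
def poly_steps(n):
    return n**2 + 5*n + 7   # O(n^2): the example cost from the comment above

def exp_steps(n):
    return 2**n             # exponential cost

for n in (10, 20, 40, 80):
    print(n, poly_steps(n), exp_steps(n))
# n=10:  157  vs  1024
# n=20:  507  vs  1048576
# n=40:  1807 vs  ~1.1e12
# n=80:  6807 vs  ~1.2e24
```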

    • @guywithaname5408 5 years ago +1

      ​@@notoriouswhitemoth The complexity of an algorithm is how fast it grows depending on the size of the input (n). If you like, you can think of this input as a list of numbers. Exponential time algorithms double (for example) every time you increase the input size (n) by 1. Polynomial time algorithms grow much, much slower than exponential time algorithms relative to the input size, and this is particularly noticeable (and relevant) when n is very large.
      You can think of exponential time problems as problems that take too long to solve with a large n value to be of any use, and polynomial time problems as problems that are generally solvable in a reasonable amount of time, even with a large value of n.
      That's the simplest explanation I can give without going a lot deeper.