01:10 three complexity classes P, EXP, R
11:02 most decision problems are uncomputable
19:12 NP
19:43 NP as a problem solvable in P time via a lucky algorithm
26:07 NP as a problem whose positive result can be checked in P time
31:00 P = NP?
37:50 NP-complete, EXP-complete
40:35 reductions
this classroom has a nondeterministic amount of chalkboards
good one!
Really!?? :p
NP-eed!
The classroom has an uncountably infinite number of chalkboards
Funny enough, in the USSR some universities used exactly the same board setup. Was it the ancient Greeks or the Dutch who came up with these moving boards?
This lecturer is the man!
Professor Srinivas Devadas, who taught other lectures of this same course, is great too.
"you cant engineer luck". Absolutely loved it. Thanks a lot :)
I love that they still use the chalk board. All of my lectures are with power point presentations and very little use of the chalk board.
I'd prefer chalkboard over boring power point slides.
The best is to use the chalkboard for teaching and explanation, and PowerPoint presentation for just visual information.
25:38 I’m pretty sure that you can force a death every time by just … doing nothing and let all the pieces stack up in the middle. No way to clear lines, because no piece spans the entire width. Therefore, it will halt in linear time w.r.t. the height of the board.
Unless you don't have enough pieces
Fun fact: He is the youngest professor ever hired by MIT.
Age?
@@Gulag00 20
Sakeeb Rahman jeez
he literally enrolled at college at the age of 12
@@srn306x I think he might be smart
love it
I knew I would find you here. When does the next episode of computer science come out?
+Fatih Erdem Kızılkaya funny you ask, i'm just about to render the next video - will post tomorrow!
+Art of the Problem Great, looking forward to it. By the way I love how you make complicated things simple and beautiful. Please keep doing what you are doing.
Love it when he says "I will not show you the proof because it's actually quite simple", even though those proofs were hard to cook up at some point. Still, I love his way of explaining things. Got my MSc in computer science more than ten years ago; this was an important course to get the BSc at the time. I've never seen it explained as eloquently as he does it!
I could watch Erik Demaine talking about math all day long. I don't understand anything but it's like music to my ears.
I just love MIT video lectures
Wow look at that hand held self powered calcium deposition printer, it's like the future.
+Seán O'Nilbud this made me lol XD
All the cool kids have them.
most people use whiteboard nowadays..
Just one thing I've noticed: the halting problem is undecidable; just running the program whose halting you're trying to determine doesn't really solve the halting problem itself. Great intro regardless of that hiccup.
When he put the decimal point in front of the decision problem table to turn it into a real number, my mind was blown. When he said that one can view programs as natural numbers, I expected him to somehow represent decision problems as real numbers, but I didn't know how. Goddamn.
Algorithms are usually functions. Functions have domain and range. Some decision problems can be such that they have finite domain, others don't.
i dunno why but the chalk thumping the board is actually kinda soothing....still thanks for the lecture
The proof around 16:00 is ridden with flaws. For example It’s true that 0.00110010111… is just one number in R but to then say the whole of R is much bigger than N is irrelevant.
48:49 NP problems can be reduced to each other
I have a question (at 18:25)
Given infinite space, you can write a program for every problem by hardcoding the solutions
Just going
if input = 0, output = 0
if input = 1, output = 0
if input = 2, output = 1
if input = 3, output = 0
if input = 4, output = 1
etc.
Just do this for every problem.
Wouldn't that mean every problem in R is computable?
(I'm obviously wrong, but what am I missing here?)
Great question dude, I also thought the same, but when I looked into it more deeply, I think R is a range where we'd probably say such a problem is impossible to solve, so maybe that's what it means. But this is a much deeper topic to dive into, and I'm probably wrong here too.
"You can't engineer luck" - wow this is the best explanation I have heard regarding NP
39:48 - he says we know EXP != P and then he proceeds to say that proving NP = EXP isn't as famous as P = NP problem and it won't get you a million dollars. My question is: Doesn't proving NP = EXP prove P != NP as well? On the other hand I know proving NP != EXP does not prove P = NP, however I still find his wording a bit weird.
He doesn't say we have proved NP = EXP; in fact he explicitly says we haven't. And there are problems that were once only known to be in NP but have since been shown to be in P (see primality testing), so saying *NP = EXP != P, so therefore NP != P* is just giving up.
yes, would answer both questions and get you the prize
Interesting to see that an MIT guy has got time to write stuff on the blackboard. My uni lecturers do not even know what chalk is.
*clicks through powerpoint reading out the slides*
at 25:40 he says "can I die" should not be in NP, but that's solvable in O(n) (worst case), which makes it a P problem; shouldn't that make it an NP problem as well?
Andreas Kristoffersen Yes, you are right; the professor did not state the answer correctly. "Can I survive [this series of Tetris pieces]?" is in NP, since we can guess a series of moves, and if we survive, we stop execution and return true. "Is there no way to survive?" would be the complement of "Can I survive?", and this would NOT be in NP, since you would have to check all possible combinations of moves and prove that no solution lets you survive; this is called co-NP.
"Is it not possible to survive?" is not the same as "can I die?". "Can I die?" is in NP, since you can exhibit a sequence of moves you die from and check it in polynomial time. For "is it not possible to survive?", you cannot just guess one solution; you must rule out all possible solutions, so this class of problem is in co-NP.
hope this helps.
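To make the "guess a series of moves, then check it" idea concrete, here is a minimal sketch of a polynomial-time verifier for a drastically simplified Tetris: pieces are single blocks, a move is just a column choice, and a full bottom row clears. The board model and the function `verify_survival` are my own toy assumptions, not the lecture's actual construction; the only point is that checking a proposed move sequence takes polynomial time, which is exactly what puts "Can I survive?" in NP.

```python
def verify_survival(width, height, pieces, moves):
    """Toy model: each piece is a 1x1 block and a move picks a column.
    The bottom row clears when every column has at least one block.
    Returns True if following `moves` survives the whole piece list."""
    assert len(moves) == len(pieces)      # certificate = one column per piece
    cols = [0] * width                    # current stack height per column
    for _piece, col in zip(pieces, moves):
        cols[col] += 1                    # drop the block into that column
        if min(cols) > 0:                 # bottom row full -> clear it
            cols = [h - 1 for h in cols]
        if max(cols) > height:            # overflowed the board: dead
            return False
    return True

# Checking a certificate is O(n * width), i.e. polynomial in the input size.
pieces = ["block"] * 6
print(verify_survival(3, 2, pieces, [0, 1, 2, 0, 1, 2]))  # True: rows keep clearing
print(verify_survival(3, 2, pieces, [0, 0, 0, 0, 0, 0]))  # False: one column overflows
```

Finding a surviving sequence is the hard part; checking one, as above, is cheap, and that asymmetry is the whole NP definition.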
helped me get an A+ in tst.. so my verdict is that it's actually awesome
what was the exception @2:00? Had they done a sudoku solver?
Why is Go not EXP-complete? Go has way more possible moves than chess, no?
When you're dealing with the infinite, every single finite thing is the same size: nothing.
Very well explained, 100 times better than my prof
While it is theoretically true that most (almost all, in some sense) problems are not computable, all of the problems that people actually encounter are in some way relatively small, in the sense that they can be understood or contemplated. That is, they depend somewhat on the intelligence of people for determining the knowability of the problem, i.e., even recognizing that the problem exists, whereas most (again, almost all) of the noncomputable problems are larger or more difficult than what people can understand or conceive. So saying that most problems are noncomputable is a red herring. How many, and which, of the problems that people can recognize or understand are computable?
Also, might I add that words can also be represented by integers, and therefore most decision problems cannot even be written down.
16:00 makes no sense. But then again, maybe the proof is left as an exercise for the reader
I agree: if he arbitrarily puts a dot at the start of the numbers in the set of inputs to make each a real number between zero and one, he could just as easily do that to the numbers in the set of programs and make each of those a real number between zero and one.
39:57 "Does NP = Exp? Not as famous. You won't get a million dollars, but still a very big open question."
suppose NP = EXP and you found a proof for that. then, since obviously EXP > P you have NP > P and just won a million dollars.
Korneel Redwasp Nice observation. Idk if there is a proof that EXP > P.
Renan Silva if there were such a proof, then this would prove that P ≠ NP. Since the P vs NP problem is still open, there is no proof that P ≠ NP, but this means that there can't be a proof that EXP > P.
+Renan Silva Chess is in EXP, but not in P. Doesn't this proof that EXP > P ?
+Ahad Alex 3CNF-SAT is in NP but not in P. Does that prove that P ≠ NP? Nope.
+Renan Silva No one has proven that it's not in P (doing that would prove P /= NP). Chess on the other hand has been proven to be not in P. See the distinction ?
Okay, but what does "polynomial time" mean - and why does everywhere I look to try to learn the answer to that question assume that I already know the answer?
Polynomial time means the time complexity (time it takes to solve the problem) can be written as a polynomial function.
f(x) = x^10 + x^5 + 3
where parameters to f are the sizes of the inputs.
Here are some examples of time complexities that are not polynomial (what brute-force algorithms for NP-hard problems typically need):
f(x) = x!
f(x) = e^x
I appreciate the effort; without contextualizing x, though, your examples don't mean much. I have since come across a source that explained it in a way I could understand, specifically because it gives the single most important piece of information in understanding this model, that I could not find anywhere else.
I knew what a polynomial is. What I didn't understand was what was meant by time as it relates to complexity: specifically, that complexity measures time in calculations.
notoriouswhitemoth
Martin already contextualized the "x" variable: it's the size of the inputs. I'll give it a quick try and then point you to the link at the end which should be helpful.
Since you already know what a polynomial and a function are: when we say that an algorithm has a "time complexity" of some polynomial function, say n^2 + 5n + 7, where "n" is the "size" of the problem instance, this describes the "order of growth", or how quickly the computational effort to solve the problem grows in relation to the size of the input.
In plain English, if the function above represented the time necessary to get a list sorted in alphabetical order, then "n" would represent the number of items in the list you want to sort, and the result of evaluating the function would be the number of computational steps that would be necessary to sort the list using said algorithm.
Clearly, you can see, even without running the algorithm, that a list with 100 items will require fewer computational steps than it would if you wanted to sort a list of 1,000 items. But with a polynomial function that tells you how an algorithm will 'behave' based on the size of the input, you can get a more specific idea of what this difference actually looks like. And this is what would allow you to compare this algorithm to another algorithm that is also meant to sort lists and decide which one should require less time to complete as the size of the input keeps growing.
Lastly, in Big-O notation, the number of terms is reduced to only the most significant/dominant term for simplicity, because this is meant to be an approximation for when the size of the input, N, becomes extremely large (i.e. tends to positive infinity). This means the function above would be expressed as O(n^2) in Big-O notation.
In any case, hope that helped. I still recommend you read this post:
stackoverflow.com/questions/487258/what-is-a-plain-english-explanation-of-big-o-notation
@@notoriouswhitemoth The complexity of an algorithm is how fast it grows depending on the size of the input (n). If you like, you can think of this input as a list of numbers. Exponential time algorithms double (for example) every time you increase the input size (n) by 1. Polynomial time algorithms grow much, much slower than exponential time algorithms relative to the input size, and this is particularly noticeable (and relevant) when n is very large.
You can think of exponential time problems as problems that take too long to solve with a large n value to be of any use, and polynomial time problems as problems that are generally solvable in a reasonable amount of time, even with a large value of n.
That's the simplest explanation I can give without going a lot deeper.
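A quick numeric illustration of that gap, in plain Python (the particular functions n^2, n^3 and 2^n are just examples I picked, not anything from the lecture):

```python
# Polynomial step counts grow; exponential step counts explode.
for n in (10, 20, 30, 40, 50):
    print(f"n={n:>2}  n^2={n**2:>6}  n^3={n**3:>8}  2^n={2**n:>18}")
```

By n = 50 the exponential column is already around 10^15 steps, while the polynomial columns are still tiny; that is the practical meaning of "polynomial time is feasible, exponential time is not".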
Very nice lecture! Clear explanations of everything and very easy to understand
You know the number of different sudokus out there: would it not be the number of ways you can lay out 1 to 9 without repeating, multiplied by the number of permutations possible when the first column is filled 1 to 9 from smallest to largest?
Oh great Erik has become a confident lecturer now!
+Arya Pourtabatabaie His name is spelled Erik. Just being a grammar troll. :)
+Dawn Lassen Fixed :D
19:19 "Out here Nothing happens"? Isn't it where everything happens and we have no knowledge of how they happen?
P=NP, NP-Hard, with or without quantum
be skilled, not knowledgeful
its only fictional complexity, there is always
you have no time, too much in hurry
why did you skip my wedding
P = EXP, get my drift
This lecture states the theorem "most decision problems are not solvable by a program". The funny guy Erik forgot to clarify a condition of the proof: "solve with infinite precision". That is what makes this statement "depressing". In my view most practical problems involve a bargain about the precision of the solution; that is how a program (from N) solves any real decision problem (from R) => life is full of practical optimism! It is all about one's point of view on the practical need for "precision". But that is a topic for a philosophy class. Chin up, Canada! ;)
Finally I'm able to understand NP complete. Awesome lecture, thank you so much 🙂
P vs Np complete..
P does not equal NP
A^3 + B ^2 = C^3
Where A is 1/x of C
Or C multiplied by A equal 1.
Decision problem with complex values:
Problem: Given a set of complex numbers A, B, and x, is there a solution to the equation A^3 + B ^2 = C^3 where C is 1/x of A?
Decision question: does there exist a set of complex numbers A,B, and x, such that A^3 + B ^2= C^3 , where C is 1/x of A.
To demonstrate NP-completeness, we need to show two things:
1. The problem is in NP: Given a potential solution, we can verify it in polynomial time.
2. The problem is NP-hard: Any problem in NP can be reduced to this problem in polynomial time.
To prove NP-hardness, we'll reduce the well-known NP-complete problem, the subset sum problem, to our decision problem with complex values.
Subset Sum Problem:
Given a set of integers and a target sum, does there exist a subset of the integers that sums to the target sum?
We can reduce the subset sum problem to our decision problem with complex values by transforming the integers into complex numbers with zero imaginary parts:
For each integer ai in the subset sum problem , we create a complex number.
Ai = ai +0i . We set B=0 and X=1
Now, if there exists a subset of integers that sums to the target sum, then there exists a solution to our decision problem with complex values, where A^3 +B^2 = C^3.
Therefore, our decision problem with complex values is NP-complete.
The solution is precise with no approximation.. it does exist, with the knowledge of the author of this post …
So, The Meaning of Life, the Universe and Everything is in which set?
this is really interesting, but does anyone know where I can find the paper with the proof that most decision problems are not solvable? I have some queries in my head that perhaps reading the complete proof will help with!
Check out the notes from the mitocw website they write it down
@@sriyansh1729 thank you for the reply! but it has been 11 months since i asked the question so I'm not even sure what I was asking about haha but will have a look at the notes
LOL
I'm not pretending to remotely understand, but as a zoologist and C programmer, I've gotta say my bit. See, the thing is, our brains can compute chess (chess being EXP, exponential time to check), some better than others; it's a mix of instinct and strategy. The brain is nothing more than a computer with some intense number of bits. That being said, it's only a matter of time until we create a silicon quantum computing brain capable of winning at chess every time, capable of adapting and learning from experience. Then eventually it will tell us the equation it used, because that's how a computer thinks: in code. We humans think in our own type of code; we solve complex problems every day without knowing. So I believe that eventually P = NP, but without the super computer, no, P != NP
You cannot solve the longest path problem by negating the weights and running the Bellman-Ford algorithm, because the negated graph may contain negative cycles, which Bellman-Ford cannot handle. The longest (simple) path problem is NP-hard.
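A small sketch of why the negation trick breaks, using a textbook Bellman-Ford with negative-cycle detection; the four-edge graph below is an invented example. Any positive-weight cycle in the original graph becomes a negative cycle after negation, so Bellman-Ford bails out instead of returning a longest simple path.

```python
def bellman_ford(n, edges, src):
    """edges: list of (u, v, w). Returns shortest distances from src,
    or None if a negative cycle is reachable."""
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):                 # relax every edge n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # one more pass: any improvement
        if dist[u] + w < dist[v]:          # means a negative cycle exists
            return None
    return dist

# Invented graph with a positive cycle 1 -> 2 -> 1.
edges = [(0, 1, 1), (1, 2, 1), (2, 1, 1), (2, 3, 1)]
negated = [(u, v, -w) for u, v, w in edges]   # naive "longest path" attempt
print(bellman_ford(4, edges, 0))    # [0, 1, 2, 3]: shortest paths are fine
print(bellman_ford(4, negated, 0))  # None: negation created a negative cycle
```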
Is anyone else a bit confused at the proof for most problems being non-computable?
The proof relies on the fact that programs need to be finite, making the set of all programs smaller than the set of all functions. That implies that there's a theoretical maximum-sized program, because let's say the set of all programs, which is finite, has size N. Now, if we take the largest program in N and add 1 bit to it, that is not a different program, and that contradicts the statement that N includes every possible program.
But for any program of an arbitrarily large size S, you can make a program of size S+1. This means there can't be a maximum size program.
So what? There's a practical limit to the size of a program a human or even all humans can make.
And sadly, it's a lot smaller than you think.
Wouldn't the shortest path in 3D between 2 points be super easy? All you would have to do is put those 2 points on the same plane. Or is there some magical path that you can follow that is outside any 2D plane and is shorter?
Exactly. You could have your destination be completely inaccessible on that plane.
"I am NP-Complete" :D
I interpret that at 33:29 you just said QC is easier than development.
Is there such a thing as R-completeness? If so, what are some examples of problems that are R-complete? And what would that mean, exactly?
Qubrof I don't think there is such a thing as R-completeness. Put simply, as shown in the graph, you have some point at which you define a class of complexity; for example, for the P class you have a point beyond which you define the next class of complexity, called NP, so anything on that point is an intersection of the P class and the P-hard class. But since the class R extends to infinity, i.e., there isn't any class beyond the R class, R-completeness must not exist.
Put another way, we define X-hard (X being any one of the classes) as the set of problems that are at least as hard as every problem in the X class.
Now, let's assume we have R-completeness. Then R-complete = R (intersection) R-hard.
Now, R-hard is the set of problems that are at least as hard as every problem in the R class, i.e., at least as hard as the hardest problem in the R class. But the hardest problem in the R class is not defined, since R extends to infinity. That's why R-completeness must not exist.
PS: That was just the way I thought about it! I don't know if there are some R-complete problems, but from the logic and intuition I currently have, it seems that it is not possible!
Nootan Ghimire But there is actually a class beyond R, it is the class of R-hard problems, and this class is not empty since it contains the halting problem, which is not in R.
So it seems quite reasonable to define a class of R-complete problems. Such a problem B belongs to R and has the property that for each problem A, if there exists any algorithm to solve A, then A reduces to B in the appropriate sense. Vaguely now, this may be related to the concept of a universal Turing machine.
Edit: actually I suspect tetris is an example of an R-complete problem. It certainly belongs to R. And it is complete for R for much the same reason it is NP-complete: if you have any problem in R, then it reduces to tetris. It does not always P-reduce. But it reduces in the appropriate sense.
How can you solve the problem "will I survive playing Tetris?" with a lucky algorithm?
To me this is one of the problems not in R, because to determine whether you survive you would have to play for an infinite time.
When Tetris is introduced, Demaine states that the list of blocks you are going to play with is given. So with that (finite) list you have to check somehow (a lucky guess, for instance) whether there is a surviving strategy.
((Great lecture! I was just wondering, is reducing 3-sat really proof that Tetris is NP-complete? If it proves Tetris is at least as hard as 3-sat and hence NP-hard, wouldn't you still need to show that Tetris can also be reduced to some NP-complete problem for it to be proven NP-complete? I guess this might be obvious to someone who's not a layman. Still, I'm curious.))
I realise I need to clarify my question.
It's my understanding that:
1. 3-sat is NP-complete.
2. 3-sat can be reduced to Tetris, which shows Tetris is harder than 3-sat.
3. Since Tetris is harder than 3-sat, Tetris is NP-hard.
4. NP-complete problems are NP-hard problems "contained" in NP.
5. All NP-complete problems are NP-hard, but not all NP-hard problems are NP-complete.
Based on these assumptions, what I meant to ask was:
"If Tetris is harder than an NP-complete problem, Tetris is NP-hard; but wouldn't Tetris then need to be reduced to an NP-complete problem to establish that it is NP-complete and not just NP-hard?"
I appreciate any answers I may get. I wouldn't keep on pursuing these questions if the answers weren't valuable to me, and I appreciate the patience shown.
+Gustaf Smith 3-sat is in fact NP-complete.
+TheFabus95 Thanks for the answer. My question was a bit unclear since I don't fully understand the concept. It didn't really clear up what I was interested in knowing but my question may have been redundant in the first place. I'll try to rephrase my question.
It's my understanding that:
1. 3-sat is NP-complete.
2. 3-sat can be reduced to Tetris which shows Tetris is harder than 3-sat.
3. Since Tetris is harder than 3-sat, Tetris is NP-Hard.
4. NP-complete problems are NP-hard problems "contained" in NP.
5. All NP-complete problems are hard but all NP-hard problems are not NP-complete.
Based on these assumptions what I meant to ask was.
"If Tetris is harder than an NP-complete problem Tetris is NP-hard, but wouldn't Tetris then need to be reduced to an NP-complete problem to establish that it is NP-complete and not just NP-hard?
I appreciate any answers I may get. I wouldn't keep on pursuing these questions if the answers weren't valuable to me and I appreciate the patience shown.
You are correct: if we reduce 3-sat to Tetris and we know that 3-sat is NP-hard, it follows that Tetris is NP-hard, but not necessarily NP-complete. Think of reducing a problem to the halting problem, which is also NP-hard.
But we also know that Tetris is in NP. Thus, Tetris is NP-complete, since the NP-complete problems are those that are (1) NP-hard and (2) contained in NP.
However, note that reducing Tetris to an NP-complete problem L does not prove anything about the NP-hardness of Tetris. The fact that L is NP-complete means that every problem in NP (like Tetris) is reducible to L.
I hope this answers your question.
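For a concrete feel for what "A reduces to B" means (unrelated to Tetris, just a classic textbook pair I'm picking for illustration), here is Partition ("can this multiset be split into two halves of equal sum?") reduced to Subset Sum in Python. The brute-force subset-sum solver is only there to make the sketch self-contained and runnable; the reduction itself is the small transformation in `partition`, and it runs in polynomial time.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Exponential brute force, just so the demo runs end to end."""
    return any(sum(combo) == target
               for r in range(len(nums) + 1)
               for combo in combinations(nums, r))

def partition(nums):
    """Reduction: Partition -> Subset Sum. Transforming the instance
    (compute total/2) is polynomial; yes-instances map to yes-instances."""
    total = sum(nums)
    if total % 2:                    # an odd total can never split evenly
        return False
    return subset_sum(nums, total // 2)

print(partition([1, 5, 11, 5]))  # True:  {1, 5, 5} vs {11}
print(partition([1, 2, 5]))      # False: no even split exists
```

The direction matters: because Partition instances are transformed into Subset Sum instances, a fast Subset Sum solver would give a fast Partition solver, not the other way around, which is exactly the asymmetry discussed above.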
+TheFabus95 Thanks again!
This really cleared things up for me. Especially you pointing out Tetris being in NP. Somehow I had missed the whole proof of Tetris being in NP as a part of the NP-completeness proof.
I don't know how I managed this since he even says "therefore Tetris is in NP", and it can't get much clearer than that.
Much appreciated!
what is an example of a problem that is in (EXP - NP)
this episode is the most important for understanding the basics of complexity theory :O seriously.. why is this not 1.
brilliant lecture
+vishnu karthik hi dude :D
Haha hi man. It's a small world
Why does one reduce non-determinism to guessing, or is it just an explanatory trick?
Because if an algorithm is non-deterministic, there is no way to know what it will do before you run it; that's kind of like a guess. There is no way to know what the answer to a guess will be before a person makes the guess. With a deterministic algorithm the behaviour is fixed in advance: for a given input, you get the same answer every time you run the algorithm.
hmm is the idea that lucky algorithms aren't realistic still valid given quantum computing algorithms?
39:30: Proving EXP is at the same spot on the line as NP would prove P != NP, right? Given that we know P is not equal to EXP. Then you would get the $1M.
which lecture is the implementation of this lecture
Could it be measured with vector mathematics, like that of polyhedra in another dimension, multiplied by the number of possible fractal faces, divided by the possible time raised to the mass of the object? ¿?
Great lecture, Thanks a lot!!
I never understood this topic in school....will watch later to gain some insight. Hope it's good! :)
What are they trying to find?
Why is p fixed in polynomial time? Why not linear time?
Linear time is contained within polynomial time. Like the expression 2x +1 is technically a polynomial and also a linear expression.
Can anyone please explain to me why, if there is a 1-to-1 relation between problems and decisions, he said that there are way more problems? I just did not get that... :(
I LOVE YOUR VOICE
I am NP complete but with result no.
Compared to the French course, it's very different.
It's less abstract: no formal proofs, no maths...
I find it really interesting and really cool.
But I wonder whether it's accurate enough to give a full picture of the topic.
Can you tell me the name of the French Course? How to get it?
Wow, 17:15 blew my mind :D
How is shortest path in 3D different to 2D? o.o
I CLAIM THAT A PATH IN 3D IS THE SAME AS 2D WITH MORE VERTICES
I couldn't agree with the proof for the number of decision problems. In the first step it composes a real number out of all the decisions (fine); in the next step it starts talking about the cardinality of the real numbers. For an uncountable number of decision problems you need to show that there is more than one such 'real number' string of decisions, and I see that he depleted all his infinities on writing down all possible decisions.
Game solvable in exponential time is best!
Excellent video.
I do know a problem that's worse than EXP: quantifier elimination in the theory of real numbers is the classical example. This is 2-EXP, double exponential time.
Hmm. One question. why can we assume that each program is only capable of solving 1 problem?
Is it possible to get a better quality of the videos, I really want to watch it in like 720p or 480p
I've always been confused why P has to equal or not equal NP. It's possible that P sometimes = NP. It seems like a fundamentally false dilemma; learning and adaptive algorithms, for example. A person can learn to play Tetris more efficiently. A person can also play chess better. Sometimes patterns emerge that allow people to arrive at a valid solution with much better odds than simple "luck". For instance, stacking all Tetris blocks on the left-hand side will make you lose faster, and it's obvious that it's one decision tree you can ignore. If you can limit the number of possibilities for a given problem by removing unintelligent choices, you can reduce it to be solvable in polynomial time even though it's technically NP.
***** Right, I forgot that it applies to the worst case, because that's really the only reference point we can use to categorize these things. Thanks for the clarification. However, the brain seems to solve these types of problems, generally, in vastly better ways than the worst-case scenario. The question therefore becomes: can a heuristic algorithm be so good that it eventually makes some NP problems take a polynomial amount of time? How would you calculate the maximum efficiency of a learning algorithm?
BOINC distributed computing science projects are P or NP problems right? #BOINC
+James D Correct. Some of them are P problems (prime search, SETI, etc...) and others are NP (protein folding)
Thanks for answering my question. :)
What about other classes? I want to know about #P-complete; can you please provide something on this?
Maybe check out the course he mentioned. I believe it was 6045? Somewhere in the video he mentioned that this is just a 1 hour taste of what these people do. Or you could also youtube it with your keyword
Check out complexity classes on wikipedia for all of them
thanks for posting
is the numbering of computer programs related to Gödel numbers?
so solving for pi is not in R.
360p ... why?
can someone explain to me what studying is going towards?
if a problem takes N*log(N) time to resolve, is it in the EXP or in the P category?
+TheSymetrie77 It's in EXP but it's also in P. N*log(N) is O(poly(N)).
N*log(n)
why is @Jeb_ in video xD
You are a great teacher Erik!!!
4:20 Wow, someone really did clean that chalk board.
Great talk.
enjoyed the lecture very much
Wow man I think I love this guy
The decision to put the set of inputs and outputs in R instead of N was completely arbitrary and unproven. These are two different types of infinite sets that are not orthogonal. There was no explanation of why the inputs and outputs cannot be in N. It would be as absurd as claiming that the set of all primes exists in R.
why not?
How come, by guessing, you can guarantee that if there is a yes you will get there without checking all the paths?!
You either need to guess until you reach a yes, which in the worst case makes you check all the paths!
Unless we say that by guessing, if we are lucky, we can reach a yes if it exists. In other words, if a yes exists, the probability of reaching it by guessing is greater than 0!
NP is hypothetical, you can't actually create an NP algorithm.
The "lucky guessing" analogy is just one mental picture, not necessarily the best way to think of it. You may prefer the following alternative explanation of NP:
Suppose you can actually see the entire search tree for your problem, in particular all the possible search paths (possibly some infinite ones) and all the end-nodes of the finite paths, each marked with a Y or N for positive and negative answers at those nodes respectively. Then to check that the problem is in NP, you just have to find a single search path that leads to a Y-node and which has length at most p(n), where n is the size of the problem input, and p is a fixed polynomial.
Similarly, a problem is in co-NP if there is a short path that leads to an N-node. The lecturer in the video seems a bit confused about the distinction between NP and co-NP, which I think should be emphasized.
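One way to make the "short path to a Y-node" picture concrete is the verifier view of NP: a problem is in NP if there is a polynomial-time check that accepts some polynomial-length certificate exactly on yes-instances. A minimal sketch using 3-SAT (the formula and function names below are my own invented example, not anything from the video): the "lucky guess" is one truth assignment, and simulating the guess deterministically means trying every branch, which is exponential in the number of variables.

```python
from itertools import product

# A 3-CNF formula: each literal is (variable index, is_positive).
formula = [[(0, True), (1, True), (2, False)],
           [(0, False), (1, True), (2, True)],
           [(0, True), (1, False), (2, True)]]

def verify(formula, assignment):
    """Polynomial-time check of one certificate (a truth assignment)."""
    return all(any(assignment[var] == positive for var, positive in clause)
               for clause in formula)

# Checking a single guessed path is fast...
print(verify(formula, (True, True, True)))                                      # True
# ...but deterministically exploring all 2^n guesses is exponential.
print(any(verify(formula, bits) for bits in product([False, True], repeat=3)))  # True
```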
Job searching is a decent example of a lucky algorithm.
This was a great lecture !
In the spirit of the lecture, an example of an unsolvable decision problem: does life have a meaning? :)
With respect to the importance of the game Tetris to computational complexity in CS (a complete surprise to me :), here is the history of Tetris: en.wikipedia.org/wiki/Tetris
Great lecture.
This was so brilliant! Thank you so much 😊
If you prove that NP = EXP, and we know that EXP != P then we also prove that NP != P. So you would get the money! :)
+Rodrigo Camacho True, but perhaps he's betting that if a proof regarding NP and EXP is discovered, it will only prove that NP ≠ EXP, which wouldn't prove or disprove that NP ≠ P.
Proving NP = EXP is at least as hard as P != NP, well, by reduction.
Lol u tried simple logic
17:30 is that statement applicable to deep learning models as well?
give an example ?