I think of mathematical proofs as a mechanism for problem-solving and gaining understanding while producing useful outcomes. Consider machine learning vs. algorithms. Machine learning can develop a function based on a training set that we can use to predict results from other inputs. That's great, but it doesn't necessarily lead to expanding our understanding of the underlying principles, things that we can use to help solve other more general problems.
Mark, that's because the implementation and representation don't matter. As Dr Thor said, you have 1+1+... or binary or decimal... It also doesn't matter what shape the symbols are. I was surprised to find out that the Predators use the decimal system the same way we do, just that the symbols for 0, 1, ..., 9 are those funny patterns.
Underneath Type Theory there is a more foundational mathematical approach called Category Theory, which seems to unify all areas of mathematics. Unsurprisingly, Types form a Category. Just as there is a connection between Homotopy Theory and Type Theory, there is a connection between e.g. Set Theory and Topology via Category Theory, and the theorems of one theory correspond to theorems of the other. I suggest you look into that.
I have been studying formal mathematics for my Computer Science thesis for a little while now. And looking back on it, I think Set Theory has its priorities backwards. Set Theory takes two sets of elements and tries to evaluate whether a function definition exists as a pathway between these two sets. Type Theory takes a set and a function definition, and uses it to generate elements of another set, which can be used as proofs for the other set. How I understand it is that Set Theory is like a funky subset of Type Theory, because Set Theory deals with concrete definitions generated from abstract ideas, whereas Type Theory works the other way around. In a way, this is more useful and powerful than Set Theory because you can easily create various definitions of a set of elements that would otherwise be hard to do using Set Theory. It's not impossible. It's just hard...
Patrick Wienhöft I'm talking on the "looks like science/math" scale. Although I think James could compete with this guy. Oh I forgot the Chemistry guy too!
Eyyy funny stuff, I just used Coq a month ago for researching a class project. I was writing a paper on using recurrent neural nets to automatically write proofs in a Coq-like language. I was not successful - turns out it's pretty hard to do, lol
Dr Thorsten Altenkirch At 13:50, you talk about the Univalence Principle, where it is stated that if two things are equivalent, then they are equal. I have interpreted this as "If two things are logically equivalent, then they have the same properties." I shall show by example that this is not always the case: From the premises "forall x (Sx implies Px)", it can be proved that "Exists x (Sx & Px) iff Exists x (Sx)". Using this example, I shall prove that if two logical expressions are equivalent, they don't necessarily have the same properties. In this example the property they don't have in common will be truth value. Take both Sx and Px to be "x=x". Substitution gives (1): "forall x (x=x implies x=x)" can prove "Exists x (x=x and x=x) iff Exists x (x=x)" (1) However, by the principle of transposition, it will also be true that (2): "forall x (not (x=x) implies (not x=x))" can prove "Exists x ((not x=x) and (not x=x)) iff Exists x (not x=x)" (2) Example (1) has a truth value of true, however that of (2) is false. (x=x is always true, likewise (not x=x) is always false.) Thus if two statements are logically equivalent, they do not always have the same properties. Using transposition again, this means that if two statements have the same properties, they are not always the same statement.
I didn't mean "logical equivalence", but logical equivalence is a special case. Indeed all operations on propositions in type theory are "truth functional". I don't understand your counterexample; in particular I don't know what the principle of transposition is. I have no idea how you justify the step from (1) to (2).
It turns out I made a mistake in the proof. I found out that I had used a universal specification rule on a negated universal statement. Because it was negated it was existentially quantified and the rule couldn't be used. Sorry about that. (Thanks for the reply though)
This video is really abstract to me compared to the others. In some videos the most basic concepts (to me) are explained, but this one is really hard to follow; so much stuff is considered to be known and there doesn't seem to be much concrete stuff to relate to. It would have helped if it had been a bit more scripted, maybe with examples or something. It's quite hard to get into Dr Thorsten Altenkirch's head for this topic!
We are talking about a type of logic that necessarily has to be less expressive than First Order Logic (which is used for mathematical proofs), otherwise computers cannot help that much.
insidioso The trick is that constructive mathematics works well with the termination requirements for programs and physical computation. To derive equivalence, you simply have to inhabit the type representing the equivalence. The first order logic is derivable from the theory, which is something set theory cannot do.
Formal methods are getting more and more important as systems grow. For things like blockchain processing one approach to proving correctness is Hoare's Communicating Sequential Processes. Then there's Predicate Transformer Semantics etc.. It all gets very complicated.
Great video, hope to see more abstract discussions like this in more Computerphile videos! Or at least a Numberphile video on this, but more from the other guys' perspective.
I was taught that it is never embarrassing if you prove something and it gets established and afterwards someone else proves it to be wrong, because this is how the whole research process works. You prove with what knowledge is available to you; if it gets established, that means the smart brains of your time can't prove it wrong, they agree with it. And it is alright. 00:30
It's not actually that much of a problem since we can use codata. Consider, for example, if we wanted to represent an infinite sequence of binary digits. We can do this by defining a function from the natural numbers to the booleans, so that if we want the nth digit, we just feed n into the function defining the sequence. Since the function's algorithm is finite, we can represent the full infinite sequence using finite data. Generally speaking, so long as we can define something in finite terms (which is everything we can define in practice, including the concept of infinity itself), we can represent it on a computer using those exact terms.
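The idea above can be made concrete. A runnable sketch in Python (chosen just for brevity; the Thue-Morse sequence here is a hypothetical example stream, not anything from the video):

```python
# An "infinite" sequence of binary digits represented as a function from
# index (a natural number) to digit, exactly as described above.

def thue_morse(n: int) -> bool:
    """Digit n of the Thue-Morse sequence: parity of 1-bits in n."""
    return bin(n).count("1") % 2 == 1

# We never materialize the infinite sequence; we just query any position.
first_digits = [thue_morse(n) for n in range(8)]
print(first_digits)  # [False, True, True, False, True, False, False, True]
```

The finite function definition stands in for the infinite object, which is the essence of codata.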
I've always struggled with gaining intuition in abstract mathematical theories because of the set-theoretic foundations (the set-theoretic definition of topology, for instance).
Computers along with their parts, be it Monitors, Screens, Keyboards, Mice, Remotes, Modems, Scanners, Printers, Floppy Discs, SD Cards, CPUs, and other devices, can go hand in hand whether they are desktops, laptops, iPads, Kindles, smart phones, cell phones, palm pilots, and other electronic devices, and can serve well if they do not overheat or run out of storage space.
Nice explanations, but I doubt many will understand. I learned Measure Theory and thus can understand the general outline, but those unfamiliar will be lost.
It's about his research for formalizing type theory inside type theory. Basically, if lots of math can be expressed in TT, why not try TT itself as well? Since proofs in TT are executable programs, modeling TT inside TT is also a way of doing metaprogramming and proving properties of metaprograms in the same system.
Kind of... although in general when you compile type theory it's more along the lines of interpreting it. For example, for a Coq proof to be "proven," all you have to do is compile it and there's nothing left to execute. So in a sense, executing the abstract math happens as your proof gets verified. As a result, it doesn't feature many tools for writing executable code (although it does allow you to "extract" the code into another language that can provide the tools for writing useful programs). You can also use some other languages to compile type theory programs into executable programs (Idris and ATS, for example).
Is Computer Science considered a subfield of Mathematics? It has its origins rooted in Set Theory, Graph Theory, and other mathematical fields, so wouldn't it be redundant to say "Computer Science union Math" when describing Type Theory?
There are subjects in Computer Science unrelated to Mathematics. There are subjects in Mathematics unrelated to Computer Science. The symmetric difference is going to be nonempty.
+htmlguy88 Computer science includes a lot of subjects related to the physical and practical aspects of computing, such as real world efficiency, caching, representation of human language text, issues with human-machine interaction, making the work less error prone etc. Mathematics includes subjects that cannot be accurately represented in computers, such as infinitely precise numbers, infinitely large sets, the entire field of mathematical analysis (differential and integral calculus) etc.
This is hardly a new development in Computer Science or Mathematics. Coq itself has been around for nearly 30 years. The foundations for this come from the Curry-Howard correspondence (completed by Howard in 1969) and Intuitionistic type theory (created in 1972), so in other words almost 50 years ago. The original type theory that was created by Russell was published all the way back in the 1910s in Principia (as a side note, his motivation was the paradoxes in naive set theory that he discovered). So proof assistants may be getting more use now due to recent developments such as the proof of the four color theorem, but they have been around for what is considered an eternity in these fields (and pretty much every other field). Anyway, most mathematicians don't work with formalized set theory. An exception is those who are obsessed with mathematical foundations or working in areas closely related to it (mathematical logic, category theory). The rest just throw out the constructions that lead to paradoxes (i.e., by defining collections or families of sets differently than sets), but they don't work with an axiomatic set theory like ZFC. There are even problems that can't be formalized in ZFC (or any other formalism using higher than first order logic, thanks to Gödel's incompleteness theorems). Edit: Well I just watched the rest of the video and I guess it turns out that Dr. Altenkirch pointed some of this stuff out himself. He has a very obscure definition of the phrase "new development" Edit 2: It is worth pointing out that the proofs these assistants help make do so by checking the type of the program, not running it. The correspondence is between the type of the program and a proposition in the logic. As such, it is static.
Most efforts in Computer Science go towards what is useful at run time (and the type system is meant to aid in writing a program for the purpose of running it, not type checking it; it just helps that we can reduce runtime errors by catching type errors before running the program). Most stuff in Computer Science cannot even be proven formally (this is actually true of math in general, but mathematicians aren't interested in those areas, whereas Computer Scientists are). In other words, Computer Scientists are studying computation, not static analysis (though that is an important part of theoretical CS, it is a small part handled by PLT). To be blunt, a program in Coq and a program in Idris, though similar in structure, have 100% orthogonal purposes. One is meant to be type checked and the other to be run. I would go so far as to say that proof assistants are very marginally a part of Computer Science and mostly just pure math that got an implementation on a computer. Edit 3: It would be interesting to see what a functor between the category of types with functions as morphisms in (I'm assuming Intuitionistic) type theory and the category of surfaces with paths as morphisms (don't know if this is correct) in homotopy theory looks like. Perhaps that is what that mathematician discovered. In other words, category theory is awesome.
This is the fifth time I have a great idea only to find out it has existed years before I was born... Well, gotta come up with something even better yet again.
Did I get this correctly? - Set theory is like a declarative programming language, and a set may be non-enumerable or infinite (termination problem); - Type theory is more constructive, much like an imperative programming language.
No, if anything, it's the opposite, as type theories tend to be functional languages. Set theory isn't like programming at all. It's a completely static logical system which, by default, has no computational interpretation. Type theory exists so that a computer can internalize the semantics of what a program is meant to do. If you want a program to sort a list, you can use type theory to define what it means to sort a list, then a type checker verifies that the program meets that specification within that type. What's significant is that, at the extreme ends of expressiveness, this is sufficient to act as a foundation for mathematics.
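The sorting example above can be sketched concretely. Roughly what "use type theory to define what it means to sort a list" looks like in Lean 4 (syntax assumed; `List.Perm` is assumed to be available as in Lean's standard library — this is an illustration, not code from the video):

```lean
-- What it means for a list of naturals to be sorted:
def Sorted : List Nat → Prop
  | []          => True
  | [_]         => True
  | a :: b :: t => a ≤ b ∧ Sorted (b :: t)

-- The type of a verified sort: the *type* itself promises the output is a
-- sorted permutation of the input, so any program inhabiting this type
-- carries its own correctness proof.
-- def verifiedSort (xs : List Nat) :
--   { ys : List Nat // Sorted ys ∧ ys.Perm xs }
```

The type checker, not a test suite, is what verifies that an implementation meets the specification.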
Finally! I always hated how the mathematics they taught us in uni was so ... 'clumsy' - and I *always* wanted it to be *based on programming* which is just way clearer, you cannot misinterpret it etc...
I just watched this video and to my understanding using HoTT you would be able to show that in fact two different proofs have the same outcome and thus are the same. So you would be able to first create a proof and then verify that in fact it is an improvement (more elegant, easier, whatever) on an already existing older proof...? Or, the other way around, you would be able to show that a proof is really different from another proof...?
Set theory has the concept of a universal set on which a choice function to extract a particular set can be applied... Like moving from everything towards being specific. Does the universal set include everything that might be constructed?
Seems weird not to mention that HoTT is not a fully developed field and the idea of it functioning as a new foundation for mathematics is only a hypothesis; there are many theorems left to prove before one can conclude that the theory is powerful enough to build all of mathematics from it like one can with set theory. Still a really fun prospect for research though.
11:32 "It's true that 2 is an element of 3, doesn't really make any sense, right?" I mean, isn't 3 basically == | | | And 2 == | | so there indeed is a *| |* inside 3. So 2 is indeed an element of the set called 3
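The encoding this comment is playing with is the von Neumann one, where each natural number is the set of all smaller naturals. A small Python sketch (using `frozenset` so sets can contain sets) shows the "implementation detail" leaking through:

```python
# Von Neumann naturals: n = {0, 1, ..., n-1}, so 3 = {0, 1, 2} and
# "2 is an element of 3" is literally true under this encoding.

def von_neumann(n: int) -> frozenset:
    """Build the set-theoretic encoding of n."""
    s = frozenset()            # 0 is the empty set
    for _ in range(n):
        s = s | frozenset([s]) # successor: s ∪ {s}
    return s

two, three = von_neumann(2), von_neumann(3)
print(two in three)           # True: 2 ∈ 3 under this encoding
print(len(von_neumann(4)))    # 4: the encoding of n has exactly n elements
```

The video's point is that this is true only of one particular encoding; other encodings of the naturals make "2 ∈ 3" false, which is why it's a meaningless question about numbers themselves.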
The guy Gérard Huet behind Coq also wrote some data structure he called The Zipper. Apparently he said in conference during a demonstration: "Now I'm gonna open the Zipper and show you my Coq".
Source: someone at INRIA who knew Huet told me.
Terrence Zimmermann Now that you can't make up.
Kevin Pacheco I didn't. I assume the guy wasn't trying to troll us when he told the story during a dinner (with mainly scientists and engineers), because well, he wasn't the trolling type. And you can check that Huet actually invented the Zipper (a kind of tree data structure).
Of course, what I said doesn't constitute a proof, but that's all I have.
A Huet Zipper isn't necessarily a tree-like structure, but you can make a Huet Zipper of a tree. A zipper is more or less a structure that gives the notion of a cursor on an underlying structure such that there's O(1) operations on whatever's underneath the cursor, and it has several operations for moving that cursor around said structure.
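A minimal sketch of that cursor idea in Python (a list zipper rather than Huet's tree zipper — the simplest case; names here are illustrative, not from Huet's paper):

```python
class ListZipper:
    """Cursor over a non-empty list: reading/writing the focus and moving
    one step left or right are all O(1), as described above."""

    def __init__(self, items):
        items = list(items)
        self.before = []           # stack of elements left of focus, nearest on top
        self.focus = items[0]
        self.after = items[:0:-1]  # stack of elements right of focus, nearest on top

    def right(self):
        self.before.append(self.focus)
        self.focus = self.after.pop()

    def left(self):
        self.after.append(self.focus)
        self.focus = self.before.pop()

    def to_list(self):
        return self.before + [self.focus] + self.after[::-1]

z = ListZipper([1, 2, 3, 4])
z.right(); z.right()   # cursor moves from 1 to 3
z.focus = 99           # O(1) update under the cursor
print(z.to_list())     # [1, 2, 99, 4]
```

Huet's tree zipper works the same way, except "before" and "after" become a path of tree contexts rather than two stacks.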
If you want to see some trolling, try out McBride's presentation to the Haskell Implementors Meeting, '09 about the Haskell Preprocessor.
+Foagik Thanks a lot for the info, I am undoubtedly uninformed about that kind of structure as I never needed to use one. That seems quite powerful, I've downloaded Huet's paper for further understanding.
Conor McBride, right? I don't seem able to find this presentation, is it on yt?
I've worked with people at the INRIA and that's totally the kind of humour they have! I have no problem believing this story!
Mathematicians should wear robes and pointy hats and be interviewed in front of some copper-plated bubbling apparatuses (apparati?). Then replace "theorem" by "curse", "objects" by "demons", and "demonstrations" by "invocations". "First evoke a local Riemann-integrable demon. The Fundamental Curse of Analysis allows you to make sure you can restore that demon when you observe its growth." Cédric Villani is the headmaster of the Lyon School of Wizardry and professor of Implicit Demon Taming.
Antonin Caors haha, yeah 😌 Master of Demon Invocations
Amandeep Kumar Come now, there's not even any DnD references yet! 😅
what? they are. wait where do you guys live?
New name for mathematics: demons taming
this sounds just so cool.
I'm starting to realize that just because I come here, it doesn't mean I'm smart.
Ксения Ковалевская lolz...
You're wandering into the trolls' dungeon, you might find yourself getting too much attention from those who are smart.
It's just learning the terminology. You are smart.
Zod 'kneel before Zod.' You were awesome.
***** I wish it applied to me in class. I'm horrible at math.
As a programmer, this makes so much sense. I was in a weird situation where i learned algebra first, and then learned programming (and algorithms and data structures), and then more maths, and i couldn't help but think how i could represent the concepts i learned in computer code, but the notation used in maths was set theory, which was slightly confusing some times. Then i noted to a lecturer that a definition he wrote of something could be expressed as a short recursive function, and turns out he knew (among other things) haskell and he agreed that the recursive notation was correct, but pointed out that you can solve some such functions which may be noted recursively using transformations that allow you to do it as one computation instead of a series of computations (iterative), which can dramatically impact the time they take to run, and he had done work on that. Using maths to speed up some heavy logic functions by transforming them.
Anyways, my big take-away is i'm fairly decent at maths, but i'm a bad calculator, and i strongly prefer computer science notation over set theory notation. If i had been taught maths through programming the concepts, i would have done MUCH better, and it would also have allowed running the functions and (automatically through code) visualizing the output to gain a better "feel" for it.
As an unsuccessful student of mathematics who went to computer science, I think it's because computer scientists care much more about the style of notation and are better at trying to find the right language to describe their current problems.
Wow, this is soo true.
But you can also tail call optimize at the compiler level.
John McCarthy, the father of LISP, relates how he tried to suggest to mathematicians that functional LISP
Didn't know Thor was into maths
Hehehehe...get it? Because his name is Thorsten
lol red
Thors stone...
math*
it's maths, not math, stop omitting words
The gist of the Curry-Howard correspondence is:
In a well-typed programming language (such as Coq, Agda, Idris etc.)
Types correspond to theorems
Programs correspond to proofs.
You write the type, then you prove it by writing a body to that function.
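The recipe above can be sketched in Lean (one of the well-typed languages in question; a tiny illustrative example, not from the video):

```lean
-- The type is the theorem: "A and B implies B and A".
-- The program inhabiting that type is the proof.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun ⟨a, b⟩ => ⟨b, a⟩
```

Writing the type first states the theorem; filling in the function body is literally constructing the proof, and the type checker verifies it.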
Drink a shot every time he says "ja".
HGich.T - Tutenchamun, he is clearly a fan ;D
Thank you for the advice I'm still laughing!
ja-ger
I cannot unread your comment....
"Das ich soll!"
"A function which doesn't function shouldn't be called a function"
Real-life James Bond Villain - 2017
I will die a happy man when the producers of James Bond finally hire this man to be their evil genius.
Mathematical Poofs! I love this guy
He uses the coq system for his poofs!
Shubham Bhushan is this about Alan Touring :D ?
*Alan Giedrojc*
"...Alan Turing"
@@GilesBathgate bun
War is Peace - Freedom is Slavery - Ignorance is Strength.
I recently signed up for CS in school, and math class is doing set notation. A youtube title has never been so relatable.
More vids on Haskell and functional programming would be great!
Man, this dude is so intelligent. He just pulls all this theory and history out of his head; it's awesome.
Proof theorems ja. samsing wrong wis se pruuf.
Ja?
YouTube is great at suggesting videos. I am starting college and covering set theory, and I realized that there are multiple encodings for natural numbers and that implementation details were being exposed. And then I find this video which mentions the same thing. Amazing.
I think this would have been more clear if we had examples of some useful types that propositions and their proofs could take.
I could provide some really simple examples in Idris or Agda, if you're still interested, but it takes a bit of knowledge of Haskell-like syntax.
This was more like a history lesson than an explanatory video
Still this is very interesting stuff
@@Bratjuuc are u still willing to provide examples in agda? I'm currently studying this subject and the lack of examples is killing me
@@ivanpiri8982 Oof, that was pretty long ago. I forgot what I wanted to provide as examples back then.
I'm going to need some time revising, but I'm still willing to provide examples. I would prefer to provide them in Idris purely for syntax reasons.
So far I can illustrate the following:
*) Boolean logic.
*) maybe Existential quantifications
I'll probably update the list in a short amount of time.
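Not the promised Idris examples, but as a small taste of the "existential quantifications" bullet above, here is what such an example typically looks like in Lean (an illustrative sketch only):

```lean
-- An existential statement is proved by exhibiting a witness:
-- the pair ⟨witness, proof⟩ is the program, the ∃-type is the theorem.
example : ∃ n : Nat, n + 2 = 4 := ⟨2, rfl⟩
```

The witness `2` and the computation `rfl` (which checks `2 + 2 = 4` by evaluation) together inhabit the existential type.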
So it seems our galactic president Beeblebrox is into type theory.
BTW, great video! Love this more theoretical approach to Computerphile!
He must have painted his second head pink, too
Yeah totally agree! Although I wish he would have written down something
"coq could upset english speakers" LOL
and they say Germans don't have a sense of humor
cofi 41 French*
i think he meant the speaker, but isn't he like dutch or something?
@@steliostoulis1875 I think that was intentional
Thorsten is a great explainer and talks about some really interesting subjects - liked this video!
You guys should also track down Kevin Buzzard. He's a great speaker, an interesting guy, and he's recently been working on proof formalisation in a language called Lean. He and his students have been writing up a whole bunch of definitions/theorems/proofs, and he is planning to do his whole department's curriculum.
I think the guy behind this saw us complain about the 404 video and is now giving us some meatier content. I'm happy. :)
Ja
ja
That was really fascinating. I had heard of type theory but didn't know much more than that it resembles types in programming. He contrasts it with set theory, which helps a lot and makes sense. What we seem to have here, as he is saying, is a potential, qualitative (of course partial) transformation of a field, mathematics, because new tools, computers, bring new paradigms. Fascinating, thank you so much. I would be happy to see some more of this, and maybe some examples from this style of math.
I was surprised that he managed to talk about set, type and homotopy theory, abstract mathematics and new foundations of mathematics without mentioning category theory.
Calle Silver-Granhall I think because category theory is more difficult to explain to a general audience than the other three. Also, previous videos have alluded to type theory and set theory.
There is some on the whiteboard on its back...
I doubt it.
Category Theory isn't hard.
Literally everything he mentioned would have been way easier if he did a quick recap of category theory and what it is. I mean basically everything he talked about falls under category theory.
Need a proof? Bring on the COQ.
cringe
We are happy to welcome you mathematicians into the programming world, but there is one thing you need to learn fast and constantly repeat to yourselves if necessary: Single character identifiers are not acceptable! Use full words and clear and consistent naming conventions. Learn it, live it, love it.
I'm pretty sure the only people who understood what he was saying already understood everything beforehand.
Gordon Chin roughly
I had to rewatch it but I think I got it down. Abstract math isn't exactly my forte though.
I think if you have a first year discrete math class or even maybe a high school computer engineering class under your belt, you should be able to follow along
First year Discrete Math couse is enough
Having a degree in CS should be enough to understand what he's saying
Surprising as this may seem, Mathematicians seldom bother with sets in practice. In fact they already use mathematical constructions pretty much exactly like how it is described with types.
For example the natural numbers N, defined as a set together with a preferred element 0 and a function s:N ->N that gives the successor (programmers would call it next) which is "universal" with this property (in the precise technical sense below), is enough to define N unique up to _unique_ isomorphism (= bijection), and that is perfectly good enough to work with it and ignore any implementation details and it nicely avoids asking silly questions like whether 2 \in 3 because they are not preserved under these isomorphisms.
Here a set of natural numbers (N, 0 , s) is _universal_ if it has the following property: for any set X, element x \in X and any function T:X->X, there is a unique function f: N->X such that f(0) = x and f(s(n)) = Tf(n) (from which it easily flows: f(n) = T^n(x)). Of course one now has to prove that such a set exists (surprise surprise, it does).
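The universal property above can be sketched directly in code: given any X, x and T, it hands you the unique f with f(0) = x and f(s(n)) = T(f(n)), i.e. f(n) = T^n(x). A minimal Python sketch (the function names here are mine, chosen for illustration, not from the comment):

```python
# The "recursor" of the natural numbers: from (X, x, T) it produces
# the unique f : N -> X with f(0) = x and f(n+1) = T(f(n)).

def recursor(x, T):
    def f(n: int):
        result = x
        for _ in range(n):   # apply T exactly n times: f(n) = T^n(x)
            result = T(result)
        return result
    return f

# Taking x = 0 and T = successor recovers the identity on N,
# while x = 1 and T = doubling gives f(n) = 2^n.
identity = recursor(0, lambda v: v + 1)
powers_of_two = recursor(1, lambda v: 2 * v)
print(identity(7))       # 7
print(powers_of_two(10)) # 1024
```

This is exactly the sense in which the universal property pins N down: any structure (X, x, T) receives precisely one structure-preserving map from (N, 0, s).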
This is still an implementation detail. Type theory and category theory abstract away all of this and just have the type N.
What you're describing is a declarative but iterative way of describing a set, which really isn't any better from my perspective than set builder notation, which is also declarative and how the highest abstraction of iteration is expressed in functional languages.
In fact what you're describing is two levels of abstraction lower than set builder notation which is considered implicit iteration while the iterator you're describing here is external iteration
I know I'm late but props to the one(s) who made the subtitles. Very creative with using the relevant symbols.
For those who don't know, "coq" means "rooster" in french.
I think of mathematical proofs as a mechanism for problem-solving and gaining understanding while producing useful outcomes. Consider machine learning vs. algorithms. Machine learning can develop a function based on a training set that we can use to predict results from other inputs. That's great, but it doesn't necessarily lead to expanding our understanding of the underlying principles, things that we can use to help solve other more general problems.
It will be interesting to see if a type theory is developed in which univalence is a theorem rather than an axiom
Check out the cubical system made at Chalmers university.
I like listening to Dr Thor here
But I'm gonna be honest, I got lost about 5 minutes in :(
DeoMachina I was lost in his "ja"s.
His name is Dr. Altenkirch. He's not Nordic either.
Mark, that's because the implementation and representation don't matter. As Dr Thor said, you have 1+1+... or binary or decimal... It also doesn't matter what shape the symbols are.
I was surprised to find out that the Predators use decimal system the same way as we do, just that symbols for 0, 1, ..., 9 are those funny patterns.
haha "Dr Thor", really like it, especially when I read it's an abbreviation of his real name..
Ok, the comments have spoken: "Dr. Thor" it is
Edit: very interesting! Thanks for doing these!
This is exciting.
Just finished Group Theory and this seems like something I can learn from a text book now.
Underneath Type Theory there is a more foundational mathematical approach called Category Theory, which seems to unify all areas of mathematics. Unsurprisingly, Types form a Category. Just as there is a connection between Homotopy Theory and Type Theory, there is a connection between e.g. Set Theory and Topology via Category Theory, and the theorems of one theory correspond to theorems of the other. I suggest you look into that.
I have been studying formal mathematics for my Computer Science thesis for a little while now. And looking back on it, I think Set Theory has its priority backwards. Set Theory tries to take two Sets of elements, and tries to evaluate whether a function definition exists as a pathway between these two sets. Type Theory takes a set and a function definition, and uses it to generate elements of another set, which can be used as proofs for the other set.
How I understand it is that: Set Theory is like a funky subset of Type Theory because Set Theory deals with concrete definitions generated from abstract ideas, whereas Type Theory works the other way around. In a way, this is more useful and powerful than Set Theory because you can easily create various definitions of a set of elements that would otherwise be hard to do using Set Theory. It's not impossible. It's just hard...
I like to think that I know quite a lot about both mathematics and computer science, but this goes way over my head
Please do video on backpropagation algorithm in neural networks
or linzer computerphile, don't be afraid to get a bit technical!
or linzer 3blue1brown did that video recently
gotta love the category theory diagrams on the back
This guy takes the no.3 spot after the Klein bottle guy and the Japanese guy.
No way. Matt and James come before him by a long shot.
Patrick Wienhöft but they are over on numberphile ;)
Patrick Wienhöft
I'm talking on the "looks like science/math" scale. Although I think James could compete with this guy. Oh I forgot the Chemistry guy too!
I was going to say, no one looks more like science than Prof. Sir Martyn Poliakoff.
Well, then Tadashi shouldn't be there. He just looks like an ordinary Japanese guy.
The chemistry guy obviously belongs in there then, yes ^^
Eyyy funny stuff, I just used Coq a month ago for researching a class project. I was writing a paper on using recurrent neural nets to automatically write proofs in a Coq-like language. I was not successful - turns out it's pretty hard to do, lol
6:43 or as Mr. Numberphile would call it: The Parker Function.
A function that does not function should not be called a "function". Makes sense on so many levels!
Could you talk about the Haskell type system?
Not as rich as the Coq, Agda or Idris ones
Excellent video on a relatively new field of study with a real future!
Wow. Brady's really bringing in the big bucks. He brought in the scientist from Independence Day.
Dr Thorsten Altenkirch
At 13:50, you talk about the Univalence Principle, where it is stated that if two things are equivalent, then they are equal.
I have interpreted this as "If two things are logically equivalent, then they have the same properties.
I shall show by example that this is not always the case:
From the premise "forall x (Sx implies Px)", it can be proved that "Exists x (Sx & Px) iff Exists x (Sx)".
Using this example, I shall prove that if two logical expressions are equivalent, they don't necessarily have the same properties. In this example the property they don't have in common will be truth value.
Take both Sx and Px to be "x=x". Substitution gives (1):
"forall x (x=x implies x=x)"
can prove
"Exists x (x=x and x=x) iff Exists x (x=x)" (1)
However, by the principle of transposition, it will also be true that (2):
"forall x (not (x=x) implies (not x=x))"
can prove
"Exists x ((not x=x) and (not x=x)) iff Exists x (not x=x)" (2)
Example (1) has a truth value of true, however that of (2) is false. (x=x is always true, likewise (not x=x) is always false.)
Thus if two statements are logically equivalent, they do not always have the same properties. Using transposition again, this means that if two statements have the same properties, they are not always the same statement.
I didn't mean "logical equivalence", but logical equivalence is a special case. Indeed all operations on propositions in type theory are "truth functional". I don't understand your counterexample; in particular I don't know what the principle of transposition is. I have no idea how you justify the step from (1) to (2).
It turns out I made a mistake in the proof. I found out that I had used a universal specification rule on a negated universal statement. Because it was negated it was existentially quantified and the rule couldn't be used. Sorry about that. (Thanks for the reply though)
Alex Armstrong no problem. And cheers to you for experimenting and discovering your own flaw.
This video is really abstract to me compared to the other. In some videos the most basic concepts (to me) are explained but this one is really hard to follow, so much stuff is considered to be known and there doesn't seem to be much concrete stuff to relate to. It would have helped if it had been a bit more scripted, may be with examples or something. It's quite hard to get into Dr Thorsten Altenkirch's head for this topic!
The entire topic is abstract and you'd basically be learning math from scratch to understand it.
Top 10 programming languages to learn in 2020: 1. Type theory 2. Type theory 3. (wait for it) Type theory ...
This guy looks like the scientist from Independence Day!!!
he's my lecturer! top top guy
This is awesome, my mind is blown and I feel much better
such a great thumbnail
10:49 he is onto something imo
Aesthetics and genres and psychology and visual mediums paired with endless user data
Is it wrong that I love the sound of this man's voice more than what he's talking about?
We are talking about a type of logic that has to be necessarily less expressive than First Order Logic (that is used for mathematical proofs) otherwise computers cannot help that much.
insidioso The trick is that constructive mathematics works well with the termination requirements for programs and physical computation. To derive equivalence, you simply have to inhabit the type representing the equivalence. The first order logic is derivable from the theory, which is something set theory cannot do.
No, ZFC itself can be modeled inside Homotopy TT, for example.
I actually once used 2 (element) 3 in a Discrete Maths homework where we couldn't use
This makes me feel guilty about not having read TAPL for two weeks...
Formal methods are getting more and more important as systems grow. For things like blockchain processing one approach to proving correctness is Hoare's Communicating Sequential Processes. Then there's Predicate Transformer Semantics etc.. It all gets very complicated.
great video hope to see more abstract discussions like this for more computerphile videos! or at least a numberphile video on this but more from the other guys' perspective.
I was taught that it is never embarrassing if you prove something and it gets established and afterwards someone else proves it to be wrong, because this is how the whole research process works. You prove with what knowledge is available to you; if it gets established, that means the smart brains of your time can't prove it wrong, and they agree with it. And it is alright. 00:30
It is as embarrassing as designing a phone which goes up in flames.
I super enjoyed this video
More videos of this guy please. I have no idea what he's talking about but... more videos please.
For me, it's always been that the sign ∩ was turned 90 degrees anticlockwise in this statement. :)
I'm a Swedish mathematician as well.
can you make videos explaining type theory and how math ideas can be proven with it?
COQ magic :D
Can't be misinterpreted at all, eh?
The Gathering?
I was watching with subtitle, and i saw he was gonna have to pronounce Coq, and couldnt wait to hear that.
The biggest problem will be when dealing with infinities; beyond memory constraints, some of the concepts become quite abstract.
It's not actually that much of a problem, since we can use codata. Consider, for example, if we wanted to represent an infinite sequence of binary digits. We can do this by defining a function from the natural numbers onto the booleans, so that if we want the nth digit, we just feed n into the function defining the sequence. Since the function's algorithm is finite, we can represent the full infinite sequence using finite data.
Generally speaking, so long as we can define something in finite terms (which is everything we can define in practice, including the concept of infinity itself), we can represent it on a computer using those exact terms.
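The codata idea above can be shown in a few lines: the "infinite" sequence is just a finite rule that produces any digit on demand. A minimal Python sketch (the sequence chosen here is my own illustrative example):

```python
# An infinite binary sequence represented as a function N -> Bool.
# We never store the digits; we store the finite rule that computes them.

def alternating(n: int) -> bool:
    """The infinite sequence 0, 1, 0, 1, ...: digit n is True iff n is odd."""
    return n % 2 == 1

# Any position of the "infinite" sequence is available from finite data,
# even absurdly large indices.
print(alternating(0))        # False
print(alternating(3))        # True
print(alternating(10**100))  # False -- the googol-th digit, computed instantly
```

The same trick is what makes infinite streams workable in languages like Haskell: the object in memory is always a finite description, and only the digits you actually demand get computed.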
I've always struggled with gaining intuition in abstract mathematical theories because of the set theoretic foundations, (set theoretic definition of topology, for instance)
Computer Science ∩ Mathematics = Computer Science
ja?....ja?....ja?
Marcelo Souza what does that mean?
@marcelo ja
@saber it means "ya"
could we have a second video perhaps going more into detail of type theory maybe xsome examples of its uses. pls
I will die a happy man when the producers of James Bond finally hire this man to be their evil genius.
Spelling out a rational plan that works in the end :)
Computers along with their parts, be it monitors, screens, keyboards, mice, remotes, modems, scanners, printers, floppy discs, SD cards, CPUs, and other devices, can go hand in hand, whether they are desktops, laptops, iPads, Kindles, smartphones, cell phones, palm pilots, or other electronic devices, and can serve well if they do not overheat or run out of storage space.
ja
Nice explanations, but I doubt many will understand. I learned Measure Theory and thus can understand the general outline, but those unfamiliar will be lost.
amazing that we have evolved to be able to realize such abstract concepts
"That is getting abstract."
lol ikr how you gonna say this when already talking about type theory xD
TIL a function that does not function is not a function.
This should be on numberphile
this is really awesome and interesting, I will check out set and type theories!
this guy is my spirit animal
can someone explain what this math on the whiteboard behind him is about?
It's about his research for formalizing type theory inside type theory. Basically, if lots of math can be expressed in TT, why not try TT itself as well? Since proofs in TT are executable programs, modeling TT inside TT is also a way of doing metaprogramming and proving properties of metaprograms in the same system.
Until a latter day Gödel arrives. (this is a reply to András Kovács's comment below).
So would Type Theory make it possible to compile abstract math into executable code? I ask because that is basically what I dream about.
Yes.
Cool!
Kind of... although in general when you compile type theory it's more along the lines of interpreting it. For example, for a Coq proof to be "proven," all you have to do is compile it and there's nothing left to execute. So in a sense, executing the abstract math happens as your proof gets verified. As a result, it doesn't feature many tools for writing executable code (although it does allow you to "extract" the code into another language that can provide the tools for writing useful programs). You can also use some other languages to compile type theory programs into executable programs (Idris and ATS, for example).
no. you would have to create a proof to have a program (proofs are programs). and proving in Coq is much harder than conventional programming
Coq, secks theory... woah, this is an X rated computerphile vid.
"A function that does not function should not be called a function" -- Made me laugh, but has a very concrete truth to it.
Is Computer Science considered a subfield of Mathematics? It has its origins rooted in Set Theory, Graph Theory, and other mathematical fields, so wouldn't it be redundant to say "Computer Science union Math" when describing Type Theory?
My bad, "Computer Science intersection Math"
computer science is applied maths
There are subjects in Computer Science unrelated to Mathematics. There are subjects in Mathematics unrelated to Computer Science. The symmetric difference is going to be nonempty.
+Vulcapyro like ?
+htmlguy88 Computer science includes a lot of subjects related to the physical and practical aspects of computing, such as real world efficiency, caching, representation of human language text, issues with human-machine interaction, making the work less error prone etc. Mathematics includes subjects that cannot be accurately represented in computers, such as infinitely precise numbers, infinitely large sets, the entire field of mathematical analysis (differential and integral calculus) etc.
This is hardly a new development in Computer Science or Mathematics. Coq itself has been around for nearly 30 years. The foundations for this come from the Curry-Howard correspondence (completed by Howard in 1969) and Intuitionistic type theory (created in 1972), so in other words almost 50 years ago. The original type theory that was created by Russell was published all the way back in the 1910s in Principia (as a side note, his motivation were the paradoxes in naive set theory that he discovered).
So proof assistants may be getting more use now due to recent developments such as the proof of the four color theorem, but they have been around for what is considered an eternity in these fields (and pretty much every other field).
Anyway, most mathematicians don't work with formalized set theory. An exception is those who are obsessed with mathematical foundations or working in areas closely related to it (mathematical logic, category theory). The rest just throw out the constructions that lead to paradoxes (i.e., by defining collections or families of sets differently than sets), but they don't work with an axiomatic set theory like ZFC. There are even problems that can't be formalized in ZFC (or any other formalism using higher than first order logic, thanks to Gödel's incompleteness theorems).
Edit: Well I just watched the rest of the video and I guess it turns out that Dr. Altenkirch pointed some of this stuff out himself. He has a very obscure definition of the phrase "new development"
Edit 2: It is worth pointing out that the proofs these assistants help make do so by checking the type of the program, not running it. The correspondence is between the type of the program and a proposition in the logic. As such, it is static. Most efforts in Computer Science go towards what is useful at run time (and the type system is meant to aid in writing a program for the purpose of running it, not type checking it; it just helps that we can reduce runtime errors by catching type errors before running the program). Most stuff in Computer Science cannot even be proven formally (this is actually true of math in general, but mathematicians aren't interested in those areas, whereas Computer Scientists are). In other words, Computer Scientists are studying computation, not static analysis (though that is an important part of theoretical CS, it is a small part handled by PLT). To be blunt, a program in Coq and a program in Idris, though similar in structure, have 100% orthogonal purposes. One is meant to be type checked and the other to be run. I would go so far as to say that proof assistants are very marginally a part of Computer Science and mostly just pure math that got an implementation on a computer.
Edit 3: It would be interesting to see what a functor between the category of types with functions as morphisms in (I'm assuming Intuitionistic) type theory and the category of surfaces with paths as morphisms (don't know if this is correct) in homotopy theory looks like. Perhaps that is what that mathematician discovered. In other words, category theory is awesome.
This is the fifth time I have a great idea only to find out it has existed years before I was born... Well, gotta come up with something even better yet again.
Moooooore from this guy please :)
Did I get this correctly?
- Set theory is like a declarative programming language and a set may not be enumerable or infinite (termination problem);
- Type theory is more constructive, much like an imperative programming language.
No, if anything, it's the opposite, as type theories tend to be functional languages.
Set theory isn't like programming at all. It's a completely static logical system which, by default, has no computational interpretation.
Type theory exists so that a computer can internalize the semantics of what a program is meant to do. If you want a program to sort a list, you can use type theory to define what it means to sort a list, then a type checker verifies that the program meets that specification within that type. What's significant is that, at the extreme ends of expressiveness, this is sufficient to act as a foundation for mathematics.
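The sorting example above separates the *specification* ("this output is a sorted rearrangement of the input") from any particular sorting program. In a dependently typed language the type checker verifies this statically; here is a runtime sketch of the same specification in Python (the function names are my own illustrations, not from any library discussed here):

```python
from collections import Counter

def is_sorted_permutation_of(output, input_):
    """The specification of sorting: output must be ordered, and must
    contain exactly the same elements as input_ (a permutation)."""
    ordered = all(a <= b for a, b in zip(output, output[1:]))
    permutation = Counter(output) == Counter(input_)
    return ordered and permutation

# A correct sort satisfies the spec...
print(is_sorted_permutation_of([1, 2, 3], [3, 1, 2]))  # True
# ...but an ordered output that silently drops an element does not.
print(is_sorted_permutation_of([1, 3], [3, 1, 2]))     # False
```

The point of type theory is that a check like this need not run at all: the proposition "f sorts lists" becomes a type, and exhibiting a program of that type *is* the proof, verified once and for all by the type checker.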
2 as an element of 3 doesn't seem silly to me.
Really informative and 'enlightening'. My thanks.
Finally! I always hated how the mathematics they taught us in uni was so ... 'clumsy' - and I *always* wanted it to be *based on programming* which is just way clearer, you cannot misinterpret it etc...
Yeah, knowing programming definitely helps to learn discrete math better
I wonder why Coq would upset English speakers.
I just watched this video, and to my understanding, using HoTT you would be able to show that in fact two different proofs have the same outcome and thus are the same.
So you would be able to first create a proof and then verify that in fact it is an improvement (more elegant, easier, whatever) to an already existing older proof... ?
Or, the other way around, you would be able to show that a proof is really different from another proof...?
Set theory has the concept of universal set on which a choice function to extract a particular set can be applied....
Like moving from everything towards being specific.
Does universal set include everything that might be constructed?
Category Theory application in Computer Science
I love the palimpsest on the board behind him.
Thanks! Very interesting.
Seems weird not to mention that HoTT is not yet a fully developed field, and the idea of it functioning as a new foundation for mathematics is only a hypothesis; there are many theorems left to prove before one can conclude that the theory is powerful enough to build all of mathematics from it, like one can do with set theory. Still a really fun prospect for research though.
11:32
"It's true that 2 is an element of 3, doesn't really make any sense, right?"
I mean, isn't 3 basically == | | |
And 2 == | |
so there indeed is a *| |* inside 3.
So 2 is indeed an element of the set called 3
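The tallies above are exactly the von Neumann encoding used in set theory: 0 is the empty set and n+1 is n ∪ {n}, so every number literally contains all smaller numbers as elements. A small Python sketch makes the "2 ∈ 3" claim concrete (Python's mutable sets can't contain sets, so `frozenset` stands in):

```python
# Von Neumann naturals: 0 = {}, n+1 = n ∪ {n}.
# Under this encoding, m < n if and only if m is an element of n.

def von_neumann(n: int) -> frozenset:
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])   # successor step: n+1 = n ∪ {n}
    return s

two, three = von_neumann(2), von_neumann(3)
print(two in three)   # True: 2 really is an element of 3 here
print(len(three))     # 3: each encoded n has exactly n elements
```

Which is precisely the video's point: "2 ∈ 3" is true of this particular set-theoretic implementation but meaningless for numbers as such, and type theory rules the question out by construction.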