Thank you, this was a great video. It took me two minutes into the video to understand primitive recursion, after struggling with it for two weeks in my own lectures. YouTube university all the way!
Thank you for this! I wasn't able to completely grasp the concept in my lecture, but this has definitely simplified it! :)
Thank you for saving my life a few hours before my test.
Fundamental logic.
No matter what you do, be sure to make more videos
Mark at 12:13: defined addition in a world where addition doesn't exist. Looking forward to you giving lectures on non-primitive recursion, and defining hyperoperations after that. Thanks a lot!
Dude, that video is awesome! It's by far the best on the topic I have ever seen, and it's helping me so much with my exam in theoretical computer science in Leipzig!
Thank you so much for this! I have a worksheet due in a few hours, and you might have just saved me from failing!
Thank you very much, Jared. Very well explained.
These videos are great! Really helping me out.
At 20:40, you let g=proj^1_1, but it should be proj^2_1, right? Since we're taking the first item from a list of 2
Correct! Thanks for catching this
Thank you soooooooo much! The explanation was perfect.
Thank you so much. This video really helped in understanding the concept. Indeed , it was neat! 😉
Thank you!!! Great video.
Thank you for another great lecture.
Could you please post solutions or at least answers to the challenge questions?
Thank you in advance.
I'm confused. If in the primitive recursive world you don't have "add" defined, then how can you use it in the successor function to get the next number?
I was thinking about the same thing 😂
The idea is that the successor function is an abstract function; how you get the next number isn't important. You could have a list of all numbers stored somewhere, and succ(n) gives you the one after n in that list, so you don't need a + symbol or addition in the definition. All that matters is that it gives you the number after n (he probably just wrote x+1 to make it easier to understand).
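A toy sketch of that idea (mine, not from the video), in Python: succ as pure lookup, with no + or arithmetic anywhere.

NEXT = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5}   # pretend this table continues forever

def succ(n):
    return NEXT[n]   # "the number after n", read straight off the table

print(succ(3))   # 4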
It just can't be better or clearer!
Really good, many thanks!
you guys should help us with.....LAMBDA CALCULUS (gasp!).....it would be nice! thanx.
Why do we have to write g as h(x, y+1) = g(x, y, h(x, y))? We never use the second term y in any of the examples; it seems useless to me. Since h(x, y) is the result from the previous recursion step, g(x, y, h(x, y)) is trying to do something using the current input and the previous result, but it appears to me that y is never needed. We are always doing proj^3_1, proj^3_3... when are we ever going to use y?
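(One place y does show up, for what it's worth: the predecessor function, pred(0) = 0 and pred(y+1) = y, has g = proj^2_1, which returns y itself. Factorial needs it too: fact(y+1) = mult(succ(y), fact(y)), so g = mult ∘ [succ∘proj^2_1, proj^2_2].)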
Thank you, that was very understandable :)
Can you show how to solve it for subtraction?
When you say primitive recursion with parameter 1, what does that mean?
How do we apply the primitive recursion functions?
I.e. how would we apply pred to 3 to get 2?
pred = p^0(zero^0, proj^1_1)
pred(3) = ???
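A sketch of how I'd read it, assuming the convention pred(0) = zero^0() and pred(y+1) = g(y, pred(y)); with that convention g takes two arguments, so it would be proj^2_1 rather than proj^1_1. Then pred(3) = pred(2+1) = proj^2_1(2, pred(2)) = 2, since proj^2_1 just returns its first argument.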
My solution to the challenge:
sub = p^1 ( proj^1_1 , add ∘ [ pred∘proj^3_3 , succ∘proj^3_2 ] )
= p^1 ( proj^1_1 , p^1(proj^1_1 , succ∘proj^3_3) ∘ [ p^0(zero^0, proj^2_1)∘proj^3_3 , succ∘proj^3_2 ] )
isZero = p^0 ( succ∘zero^0 , zero^2 )
Edit: After watching the last episode of this series, I saw there's a better way to define isZero:
isZero = sub ∘ [ succ∘zero^1 , proj^1_1 ]
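A small Python harness (my own sketch, not from the video) for testing definitions like these, assuming the convention h(x1..xk, 0) = f(x1..xk) and h(x1..xk, y+1) = g(x1..xk, y, h(x1..xk, y)):

def zero(*args):
    # zero^k: ignore every argument and return 0
    return 0

def succ(n):
    return n + 1

def proj(i):
    # proj^k_i: return the i-th argument (1-indexed); the arity k is left implicit
    return lambda *args: args[i - 1]

def compose(g, *hs):
    # g ∘ [h1, ..., hm]
    return lambda *args: g(*(h(*args) for h in hs))

def prec(f, g):
    # the primitive recursion operator p: recurse on the last argument
    def h(*args):
        *xs, y = args
        if y == 0:
            return f(*xs)
        return g(*xs, y - 1, h(*xs, y - 1))
    return h

add = prec(proj(1), compose(succ, proj(3)))   # add(x, 0) = x, add(x, y+1) = succ(add(x, y))
pred = prec(zero, proj(1))                    # pred(0) = 0, pred(y+1) = y
print(add(3, 4), pred(3))                     # 7 2

With that in place you can translate any candidate directly, e.g. the isZero above becomes isZero = prec(compose(succ, zero), zero), and check it on a few inputs.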
I got
sub = p^1 (proj^1_1 , pred ∘ [ proj^3_3 ] )
Why not p^2?
Everything has been very well defined here, except the Rho.
I'm not good at math. I don't understand why the recursive case of addition is "add(x, y+1)" and not "add(x, y-1)", since we're going down to the base case?
You are correct that we are heading down to the base case.
The video states (using the definition of primitive recursion):
add(x, y+1) = g(x, y, add(x, y))
Notice that the argument 'y' to the add function on the right hand side is one less than the argument 'y+1' to the add function on the left hand side. This is how we make our way down towards the base case.
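For example, with the base case add(x, 0) = x: add(2, 3) = succ(add(2, 2)) = succ(succ(add(2, 1))) = succ(succ(succ(add(2, 0)))) = succ(succ(succ(2))) = 5.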
May I please ask a question on this topic?
Assume we have a function f(x_1, … , x_n) and we are already given that it is a partial recursive function. Also, we know all the tuples (x_1, … , x_n) on which f is defined, and we know the values of f on them. The tuples, of course, can be assumed to be listed in some order.
QUESTION: is there always an algorithm that allows us to construct f according to the definition of partial recursive functions? In other words, is it always possible to construct f, by means of composition, primitive recursion, and minimization, from the zero and successor functions? Thank you.
this video cured my racism and my cancer
I like it all, but why not say f = proj^2_2 rather than f = zero^1 in example 2 (multiplication)?
By definition, the base case drops the last argument, so f must take one argument. The recursion bottoms out by dropping that last argument once it reaches 0, and f only ever sees the remaining arguments.
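Concretely, the base case reads mult(x, 0) = f(x). Since mult(x, 0) = 0 for every x, f has to be the one-argument zero^1; proj^2_2 takes two arguments, so it wouldn't even fit in that slot.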
Why the fuck don't my uni professors teach like you?
Great explanation, but I have to say this subject sounds boring af 🤔
Computability theory can be very interesting if you're into that kind of mathematical abstraction and understanding what can and can't be computed, growth of algorithms, proving theorems etc. If you only care about learning how to write code then it might not be for you.