I'm honestly so glad these lectures are posted
I have been reading his book for several months now. I didn't know he also had videos. It feels like listening to a living legend. I am happy :)
ikr. I love this. Such juicy and valuable content, for free!
@@aakashSky-0 TOC preparation going well :)
@@HarshKumar-er6jh Hey, go study, don't waste time reading comments. 😂
@@HarshKumar-er6jh From now on I'll have to use alt accounts to comment 🥲🥲
@@aakashSky-0 You're right about that, though 😭
1:00 Recap of Lec#06 - Last time:
- Equivalence of variants of the Turing machine model
a. Multi-tape TMs
b. Nondeterministic TMs
c. Enumerators
- Church-Turing Thesis
- Notation for encodings and TMs (snapshot at 5:20)
2:27 Agenda for this video
HH:MM We showed the decidability of various problems about automata and grammars:
- A_DFA
- A_NFA
- E_DFA : 28:00 (see the reachability sketch below this outline)
- EQ_DFA : 30:43
===== COFFEE BREAK : 38:29 =====
- A_CFG : 40:48
- E_CFG
HH:MM : We showed that A_TM is T-recognizable.
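To make the E_DFA entry above concrete, here is a minimal sketch of the emptiness check as a graph-reachability search, written in Python rather than as a formal TM description. The dictionary-based DFA encoding and the names (is_empty_dfa, delta, accepts) are assumptions for illustration, not the lecture's notation.

```python
from collections import deque

def is_empty_dfa(delta, start, accepts):
    """Decider sketch for E_DFA: L(A) is empty iff no accept state is
    reachable from the start state. delta maps (state, symbol) -> state."""
    seen = {start}
    frontier = deque([start])
    while frontier:
        q = frontier.popleft()
        if q in accepts:
            return False               # a reachable accept state: L(A) is nonempty
        for (p, _symbol), r in delta.items():
            if p == q and r not in seen:
                seen.add(r)
                frontier.append(r)
    return True                        # no accept state reachable: L(A) is empty
```

EQ_DFA then reduces to this: build a DFA for the symmetric difference of L(A) and L(B) via the product construction and test that DFA for emptiness.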
Sir, lots of love and respect from Pakistan. We are following the exact same book, and I was very worried about where to study, and then I found your lectures... Your lectures are very helpful. Thank you so much.
Correct me if I am wrong.
In this lecture we are trying to simulate DFAs, NFAs, and PDAs using Turing machines.
Kind of. We are actually evaluating whether there is a Turing machine capable of deciding the languages recognized by DFAs, NFAs, PDAs, ...
It implies that there is a Turing machine capable of simulating these machines.
@@sergiocaetano3284 Thank you for clearing my doubts
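For what it's worth, here is a minimal sketch (in Python, not a formal TM description) of the simulation idea behind the A_DFA decider discussed in this thread; the dictionary encoding of the transition function is an assumption for illustration.

```python
def decides_a_dfa(delta, start, accepts, w):
    """Decider sketch for A_DFA: simulate DFA B on input w and accept iff
    the run ends in an accept state. This always halts, because the
    simulation performs exactly |w| transition lookups."""
    q = start
    for symbol in w:
        q = delta[(q, symbol)]    # one deterministic step per input symbol
    return q in accepts           # accept <=> B accepts w

# Example: a DFA over {0, 1} accepting strings that end in 1.
delta = {('q0', '0'): 'q0', ('q0', '1'): 'q1',
         ('q1', '0'): 'q0', ('q1', '1'): 'q1'}
print(decides_a_dfa(delta, 'q0', {'q1'}, '0101'))  # True
```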
7:27 first question
At 48:34, the assumption is that there's a bound on the derivation of w being 2|w| - 1, but couldn't the length of input w itself be infinite?
A string in a context-free language cannot be of infinite length (how would you generate such a string with a grammar? How would a pushdown automaton halt when it has infinitely many characters to read?)
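Concretely: since |w| = n is finite and every Chomsky normal form rule is either A → BC or A → a, the derivation length can be counted exactly (this is the standard reasoning, spelled out here for completeness). Each A → BC step grows the sentential form by one symbol, so reaching n symbols takes n - 1 steps; each A → a step converts one variable into one terminal, so n more steps finish the derivation:

```latex
\underbrace{(n-1)}_{A \to BC \text{ steps}} \;+\; \underbrace{n}_{A \to a \text{ steps}} \;=\; 2n - 1
```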
For the acceptance problem for CFGs, why can't we make the decider D_{A_CFG} simulate the equivalent pushdown automaton of the CFG?
That's what I was wondering too
I think it is because the pushdown automaton may have loops, which cause infinite computations, while the clever transformation to Chomsky normal form bounds the number of derivation steps we need to explore.
infinite operations vs bounds, define 'normal' in this context @@die-hard-live-easy9829
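"Normal" here means Chomsky normal form: every rule is A → BC or A → a, which is what yields the 2n - 1 derivation bound. As an illustration of how that bound gets exploited in practice, here is the standard CYK dynamic-programming membership check for a CNF grammar; this is a refinement of the brute-force enumeration of bounded derivations from the lecture, and the rule encodings (unit_rules, pair_rules) are assumptions made for this sketch.

```python
def cyk_accepts(unit_rules, pair_rules, start, w):
    """CYK sketch for A_CFG with G already in Chomsky normal form.
    unit_rules: set of (A, a) pairs for rules A -> a
    pair_rules: set of (A, B, C) triples for rules A -> BC
    table[i][j]: the set of variables deriving w[i : i + j + 1]."""
    n = len(w)
    if n == 0:
        return False  # handle a possible S -> epsilon rule separately
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, a in enumerate(w):                        # substrings of length 1
        table[i][0] = {A for (A, b) in unit_rules if b == a}
    for length in range(2, n + 1):                   # longer substrings
        for i in range(n - length + 1):
            for split in range(1, length):           # w[i:i+split] + the rest
                for (A, B, C) in pair_rules:
                    if (B in table[i][split - 1]
                            and C in table[i + split][length - split - 1]):
                        table[i][length - 1].add(A)
    return start in table[0][n - 1]                  # accept <=> S derives w
```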
You would then run into the same issue that arises if you want to simulate an NFA with a Turing machine. Recall that for the decision problem A_{NFA}, we needed to apply the subset construction to compute an equivalent DFA, and then we used a decider for A_{DFA}. Unfortunately, this scheme does not work for PDAs, since PDAs (which are by definition nondeterministic) are not equivalent to deterministic PDAs (in fact they are quite different in power).
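As a concrete picture of why the subset construction makes A_NFA decidable, here is a minimal sketch that tracks the set of reachable NFA states on the fly (epsilon moves are omitted for brevity, and delta mapping to sets of states is an assumed encoding):

```python
def nfa_accepts(delta, start, accepts, w):
    """Subset-construction sketch: after each symbol, 'current' is the
    set of all NFA states reachable on the input read so far. Tracking
    state sets makes the nondeterministic machine deterministic to simulate."""
    current = {start}
    for symbol in w:
        current = set().union(*(delta.get((q, symbol), set()) for q in current))
    return bool(current & accepts)   # accept iff some run reaches an accept state

# No analogous trick is known for PDAs: determinizing would also have to
# merge unboundedly many possible stack contents, and deterministic PDAs
# are strictly weaker than nondeterministic ones.
```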
I would like to add one comment that I think is important but not emphasized. The concept of recognizable allows the machine to reject an input by looping forever, while the concept of decidable excludes looping: a decider must halt, accepting or rejecting, on every input. I don't know why such an important difference is not addressed more clearly.
With this distinction, the following theorems (including those in lecture 8) make sense.
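To make the distinction executable, here is a hypothetical sketch (machine_step and is_accepting are invented helpers, not lecture notation): a recognizer simulates the machine step by step and therefore inherits its looping, which is exactly what a decider must rule out.

```python
def recognize(machine_step, is_accepting, config):
    """Recognizer sketch: accept if the simulated machine accepts, reject
    if it halts without accepting, and loop forever if it loops. Looping
    is allowed for a recognizer; a decider must always halt."""
    while True:
        if is_accepting(config):
            return True          # simulated machine accepted: halt and accept
        config = machine_step(config)
        if config is None:       # simulated machine halted without accepting
            return False         # halt and reject
        # otherwise keep simulating; this loop may never terminate
```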
I dropped the class after this one
Weak
@@Vikingofriz Computers think they are God. Computers are very limited. My 8-year-old is more intuitive than the most modern computer. Bored 😴😴 I'll take philosophy instead.
Can't implement human reasoning on all grammars. As impressive as computers are, they sure lack common sense.
I don't think that's correct. Remember, we are talking about grammars generating any combination of any finite length. Can you imagine a human parsing a sentence of a few thousand words? Besides, if you restrict some grammar rules, I am sure there will be much more certainty.
@@dtung2008 OK, thank you. Are you saying the restrictions of the grammar rules are as impressive as, or more impressive than, the outputs produced through the constraints of the grammar and the computer?
@@Beatsbeebur I tried to clarify two points that I thought were misconceptions in your comment. I may not have understood you correctly, but let me address what I understood first.
You seem to be expressing that human reasoning is more powerful than machines with grammar rules such as CFGs, maybe because humans can distinguish ambiguity in sentences while a CFG cannot; that is, AMBIG_{CFG} is not decidable.
This understanding is the background of my previous comment. If my understanding is not correct, you can ignore what I said.
To recap my previous comment: 1. I argued that human reasoning can distinguish ambiguity in sentences because their size is relatively small; with that constraint on size, I am sure that even a CFG could distinguish all the ambiguities. 2. So in general, with some further constraints on the grammar rules (such as limiting the size of a sentence, as in 1), one can have more certainty.
With restrictions on the grammar rules, one makes the language space smaller, but the space becomes more understandable. That is not my main point, though; my main point is that when you made your comment, you probably didn't consider the effect of size.