Excellent video! As usual, you have thought this out extremely well and considered many sources. You made many points that reinforced to me that grammar is not a set of rules, but rather a set of patterns. I look forward to whatever your next video will be!
I appreciate that very much, thank you
Thank you for this amazing and educational video. It gave me a number of things to think about.
I have used ChatGPT some for language learning. I knew it is a trained neural network, but I did not realize that it was taught without using grammar rules. Like you said, ChatGPT learns in a way similar to how humans learn language.
I have tried using traditional grammar and vocabulary "book learning" methods to learn Spanish but they never worked for me. If I didn't constantly review the material, I would soon forget what I "learned". Finding the ALG method two years ago was a revelation to me! Finally I found a method that works and is much easier and more enjoyable than studying vocabulary. It also trains my ears from the beginning to understand what native speakers are saying. I am waiting to "output" Spanish until the words emerge naturally inside my head. My understanding is that waiting to speak will make my speech more understandable to natives when I finally start speaking. Of course my Spanish is going to be a mix of European and Latin American Spanish, so maybe no one will understand me! - Dave
Very nice!
The idea of emergent grammar is something Emanuel Schegloff, Charles Goodwin, and other conversation analysts have been pointing out since the 1970s and '80s.
Thanks for pointing me in that direction. After some basic searches, it seems that CA is coming at it from the usage end and doesn't directly lock horns with UG. Nevertheless CA and connectionism still look like a match made in heaven to me. Would you agree?
@@jantelakoman Yes, I would agree that the fundamental ideas are compatible. On the other hand, CA sells itself as an antimentalistic approach. I think it was Harold Garfinkel, whose "ethnomethodology" became one of the main inspirations for the founder of CA, Harvey Sacks, who said, "There's nothing of interest under the skull other than brains," or something to that effect. In other words, as a matter of policy, CA is not concerned with psychological activity, and this would include language learning or acquisition. However, while it's kind of a roundabout way to view things, Language Socialization, which IS necessarily interested in learning language and culture, has often adopted CA as a methodology, and one researcher, Karen Watson-Gegeo, has explicitly considered and argued for the utility of Language Socialization researchers embracing connectionism as a theoretical framework. I have a PDF of the 2004 article where she explicates her points, which I can send your way via Messenger when I get a chance. If you're interested in looking at it, please message me so I don't forget 😅
@@jantelakoman BTW, Harvey Sacks was actually quite critical of Chomsky's separation of proficiency and performance (Chomsky actually uses slightly different terms, but I can't remember right now). Sacks' empirical research on actual conversations demonstrated that "performance," which Chomsky held was a degenerate shadow of the linguistic rules in people's heads, was intricately and deeply ordered and organized. This tenet is directly related to grammar as an emergent phenomenon rather than as a static set of rules in people's heads.
Sorry, I just realized that you probably don't recognize who I am and so you wouldn't be able to send me a reminder on messenger 😂
I'm going to send you a reminder to remind me now 😊
This video is amazing, I hope it gets more reach!
we wanna hear more about the theories of language acquisition.
if we thought of real rules as ways in which "things" emerge out of other "things", then we could say that rules exist. it's just that _rules_ and _our representations of rules_ are different things, just as apples and our representations of apples are not the same. this doesn't support agnosticism a la Kant of course, cuz we are able to constantly revise and adjust our representations of the world through interacting with said world. we try to replicate stuff based on our representations, and the difference between the expectations and the results forces us to change our ways, at first usually unconsciously. and then we even apply the same logic to representing/reflecting our own actions in our minds: these ideal reflections also differ from what we really do.
Thanks for your well thought out comment. I'm not saying rules as such don't exist, there must be regularities in the universe otherwise we couldn't function. However, the rules we propose to explain those regularities are merely models, and "all models are wrong but some are useful." As you point out, we revise our models where they fail to predict reality, in other words we make them more useful. My point is that grammar, as an attempt to model human linguistic behavior as a symbolic logic system, has proven to be useless. Even if you're a realist, there's no reason left to think of grammar as real.
This was awesome! So, so glad it ended up in my recommendations. I’d love to know how, but the engine isn’t transparent
I mostly agree with the video, just wanted to comment on one thought.
I think that just because the grammar formalization comes after the language at some point in history, it does not mean that it is not part of the language, because most people (I can speak for Spain at least) have learned it in school and use it as a self-correcting mechanism when they want to be precise. We can draw an example from arithmetic: the intuition that 123 * 3 is roughly 350 comes before the proper formalization of numbers and their operations, and yet we will normally use the formal system to get the exact answer.
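To spell that example out, the gut feeling gives "a bit over 300, call it 350", whereas the formal algorithm gives the exact answer:
123 × 3 = (100 + 20 + 3) × 3 = 300 + 60 + 9 = 369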
Also, by the same logic of "the formalization is not the language", we could say that proper spelling is not part of the language, because it's just a sound-ish representation and you can get your point across anyway. Yet few people would say that the spelling is not part of the language.
To summarize, my point is that big modern languages (again, here I can speak for Spanish at least) have been heavily standardized and formalized. This formalization exists and is actually used in contexts where exactness is needed, even if that is not the majority of language use. Language is a tool like many others, and more formal tools usually do better.
You're right that the formal conventions imposed on a language can influence it. But it's like tying a tree to a post to make it grow straight. People might consciously try to pronounce a word "correctly" based on how it's spelled or use a grammatical structure "correctly" based on how they were taught it. But this still doesn't change the fundamental nature of language, which is not based on such conventions.
Very good video. Thank you! I think grammar is approximative and is generated by the participant in language (reader/writer).
Here is my thought on grammar - I only came to it recently. If you read an article or a book there are two grammars (at least): the one in the head of the author and the one in the head of the reader. No matter how rudimentary your reading ability, you have a grammar or some thoughts about the regularity of the words in front of you. You can get to be a very fluent speaker and writer of a language as a native speaker with zero explicit grammar teaching, and you still produce grammatically correct speech.
To the extent any reader gains meaning from a text there is some grammar or other, irrespective of whether you learnt any. If you understand it then you have the grammar needed to understand the content, and that is all you need.
If you can read this text, the next text, and the 100 or 1,000 after that, the model, depth and complexity of your grammar will continually grow.
In any case, even if all you do is grammar from morning to night, your grammar in a new language will be at best off, at worst badly askew.
Let me ask the question in a slightly different way. You are to read something. It could be a page of Sanskrit, Hebrew, Koine Greek or modern German. The goal is to read the text with relative comfort and gain meaning from it, using a range of cues and aids.
Who gets there first? My supposition is it's the person with the vocabulary, not the person with the grammar (if you have all the words in the text but no grammar you'll get something out of the text - if you have a perfect grammar but no words you'll struggle greatly). Indeed the person reading the text will generate a grammar (even if rudimentary). By the time they have read 1,000 texts, that grammar will have matured and refined considerably (subconsciously), so that the 1,001st text is a lot easier to read. Whoever reads 1,001 texts in a language will have a reasonably sophisticated grammar irrespective of how they got there. The key is to get to 1,001 or 5,001 or whatever.
Over time the winning system is the one that leads to the most reading or listening for grammar generation. The person who struggles greatly with each text and labours over the rules won't read much, and so at some point there will be an inflection point where the person reading a lot will trump the other models [mostly the others will become exhausted and give up].
You could of course test this with Esperanto or Interlingua. These are made up languages (made up to some extent) and simplified. You will understand quite a lot of Interlingua without having studied it and fairly soon you'll start to deduce some grammar spontaneously. You could invent your own language and invent verb forms for present, past and future and a subjunctive. Get someone to read a text and ask them how to form the past tense - the reader will (eventually) tell you, not the other way round. The reader does not require an outside informant to tell them.
If you don't know the case system you won't know if the man bit the dog or the dog bit the man...... Our real-world knowledge answers this conundrum, and thus the case system will eventually be deduced from real-world examples [if the next paragraph has the man running away with a bleeding hand.....].
Thank you for your comment. I think you're essentially right in what you say.
As you know from my video, I'm proposing that grammar is simply a generalization of how words behave, in the same way as the "V" formation of a flock of birds is simply a generalization of how the individual birds behave.
So it stands to reason that to know a word is to gain more and more experience of what it "does": what it means, which words it likes to go with, how it jumps around in a sentence and why, and so on. The more words you know, and the more you get to know them, the less surprising their behaviour becomes to you.
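To make "which words it likes to go with" a little more concrete, here is a minimal sketch in Python (my own toy illustration, not anything from the video): it does nothing but count which word follows which in a tiny sample of text, and even those raw counts already encode collocational preferences without a single rule being stated.

from collections import Counter

# Toy corpus; real learning would need exposure to vastly more text.
corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the dog").split()

# Count how often each word is immediately followed by each other word.
neighbours = Counter(zip(corpus, corpus[1:]))

# "sat" is followed by "on", "the" by nouns, and so on: a crude picture of
# what each word "does", generalized purely from exposure, with no rules.
for (w1, w2), n in neighbours.most_common(5):
    print(w1, "->", w2, ":", n)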
@@jantelakoman Obviously in the real world you have some words and some grammar when you come to a text.
If you have conscious grammar you are like the people cracking Linear B or the code breakers in WW2. If the grammar is a complex set of probabilities that you might not hold as conscious knowledge, then you become a reader and your grammar is generated with each text (obviously you gain an immense amount from what you have read in the past).
Very interesting and information-dense video, thank you! Connectionism makes a lot of sense as the way we learn languages, but I have to look at the resources you linked to understand it better. I don't know if I get some things mixed up, but the way neural networks learn languages explained by connectionism reminds me of the declarative and procedural models of learning by Michael T. Ullman as presented in the Coursera course "Uncommon sense teaching". I wonder if generativism/connectionism and declarative/procedural modes of learning express similar ideas. Also what you are saying around 50:10, that you can take the exact same neural network, start it again with new random parameters and it can solve the same problem but come up with unique solutions, as well as the earlier talk about pattern recognition, reminds me of "Chaos" by James Gleick. I would like to ask, if you have read Chaos, do you think it is relevant to language learning? Finally, the part after 54:00 where you clarified that there is a consensus that Comprehensible Input, no matter if it is called that or something else, is essential in language learning, and the debate is not Input vs Not Input but Input Only vs Input Plus, was very helpful.
Those are fascinating connections!
I think connectionism is a promising avenue at the hardware end for explaining why and how it can be that there are two kinds of learning/knowledge that are so different and so limited in their interface.
As for chaos theory: If syntax is emergent, then why not apply chaos theory to linguistic drift? What a wonderful idea. Totally impossible with Chomsky in the room, of course.
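Coming back to the point above about restarting the same network with new random parameters (around 50:10), here is a minimal sketch using scikit-learn, purely illustrative and not anything from the video: two copies of the same tiny network are trained on the same toy problem from different random starting points; both typically solve it, but the weights they end up with differ.

from sklearn.neural_network import MLPClassifier
import numpy as np

# XOR: a tiny problem that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

nets = []
for seed in (0, 1):  # same architecture, different random initialization
    net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                        max_iter=5000, random_state=seed)
    net.fit(X, y)
    nets.append(net)
    print(seed, net.predict(X))  # both runs should reproduce [0 1 1 0]

# The internal weights that achieve it are not the same across the two runs.
print(np.allclose(nets[0].coefs_[0], nets[1].coefs_[0]))  # typically False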
I can't help thinking that Lexical Priming (Michael Hoey) would be of interest to you. The "priming" of collocational patterns of language use seems (to me) to support the kinds of ideas you have here...
Much appreciated, I'll look that up
Have you ever studied nonsense? It's rarely taken seriously, but there's a book on the subject called Making Sense of Nonsense by philosopher and psychiatrist Raymond Moody.
No, not specifically. Thanks for that recommendation, I'll take a look
Are you able to reupload this and make the videos a little louder?
Yeah sorry about that. I'm still in two minds
Great fucking video.
Video sound is too low
Apologies 🙏
25:08 I'm flabbergasted hearing Chomsky say that. Not contributing to science? Seriously?! AI on 'ye olde' binary computers has already outperformed quantum computing in statistical tasks. Which, to remind you, is a lot of what science does, especially on the quantum level of physics. Stunning to hear him say that.
It's surprising that someone of his stature feels the need to indulge in such zingers, no?
@@jantelakoman It's not a zinger. It's just a more traditional view of what scientific inquiry is about, one where there is a clear dividing line between science and engineering. It's a principled position he has held from the outset, long before AI came around: “A computer program that succeeded in generating sentences of a language would be, in itself, of no scientific interest unless it also shed some light on the kinds of structural features that distinguish languages from arbitrary, recursively enumerable sets”. - Chomsky, 1963.
I take your point, and thank you for the quote. The reason I thought of his comment as a zinger was because it was delivered with such disdain and gusto
@@jantelakoman Well, that's the whole point of your critique, wasn't it? We, as humans, tend to think of our ability to structure and form sentences as a special conscious effort, while if a machine does it, it's just a marvel of engineering. What AI and machine learning have taught us is that the way the machine processes a query and responds tells us a lot about our own natural neural network (our brain). Chomsky was certainly right back when computers spat out simple sentences in response to a hard-coded query, before the advent of modern machine learning. For him to hold that same view to this day seems somewhat dated to me.
Yes, that is my point. I'm just clarifying what I mean by zinger. I'm sure he meant what he said; it's just the way he said it
none of these are really arguments against what chomsky actually said, though. generativism doesn't say that "doyouwanna" must be analyzed as PresDO-2ps-infWANT-inf; "chunks" already have a name: rebracketing. when you ask a native speaker to explain something they will give you an explanation that makes no sense - yes, UG is unconscious. that we have touch etc. says nothing about paucity of input - we are referring here specifically to input regarding how words may and may not be ordered, and a child will receive in a couple of years much less than is contained in the entire internet. most importantly though, humans do not predict the next token. we communicate meaning. at least, hopefully, most of us do lol. "AI" is just a markov chain that has enough input to be convincing and it will never be anything more.
edit: I've just realized how few views this video has- I probably would have been kinder and more well thought out if I didn't expect this to be buried lmao. oh well. also, I'm not referring to the very technical aspects of what a markov chain is, just its function
suppose I'll just add that you're 100% correct about each member of the population having solved the language a bit differently, and that from the point of view of somebody whose goal is to learn a new language and become fluent, implicit learning is best - we're not prescriptivists, after all! generativism was never about what the best way to learn is - it's about how humans and rats are about equally good at solving mazes, but very, very different when it comes to representing meaning using abstract symbology, and that is interesting and seems to imply something about our biology
I don't think you've been unkind at all, but I think you've missed the point.
Generativism specifically claims that some of the *rules* of language are innate and not learned, and that these *rules* exist independently of any specific linguistic forms ("context-free grammar"). The unspoken assumptions are firstly that the brain deals in symbols and rules all the way down, and secondly that anything that looks like rule-based behavior must in fact be rule-based.
It's easy enough to point out how neural networks are different to our brains, and I've conceded as much in the video. What I'm asking you to do is grant me the respects in which they are similar, namely distributed parallel processing.
But the consequence of that is that you can represent patterns without any rules or symbolic systems at all, and after half a century of struggling to teach computers language through symbol systems, it's the "patterns without rules" approach that has actually worked.
At the point where I'm talking about chunks I'm trying to give someone who has only ever thought that "language = grammar + vocabulary" a new way of thinking about how language works. Instead of discrete objects that fit an idealized description of a word, we start with repeating elements that nest and overlap, and then generalize patterns. LLMs are proof of concept that nothing further is needed.
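As a toy illustration of "patterns without rules" (my own sketch; a real LLM is vastly larger and differs in mechanism, but the spirit is the same): record every continuation you have seen after each word, then generate by sampling from what was seen. No parts of speech, no rules, yet the output reproduces the chunks and word order of the input.

import random
from collections import defaultdict

# Toy "corpus" of running speech, treated as a flat stream of words.
text = ("do you wanna go home . do you wanna eat now . "
        "i wanna go now . do you know him .").split()

# Record every continuation observed after each word.
follows = defaultdict(list)
for w1, w2 in zip(text, text[1:]):
    follows[w1].append(w2)

# Generate by repeatedly sampling a continuation that was actually observed.
random.seed(0)
word, output = "do", ["do"]
for _ in range(10):
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))  # prints a short chunk-like string learned purely from the input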
Grammar is the order in which words are organized to arrive at an idea.
The reason English does not seem to have grammar is that it has many grammatical systems from many languages and dialects mixed together, and if on top of that you add the most colloquial register of conversation, it is obvious that the grammar is not going to exist.
But that does not mean there are no more orderly languages, such as Mandarin Chinese, Malay, or Japanese.
Thank you for your comment. However, I think you haven't grasped the central point.
What you mention are irregularities; perhaps English has more of them than other languages, but all languages have them.
My argument is this: many people think that grammar somehow exists as a perfect template of a language, and that everything flows from that template. I am saying that this is wrong; language is learned from the bottom up. There are patterns and regularities that emerge, we try to model them with a system of rules, and grammar is the name of this imperfect model.
(I used ChatGPT to translate this comment.)
Your talk is good. Because of its length I didn't want to listen to it for a long time, but I ended up listening to it, and it conveys good knowledge.
Thank you.
Ah, nice! This video is very good to me. I feel this: though you have few followers now, you will come to have many.
You have changed my mind. In the past, I thought I could learn a language using little vocabulary lists. Now, I want to use this method of yours.
Unfortunately, my head is not big, and my mind is always wandering; I am forever chasing one different method and then another.
A long time ago, I studied Linguistics. But its ways felt big and unpleasant to me. I kept thinking: why is Chomsky so important? Everywhere, scholars speak of Chomsky in lofty terms, and at the same time they also speak of him in guarded terms! Strange :0
Maybe that is why I learned toki pona. Toki pona is small and sweet, but its way of thinking has changed a lot of my own. Through it, I found this video. Maybe I am coming to understand what is important about Linguistics. Knowledge comes full circle.
(If my toki pona is messy, I apologize. I learned toki pona with my own head, and my head is small and strange. Given that shortcoming, learning by listening and speaking is good.)
You have expressed your feelings well in toki pona. I feel the same as you about the way of learning languages.
For me, the most important thing is this: everyone is able to learn a new language. But many people use a method that is like mathematics, and so they do not learn, and cannot use the new language well.
These people want to speak the new language without effort and to speak as fluidly as water, but they cannot. They feel: "I am bad, my brain is broken and weak." But that is not true. Language is not numbers, and people cannot learn a language the way they learn numbers.
When you understand the meaning of what people are saying, the inside of your head grows strong, listens well, and learns the ways of the language well. I want to give this knowledge to everyone, because it gives people a good path and good feelings.
This is great btw
Take your own point. Everybody knows this intuitively. No need for the scientific proof. We believe you.