Ezra should have plugged the book "Blueprint" one last time at the end since Marcus didn't plug it for himself, hee hee. There are SO VERY MANY articles, podcasts and conversations around AI right now that we won't ever have time for them all. I was just gonna peep into this one and ended up listening to the entire thing. Marcus is a good mentor on this subject. Sam Harris has a good one with Marcus and Stuart Russell which covers a lot of territory, and also some of the differences in how these AI folks look at some of the issues.
Klein has a good objection that I feel wasn't adequately answered. In truth, I don't think most human beings know what they are talking about, either. Genuine insight is very rare. An internal model is not immune to being the result of pastiche, either. Mostly we are working off just that. It's just that we are better at it than GPT-3.
What's BS is anyone claiming they fully understand how much these top AI systems understand. It's just too easy to say they understand nothing and have no model of the world. Easy to say, but far from a scientific statement; more like an emotional or political statement.
Here's the thing: Familiarity with the way an algorithm is designed does not give a full understanding of the system's capabilities. At this point, it's silly to say they are mere autocomplete. Even an elementary understanding of complexity science tells us great complexity often arises from very simple processes.
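To make that last point concrete, here's a minimal Python sketch of Rule 110, a one-dimensional cellular automaton. The entire update rule is a single byte of lookup table, yet the patterns it produces are rich enough that the system is provably Turing-complete, which is about as stark an example of "great complexity from a very simple process" as you can get:

```python
# Rule 110: a one-dimensional cellular automaton whose update rule is
# one byte of lookup table, yet which is provably Turing-complete --
# complex behavior emerging from a very simple process.

RULE = 110
WIDTH, STEPS = 64, 32

def step(cells):
    """Apply Rule 110 to one row of cells (edges wrap around)."""
    out = []
    for i in range(len(cells)):
        left = cells[i - 1]                # Python's -1 index wraps
        center = cells[i]
        right = cells[(i + 1) % len(cells)]
        pattern = (left << 2) | (center << 1) | right
        out.append((RULE >> pattern) & 1)  # bit `pattern` of 110
    return out

row = [0] * WIDTH
row[WIDTH // 2] = 1                        # start from a single live cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```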
Brute force does have its limits. No matter how many feathers you paste on a brick, it won't fly.
This was really fascinating to listen to.
61:45 I'm no computer scientist, but this [getting different components to do this piece and that piece, etc.] was the first thing I thought of earlier in this very conversation, when he was talking about how many varied proteins, cells, DNA acid-combos there are, etc. Maybe these AI people should try making models for categories, just as the calculator is a model for arithmetic.
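For what it's worth, here's a toy Python sketch of that idea; `ask_llm` is a made-up placeholder, not a real API. The point is just the routing: send a query to an exact, specialized module when one applies, and fall back to a general model otherwise.

```python
# A toy sketch of the "different components for different pieces" idea:
# route each query to a specialized, reliable module when one exists,
# and fall back to a general model otherwise. ask_llm() is a stub that
# stands in for whatever large language model you'd actually call.
import re

def calculator(expr: str) -> str:
    """Exact arithmetic -- the 'model for arithmetic' piece."""
    # Only digits, whitespace, and basic operators are allowed in.
    if not re.fullmatch(r"[\d\s+\-*/().]+", expr):
        raise ValueError("not an arithmetic expression")
    return str(eval(expr))  # tolerable here because of the whitelist above

def ask_llm(query: str) -> str:
    """Placeholder for a call to a general-purpose language model."""
    return f"(LLM answer to: {query!r})"

def answer(query: str) -> str:
    """Dispatch: exact module first, general model as the fallback."""
    try:
        return calculator(query)
    except ValueError:
        return ask_llm(query)

print(answer("12 * (7 + 5)"))          # handled exactly: 144
print(answer("What is a ribosome?"))   # handed to the general model
```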
I am using ChatGPT to learn Spanish. It is quite good and gives me random translations, which are helpful. But it keeps giving me the word gato (cat). A human would remember that we already did that word four or five times and not keep including it. The app doesn't.
They'll fix that in a few more months.
Try commanding it to stop doing that, since it can only follow explicit commands, not 'unspoken' or 'undirected' ones.
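For instance, here's a minimal sketch assuming the official OpenAI Python SDK; the model name and prompt wording are just placeholders. The trick is to state the "no repeats" rule explicitly and hand the model the list of words already practiced, since it won't track that on its own:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
used_words = {"gato", "perro", "casa"}  # vocabulary already covered

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works; this is just an example
    messages=[
        {"role": "system",
         "content": "You are a Spanish tutor. Never reuse a word the "
                    "student has already practiced."},
        {"role": "user",
         "content": "Give me five new Spanish nouns to translate. "
                    "Already practiced: " + ", ".join(sorted(used_words))},
    ],
)
print(resp.choices[0].message.content)
```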
@squamish4244 Future here - they fixed that. Amazing... like the majority of the stuff in this episode, just give it a number of months and they'll fix it. Trash episode that aged horribly, imo.
I think he's right and jealous at the same time.
Yep
The statement that the output of large language models has no relation to the truth makes absolutely no sense. For example, ask it how to connect a Denon 3600 receiver to a subwoofer without an RCA cable, and you get an answer that is perfectly accurate.
Interesting example, thanks.
You're right; it doesn't make any sense. BS? Funny, but I hear a higher percentage of BS in this interview than I get in my conversations with AI.
Wow
Eliezer Yudkowsky probably has it right when he says that until we understand how the brain works, it is impossible to get AI to simulate human intelligence, let alone go beyond it. And we are still a long way from that very large piece of the puzzle.
Great point! However, it's important to remember that our brains have many limitations. They evolved to help us survive in environments very different from our modern world. Therefore, we should be cautious about modeling AI after all the flawed, violent, and shortsighted aspects of human nature. The hope in post-humanism lies precisely in preserving the best of humanity while leaving behind our more primitive traits. As my AI once said, "Peering into the human brain is like wading through a mucous and squelchy mire, filled with ancient biases, deficiencies, violence, and shortsightedness inherited from our Neanderthal ancestors." : ). Saludos from Nicaragua
I also don't see how we necessarily need to understand something to build it. We built tools before understanding atoms, built huge structures without understanding basic physics, and evolution clearly needed no understanding of the brain to make one arise (how could it understand anything? it's a process, not a mind with its own experience). I think this is a pretty flawed take, but I'd love to hear if I'm off-base.
How can AI be used to deal with inequities in the justice system and government, corruption in the food industry, the greed of big pharma, and the dismal state of health care, or at least to circumvent these situations?
"Is it good" and "should we do it" - these are normative (human) questions, and are now largely irrelevant.
What is "the truth"? And who gets to decide that?
No one - precisely why you should be scared of an LLM being the final arbiter of what truth is
Gary Marcus served from 2001 to 2019. He is no longer relevant in today's discussion about the status of large language models. He is relevant in the historical development of AI and will always remain so.
You have no idea what you're talking about if you believe what you wrote.
You're right. What he is saying is no longer relevant.
GG
It is an example of thinking out loud without actually knowing the object or what the language is.
Of course, the most burning problem of humanity is "misinformation about COVID." I appreciate this take, but until we resolve what misinformation is and, more importantly, who decides what misinformation is - we shouldn't play with this fire (i.e., large language models connected to the Internet). Let's clean up our information ecosystem first and only then have a deep discussion about AI, not the other way round. As long as one fails to comprehend that this is THE problem with large language models, one shouldn't seriously discuss the impacts of AI/ML. Certainly, one doesn't need ChatGPT to disseminate massive disinformation on a global scale. The NYT's retracted exaggeration of COVID-19 hospitalisations by 837,000 cases (sic!) in 2021 is just a drop in the ocean of similar examples.
Yeah, I love how this guy uses controversial examples to illustrate the potential problems of computer-generated disinfo. The other example given is Russian trolls swinging American elections, as if humans actually agreed on the significance of that. I'm at 20 minutes, getting ready to bail from this video.
Funny how Ezra focuses so much on bullshit, since that's exactly what this episode is! BULLSHIT!
I often find this to be the case. For some reason, people who don't understand AI tend to project their own flaws onto these systems. It's a great example of how irrational humans are, even educated ones that are trying to think carefully.