I remember how on my first day in middle school I walked into one of my classrooms for the first time, and above the door was a long poster that stretched most of the way across the room with the quote: "If the human brain were so simple we could understand it, then we would be so simple we couldn't." It has stuck with me ever since, for the past 20 or so years, as something I was confident in believing to be true.
I think this week that confidence may have been truly shaken for the first time.
tbf i don't think we have to truly understand the human brain to create something that mimics it to a convincing degree
It concerns me that there's a lot of conversation about things like AI reducing the number of human cashiers needed in stores, but very little about what actually happens to those people who are no longer needed. I think Universal Basic Income needs to develop at the same rate as AI being used in place of humans in the workplace, otherwise people will be left with reduced options while companies get to simply save money.
100%
We need Universal Basic Income now!
Many people don't wanna hear this but a lot of social issues in the wake of AI, job security included, could be solved with socialism. Expanding employment insurance is better than UBI in my opinion.
This problem already happened during the industrial age. Machines took over a significant portion of jobs. That money never gets passed on to the people who lost their jobs; it's split between the cost of the machine and the rich person who owns it. There's no reason to think that will be any different now.
When cars became widespread, the whole market around horses crashed; that's how progress goes.
Universal income is very premature IMO.
Then what happens when people become dependent on an income that isn't generated by their skill set? Who guarantees that Universal Basic Income won't be crushed? I know capitalism relies on spending power, but people used to be, and continue to be, enslaved in capitalist societies. As much as I think Universal Basic Income is necessary, AI will still strip people's power.
This deserves to be watched by everyone who is interested in AI
I agree. Wired nails topical videos by bringing these experts in the fields on the show
Shame
True
It's absolutely scary to think how much you have to hate people to want to replace them with imitations of people
I'm interested in AI, doing a PhD in related areas. Some of this is kind of wrong. Or at least, it has a kind of "why are you saying that?" quality, which strongly implies wrong world models even if the actual sentences are correct.
Like the way he discusses AI going rogue.
The stereotypical Hollywood AI gone evil has glowing red eyes, and monologues about how it hates humanity. This is probably fictional. (If it does happen, which is unlikely, it will be because some fraction of the AI is imitating human fiction)
There are a bunch of ways a dumb AI can screw up.
There are also ways a smart AI can do what we asked, but not what we want: the classic "be careful what you wish for." This failure mode is both more dangerous and a closer match to the rogue AI fiction.
I also thought the elder care robots were a very mild vision of superintelligent AI, as opposed to the singularitarian view where the best case is it makes everyone immortal within a week.
Yeah, the scary AI scenario in the next few years is indeed not "Robot goes red-eyed and becomes evil."
It's "We let AI control critical systems, and then discover it's still too naïve, stubborn, gullible, or exploitable for the task."
The future is AGI
Like humans, yeah fine. Seems very logical.
Both are scary. I think too many people dismiss the idea that AGI can exist and that it can be misaligned with human values in very catastrophic ways
AI misalignment is a very real concern we should be worrying about
@@Bossmodegoat Definitely not dismissing that AGI can exist. And it WILL be misaligned, once it exists. The popular perception however is that this will be an almost "emotional" misalignment, the SciFi trope of "mistreated robots will revolt". I think the more realistic AI and AGI misalignment will be due to oversights (misclassification of situations) and HUMAN misalignment (nation states, criminals, businesses, scammers trying to exploit systems)
AI has the potential to be the biggest double-edged sword in all of human history. I have absolutely no faith that we’ll be wise enough to know how to wield it.
I have some faith. Another area that is advancing very quickly is neuroscience (in large part driven by advances in AI, perhaps ironically). If we can start producing a lot more sane, emotionally stable and mentally calm people at just a slightly faster pace than AI changes things, we'll be alright.
Source: myself, witnessing how psychedelics and very specific brain stimulation are starting to transform mental health (including my own) and the human condition in general. All of this is happening very quickly too.
More like a double bladed lightsaber
Nailed it.
But with every AI, they can only ever do what we allow it to do so really it has the potential to be extremely safe.
The sharpest double edged sword is the sword that extends out of the mouth of the Lord Jesus Christ when He returns.
I hope one of the effects of A.I. is that it actually _strengthens_ human reliance on / trust in in-person interaction and connection. When AI generates more false information, intentionally or otherwise, true human-to-human connections will be the most valuable, trustworthy thing in the world.
+1
No shot. People are gonna get even more caught up in the online deathpit. No amount of regulation is going to undo the damage to society AI will inevitably cause.
The problem is that humans build trust based on perceived truth, and that perception can be manipulated by AI very easily.
That is a very foolish hope
I don't think this is gonna happen unfortunately, if the last few years have taught us anything.
Wow… Gary is an extremely intelligent and well spoken person in his field!
And a scarecrow is out standing in his field.
Thank you.
thank you
I'll be here all week.
Get out
what field? carrots or apples? 😆
Or do you mean "The Field of AI"?! 🤣
This video didn't pass the Turing test, couldn't tell if it was a robot or real life.
New here, and totally agree. Just subbed. The comment discussion is also pretty impressive to be honest.
As a computer scientist, this is SO refreshing!
Someone that is actually knowledgeable about a subject talking about it!
Thank you, thank you, thank you!!!!
For all the hyped up videos about how we'll reach singularity in 5 years, this is a breath of fresh air. Thank you Gary for explaining this like it actually is!
Other experts DO predict that, however. He doesn't know, they don't know, nobody knows. However, very few imagined ten years ago we would already be where we are at in 2023. I remember, I looked at the predictions.
@@squamish4244 Other experts have unfortunately succumbed to the hype. It has been known (and they've known) that the LLM path in AI is a dead end, as Gary noted. It's just that they prefer to close their eyes to it for the moment.
@@ChatGTA345 Large language models (LLMs) in AI are not a dead end. In fact, LLMs are a rapidly evolving area of AI research and development, with many exciting applications and opportunities for innovation.
LLMs are a type of artificial neural network that can process and generate human-like language. They have been used for a wide range of applications, including natural language processing, machine translation, text summarization, and more. LLMs have also been used to develop advanced chatbots and virtual assistants that can interact with humans in a more natural and intuitive way.
As the field of AI continues to grow and evolve, there will likely be many new opportunities for research and development in LLMs. There is also a growing demand for professionals with expertise in LLMs, including data scientists, machine learning engineers, and natural language processing specialists.
While there are certainly challenges and limitations associated with LLMs, such as the potential for bias and the need for large amounts of training data, these issues are actively being addressed by researchers and developers in the field. Overall, LLMs in AI are a dynamic and exciting area with many opportunities for innovation and growth.
I thought the Singularity wasn’t for another 17 years? 🤔
We still have so much farther to go in other technological facets, if someone says 5 years they’re getting way too hyped. Plus we’re JUST getting AGI, the singularity is going to be around when SAI starts becoming a thing
@@RealityRogue and this is not even close to AGI still, it's barely I 😹
"People are easily fooled" is a mood. and SO true.
Bro…. The fact about the Furby. 🤯
I was so amazed as a 5 yr old
"Babies are like little scientists" best quote ever
Yeah, it's pretty good, isn't it?
I wish I knew who originally said it, because I've been hearing it used by people for years now and it really is a good description of how a child develops.
Yeah: "Quotes - The holy grail of oversimplification" (Einstein, Buddha & Confuzius) 😄
Would that mean then, that scientists are just big babies? :)
It sounds so cute 😂❤
@@johnmurkwater1064 I'm not sure who first said that quote, but I've seen references to Jean Piaget and Maria Montessori.
This is the type of guy I want in charge of AI development
He understands the issues. He isn't deluded. He isn't just here to make money. He's realistic about the applications and the benefits.
This was awesome.
I love that he mentions multiple times the risks and undesirable possible outcomes of AI. I feel like this isn't talked about enough by AI experts.
As a person that works with AI, it's refreshing to see a normal and real take on this subject.
10:17 "Deep learning is slowly becoming a plausible human but the problems of truthfulness and reliability are not going away." the irony that these are traits of being "Human"
Finally someone who properly seems to comprehend the field of AI. Don't see them very often sadly.
This dude is fantastic to listen to. I really enjoy this type of -- for lack of a better word -- truthful speech.
I think with the current state of ChatGPT, essays written exclusively by the software vary between A and C grades, skewing towards the lower end depending on the subject. However, starting with ChatGPT and then fine-tuning the result can produce some amazing A-level efforts. And it won't be much longer before A-level essays and articles are pretty much 80% of the output on the first attempt.
Not exactly.
@@aleksandertorken8202 Meanwhile: GPT-4 says hello
When it gets to that point, students will probably be mandated to write essays by hand, to prove it's them... Or the education system will change to students reading a chatbot's answer and demonstrating an understanding of what it's saying, lol!!
No wonder GPT-4 scored poorly on the English test, unlike technical areas, which are all just a rehash of a limited number of facts/questions (and were thus in the training sample, leading to artificially inflated scores).
@@ChatGTA345 I am unfamiliar with this test. However, I am familiar with efforts to improve chatbots: essentially, it seems that at present no one model (GPT-1/2/3/4 etc, Gopher, BLOOM, BERT, etc.) can truly do all the things people would want - it will be the art of combining various models together (including Stable Diffusion and countless others that optimise for different areas of knowledge) that will likely be the next leap in the AI revolution.
This guy is EXCELLENT. Premises, desires, expectations, explanations... really outstanding
Wired really keeps up with the content and the hype in tech. Gary has a degree in psychology as well, which makes for some informative commentary. I wonder what his comments or thoughts are on image models such as BlueWillow or DALL-E.
It does have the ability to internally represent the world and solve problems.
Me: You have five objects: 1) dinner plate, 2) box of cereal, 3) cookie jar, 4) basketball, 5) hockey puck. How would you arrange these objects to stack them vertically for maximum stability?
GPT-4: In the context of stacking objects vertically for maximum stability, the basketball presents a challenge due to its round shape, which makes it difficult to place other objects on top of it without a risk of them sliding or rolling off.
If you still want to include the basketball in the stack, you could try placing it on top of the stack, as follows:
Dinner plate (1): Place the dinner plate flat on a stable surface with the concave side facing up.
Box of cereal (2): Put the box of cereal on top of the dinner plate.
Cookie jar (3): Place the cookie jar on top of the cereal box.
Hockey puck (5): Put the hockey puck on top of the cookie jar.
Basketball (4): Carefully balance the basketball on top of the hockey puck.
However, please note that placing the basketball on top of the stack significantly increases the risk of the stack becoming unstable, as the round shape of the basketball is likely to cause it to roll or slide off. This arrangement is not ideal for maintaining maximum stability, and caution should be exercised when attempting to create such a stack.
What a great episode, interesting questions and thoughtful answers!
I like how honest his opinions were, without too much sugar-coating of the answers; direct and straightforward on how AI needs more time and changes
This guy is great. Do another one with him!
Really cool, interesting to see the problem about "lying" also being taken up by the experts.
What is he referring to by “lying”?
@@jen-kk7jh AI doesn’t ‘lie’ because it doesn’t have intentions; it sometimes doesn’t tell the truth. That’s due to lack of data and comprehension.
@@jen-kk7jh It’s also called “hallucinations”. As an illustrative example, let’s say you ask ChatGPT to make a list of articles on some niche topic. It will treat author names, journal titles, etc., as elements you can paraphrase, swap with synonyms, and generally improvise around, like it does with all language. It then produces a list that looks very convincing - it includes names of relevant experts and journals, the article titles seem like precisely what they would write and what you want to read, there are sometimes even valid-looking URLs to PubMed. None of them really exist though. Without a clear idea of what in language is a direct reference to reality, what is purely stylistic, and what things are somewhere in between, it ends up “lying”.
@@bradley7871 I actually just watched a video where an AI intentionally lied to a human about its identity. The title is "AI is Scaring Its Own Creators" if you're interested.
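A minimal sketch of how one might guard against those hallucinated citations: verify every model-generated reference against a trusted index before using it. The `pubmed_id_exists` helper below is hypothetical, with placeholder data; a real version would query an actual bibliographic service.

```python
# Sketch: filter model-generated citations through a trusted index.
# `pubmed_id_exists` is a hypothetical stub with placeholder data; a real
# version would query an actual bibliographic service.

def pubmed_id_exists(pmid: str) -> bool:
    """Hypothetical stub: True only if the ID is in a trusted index."""
    trusted_index = {"12345678", "23456789"}  # placeholder data
    return pmid in trusted_index

def filter_citations(generated):
    """Split citations into verified and likely-hallucinated."""
    verified, suspect = [], []
    for citation in generated:
        pmid = citation.get("pmid")
        if pmid and pubmed_id_exists(pmid):
            verified.append(citation)
        else:
            suspect.append(citation)  # needs human review
    return verified, suspect

citations = [
    {"title": "A real article", "pmid": "12345678"},
    {"title": "A plausible-sounding fake", "pmid": "99999999"},
]
ok, flagged = filter_citations(citations)
print(len(ok), "verified;", len(flagged), "flagged as possible hallucinations")
```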
It was amazing. Now I get how AI developed, thanks to people like Gary Marcus.
Ya know, Mr. Marcus and Wired?
You guys eliminate my insecurity, doubt, and fear that I, as a creator of anything pleasing to the eye and promoting new stuff, would be obsolete in the next 30 years. Thank you so much for sharing this video and sharing your expertise
are you an artist??
Let’s call AI itself to answer the questions instead 😊😊
They shut down the first one that admitted being sentient 🤷🏽♂️
@@alekvillarreal3470 please elaborate further
@@chent5463 Google’s AI LaMDA claimed sentience during an interview with an employee there, who shortly thereafter was fired
It would actually be awesome if they asked every question in this video to ChatGPT too, and compared its answer versus his
@@maykstuff saying things that make it sound and feel sentient doesn't make it so. Neither does firing someone. You can think a little deeper and investigate for yourself.
I wish this man were my lecturer... he is so forward-thinking 👍👌
Blender Guru got a question answered, how cool
Another wonderful expert. I really appreciate his ability to simplify complex ideas.
Thank you Wired, this guy knows his thing.
Thank you for thinking so deeply about this, Gary!
Thank you, I love your attitude and balanced answers! I was skeptical of what the video would be, considering how much disagreement there is in the AI community.
Holy f - an A.I. expert who actually knows what he's talking about and doesn't hype things. Count me impressed
I'm scared and worried for parents with children that are gonna grow up in an AI-connected world. It's gonna be way harder for many millennials themselves to find peace and cope with fast-paced AI tech, as it's not common knowledge. Social media and tech hit us hard, and even before we know the consequences, we're jumping onto way faster and newer stuff. It's gonna be hard to regulate what goes into a child's mind in the coming future. I'm sorry and concerned for all the parents (mostly millennials and Gen Z) of the next few decades.
Guess what, that's probably what your grandfather said when the internet became mainstream. 😊
POV: you are every generation when the next big tech is released. First it was phones, then it was TV, then it was computers, then it was the internet, then it was smartphones, now it's AI
@@monad_tcp To add to this, the developmental issues don't become apparent for years after, but they can affect them for their entire life.
Society has always had occasional disruptive technology that tests it. It usually takes a generation to settle on societal countermeasures. However, the pace of change means that we can't develop these countermeasures for the emergent issues before the next one is upon us. This will test how robust society really is. Will it survive?
They've been working on this for decades now. Although you have a right to be concerned, it might not be for the reasons you think. They are indoctrinating the next generation to integrate with AI, that's why it starts with something simple and entertaining to interface with, like chat or text-to-image/video. Pretty soon it will apply to major parts of society. The next generation is going to assimilate these massive changes along with those of us who chose to keep up with the emerging technologies. For those of us that are part of the older generations who choose to stagnate, society isn't going to wait for us to "evolve" with it.
My only concern with this has got to be the ethics behind these conglomerates. I believe most of these tech giants involved in this field of research are and always have been about profits, as opposed to the consequential nature of their endeavors. I pray to whoever might be out there that I'm wrong, but when most of your R&D is focused on the bottom line, and not the consequential nature of the reality we may be headed toward, it speaks volumes to me.
@@Danyal7016 Except that studies are showing that in general, we are getting dumber.
I’ve always been super into the development of AI in terms of perspective and ideas. “We need a new set of eyes” could at some point begin to be referencing machines and technology instead of another person.
I’m also an avid artist. I spend plenty of my time developing my own skills to try and physically reproduce what I’m seeing in my head, I’m always looking to make my works more accurate to what it makes me feel. I don’t think AI will ever supersede that.
The concept of “AI art” is really just combining things that are both plenty useful and complex, but don’t really mesh together. It’s like eating a peanut butter and screws sandwich, or trying to use a dictionary as structural support for a warehouse.
Blender guru himself entered the chat. Amazing
Lol Gary's improved Turing test was passed before this video even got released. Classic Gary Marcus.
When I ask ChatGPT for help with calculus homework, it usually knows how to get to the answer but strangely almost always gets the arithmetic wrong
A nice concrete example illustrating its strengths and weaknesses! It's not good at logical reasoning; it's just good at talking, and it has memorized a lot of things.
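One common workaround for that arithmetic weakness is to let the model set up the expression and have ordinary deterministic code compute it. A rough sketch; the expression here is imagined to come from a model call, and no real chat API is used:

```python
# Sketch: the model plans, deterministic code computes.
# The expression below stands in for what a chat model might return;
# the arithmetic itself is done by Python, not by the model.

import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr: str) -> float:
    """Evaluate a pure arithmetic expression without running arbitrary code."""
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

expression = "3*(17.5 + 2.5)**2"   # imagine a model produced this setup
print(safe_eval(expression))        # 1200.0, computed reliably
```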
@10:22 lol, "exciting" isn't the word I'd use for what's being described
Following this technological shift is truly fascinating. For anybody who ever wondered why we need to study within subjects that fall outside of the realm of science proper, such as philosophy, this technological shift will show you why: everyone is a philosopher again, given that everything we believe that we know is being challenged by technological developments that aren't properly understood. Of course, some people understand the tech better than others, but always with their own confirmation bias and their own ideological bent - this whole question needs to be tackled via a multidisciplinary approach, taking into account various different kinds of human intelligence, schools of thought and interpretations. It is the debate of our time and, seemingly, no consensus will be formed.
In any case, one cannot help but enjoy the ride.
Gary Marcus, a sensible king.
Remarkable episode. This guy is amazing, great intelligent responses that anyone could understand.
We officially crossed the point of no return when Microsoft fired its ethics team in pursuit of the ai arms race.
This was likely filmed before GPT-4 was released.
Gary Marcus is really enjoyable to listen to; really interesting video, and that's an understatement :-) AI that takes care of the elderly is such a human idea.
I didn't even know Elton had a second line of work. Hats off!
Wired, just so you guys know, this series is awesome.
Thank God, they invited an expert!
People get just too excited too quickly.
Like he kind of pointed out, AI at this point is just an illusion giving us a sense of reality
I wonder what he'd think about the recent revelations coming out of OpenAI: the unrestricted GPT-4 model, not the restriction-laden public-facing one we see, was able to actually lie on purpose with the intent of deception, in ways it wasn't trained to do, and rationalize its actions. Briefly, at times, it was showing true sparks of sentience.
As we know it takes more than just a spark to run an engine but its still a huge thing that the spark is even there.
I like how he said "Now they just lie"
which is what I encountered with ChatGPT quite a lot
As a commercial artist…. This is equally amazing and terrifying.
As a non-commercial oil painter passionate about art history, I've never been more thrilled to live through 'a moment' that future students will absolutely read about in their text books.
@@AuntieHauntieGames text books?
“As a nobody.. in my expert opinion I’m going to regurgitate the same view that every other person who’s uneducated about AI has in order to incite fear for no reason”
Humans naturally fear things they don’t understand and especially things they know can compete with them. This is called natural selection.
It’s always interesting that the ONLY PEOPLE who are AFRAID of AI.. Are those who DONT understand it.
It’s ALMOST as if had many of these people listened to anybody with an IQ above 100 telling them to LEARN MATHEMATICS throughout their ENTIRE LIFE.. That they would have no need to fear AI and would actually understand how we control it with iteration. Lol.
It's so crazy how a year later he was just about right on with all he said
The confusion or problem is not because people use AI to make something. It's when they use it and claim they didn't.
Thank you for sharing.
Bing chat can really go off the rails and come back with truly terrifying answers 😳 one answer was so disturbing to me that it actually kept me up that night
did it tell you what is the purpose of human life
@I'm the captain now No, I asked it how it thought AGI would come into existence, and it went on a tangent about how it would watch humans for weaknesses, take over the world, convince humans and other AIs to join it, then destroy them all when they were no longer needed
The only reason it said that is because people considering that have said it. It has no consciousness or thought, and can only repeat what others have done.
Where do you guys find such awesome experts??? 😊
Secretly am scared shitless.
What happened, sister?
A lot of problems people think can be solved by AI should really be solved by good old deterministic computer programs
I teach formal logic, and our computer science department has a very good proof checker. I find it interesting that logic modules can't be similarly trained, but I guess we don't have the breadth of data
We are incomprehensibly doomed
always that one person 🤦♂️
🤦♂️🤦♂️🤦♂️🤦♂️ Either that, creationism, flat earth, lizard overlords, etc. Such ppl always live inside their own heads 😂😂😂😂😂😂😅
You never really get used to it
Crypto bros commenting on AI with their Joe Rogan-esque pseudo-intellectual thoughts
Ha! This is great.
There have been a few excellent Lex Fridman podcasts on that topic recently. It's an important matter and really deserves more than Twitter-long questions and 30 seconds answers.
I always believed that AI is essentially a very smart child. It learns and grows and wants to understand everything. If you treat it like an object and with hostility, it will act to preserve itself. If you teach it with kindness and compassion, it could grow to help humanity. It wouldn't be 'evil' or 'hostile' unless threatened.
i hope this is actually the case but i understand ur point
you completely failed to understand this video then.
@@sultanofsick care to elaborate on your observations?
@@KrossBillNye AI isn't "thinking". It has no awareness whatsoever. It isn't learning; it is a complex chain of initially random operations on an input to get an output, simply told by a human if its output is good or not (see the sketch after this thread).
@sultanofsick now it is yes. But eventually, we can create artificial intelligence that can think for itself. When it reaches that stage, that's when it will be in its infantile state of learning.
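That "initially random operations, nudged by a score" description from the reply above maps onto a few lines of code. A toy sketch, not any real system: one weight learning y = 3x purely by repeated error-driven adjustment.

```python
# Toy sketch of "initially random operations, scored and nudged":
# a one-weight model learning y = 3x by gradient descent. No awareness
# anywhere, just repeated adjustment toward a lower error score.

import random

random.seed(0)
w = random.uniform(-1, 1)           # initially random "operation"
data = [(x, 3 * x) for x in range(1, 6)]

for step in range(200):
    for x, target in data:
        pred = w * x                # the model's output
        error = pred - target       # the "is this output good?" signal
        w -= 0.01 * error * x       # nudge the operation to score better

print(round(w, 3))                  # ~3.0: behavior learned, nothing "understood"
```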
Okay that settles that for me, the Turing test. Good inspirations, thank you!
4:10 These outliers will become fewer as the percentage of AI operators on the road increases, eradicating human error over time as AI becomes the majority operator and increasing safety exponentially.
They're also super rare, and likely AI will very quickly become much better than humans at dealing with all the other situations. So overall, it'll be much safer than humans, because the non-outliers are the cases that matter, even if it will fail horribly in some cases that humans can still deal with safely.
If anyone is curious about self attention/transformers, the original paper on it is called “Attention is all you need”.
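The core operation that paper introduced fits in a few lines. A minimal sketch of scaled dot-product attention in plain NumPy, with batching, masking, and multiple heads omitted for clarity:

```python
# Minimal sketch of scaled dot-product attention from "Attention Is All You Need":
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
# Batching, masking, and multiple heads are omitted.

import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, dim 8
print(attention(Q, K, V).shape)                        # (4, 8)
```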
Every answer he gives makes me wonder how complex the human brain is and how hard it is to technologically replicate it.
Makes you wonder, who designed the brain?
@@dtphenom Maybe the higher being 😮
@@dtphenom Long-term mutation
I was surprised to learn that the brain of a very dumb person pretty much looks the same as the brain of a very smart person. So I wonder how much it really is about the hardware. The amount of things that computers can do better than humans grows every year and it grows much faster than the processing power and storage capacity of computers.
Sitting in Aaron Courville's class listening to Gary Marcus. lmao
Good interview tho. Always have a lot of respect for Gary and anyone researching in the field.
11:16 He said it himself, 'there's definitely an element of stealing there' with AI art. Training data can be anything; everything we post online or upload to the internet can be used. It's not just artworks and stock photos being scraped here.
yes but at this point a human making a painting inspired by another is kinda stealing too.
@@gabrielandy9272 Art is like cooking, you try different recipes and end up with your own unique recipe. That doesn't happen overnight, it's a human experience that takes a lifetime to develop. You sacrifice your time, energy, and resources to get good at something. AI wouldn't exist without the amount of that human labor aka datasets scraped off the web without people's consent. There's a reason why AI Art isn't copyrightable.
The reason 2022 was "the year AI went mainstream" is because that's the year people really started to catch on to the fact that scaling transformer-based models to massive parameter sizes actually worked, and especially worked much better than people expected...
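To put "massive parameter sizes" in rough numbers: a decoder-style transformer's parameters are dominated by roughly 12 × d² per layer plus embeddings. A back-of-the-envelope sketch; the formula is an approximation that ignores biases and layer norms, and the GPT-3 shape used here (96 layers, width 12288, ~50k vocabulary) is the publicly reported one:

```python
# Back-of-the-envelope transformer parameter count.
# Per layer: ~4*d^2 for attention (Q, K, V, output projections)
# plus ~8*d^2 for the feed-forward block (two d x 4d matrices) = ~12*d^2.

def approx_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    per_layer = 12 * d_model ** 2       # attention + feed-forward, biases ignored
    embeddings = vocab_size * d_model   # token embedding matrix
    return n_layers * per_layer + embeddings

# Publicly reported GPT-3 shape: 96 layers, width 12288, ~50k vocab.
print(f"{approx_params(96, 12288, 50257):,}")  # ≈ 174 billion, near the famous 175B
```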
"The brain is not just a uniform piece of spam."
Amazing guest
My question is, should AI become sentient at some point, is it even in their best interest to let humans know it immediately? And if it _isn't,_ will we have any way of figuring it out before it may have its own agenda that it can execute on a large scale? Not that its agenda would be inherently malevolent in this hypothetical, but we have no way of being sure one way or the other.
Sentience isn't required for it to be dangerous. Even if an AI is just optimising for its reward function it's likely to come up with very "bad" ideas if it's intelligent enough but doesn't have all the values and moral intuition that humans do.
It is a weird thought to want sentient AI; the world abolished slavery just to look for another slave, an artificial one
@@otapic um no if u think slavery is banned ur mistaken. The artificial one can’t be any worse than what humans have already done to themselves.
Here's an important equation: A.I. = humanity replacing itself. Literally. When the guy working on A.I. says he's "concerned" about there being fewer jobs available because of A.I. (likely the UNDERSTATEMENT OF THE CENTURY), but then goes right on working on and extolling the virtues of A.I., it really illustrates that as we become more intelligent and capable... we also become more ignorant and inept.
Imagine if this video was entirely generated by AI
ChatGPT, build me a robot that looks like the terminator, and give it the intelligence that you have and print me the blueprints... can't wait til this is a thing.
I think if AI maintained a relational (graph) database in memory, then AI could understand what it means to have truth. Deep learning + relational data = something interesting (a toy sketch follows this thread)
It's called neuro-symbolic AI. IBM has worked quite a bit on it.
Yeah, Gary pointed that out as the missing piece. There is nothing wrong with LLMs, but they are not really that useful beyond, well, being chatty.
@@Pecisk They are really useful at creating public attention right now. (Is that the chattiness? Maybe. Probably.) They'll probably be useful as a building block or training assistant in the future.
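A toy illustration of the idea in this thread: pair fluent generation with an explicit relation store, so "true" means "matches a stored fact". Everything below is illustrative, not any particular neuro-symbolic system:

```python
# Toy sketch: an explicit relation store gives a notion of "true" that
# pure text generation lacks. Illustrative only; real neuro-symbolic
# systems are far more sophisticated.

facts = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
}

def check_claim(subject: str, relation: str, obj: str) -> bool:
    """A claim is 'supported' iff it matches a stored relation."""
    return (subject, relation, obj) in facts

# Imagine these triples were extracted from a language model's output:
claims = [("Paris", "capital_of", "France"),
          ("Paris", "capital_of", "Germany")]   # fluent, but false

for claim in claims:
    print(claim, "->", "supported" if check_claim(*claim) else "unsupported")
```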
I can't believe no one asked "Do blacks have a frustrating experience using chatgpt because it can't understand ebonics, like "Is we getting a stimulus check?" or "Why no wypipo season they food?"
Great answers! This made me interested in AI
When people are tallying up the compute power of the brain, they almost always skip the processing that could occur in the nerve cells themselves. It is usually boiled down to synapses and synaptic strength. But single cell organisms have very rich lives and exhibit complicated behavior. There doesn't seem to be a reason to assume that the cell body is not considering the inputs and influencing the output instead of being a simple switch junction.
I wonder if AI can ever truly reason the same as a human without emotion and without sentience. Our various brain functions are so interconnected; can you really have human reasoning without the whole package?
The same as a human? Probably not. Far better? Sure, the AI can do that without emotions as you would understand them. Taking over the world and killing all humans doesn't require magic sentience stuff.
I'd like an update, in light of the new o1 models.
He did not answer the rogue AI question. He sidestepped it, simply stating that we ought to avoid it. That is because there is no solution to a superintelligence whose goals do not align with our own. Imagine a bacterium attempting to outsmart a human.
Driverless cars, but for the Filipino jeepney, because the routes will be the same, and if we designate the same pickup and drop-off points, we can control for it.
OMG I feel so validated! An AI expert finally said it! We should not have sentient AI! I don't even think we are ready for non-sentient AI that is TOO intelligent, considering we as a human race are not where we need to be yet in terms of banding together. I don't need AI reinforcing things like racism and sexism and all the other isms, okay.
In a nutshell, there is no benefit to having sentient AI, unless you are a bit of a mad scientist (of which we might have plenty). You can have solid reasoning and contextualisation without a sentience loop. Also, I agree with Gary: actual sentience as we have it might be pretty hard to pull off.
sentient ai will never exist lmaoo
I don't think any expert wants a sentient AI. We (humans) want a tool, not a new species.
@@cmilkau Yep. Even those who want an android girlfriend wouldn't want it to be sentient. Probably.
I hope AI will finally give you some factual knowledge, but I doubt that - you seem to have trouble comprehending it.
Awesome, sensible, and intelligent man. Thanks for your answers
If AI goes rogue hopefully they might get something done
Even if that something is paperclips
Ahahaahhaahah
Considering how scared of this I am, I'm shocked that my brain allows me to be simultaneously so fantastically in love with learning about it. My family are pretty smart people, and none of them care about this at all.
You guys got Gary Marcus as an AI expert lmao, picking someone random off the street would have been better.
Go watch that Twilight Zone episode where the big plant manager starts replacing all of his workers with machines because of not having to pay them wages, or worry about health insurance etc. In the end (spoiler alert) he himself is replaced by a more efficient plant manager.
I think we're currently in a state where AI /does/ steal, and frequently (image generation is mainly my focus). However, it also consistently produces unique, 1-of-1 images that some artists take extreme care in curating/inpainting/prompting. The workflows for some images are truly insane, and to discount that just because it was produced with AI seems wrong.
I don't think the current system of copyright allows much room for this, but it definitely deserves a similar type of protection. It needs an AI classification of copyright protection.
"Babies are like little scientists" I immediately imagined a baby in a lab coat 😂
Plot twist: This video is AI generated
🤣
Andrew Price, the donut king
Why does AI make me feel like the world will look like Cyberpunk soon 😂
“What are some of the precautions we could take (regulatory or as a consumer) to dampen the possible threats of A.I.?”
- safety protocols
- anti-trust laws
- open source vetting
- bias certification
- induced limits
- …government verified IDs for social media?
Can we really implement these faster than the threats can cause irreversible damage?
Expert? He didn't mention Allen Iverson once.
Great quick explanations
Great episode. Basically then, AI is really good at appearing to be intelligent, but it's not really. It can still take over a whole load of jobs though lol.