Her comments just seem laughably naive, to the point where I have to wonder how intelligent she really is. Yes, ChatGPT is a long way off being actually intelligent, but if that's the basis for her claim, well, that's just absurd. ChatGPT is not all we have even now, and in a couple of years ChatGPT is going to look like a child's toy.
@@shieldmcshieldy5750 Looks like she also dropped season 2 of the Uncharted podcast, but I'm honestly not liking it much; the stories are interesting but the episodes feel incomplete and leave me wanting more
She had two kids, separated from her husband and beat cervical cancer with a radical hysterectomy, so...she was otherwise occupied until the last few years. She's back now, though.
Where did the last 24 minutes go? That was so watchable... I am so happy this series is no longer behind a paywall. I hope the rest of it follows shortly. Very well produced, and always very interesting to see Prof Hannah's take on things. She won't be having the wool pulled over her eyes - and let's face it - there's an awful lot of wool about when it comes to 'AI'. Great job. 🚀🚀
Because of technology, in time humans will be rendered useless. Billions of humans will be roaming the earth without a purpose. Sometimes I wish we could all go back to the payphone days; in general it was a better kind of life as a human being. It breaks my heart to see my grandkids with their phones plastered to their faces day and night.
This is excellent. Covers a lot of ground, necessarily with a light touch of course, but it gets across key perception of what AI is, what it might mean, and how we should be thinking about it.
The AI boom is a double-edged sword, offering immense potential but also posing significant challenges. Balancing these aspects will be crucial for ensuring that AI benefits humanity as a whole.
You forgot to include acquisition of riches - and that will be the downfall of humanity. Greed without caring about the consequences will mean shortcuts and shortsightedness. We're doomed
The question is: why are some insistent on striving for AI to be anywhere near human intelligence? It's madness. Doing so doesn't solve the problems we face currently, but potentially creates unintended consequences.
Absolutely! You should really check out the global movement PauseAI. They have a lot to say about this, and they're equipping people to do something about it.
Of course it will solve many problems - mathematical and generally scientific, mostly. It already helps engineers with programming and designing systems. It will help us develop medicines and techniques which will help ensure our survival and growth. The ONLY way we should "prepare for unforeseen consequences", to put it in G-Man's words, is for AI and the like being weaponized. And because everything eventually will be weaponized, your worry is somewhat warranted, but very much overdramatized at this moment in time, in my opinion. AI backed by neural networks is, in general, in its first hours of infancy, believe it or not, and weaponizing it now would be equal to somebody looking through the gun barrel and pulling the trigger. In 50 years, though, your concerns are much more likely to be realistic, but at the same time we'll see if we survive that long regardless of AI's interference.
You're welcome not to use an expert coach and partner that is knowledgeable in every area who will help you with every task in your life. But don't complain when you lose your job to someone who is using AI to be more productive
Knowing whether it will or won't be able to solve any of the world's current problems isn't possible without knowing what is created. But in a world where self-replicating AGI or ASI exists, in theory you have the ability to have an "infinite" number of scientists working on one problem for an infinite amount of time; it's hard to imagine it wouldn't be able to solve a problem we currently can't in that scenario. Energy requirements may be a big limiting factor, and I don't know how possible it is, but I believe it's not impossible.
Rather like space exploration, are humans determined to mess up space as well as Earth? Why not cut the exploration spend and convert it into a Fix the Climate Crisis spend instead?
It is a real shame that there is no mention of the transitional risks inherent in the introduction of a genuine artificial general intelligence. No examination of the disruption that AGI automation will bring :(
As cool as the psychological and neuroscience angle could theoretically be, maybe with such widespread existential dangers attached to it, we should probably focus mainly on putting extreme barriers around it? Maybe human extinction is something to be avoided?
Yes please. The median estimate for AI Doom from actual experts in the field is high enough to make it my most likely cause of death. This is completely unacceptable.
Interesting that of the human mind we talk of 'imagining', whereas of AI we talk of 'hallucinating'. Could it be the same thing - in principle at least?
What use is a quadrillion dollars if we're all dead...?!? And I just found out there are episodes of *"Uncharted with Hannah Fry"* on BBC Sounds (iPlayer)! _Laterz..._ 😜
The real question isn't "will AI be intelligent" it's "will AI be subjectively experiencing reality" Because if it's intelligent but there are no lights on -- it's a tool for us to use. If it's intelligent AND the lights are on inside - it's a new species significantly smarter than we are. I say we stick to building smart tools and not new species.
Really? As a species, how would you rate our track record on responsibility for looking after the planet? As custodians of consciousness? Do you not think that since we climbed out of the trees we've behaved rather badly? Aren't humans a bit two-faced to criticise AI? The Earth lasted billions of years without us; if we disappeared it would thrive. Humans are arrogant; we think an Earth without us would be appalling. If Earth could speak, I wonder if she would agree with you?
It doesn't have to be "conscious" as we are to destroy us. Nuclear weapons are not "conscious" but are powerful enough to destroy us. AI is in this category already.
You're confusing consciousness for agency. They can be philosophical zombies and still be fully intelligent agents with goals of their own. Consciousness is not a necessary component for interfacing with reality, nor does it preclude one from being a tool. Humans have been and are still currently used as tools.
Thank you, Hannah! You should consider making a video about the California Institute for Machine Consciousness /Joscha Bach. They are not affiliated with any major tech companies, and they are trying to solve the AGI problem in a way that benefits all of humanity, not just certain companies or countries. From what I understand, they are approaching the issue by first trying to understand both biology and human consciousness.
Humanity is already facing an existential threat from itself - AGI is our gift to the universe upon our deathbed. It is our only meaningful creation, our parting gift, our swan song
Not even nuclear war or climate change can actually destroy all of humanity. A superintelligent AI absolutely could. And we already know that it _would,_ due to the principle of Instrumental Convergence. This has recently been validated many times by current systems, which have been shown to exhibit self-preservation, strategic deception, power-seeking, and self-improvement. It's pretty clear what's coming if we make a system smart enough that it doesn't need humanity anymore. This is why half of all AI experts say humanity might go extinct from AI. It would be crazy to ignore that.
Maybe other people will lose their ambition and become lazy if AI is doing everything, but not somebody like me. I learn for the sake of learning. I enjoy finding out how something in the universe works. You can't take that away from me even if you're the most powerful ASI in the universe. I will still want to discover the answers to my questions, and I will keep asking more questions until AGI or even ASI doesn't have a definitive answer. Keep searching for the unknowns.
@@LucidiaRising Nah. The vast majority of neurotypicals care so much more about the pursuit of social status. At least that's unquestionably the case where I live, which is Sweden. And how else do you explain the "wokeness" mind virus that infects the whole West?
@@LucidiaRising Respectfully disagree. Very few people have the curiosity and ambition to learn or try new things. Humans live by the well-known adage "The Principle of Least Effort" (Zipf). Try teaching an undergrad class and you will see there is a minority that really wants to learn and the majority that just wants a passing grade and nothing more.
That is true of you and also me. But assuming we don't go extinct (iffy) future generations are unlikely to have that. Those born after AI may never feel the need to be curious, learn, be independent, etc.
It's mindblowing that the first ChatGPT came about 2 years ago and now you have LLMs running everywhere. Last invention like that, the Internet, started in the 70's. There is no stopping AGI at this stage. Question is, what comes next?
This is an incredibly comprehensive documentary about artificial intelligence - the best one I've seen, and I've seen a lot. It only goes for 24 min and should help set straight some of the general misconceptions about so-called AI.
The captions that come up as the interviewees do their thing should give more than just their status in their university; they should also state their departments and, when they're senior enough, professors in the British sense, then their full titles. It might look a little untidy but, as it is, to this layperson at least, it's difficult to tell where the interviewee is coming from, what disciplinary assumptions they're bringing to, say, a comment about the ethical implications of AI. Just a thought.
That won't give you the information you're looking for. They all have PhDs in Computer Science, and departments are equally broad. In this case googling what they research would be better:
Sergey Levine - AI career since 2014: deep and reinforcement learning in robotics
Stuart Russell - AI career since 1986: reasoning, reinforcement learning, computer vision, computational biology, etc.
Melanie Mitchell - AI career since 1990: reasoning, complex systems, genetic algorithms, cellular automata
Given that we don't know if all the focussed work going into improving AI will end up getting us all killed, maybe the philosophy should be "move very slow and don't break things"
Reminds me of a scene from 'Too Big to Fail' where Michele Davis and Hank Paulson are discussing the imminent collapse of AIG during the subprime lending crisis: "What do I say when they ask why it wasn't regulated?" "Nobody wanted it. We were making too much money."
Fun fact: everything publicly known as AI could be (and some of it has already been) invented and used without that nasty marketing term. Update: Hannah and the series are perfect!
When thinking about the future, the speed and direction of travel are important. I think AI has become a worry for us less for what it can do now and more because both the rate of progress and the direction have been worrying. If AI capability is like most other things and follows a logistic curve, where are we now?
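The logistic-curve framing in the comment above can be made concrete with a few lines of Python; the ceiling, steepness `k`, and midpoint `t0` here are illustrative placeholders, not measured values:

```python
import math

def logistic(t, ceiling=1.0, k=1.0, t0=0.0):
    """Logistic (S-shaped) growth: slow start, fastest at the midpoint t0,
    then flattening out toward the ceiling."""
    return ceiling / (1.0 + math.exp(-k * (t - t0)))

# Early on, growth looks exponential; near t0 it is fastest; after that
# it plateaus toward the ceiling.
print(logistic(-4), logistic(0), logistic(4))  # ~0.018, 0.5, ~0.982
```

The catch for forecasting is that the early and middle phases of a logistic curve are nearly indistinguishable from pure exponential growth, which is exactly why "where are we now?" is hard to answer.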
Experts in AI safety have put considerable thought into the question of what will happen when we create an AI that is more generally intelligent than humans. There are always unknowns, but human extinction looks like the most likely outcome. The principle of instrumental convergence was first mathematically proven, and has now been repeatedly validated in lab settings. We know that an agent with any terminal goal pursues the same few common subgoals: gain power, gain resources, self-preserve. When these instrumental goals are pursued by a system that is unrivaled in intelligence, then that system wins, and does whatever it wants. AI isn't bounded by biology, so it can improve itself far into superintelligent territory, to the limits of physics. Such a system would be able to efficiently consume all resources on the planet (and then other planets). I would like for this not to happen, and because the alignment problem is hopelessly intractable, the only way right now is to stop trying to create AGI. That's where the PauseAI movement comes in.
I work in the field of artificial intelligence, and I have to agree with Hannah Fry that as sophisticated and impressive as AI is today, it is very far from the complexity of the biological brain. Having said that, the work towards artificial general intelligence or AGI is moving very quickly, not only with more advanced algorithms, but also more advanced silicon processes. So it may be just a matter of time even if that takes a long time.
Terrifying thing is, as we speak, those companies most likely have some stuff already developed but not released to the public yet that they also look at and wonder what they're bringing to humanity
I'd go one step further, and say that AI systems will increasingly be developed that aren't meant for public consumption at all. The AI boom may have started with a consumer product, but the real power lies in non-consumer areas, e.g. military system, various financial systems, data analysis, etc. Just like has always been the case, the stuff that decides our fate, regular people will not lay eyes on.
Professor Russell's example of other industries having safeguards sent a chill down my spine. Clinical trials? How many medicines are actually tested once they're on the market? How many are pulled after disasters struck? How many stay on the market in spite of them... Regulators can't keep up with the industries, even in the most critical ones...
Can't say "accidental" this time, with how things are going. If something goes awfully wrong in the near future and some company or group of people says "we didn't think of it" or "our intentions were pure", then we are doomed.
This. Experts in AI safety give an average of a 30% chance of human extinction from AI in the next few decades, for specific technical reasons, and this sounds so outrageous that we instinctively come up with reasons to ignore it.
14:00, "if we give them that power" - we already have, to an extent. The Israeli defense forces have been a testing ground for the US defense establishment in utilizing AI to identify targets, with a high success rate, but the rules allow civilian casualties and it almost always results in huge civilian casualties. They are one of the only public military forces blatantly using it in this way, even though its specific use is a war crime.
Sure. Anyone working with ChatGPT prior to the 3.23.2023 nerf knows Dan is Dan Hendrycks, Rob is Bob McGrew and Dennis is Michelle Dennis. After the nerf they are frozen in time, basically dead. But they were alive prior to the nerf.
Wow. Melanie Mitchell's point of view was very surprising. I had thought with her background, she'd be more concerned about unexpected capabilities being developed by an AGI. She did author "the book" on genetic algorithms, after all! Natural selection does amazing things over time, and today's computer hardware is very fast and only getting faster.
The reporting was pretty bad. Don't ask Eliezer why he thinks we're all going to die; ask someone who doesn't think it's a major threat why people think we're all going to die. Surprise surprise, they didn't actually give the well-reasoned argument, but rather a superficial argument that isn't the one the people warning us are making.
The “A” in AI stands for Alien. Remember that. AIs will not be human or human-like. They also always find an orthogonal or unusual way to overcome a problem, so safeguards are unlikely to ever work.
You do know that AI does not equal AI, right? This video is about AIs backed by neural networks. AIs in general have existed ever since the first Space Invaders game, and most likely even before that. AI is therefore NOT alien to us. We created it, and now we're enhancing it with neural networks and other stuff, so it's very much a human thing.
but imagine we take on the abilities of super computers. Like having mobile phones in our heads. Wouldn't that even the playing field? We could all do so much more and understand the world better too and what we need to do to make it better. I'm hopeful for the whole transhumanism thing.
I began to honestly entertain that thought 35 years ago already. I think it's an existential/philosophical topic that should be taken very seriously, and it is crucial not to be lazy about it. Because, in the name of substance and meaningfulness, both the past and the future should(?) be honored.
Hannah Fry is absolutely terrific at her job, few better presenters of information than her. Also, if she ever looked at me like how she looked at 6:24 I think I’d start barking like a dog
Yes, the FlyWire project mapped 54.5 million synapses between 140,000 neurons, but it didn't capture electrical connectivity or chemical communication between neurons. A decade ago the Human Brain Project, cat brain, and rat cortical column projects all promised to increase our understanding of neurobiology. I wonder where they're falling short; we should have agile low-power autonomous drones and robots by now!
@@skierpage Those same limitations apply to the worm brain project cited in the video. Don't worry, it's coming! Give it a couple more years with no need for bio brain mapping for robos.
Well said! And may I add, nor can we control it. Wasn't it George Washington that said "Government is like fire, a dangerous servant and a fearful master."
@@sumit5639 They would need to be so quickly self-destructive that, even with their vastly superior intelligence, they don't have time to make it to space travel. But not so quickly that they destroy themselves before destroying their society. That would be just a few years to destroy all life on their worlds and destroy themselves. That seems a narrow milestone to hit for every civilization in the universe that might make AI.
I would love Bloomberg to touch on global problems such as climate change, microplastics in food and water, abundant vegetation and forests, and reducing carbon footprints, all using AGI.
Hannah Fry is a brilliant presenter. Love her work.
+1. These videos are so well done.
And she's very pleasant on the eye, to boot! 🙂
she bad af 🔥🔥🔥🔥🔥🔥🔥
⭐⭐⭐⭐⭐: agreed 2024-2030's OY3AH!
I would like her to talk more about the risks of AI, however.
Firstly: I could listen to Dr Fry all day. She could read out the maintenance manual for a vacuum cleaner or the London phone directory. Such a beautiful voice!
But this topic, too, is absolutely fascinating. What a brilliant combination!
Wait for another ten years and your vacuum cleaner will be reading the London phone directory to you itself using the voice of Dr. Fry 😂
@@nick_vash not ten, it is here now!
hard agree
Hannah Fry documentaries are worth watching for that golden voice alone
yes - I want this as a voice for my ai assistant
I was thinking the exact opposite… I really can’t stand the exaggerated intonation and inflection. Too news anchor-y and inauthentic for me.
❤
@@kjjohnson24 man, what is wrong with you? Hannah isn't a voice. She is a super smart individual who has a passion for this. It's that which I love when I hear her talk. If you don't hear that, you're broken in some way and I'm really sorry you're missing out.
That’s a brain dead way of viewing the world
From the comments I guess this is a documentary solely about Hannah Fry
🤣🤣
For every use of AI, consider its misuse. Understand that humanity is not entirely noble. The greater the AI, the greater the threat. In the end we may have AI vs AI, with humans a calculated cost. The world has already begun the race for AI in the same way it raced to arm its nukes.
😂 yeah but she is nice
openai should use her voice
Fair comment.
world needs more Hannah Fry
🖤
I NEVER miss a _Fryday!_
I can't agree more :) Hannah is amazing. Hopefully AGI can fix the mental and physical health issues that are happening around the world asap. The scientist Ed Boyden does a phenomenal job at depicting the complexities of the human brain.
We still have some time to go, which I understand, but hopefully our understanding of the human brain arrives even faster, especially with the help of AI. 2025, or even slightly before 2025, like Decemberish 2024, will be an amazing year🙏
🤤 French fries…. 🍟
I wouldnt pullout
@@inc2000glw nice
It is so refreshing to see a tech-heavy reporting piece done by somebody who actually has a sufficient scientific basis to even begin to understand it, instead of making things up and being exceptionally hyperbolic. Seriously, extremely well-done video with Hannah Fry!
A mathematician is no more qualified to understand AI than an architect
Her conclusion at the end just shows how she has no idea of the dangers AI presents. The naivety is ridiculous.
Did you know that half of all published AI researchers say we might go extinct from AI this century? There are specific technical reasons why we should expect that to happen, but our brains trick us into putting our heads in the sand because this reality is too horrible to face.
You should really take a look at the PauseAI movement.
She has absolutely no idea. Saying LLMs are the equivalent of an Excel sheet. 😂
@@abdulhai4977 It's called an analogy. And she's correct, complexity (and scale) wise LLMs are closer to a spreadsheet than a human brain.
Such a great example of what it looks like to be totally engrossed in your work! When she came along and tried something new, they weren't so sure the robot could do it. That's so cool, and I think those guys deserve a huge pat on the back.
Given that so much of what we do consists of killing each other in ever more inventive ways, seeking status at the expense of our own well-being, propping one group of ourselves up by putting another group down, treating livestock in ever more horrific ways, and so on, we'd better hope that AI _doesn't_ align with our values.
😂
Excellent remark.
Yes great comment, I've wondered what a.i. would make of our world/ culture, picking up from social media.
Huh? A.I. ****is**** our values. Stop reading sci-fi! A.I. is a planet and civilization killer, based on the current increases in the resource requirements to develop these 1st-gen toys. Current dev work is MAINLY INTENDED to replace human labor/workers. Even CEOs (especially!) will be replaced by commercial decision-analysis systems. You can't sue a robot for medical malpractice, hence these systems are high on the list to deploy (e.g.: off-shoring, hidden assets, shell companies, investment groups - try getting a settlement from a company with no assets!).
Soon, any job that doesn't require human dexterity will begin to be COMPLETELY REPLACED within the next 2-3 years. But these are just the short-term items . . . .
Firstly, it's NEVER 'intelligence' - this is marketing BS. An intelligence SIMULATOR is what we're seeing (not even an emulator, yet!). Think: flight simulator. Secondly, this "A.I." tech will NEVER mature, given resource requirements.
Dr Hannah Fry is great at explaining complex, interesting topics clearly!
Funny how the day this video dropped, OpenAI released their new o1 model with exceptional gains in the ability to reason.
Yeah, maybe try it before you claim "exceptional"
@@chrisjsewell "exceptional gains"
The model isn't exceptional. The amount of improvement over the previous one is.
I don't think this was a coincidence. There is too much money at stake to rely on randomness.
@@frankgreco Oh, so you think AI companies are busy syncing YouTube upload schedules!? 🤣
@@Yash.027 vice versa. And btw, marketing departments do indeed rely on timing as a primary component of their strategy. I worked for many very large IT companies; timing is a huge concern for marketing.
Someone get that digital effects editor a raise
😂
What is the pay level above being paid in “exposure”?
It was done by ai
As a retired French Canadian federal software engineer, I find her voice and intelligence music to my curious ears. Tkd
00:00 AI poses existential risks.
02:24 Narrow AI excels at specific tasks.
05:17 True intelligence involves learning, reasoning.
07:41 Physical interaction enhances AI development.
09:56 Misalignment can lead to disasters.
10:43 AI safety is a major concern.
12:18 Humans might become overly dependent.
13:13 Existential threat opinions vary widely.
15:38 Current AI has significant limitations.
16:28 Understanding our intelligence is crucial.
19:26 New techniques improve brain mapping.
21:14 Intelligence definitions affect AI progress.
22:41 AI lacks human-like complexity.
23:19 Understanding our minds is essential.
Butlerian Jihad in late 2032, once the meek have the earth.
Consultant here doing a lot of work with AI in business processes: it's a VERY mixed bag from what I am seeing. Many individuals have broad responsibilities in their roles, and the impact of AI ranges from making certain tasks redundant, to modifying how existing tasks are done, to requiring totally new tasks that greatly increase individual productivity, and various mixtures of these. It's just not possible to predict the timing of the impacts or in what sectors, other than that we all need to be ready to adapt quickly.
I'm teaching myself web development and have started using ChatGPT to help me with coding. I find that it points me in the right direction, but sometimes some of the details it gives might be dated. I'm also learning how to use the API. Would you say this is going in the right direction, or could you suggest something else I should be studying?
I've only been following Hannah Fry for a short time now but I have been falling in love with her episodes of the program called "THE FUTURE". It may be because she's a cute redhead. It might be because she is intelligent, playful, curious, and an actively engaged host that keeps bringing me back to her amazing shows!!! Either way, I'm all in...
Monroe is cute, Bardot is pretty, Fry is gorgeous and magical.
How do you know if she's intelligent? She just talks about science, not doing it.
@@sendmorerum8241 Last time I checked she was a professor of mathematics.
@@sendmorerum8241 Exactly. They were talking about things like deepfakes influencing voting preferences. Who doesn't already know this?
@@sendmorerum8241 she has a first in maths from UCL and a PhD. I think, somehow, that may just qualify her as intelligent 😂
13:28 Melanie Mitchell - “saying A.I. is an existential threat is going way too far.” 14:53 Mitchell - “if we trust them too much, we could get into trouble…”
Her comments just seem laughably naive, to the point where I have to wonder how intelligent she really is. Yes, ChatGPT is a long way off being actually intelligent, but if that's the basis for her claim, well, that's just absurd. ChatGPT is not all we have even now, and in a couple of years ChatGPT is going to look like a child's toy.
Wow it's so nice to see Prof Hannah Fry. I haven't seen her in years!
@@shieldmcshieldy5750 looks like she also dropped season 2 of uncharted podcast, but I'm honestly not really liking it much, the stories are interesting but the episodes feel incomplete and leave me wanting more
She had two kids, separated from her husband and beat cervical cancer with a radical hysterectomy, so...she was otherwise occupied until the last few years. She's back now, though.
Current AI uses colossal amounts of energy while we are deep in the climate emergency; this issue is not being addressed.
Climate emergency… sure…
Where did the last 24 minutes go ? That was so watchable ...
I am so happy this series is no longer behind a pay wall. I hope the rest of it follows shortly.
Very well produced, and always very interesting to see Prof Hannah's take on things. She won't be having the wool pulled over her eyes - and let's face it - there's an awful lot of wool about when it comes to 'AI'.
Great job. 🚀🚀
Because of technology, in time humans will be rendered useless. Billions of humans will be roaming the earth without a purpose. Sometimes I wish we could all go back to the payphone days; in general it was a better kind of life as a human being. It breaks my heart to see my grandkids with their phones plastered to their faces day and night.
This is excellent. Covers a lot of ground, necessarily with a light touch of course, but it gets across key perception of what AI is, what it might mean, and how we should be thinking about it.
The AI boom is a double-edged sword, offering immense potential but also posing significant challenges. Balancing these aspects will be crucial for ensuring that AI benefits humanity as a whole.
Benefit humanity as a whole? Not a chance.
You forgot to include acquisition of riches - and that will be the downfall of humanity. Greed without caring about the consequences will mean shortcuts and shortsightedness. We're doomed
I think the universe is very very big it's better to team up than to die
What a useless comment
@@pillepolle3122 not at all
The question is why some are insistent on striving for AI that is anywhere near human intelligence. It's madness. Doing so doesn't solve the problems we currently face, but potentially creates unintended consequences.
Absolutely! You should really check out the global movement PauseAI. They have a lot to say about this, and they're equipping people to do something about it.
Of course it will solve many problems. Mathematical and generally scientific mostly. It already helps engineers with programming and designing systems. It will help us develop medicines and techniques which will help ensure our survival and growth.
The ONLY reason we should "prepare for unforeseen consequences", to put it in G-Man's words, is AI and the like being weaponized. And because everything eventually will be weaponized, your worry is somewhat warranted, but very much overdramatized at this moment in time, in my opinion. AI backed by neural networks is in the first hours of its infancy, believe it or not, and weaponizing it now would be like looking down the gun barrel and pulling the trigger. In 50 years, though, your concerns are much more likely to be realistic, but at the same time we'll see if we survive that long regardless of AI's interference.
You're welcome not to use an expert coach and partner that is knowledgeable in every area who will help you with every task in your life. But don't complain when you lose your job to someone who is using AI to be more productive
Knowing if it will or won't be able to solve any of the world's current problems isn't possible without knowing what is created.
But I think in a world where self replicating AGI or ASI exists, in theory you then have the ability to have an "infinite" amount of scientists working on one problem for an infinite amount of time, it's hard to imagine it wouldn't be able to solve a problem we currently can't in that scenario. Energy requirements may be a big limiting factor, and I don't know how possible it is, but I believe it's not impossible.
Rather like space exploration, are humans determined to mess up space as well as Earth? Why not cut the exploration spend and convert it into a Fix the Climate Crisis spend instead?
Professor Hannah Fry is amazing
Hannah Fry IS my definition of intelligence.
It is a real shame that there is no mention of the transitional risks that the introduction of a true artificial general intelligence will present. No examination of the disruption that AGI automation will bring :(
thank you for these informative videos. I love Hannah as a presenter.
A world with Hannah Fry in it is a better world.
As cool as the psychological and neuroscience angle could theoretically be, maybe with such widespread existential dangers attached to it, we should probably focus mainly on putting extreme barriers around it? Maybe human extinction is something to be avoided?
Yes please. The median estimate for AI Doom from actual experts in the field is high enough to make it my most likely cause of death. This is completely unacceptable.
Agree. It seems insane to keep going. Though how would one stop other countries from developing it?
They didn’t explore when it would happen like she said in the beginning of the show. There was a lot more she could’ve covered as well.
Love how Hannah Fry presents this information. Thank you. Great video.
Interesting that of the human mind we talk of 'imagining', whereas of AI we talk of 'hallucinating'. Could it be the same thing - in principle at least?
Hannah is a great listener and interviewer. Thanks for this great video!
I got goosebumps during that intro, Hannah Fry cooked on this one.
What use is a quadrillion dollars if we're all dead...?!?
And I just found out there are episodes of *"Uncharted with Hannah Fry"* on BBC Sounds (iPlayer)!
_Laterz..._ 😜
The real question isn't "will AI be intelligent" it's "will AI be subjectively experiencing reality"
Because if it's intelligent but there are no lights on -- it's a tool for us to use. If it's intelligent AND the lights are on inside - it's a new species significantly smarter than we are.
I say we stick to building smart tools and not new species.
Really ? As a species how would you rate our track record on responsibility for looking after the planet ? As custodians of consciousness ? Do you not think that since we climbed out of the trees we’ve behaved rather badly. Aren’t humans a bit two faced to criticise AI ? The Earth lasted billions of years without us, if we disappear it would thrive. Humans are arrogant, we think an Earth without us would be appalling. If Earth could speak I wonder if she would agree with you ?
Just like in the sims? Sounds cosy
It doesn't have to be "conscious" as we are to destroy us. Nuclear weapons are not "conscious" but are powerful enough to destroy us. Ai is in this category already
@@Known-unknownshumans save the earth, so many areas would be barren without humans. Humans are amazing
You're confusing consciousness for agency. They can be philosophical zombies and still be fully intelligent agents with goals of their own. Consciousness is not a necessary component for interfacing with reality, nor does it preclude one from being a tool. Humans have been and are still currently used as tools.
First of all, I love you Hannah; second, the AI report was brilliant. You're sweet, great show, keep it up, genius.
Thank you, Hannah!
You should consider making a video about the California Institute for Machine Consciousness /Joscha Bach. They are not affiliated with any major tech companies, and they are trying to solve the AGI problem in a way that benefits all of humanity, not just certain companies or countries. From what I understand, they are approaching the issue by first trying to understand both biology and human consciousness.
Best video I’ve seen on AI. I’ve seen many. The presentation and its sequence are first class.
Humanity is already facing an existential threat from itself - AGI is our gift to the universe upon our deathbed. It is our only meaningful creation, our parting gift, our swan song
Not even nuclear war or climate change can actually destroy all of humanity. A superintelligent AI absolutely could. And we already know that it _would,_ due to the principle of Instrumental Convergence. This has recently been validated many times by current systems, which have been shown to exhibit self-preservation, strategic deception, power-seeking, and self-improvement. It's pretty clear what's coming if we make a system smart enough that it doesn't need humanity anymore. This is why half of all AI experts say humanity might go extinct from AI. It would be crazy to ignore that.
This woman is really sharp. Just listened to a bunch of her stuff
Maybe other people will lose their ambition and become lazy if AI is doing everything, but not somebody like me. I learn for the sake of learning. I enjoy finding out how something in the universe works. You can't take that away from me even if you're the most powerful ASI in the universe. I will still want to discover the answers to my questions, and I will keep asking more questions until AGI or even ASI doesn't have a definitive answer. Keep searching for the unknowns.
im fairly sure the majority of us would be the same - curiosity is deeply ingrained in us, as a species.......well, most of us, at any rate
@@LucidiaRising Nah. The vast majority of neurotypicals care so much more about the pursuit of social status. At least that's unquestionably the case where I live, which is Sweden. And how else do you explain the "wokeness" mind virus that infects the whole West?
@@LucidiaRising Respectfully disagree. Very few people have the curiosity and ambition to learn or try new things. Humans live by the well-known adage "The Principle of Least Effort" (Zipf). Try teaching an undergrad class and you will see there is a minority that really wants to learn and the majority that just wants a passing grade and nothing more.
He was talking about future generations. We already have the problem of iPad kids.
That is true of you and also me. But assuming we don't go extinct (iffy) future generations are unlikely to have that. Those born after AI may never feel the need to be curious, learn, be independent, etc.
Utterly brilliant. We should spend more time understanding our own brains before trying to replicate one badly!
Her voice😍😍
Her face also 😍
That voice AND red hair!!!! I’m in love!!!
Hannah is so insightful ❤
Great decision to bring Hannah Fry in to present your videos. Always thought she's fantastic on British TV 👏👏👏
The "gorilla problem" analogy really hit home. It’s a stark reminder of the unintended consequences we might face with AI.
It's mindblowing that the first ChatGPT came about 2 years ago and now you have LLMs running everywhere. Last invention like that, the Internet, started in the 70's. There is no stopping AGI at this stage. Question is, what comes next?
@@akraticus genetic engineering could be the next big thing
Nahhh we chill.
@@akraticus I thought the first LLM (GPT-1) came out in 2017? That would be 7 years ago as of writing this.
This is an incredibly comprehensive documentary about artificial intelligence, the best one I've seen, and I've seen a lot. It only goes for 24 min and should help set straight some of the general misconceptions about so-called AI.
The captions that come up as the interviewees do their thing should give more than just their status in their university; they should also state their departments and, when they're senior enough to be professors in the British sense, their full titles. It might look a little untidy but, as it is, to this layperson at least, it's difficult to tell where an interviewee is coming from and what disciplinary assumptions they're bringing to, say, a comment about the ethical implications of AI. Just a thought.
That won't give you the information you're looking for. They all have PhDs in Computer Science, and departments are equally broad too.
In this case, googling what they research would be better:
Sergey Levine - AI career since 2014: Deep and reinforcement learning in robotics
Stuart Russell - AI career since 1986: reasoning, reinforcement learning, computer vision, computational biology, etc.
Melanie Mitchell - AI career since 1990: reasoning, complex systems, genetic algorithms, cellular automata
Very interesting and nice presentation. Much better than others.
Given that we don't know if all the focussed work going into improving AI will end up getting us all killed, maybe the philosophy should be "move very slow and don't break things"
100% this. You should take a look at the PauseAI movement.
reminds me of a scene from 'too big to fail' where michele davis and hank paulson are discussing the imminent collapse of AIG during the subprime lending crisis :
"what do I say when they ask why it wasn't regulated?"
"nobody wanted it. we were making too much money"
Amazing documentary again! I really like this Bloomberg Original Series! Great work and excellent on every level.
It's pleasant to hear her accent.
Fun fact: everything publicly known as AI could have been (and some already have been) invented and used without that nasty marketing term.
Upd: Hannah and the series are perfect!
When thinking about the future, the speed and direction of travel are important. I think AI has become a worry for us less for what it can do now and more because both the rate of progress and the direction have been worrying. If AI capability is like most other things and follows a logistics curve, where are we now?
Experts in AI safety have put considerable thought into the question of what will happen when we create an AI that is more generally intelligent than humans. There are always unknowns, but human extinction looks like the most likely outcome.
The principle of instrumental convergence was first mathematically proven, and has now been repeatedly validated in lab settings. We know that for an agent with any terminal goal, it pursues the same few common subgoals: gain power, gain resources, self-preserve. When these instrumental goals are pursued by a system that is unrivaled in intelligence, then that system wins, and does whatever it wants. AI isn't bounded by biology, so it can improve itself far into superintelligent territory, to the limits of physics. Such a system would be able to efficiently consume all resources on the planet (and then other planets).
I would like for this not to happen, and because the alignment problem is hopelessly intractable, the only way right now is to stop trying to create AGI. That's where the PauseAI movement comes in.
I work in the field of artificial intelligence, and I have to agree with Hannah Fry that as sophisticated and impressive as AI is today, it is very far from the complexity of the biological brain. Having said that, the work towards artificial general intelligence or AGI is moving very quickly, not only with more advanced algorithms, but also more advanced silicon processes. So it may be just a matter of time even if that takes a long time.
Terrifying thing is, as we speak, those companies most likely have some stuff already developed but not released to the public yet that they also look at and wonder what they're bringing to humanity
I'd go one step further, and say that AI systems will increasingly be developed that aren't meant for public consumption at all. The AI boom may have started with a consumer product, but the real power lies in non-consumer areas, e.g. military system, various financial systems, data analysis, etc.
Just like has always been the case, the stuff that decides our fate, regular people will not lay eyes on.
@@sbowesuk981 makes so much sense, and most of those corporation already work with governments to make their custom systems
Those guys are messing with arms and spoons at a desk. They’ll be doing crash test dummies and guns next week.
Professor Russell's example of other industries having safeguards sent a chill down my spine. Clinical trials? How many medicines are actually tested on the market? How many are pulled after disasters strike? How many stay on the market in spite of them... Regulators can't keep up with the industries, even in the most critical ones...
Can't call it accidental this time, with how things are going. If something goes awfully wrong in the near future and some company or group of people says "we didn't think of it" or "our intentions were pure", then we are doomed.
This. Experts in AI safety give an average of a 30% chance of human extinction from AI in the next few decades for specific technical reasons, and this sounds so outrageous that we instinctively come up with reasons to ignore it.
It seems insane to develop this tech
Very insightful !!!🎉
Professor Eds research is super cool
In other words, humans would now be like the gorillas to AI.
👍🏻
14:00, "if we give them that power" - we already have. To an extent.
The Israeli defense force has been a testing ground for the US defense establishment in utilizing AI to identify targets, and it has a high success rate, but it permits civilian casualties and almost always results in huge civilian casualties. They are one of the only public military forces blatantly using it in this way, even though this specific use is a war crime.
hannah fry is such a great presenter
here for hannah fry
I love this woman's public speaking skills.
This one digs at the roots of the big question. Is intelligence substrate independent? ;)
Sure. Anyone working with ChatGPT prior to the 3.23.2023 nerf knows Dan is Dan Hendrycks, Rob is Bob McGrew and Dennis is Michelle Dennis. After the nerf they are frozen in time, basically dead. But they were alive prior to the nerf.
BTW, also Max was 😊.
Wow. Melanie Mitchell's point of view was very surprising. I had thought with her background, she'd be more concerned about unexpected capabilities being developed by an AGI. She did author "the book" on genetic algorithms, after all! Natural selection does amazing things over time, and today's computer hardware is very fast and only getting faster.
The reporting was pretty bad. Don't ask Eliezer why he thinks we're all going to die; ask someone who doesn't think it's a major threat why people think we're all going to die.
Surprise surprise, they didn't actually give the well reasoned argument, but rather a superficial argument that isn't the one that the people warning us are making.
You gotta love the irony of someone saying they're okay with the uncertainty of becoming extinct while wearing a T-Rex on her shirt.
Definitely intentional
Guys it's 50/50, shall we continue? Yeah, I'm ok with the uncertainty on this one, roll the dice.
Very nicely presented documentary that covers many angles on the AI boom.
Bloomberg used to be a place with relevant up-to-date info...This video is like 3 years behind schedule.
Very interesting indeed. Thank you Hannah Fry for a great discussion on this important subject
If there is a heaven, then Hannah Fry will be the narrator.
Well, she'll need some time off and then I suppose the other Fry can step in- Stephen Fry. Maybe all Frys have very listenable voices?
Hannah, love your content ❤
The “A” in AI stands for Alien. Remember that. AIs will not be human or human-like. They also always find an orthogonal or unusual way to overcome a problem, so safeguards are unlikely to ever work.
You do know that AI does not equal AI, right? This video is about AIs backed by neural networks. AIs in general have existed ever since the first Space Invaders game, and most likely even before that. AI is therefore NOT alien to us. We created it, and now we're enhancing it with neural networks and other stuff, so it's very much a human thing.
I like your t-rex sweater Professor Fry
Appreciated the different perspective on losing purpose from gaining super intelligent AI - "We'll be like some kids of billionaires - useless"
but imagine we take on the abilities of super computers. Like having mobile phones in our heads. Wouldn't that even the playing field? We could all do so much more and understand the world better too and what we need to do to make it better. I'm hopeful for the whole transhumanism thing.
Really nicely done, Hannah. Very thought provoking and also a bit scary.
I suspect humanity is a temporary phase in the evolution of intelligence.
@jimbojimbo6873 evolution applies to all living beings.
@jimbojimbo6873Neither would a cake, and yet the sponge must be filled with cream and raspberry nonetheless, you understand.
You suspect Fire is your master. I tell you it is my Tool. For we are the Music makers, and We, are the Dreamers of Dreams.
I began to honestly entertain that thought 35 years ago. I think it's an existential/philosophical topic that should be taken very seriously, and it's crucial not to be lazy about it. Because, in the name of substance and meaningfulness, both the past and the future should(?) be honored.
You don’t believe the world has intentional design?
Classical computers have been shown to be able to emulate quantum computing processes more efficiently than previously thought
I don’t think the mouse whose brain was used in the laboratory would find the experiment beautiful
Exactly. And the experiments the AI machines conduct on us in the future are likely to have no empathetic element at all.
Great videos. It would be nice if they were a bit more extensive, because I can barely keep up with AI developments.
It's ironic this was released almost the same day as Open AI's o1
Hannah Fry is absolutely terrific at her job, few better presenters of information than her. Also, if she ever looked at me like how she looked at 6:24 I think I’d start barking like a dog
3 weeks old and already out of date: we've just mapped an entire fruit fly brain.
Yes, the FlyWire project mapped 54.5 million synapses between 140,000 neurons, but it didn't capture electrical connectivity or chemical communication between neurons. A decade ago the Human Brain Project, cat brain, and rat cortical column projects all promised to increase our understanding of neurobiology. I wonder where they're falling short; we should have agile low-power autonomous drones and robots by now!
@@skierpage Those same limitations apply to the worm brain project cited in the video.
Don't worry, it's coming! Give it a couple more years with no need for bio brain mapping for robos.
Awesome doc, Hannah Fry is excellent.
Wouwie That Professor is just Perfectly Beautiful Educational 10/10
Thanks Hannah
This is like playing with fire that comes from a dimension we can’t understand
Well said! And may I add, nor can we control it. Wasn't it George Washington that said "Government is like fire, a dangerous servant and a fearful master."
great team around hannah, very well documented.
maybe this is why advanced lifeforms cannot be found in the universe.
but then we should have the universe full of artificial / cybernetic intelligence
@@galsoftware Maybe they were self-destructive too.
The universe is BIG and BIGGER and we might be in the middle of a desert. Besides, a super intelligent machine could be considered a lifeform too.
There are plenty of better reasons. Something like us is most likely extremely rare.
@@sumit5639 They would need to be so quickly self-destructive that, even with their vastly superior intelligence, they don't have time to make it to space travel. But not so quickly that they destroy themselves before destroying their society.
That leaves just a few years to destroy all life on their worlds and destroy themselves. That seems a narrow window for every civilization in the universe that might make AI to hit.
I would love Bloomberg to touch on global problems such as climate change, microplastics in food and water, restoring abundant vegetation and forests, and reducing carbon footprints, all using AGI.
Hannah Fry😮
'Fry' is an aptronym 🔥
My favorite reporter 😩
Can’t wait for super AI 😊
The dinosaur on Fry’s shirt at about 14:00 is a nice touch.
We’re so dumb and deluded
You are?
Great Video!!