I really enjoyed this conversation with Anca. Here's the outline:
0:00 - Introduction
2:26 - Interest in robotics
5:32 - Computer science
7:32 - Favorite robot
13:25 - How difficult is human-robot interaction?
32:01 - HRI application domains
34:24 - Optimizing the beliefs of humans
45:59 - Difficulty of driving when humans are involved
1:05:02 - Semi-autonomous driving
1:10:39 - How do we specify good rewards?
1:17:30 - Leaked information from human behavior
1:21:59 - Three laws of robotics
1:26:31 - Book recommendation
1:29:02 - If a doctor gave you 5 years to live...
1:32:48 - Small act of kindness
1:34:31 - Meaning of life
Hi, Lex. Though it's not exactly the type of guest you usually have, would you consider interviewing someone that's doing ageing research? It could be someone like David Sinclair, Aubrey de Grey or someone from Calico.
It's a fascinating subject and you could perhaps discuss with them how AI can help towards their goals.
@@lsilvap01 Sinclair vs de Grey or bust
She was my CS 188 Professor. An absolutely amazing teacher!
So nice to see a Romanian on the podcast. There is a lot of talent in the country, but they mostly leave because both the market and academia fall well short of the opportunities that exist in the USA.
@@blanamaxima you mean Bucharest, unless you were referring to the capital of Bulgaria. different country :P
@@ggrthemostgodless8713 I talk like that as well and I have never set foot in the USA.
Excellent talk... all the Romanians I know and have worked with speak flawless English like Anca. Her students are lucky to have a professor so passionate about her field of expertise. I've studied theories of multiverse(s) with super mentors, and the theories will continue to confound/perplex/overwhelm me and my limited cognitive processing abilities.
I have been following her work at the Interact Lab and absolutely admire her research in "Theory of Mind" models. Thanks a lot Lex.
So great to see Anca, and for Lex, this channel is so amazing to watch. Hello Anca, and I'm glad to see passionate people like you at the top.
I loved the fact that she watched "The Good Place"... This interview is so energetic and inspiring... thank you a lot for these interviews, much appreciated.
he wasn't kidding about her energy :P
right?! I was like "meh" when he said that, like he was just being nice, but as soon as she started talking I was like "wait what?" and suddenly all my neurons were pointing at the screen
Lex I am so addicted to the content of your channel, simply amazing!!
I am just seeing that this episode was released at the beginning of the pandemic.... Very nice talk! I'm so happy to see a Romanian woman so smart and accomplished :-) Happy New Year and a happy 2021! Happy New Year to all!
I could listen to this clever lady all day. Well done Lex, great interview (from lockdown New Zealand).
Good talk. I like her spirit, love and attitude towards the topics.
Great discussion.
i love her response to the meaning of life question. We are so small, therefore worrying about anything but our local situation is just ridiculous.
Brilliant person.
Thank you so much for the conversation. Keep them coming.
Hello Anca! So very proud to see you here on Lex's show. As usual, Lex, you are just great!
I love how that book sitting on my shelf keeps popping up in these podcasts.
(It did spend time in an open fashion, with those letters projected on my retinas)
I have been an Anca Dragan fanboy for quite sometime now. Thank you, Lex!
Congratulations, Anca!
Thanks Lex, great podcast. I've been following you for a long time due to the interesting topics you bring to the public.
All episodes are huge, so interesting and fun to keep up with all these brilliant minds. I usually play WoW while I listen to a podcast; this episode I had to replay tons of times. Tons of info, good stuff. Again, it's always great stuff, but this time I really had to try to keep up. Love you Lex, thank you for doing these.
It's better to listen to this type of podcast rather than wasting time on random videos 👍
Right out of the gate, this is sincerely, very likely, one of the most engaging, captivating, and fascinating interviews I think I’ve ever seen; and (for what it’s worth...) this ought not flatter, but might rather serve to impress, encourage, motivate, and inform both Lex and Anca, as I can confidently assure you: this is no small feat. I’ve done my best to consciously avail myself of a wide variety of valuable and constructive conversations across many subjects. This interview is fascinating on so many levels... and has especially got me understanding in new ways the powerful and innate tendency for humanity to project... thus coloring and creating the details of the world they live in... / ok..... ANYWAY..... (I just have to say it: LEX,... bless your heart, brother. I swear, I’ll be damned if you aren’t just about the most sensitive and unapologetically tender soul out there, in the field and on the academic/AI scene, doing what you do. That especially vulnerable and sacred place within your spirit very much shapes and informs your questions in the most unique and dynamic way..... It’s priceless....
Wow!
This one is really exciting.
CLEARLY there is so much discussion material between you both that repeat interviews in the future are a must!
From robots to the meaning of multiverse - a very engaging and insightful interview.
People in general: "Robots are scary"
Lex: "We need robots to threaten us"
hahaha, spicy!! hahaha
Oh, so we are not alone after all! I love her. Keep up the good work Lex!
love her energy
Thank you for this wonderful talk!
I would absolutely love to see on this channel any of:
Carver Mead (Caltech)
Daniela Rus (MIT)
Ioan Opris (Miami)
Howard Newton (MIT, Oxford, Sorbonne, Washington)
Giacomo Indiveri (Caltech, ETH Zurich)
Henry Markram (EPFL)
Karl Friston (UCL)
Rodney Douglas (Caltech, ETH Zurich).
Keep up the great content, Lex!
Who's catching up on Lex on their staycation?
I found myself looking forward to Scientists on JRE. This is great because every guest is brilliant.
Cool, and now you can say you've interviewed A Dragan and lived to tell the tale.
Love upbeat talks about complex things. They make mundane information interesting until you can piece the whole thing together and understand the whole of the idea. Thanks for your insane drive and innovation to better understand the universe.
For some reason I found myself enjoying this conversation at both full speed and half speed. Half speed at around 42:00 was just hilarious to listen to.
Hi Lex, your content is amazing. Can you bring in Terence Tao please!!
Just delightful! Thanks to both of you.
Great guest
28:00- 30:00 Truly deep!
Thank you Lex and Anca for putting this together. This podcast was particularly inspiring.
In particular my attention was caught around 21:00. I was playing a game called 'The Bridge' (while listening), in which you play as Sir Isaac Newton and try to solve complex puzzles involving shifting gravity. The idea of 'intuitive physics' directly connected to my experience at that instant, as I must make guesses and run small physics experiments to progress in the game.
If you are reading this Lex, I recommend checking out this game.
Wow, this guest is so smart and a great conversationalist... Surprised she is also so young and cute and so upbeat; she is everything you could want in a friend and colleague. I wish I was smarter so I could work with her lol. Really enjoyed her points on robot and human interactions in everyday scenarios, and I am glad to have someone so bright with the foresight to work on this very important, although at this stage maybe under-glorified, aspect of AI. I hope to see her on this podcast again and others.
Regarding the information that a self-driving car needs to change lanes: is there a correlation between the distance between cars, the amount that distance is lengthening or shortening over time, and the likelihood that the person will open a space for you to merge if you begin to do so? Obviously you could just find out by the method described in the podcast, making a high-probability solution moot compared to the communicate-intent-and-cause-action solution. The action being the other driver communicating their intent by changing their behavior, slowing down or speeding up. As long as they move far enough away from a car that is in front of or behind them, a merging opportunity has been created.

If you can't cause an action, I suppose you move to the next driver, basing your decision on which driver has the highest probability of affording you a merging opportunity. I guess you probably would pick the first merge-driver candidate on those criteria as well. So if there are no gaps wide enough to safely merge, and no one is moving out of the lane you intend to merge into, one must cause action by communicating intent, by swerving a little to simulate merging, to create a merge opportunity, by way of the other driver communicating their intent via speeding up or slowing down. As long as their intent is to not let you in front (speed up) or to let you in front (slow down), problem solved. If they intend to not let you in either behind or in front, you just have to move beyond their area of influence. That area could be really big if they are really crazy and cause a huge accident or something like that, to prevent you from merging hahaha (not super high likelihood). OK, enough brain vomit lol.
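To make the brain vomit a little more concrete, here is a rough Python sketch of that gap-scoring and nudge-to-probe idea. All the feature names, weights, and thresholds are made up for illustration; this is my own toy, not anything from the episode or any real planner.

```python
# Hypothetical sketch: score gaps in the target lane, merge if one is already
# safe, otherwise "nudge" toward the best candidate to signal intent and watch
# how the other driver responds. Thresholds and weights are arbitrary.
from dataclasses import dataclass

@dataclass
class Gap:
    size_m: float       # current longitudinal gap between the two cars
    growth_mps: float   # rate the gap is opening (+) or closing (-)

MIN_SAFE_GAP_M = 8.0    # assumed safe-merge threshold

def merge_score(gap: Gap) -> float:
    """Heuristic: bigger gaps that are opening are more likely to let us in."""
    return gap.size_m + 2.0 * gap.growth_mps

def choose_action(gaps: list[Gap]) -> str:
    best = max(gaps, key=merge_score, default=None)
    if best is None:
        return "stay"
    if best.size_m >= MIN_SAFE_GAP_M:
        return "merge"                    # gap already safe, just go
    if merge_score(best) > 0.5 * MIN_SAFE_GAP_M:
        return "nudge"                    # communicate intent, re-read response
    return "try_next_driver"              # move on to a likelier candidate

print(choose_action([Gap(5.0, 1.5), Gap(3.0, -0.5)]))  # -> "nudge"
```

After a "nudge", you would re-measure the chosen gap: if the other driver slows and the gap grows past the threshold, merge; if they speed up and close it, fall through to the next candidate.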
the energy is infectious
Another great one!! One way to think about the meaning of life would be to ask a different question: what would be the purpose in life of a totally sentient robot?
she's amazing
Wow, the book that I have half read and sort of suspended for a while was mentioned near the end. Very nice and informative talk.
I'm so proud, she's Romania :)
Romanian*
From
Lex, here is a suggestion: pose a few questions from the standpoint of the human condition 200 or even 500 years ahead.
She's so beautiful. Love her energy and enthusiasm!
I didn't finish the whole episode yet, but I just wanted to write a comment while it's fresh in my mind. This whole negotiating of the human intentions and playing off human reactions in order to gain something, like getting home as fast as possible, will possibly lead to the AI version of road rage.
Wish I knew how to give kudos on Google Podcasts, where I normally enjoy the show.
Good insight about developing advanced products and services for consumers... also useful scientific AI details.
@59:50 when she spoke of distribution, she meant "out of the factory/box" distributions. So if I bought a small robot for home and my child ran and jumped over the robot, that would likely not be in the distribution in their current testing.
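To make "in the distribution" concrete, here is a minimal sketch assuming a toy two-feature training set and an arbitrary z-score cutoff. It's just my illustration of the idea, not how their testing actually works.

```python
# Hypothetical illustration: flag observations far from the training data as
# out-of-distribution using a per-feature z-score. Features and thresholds
# are made up (say, obstacle speed and obstacle height).
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(loc=[1.0, 0.2], scale=[0.3, 0.1], size=(1000, 2))

mean, std = train.mean(axis=0), train.std(axis=0)

def is_out_of_distribution(x, threshold=4.0):
    """True if any feature lies more than `threshold` standard deviations out."""
    z = np.abs((np.asarray(x) - mean) / std)
    return bool(np.any(z > threshold))

print(is_out_of_distribution([1.1, 0.25]))  # typical obstacle            -> False
print(is_out_of_distribution([3.0, 1.5]))   # child jumping over the robot -> True
```

Real systems use far richer density or novelty estimates, but the point is the same: the child's jump lands well outside anything the factory testing ever covered.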
Robot interviews robotics engineer
That was funny the first 100 times the joke was made. It's getting old, now.
@@Garium87 beating a dead horse is funny in its own way
the first loving robot
oh my god the proposal !!
I think the real problem with learning is in our assumptions. If I think I have the truth, there is no point in looking for it anymore.
Then I get it wrong and realize that my idea of the truth, of what it really is, was only a first approximation that I must continually keep refining.
In the case of autonomous vehicles and robots that work with humans, there must be basic training, at a level of behavior that is acceptable from the point of view of safety and interaction with the environment. From that point on, the robot or system must be able to continue learning for itself, regulated no longer by data, but by principles, so that it can make its own decisions.
In the case of the risky attitude assumed by each pedestrian when crossing a street, in which you assume that nobody wants to die (the chicken attitude), it is wrong to assume that nobody wants to die.
Freud spoke many years ago about the death drive, which often induces completely irrational and unexpected behavior in humans, including the hidden desire to live at the edge between life and death, and sometimes the desire to die.
Yep, 100% Romanian :)
If not for the name in the title, the hand gesturing would definitely have given her away :)))
#felicităriAnca, #goRomania
Cheers Lex.
1:30:00 Poor Lex, one of those days, huh
Looks like she loves speaking about this and was just waiting for someone to ask her a bunch of questions so she could go on talking about AI :)
Molodec (well done). Long talk but interesting conversation!
She's the Max Tegmark of robotics
Super fun interview. Thank you. My amateur comments:
1. How would you create a robot guided by the kindness reward function?
2. Has any autonomous driving system modeled drunk human drivers, and whether they had been drinking single malt Scotch or California pinot noir?
3. Could an advanced AI teach us how it thinks (can we teach a dog or cat, or even a roommate, how we think)?
4. Would people be willing to give up their privacy on their cell phone or wearable device to the extent of allowing a central “ground traffic controller” to know their location and velocity, including times when they were walking or riding a bicycle, in exchange for a reduction on their life insurance premium?
Thank you. William L. Ramseyer
I love quality content. It's too bad this type of content isn't promoted by YouTube.
Related infinite state spaces connected by short family timelines.
God bless U and thank U
That's not a question for her! 1:34
Liking a video is a reward. Highlighting and liking a comment is a reward. Is it still considered a reward if the thumbs up on the video is taken back but the thumbs down was never pressed?
It's good to be early
the first robot with attitude shall be named LEX 01.
notigang
1:31:00 I agree. We as humans, I feel, are really only aware that someday the life we are living, in the bodies we exist in, will end. Other beings on our planet, I believe, do not have that awareness. Not to say our spark of life or energy doesn't go somewhere else after our body stops functioning, but the experiences and perceptions we have in our life in these moments can truly never be experienced again in the same way or from the same perspective. I do not think that I have ever really delved into that before, which is interesting. I don't think we fear death itself but the end of the story we are telling to ourselves and also projecting out into the universe. I hope I'm making sense; non college graduate, nothin' special, trying to articulate something as deep as the subject matter without evoking confusion... *sigh*
I believe the philosophical idea that we only truly appreciate life because it ends, isn’t particularly logically sound, and is at least in part a story that we tell ourselves to avoid the existential horror of the dying of the light.
I’m very inarticulate generally, and on this specifically, and as a philosopher I’m worse than amateur, but I have read a bit about it (and seen the Good Place!).
As I understand it, the argument goes something like this: the fact that life ends is what gives it meaning. You might think you want to live forever, because you could become, say, the world’s greatest concert pianist, if you had an infinite timeline. So you would become the greatest pianist, and visit the pyramids, and climb Everest, and read all the classics, but, sooner or later, you will have done everything there is to do, and it will all become wan (the adjective, not the network topology) - a chore, meaningless.
So even if you agree that “for every man his death is an accident and, even if he knows it and consents to it, an unjustifiable violation” (Simone de Beauvoir), still, without death, for the atheists and the religious alike, there is no meaning.
They say that “If you change your mind, you change the world”, and if you believe that life ends with the destruction of your physical body, then it is also true that “If your mind is destroyed, the world is destroyed”. But you should wish it, because otherwise that world has no meaning. It is a dilly of a pickle.
I’m not sure I agree with this idea that we shouldn’t wish for immortality for this reason. Or at least, I don’t think we’re objective enough to make a reasonable call on it.
See, the thing is, being a concert pianist and visiting the pyramids etc, I can understand that eventually, over centuries, those things will dull. BUT, if you never, ever die, then guess what? You could learn all the secrets of the universe. All of them. All. Of. Them.
And when you have arrived at everything that can be known about the universe, by that point you will be so fundamentally changed that trying to extrapolate out your philosophy of immortality from where you are now, to knowing all there is to know about the nature of reality, is something you can’t possibly make a reasonable judgement on. I can’t see how it’s possible.
So, immortality it is for me, please.
That's the magic of AI, knowing it's all math, code and computing, yet there's something beyond the grasp of human comprehension.
I would like it if Alexa told me the truth about myself: "Listen, you're a selfish asshole. Here are all your faults..."
OMG My new virtual girlfriend. She is awesome!
Mr. Hale, I have always wanted to ask a learned man what signifies the readin' of strange books?
If we can't figure out what other humans want, how is a robot going to be able to do it? Maybe we should do what the robot wants.
AI control with human supervision is a bad idea since the human may misinterpret what the AI is doing and intervene belatedly without having been planning and thinking it through ahead of time.
She is Romanian. People born there are just quick in the mind.
Let me guess. You are Romanian.
Penicillin and airplanes. Did you know that there's a patent out there for a certain frequency of light that improves vegetable productivity by a proven 75% no matter what the weather? And it was buried because industry threats were VERY SERIOUS? I kid you not! A Romanian invented the procedure and he almost got killed for it:
I'm gonna have problems for even talking about it.
@@bucataru1977 which Romanian invented Penicillin?
That's what a Romanian would say lmao. Everything is better over there, which is why most of you partake in a mass exodus to greener pastures in the EU.
It's hard to understand humans because we don't naturally look at things objectively. And with a robot, you are essentially telling it to make sense of an object, the human, that does not understand the world or itself overall. Maybe the best thing to do would be to get to that level of objective understanding of the world and ourselves, and in turn there may be some sort of universal language for this process of both perception and manipulation that translates accurately, or accurately enough, to a programmed object made of different materials, allowing it to have at least shared perceptions and manipulations of the outside world. It may not be that robots have a difficult time understanding us, but that we make it difficult to translate who we are and what we understand. So how can anything then understand us? Our motivations, intentions, or purpose?
Ontology. In order to move a glass, a human needs to know what a glass, movement, hand, goals, hopes, dreams, aspirations, the universe, and the multiverse are.
Kidding, but the point is, trying to encode a human-level ontology into a reward function manually is doomed to failure. The reward function must be self-altering in accordance with as broad an understanding of the world as possible. In order for a robot to act as a human, it must have the average human's understanding of the world.
In order to figure out what to do in a new situation, it would have to know or estimate what the new objects are, what their mechanics might be, and what the sequence of actions will do to the world and the humans now and in the future. Anything less than this will be subhuman in capability.
Whether this can be done by training ML models externally and transferring domain knowledge ontologies to robots, I don't know, but I highly suspect it's required from some point on, or we will NEVER have super-human capabilities in robots.
@@brianarcher8339 While everything you say is true, most people have some common corpus of things they know about the world and its mechanics, which is common enough that they can act on it.
Sure there are loads of variations but not to the point that the concepts are useless.
In fact, artificial neural networks learn in much the same way: neural weights are randomized at the start, making each copy of a neural network learn slightly different things and aspects of a whole.
It nevertheless doesn't make the process superfluous. Sure, these discrepancies are higher in new situations where people have to infer a lot of things.
But while an "average" human ontology isn't required, in order to exceed certain performance characteristics, it will be hard to dismiss it.
Much like, I suspect, consciousness itself. It probably isn't required for intelligence, but you would run around in circles trying to devise mechanisms equivalent in performance, so you might just as well endow an AI with it.
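A tiny self-contained sketch of that randomized-initialization point, assuming a toy one-hidden-layer network and a made-up target function (my own illustration, nothing from the thread): two copies that differ only in their random seed fit the same data to similar but not identical functions.

```python
# Hypothetical illustration: two identical tiny networks, different random
# seeds, trained on the same data, end up with close but not identical fits.
import numpy as np

def train_tiny_net(seed, x, y, hidden=16, steps=2000, lr=0.05):
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0, 1, (1, hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(x @ w1 + b1)                  # forward pass
        pred = h @ w2 + b2
        err = pred - y                            # gradient of 0.5 * MSE
        grad_w2 = h.T @ err / len(x); grad_b2 = err.mean(axis=0)
        dh = (err @ w2.T) * (1 - h ** 2)          # backprop through tanh
        grad_w1 = x.T @ dh / len(x); grad_b1 = dh.mean(axis=0)
        w1 -= lr * grad_w1; b1 -= lr * grad_b1
        w2 -= lr * grad_w2; b2 -= lr * grad_b2
    return lambda q: np.tanh(q @ w1 + b1) @ w2 + b2

x = np.linspace(-2, 2, 50).reshape(-1, 1)
y = np.sin(2 * x)                                 # toy target function
net_a, net_b = train_tiny_net(0, x, y), train_tiny_net(1, x, y)
print(np.abs(net_a(x) - y).mean(), np.abs(net_b(x) - y).mean())  # each copy's fit error
print(np.abs(net_a(x) - net_b(x)).max())          # nonzero: the two copies differ
```

The two copies land on different internal weights and slightly different predictions even though the data, architecture, and training procedure are identical, which is the point about randomized starts learning "slightly different things and aspects of a whole."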
❤
She is the female version of George Hotz.
Fizzy.
finally a girl nerd
Even a woman who's clearly very apt at math and CS chose to work on the more "human" part of robotics (human-robot interaction). I guess it's due to male chauvinism and closed doors, instead of because she has different preferences than most of her male colleagues.
:)))))
You two would make a cute couple. ;)
She is married.