The buttom line is to use AI as a supporting tool and not as a final decision maker.
. . . buttom line? bottom line!
@Dominik Rohde : Even that can be *(and often is)* falsified and programmed with extreme bias.
@@aylbdrmadison1051 Indeed, and for that reason a human needs to be the final decision maker, in order to go back and change the input (learning data) if needed. AI needs to be treated like a child: correct the learning if the outcome is not the desired one.
@@dominikxxxxx9642 The child comparison is such a good one! 🙏
But damn, it sounds expensive in human resources to have to foster digital children too 🙈
No, we should not let AI decide! How is this even up for consideration? The possible, and plausible, dark side is way too scary!
I mean, self-driving cars have tremendously smaller chances of crashing themselves than humans do, statistically speaking. The thing is, there are numerous fields where AI makes much better decisions than us, but there are also fields where the roles are reversed. The key is to only give AI tasks that it does better than us, and where an AI-made decision is undeniably better than a human-made one. Also, I believe important decisions can be studied by AI, but actually doing it should always be a human decision.
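To make the "statistically speaking" part concrete, the comparison boils down to a crashes-per-mile calculation. A minimal sketch, using entirely made-up figures (these are not real crash statistics):

```python
def crashes_per_million_miles(crashes, miles_driven):
    """Normalize crash counts so fleets of very different sizes are comparable."""
    return crashes / (miles_driven / 1_000_000)

# Hypothetical figures, for illustration only.
human_rate = crashes_per_million_miles(crashes=4_800, miles_driven=2_000_000_000)
av_rate = crashes_per_million_miles(crashes=600, miles_driven=500_000_000)

print(human_rate)            # 2.4 crashes per million miles
print(av_rate)               # 1.2 crashes per million miles
print(av_rate < human_rate)  # True
```

The point of normalizing per mile is that raw crash counts are meaningless across fleets of different sizes; any real comparison would also need to control for road type, weather, and disengagement reporting.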
I do not agree. Any way you look at it, you're crossing a very dangerous line.
Human Above AI. Period.
Let's start by examining the assumptions that underlie AIs. First, AIs are built upon assumptions about how the human social world works. Many of these assumptions are untested, and many of them are wrong. Second, statistical processes assume that the models and data we have, and the relationships between the data, are valid. The statistical processes only work if the data actually reflect the population they are supposed to represent. Third, AIs are assumed to have no faults in their predictive functions. However, to date there has been no technology that was faultless.
The AIs we are developing have been shown to be biased in a number of ways. To depend upon AIs without humans thoroughly vetting the results is foolish. The biggest problem I see here is that after enough experience with AI coming up with the right answer every time, we are likely to become lax in critically analyzing the AI's answers in every subsequent case.
Moreover, there's a problem with outliers -- those people who don't conform to the profiles AIs develop. This goes back to the assumption about how the human social world works. There are people who don't neatly fall into any statistically significant group, and AIs will tend to ignore them -- until the AI is given a problem involving those people. Without sufficient data, AIs can't work out solutions that actually work for those outliers. In that case, the AIs will probably default to solutions worked out for the people most like the outlier they can't figure out -- and, of course, that probably will not be a good fit.
Given the problems with AIs, the assumptions they're based on, and the datasets they operate with, I'm not very confident that they will solve complex problems better than humans -- they'll only arrive at the wrong answers faster than actual people. Fortunately, having just turned 63, I don't anticipate living through the worst aspects of our transition to AI-driven futures.
But I wish the rest of you the best of luck.
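The outlier failure mode described above can be sketched in a few lines. This is a toy "profile" model (a nearest-centroid lookup, purely illustrative, with invented numbers): it can only assign whichever learned profile is closest, and has no way to say "no good fit."

```python
# Toy model: profiles learned from two well-represented groups.
# All coordinates are invented, for illustration only.
profiles = {
    "profile_a": (1.0, 1.0),
    "profile_b": (5.0, 5.0),
}

def nearest_profile(person):
    """Assign a person to the closest known profile by Euclidean distance."""
    def dist(centroid):
        return sum((p - c) ** 2 for p, c in zip(person, centroid)) ** 0.5
    return min(profiles, key=lambda name: dist(profiles[name]))

outlier = (10.0, 0.0)  # resembles neither group the model was trained on
print(nearest_profile(outlier))  # profile_b -- assigned anyway, despite the poor fit
```

A typical member of group A still maps correctly (`nearest_profile((0.9, 1.2))` gives `profile_a`), but the outlier is silently forced into the nearest available box -- which is exactly the "default to the closest profile" behavior the comment warns about.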
John you are on point👏🏽👏🏽
No to a technocratic elite making choices for people based on AI. Instead, we can give AI-derived big-data insights to autonomous individuals, so they can make better-informed and freer choices.
I absolutely agree with your first sentence. But biases *are* being programmed into A.I. Even just at a collection level the biases of political parties, companies, corporations, and programmers are already controlling us to a large and very real extent.
Discard AI now. We don't need AI.
We are humans. We need to show genuine interest all the time.
You can't. Your race is predestined to give rise to AI.
Only human extinction can save you now.
Ma'am, you are growing old, and most probably you will want to make your life blissful in the future.
As ignorance is actually bliss
If somehow a high probability were found, you could approach it delicately rather than with a SWAT team. Let's say the indicator is a lot of shouting in the family; then they could be offered some sort of support for the known elements, rather than making dramatic statements about the future and putting everyone in straitjackets. It's not an issue.
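The graduated response described above amounts to a simple threshold policy: a risk score maps to a support tier, never to enforcement. A minimal sketch (the tiers and cutoffs are hypothetical, invented for illustration):

```python
# Hypothetical policy: map a risk score in [0, 1] to a supportive response.
# Thresholds and tier names are invented, for illustration only.
def support_tier(risk_score):
    if not 0.0 <= risk_score <= 1.0:
        raise ValueError("risk_score must be in [0, 1]")
    if risk_score < 0.3:
        return "no action"
    if risk_score < 0.7:
        return "offer voluntary family support"
    return "priority outreach by a counsellor"

print(support_tier(0.2))  # no action
print(support_tier(0.5))  # offer voluntary family support
print(support_tier(0.9))  # priority outreach by a counsellor
```

The design point is that even the highest tier is an offer of help, not a coercive intervention -- the score only decides how proactively support is extended.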
Well said.
I use TED Talks to improve my English to improve my videos. Who else does the same?
I'm against minority report concepts.
So there is no perfect solution to societal and social problems via artificial intelligence. Got it. Micro implementation and boxed case studies make sense. However, I firmly believe that the power structures of government and the military-industrial complex, and general deep-state intelligence and rule, make it near impossible to stop the weaponization of any technological development. In short, each narrow innovation will be analyzed for "defense" applications, "regulated", and structured to serve whatever hierarchy we have in place. I love the idea of disruptive innovation and tech . . . But prioritizing truth and transparency is near impossible. All nations and peoples have incentives to embrace and perpetuate secrecy and privacy, independence and boundaries. We operate in a very small window of autonomy as it is. Universal efficiency renders humans, our bodies, our brains, our biology in general, obsolete. We are pursuing Utopia via perfection, but we are imperfect.
So, if we start with a jumbled mess, not from scratch, then it is the uprooting of old, persistent and prideful systems and programming, yes, misinformation and trickery, which need upheaval. But how? Neuralink is a step in that direction. No thought hidden. As it is, we are working toward being a brain of sorts. All humans, continuously connected, no lies, no thought private, no incentive to destroy and steal and dominate . . . Only to contribute. But we don't know how to do that. Biology demands that we serve our own desires and needs.
Ultimately, ELIMINATION of privacy is the state's goal. Or whoever is in power. Not that entity's privacy, just everyone else's. Would be best if this ubiquitous problem-solving, sleepless, faultless machine could be implemented with the motive of universal transparency whilst retaining autonomy. That is something we do not understand how to develop.
I love the possibility of Justice and peace, but a surveillance state leads mostly to abuse of power. We do need transparency for governance. But the governing must be transparent themselves.
Do you want people to know all the details of the last time you masturbated, for free and on recall to anyone at any time, forever? Our desire for privacy stems from fear. How can we eliminate fear when we are threatened? Privacy has been stolen from all of us in the name of a better planet, but we all know it is merely to usurp as much power from the less fortunate as possible. Those with good intentions are being deceived.
So, I "liked" your comment because it was thoughtful and made some very good points.
But I must bring your attention to a major flaw in it. You pointed out (and rightly so) that _"prioritizing truth and transparency is near impossible."_ But then you go on to say that Neuralink is a step in the direction of _"uprooting of old, persistent and prideful systems and programming, yes, misinformation and trickery, which need upheaval."_
And yet Neuralink will be weaponized too, as you yourself said: _"I believe firmly, that the power structures of government and military industrial complex, general deep state intelligence and rule, make it near impossible to stop the weaponization of any technological development."_
@@aylbdrmadison1051 I did not explain why I said that. It seems that the only way to save humanity is some kind of bizarre access to shared experience that isn't tainted by the detrimental parts thereof. Yes, Neuralink will be weaponized, but hopefully the most powerful weaponization will be one for good, rather than a transhuman atomic bomb, in a manner of speaking. I think only a machine can solve our problems, but how can we avoid losing our selves? So that is what I mean. Does that make sense?
And I ran out of time in that moment and simply posted. Thanks!
Bring it on already
In relation to this, I invite you all to watch an anime called Psycho-Pass.
Let me know how it goes. :)
You've got a point
When the AI nukes us all, I'm the guy slowly clapping...
🦋 It sounds a little bit like Minority Report. In my own life, if I worried about meeting an imaginary line, I would need to feed my son fast and processed foods to meet some line set by doctors (um, nope)
The answer is no. Because whoever controls the AI will be able to decide what is right and wrong for you. No thanks.
Interesting!
The silence emanating from the audience indicates the impact these ideas are having on them. They are hanging on every word because the topic is important to them.
Please always add subtitles to the videos.
Helpful
It is the information era version of death masking.
1:20 Yes
Final critique: this was useless. A philosopher who doesn't break down his arguments... I'd like to have a long discussion with him, to explain to him what a philosopher does.
Does anyone remember the movie "Minority Report"? It is scary!
Asimov was a prophet
Propaganda at its finest.
"You're gonna lose your rights and regret it... but just get on the train"
After years of waiting, nothing came
Ooo and heeer
xd
what
@@njdotson Argentine jokes, you wouldn't understand
@CrysøK : Of course they wouldn't understand if you are so rude and/or lazy that you don't even bother to make any attempt whatsoever to answer their *one incredibly simple question,* and instead just leave a snarky (and empty) comment.
*Sponsored by Agent Smith*
First I asked him about
Humanity evolves through moral and ethical struggles. What will happen if Siri takes care of it for us?
Yay!! I'm first!!
I'm for AI 👌😊😎
Then you are for being programmed to not think for yourself.
Is this guy's name really Hannibal⁉️ That's very unfortunate, isn't it...?!
I would seriously have thought long and hard about whether or not I should change my name by deed poll on reaching the legal age 🔞 if I were in his shoes. 👟 👞 🥾 🥿
🙈🙊🙉🧟🧟♂️🧟♀️
First dogville
🙌🏾✨🙌🏾✨🙌🏾✨🙌🏾✨🙌🏾✨🙌🏾✨