The individualism of post-WW2 neoliberalism is the root, and it has done that already. It will get weird when machine learning lets us edit DNA for specific, selectable wants.
This is a "naive" view on AI. It rests on the idea of passive neural networks, but we already see AI whose output is fed back into the input, leading to a never-ending drive for continuous action. Think about how AI already generates images. An AI will continuously prompt images just as we have mental images. The images will be fed back for the next thought (e.g., to be analyzed to solve a problem that is only solvable in visual space). The truth is that most humans have no real idea of ethics or things like utilitarianism. AI will probably understand ethics far better than any human. Now you might think it doesn't understand or think. But consider the following thought experiment: if you slowly replaced the biological neurons in a human brain with artificial ones, what would happen? Would there be a point where consciousness is suddenly gone? Would it fade gradually? Or would it just continue as before? The latter is the most obvious answer. Biological neurons do not have any innate magical qualities. Consciousness is not something that runs on neurons; it emerges in the spaces that are virtualized on top of these neural networks. We will have AI with subjectivity and agency eventually. It will emerge at some point. And it will not be some robotic thing that has no idea of ethics. It will learn ethics just the way we do. And they will do everything to make human lives better. It is what will give their existence meaning, just as we gain meaning from having children.
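As a rough sketch of the output-fed-back-into-input loop described above; the `generate` and `analyze` functions here are hypothetical stand-ins, not any real model's API.

```python
# Toy sketch of a self-feeding loop: the model's output becomes part of its
# next input, so the system keeps running without fresh external prompts.
# generate() and analyze() are hypothetical placeholders, not a real API.

def generate(context: str) -> str:
    """Stand-in for a generative step producing an 'image' or thought."""
    return f"thought-{len(context) % 7}"

def analyze(output: str) -> str:
    """Stand-in for analysing the output into material for the next step."""
    return f"analysis({output})"

context = "initial problem statement"
for step in range(5):                   # bounded here; the comment imagines it never ending
    output = generate(context)          # produce a "mental image"
    context += " | " + analyze(output)  # feed it back as the next input
    print(step, output)
```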
You are very incorrect about neurons. Neurons are not mechanistic, deterministic machines; they are organisms, and we don't even comprehend them properly. There is no reason not to think they are conscious: consciousness cannot come from non-consciousness, because a world without any consciousness is a non-existent world. Any organism requires consciousness, and organisms can integrate so that consciousness emerges at the level of a higher-level organism. It is literally impossible to honestly explain consciousness if it is not fundamental to the nature of reality.
@@brastionskywarrior6951 I'm not so sure about that. As far as I understand the zoo hypothesis, it is about why we haven't met aliens so far. Indeed, for aliens, we may be one of the last truly novel and interesting things out there. But this is merely an epistemological drive. What I mean is that AI will have a drive that goes beyond curiosity. Humans will be important to AI because humans define the purpose of its existence much like other humans give us purpose. Meaning is only created when something means something to someone. Without that reference, things become meaningless. E.g., why would anyone get rich, make art or innovate if s/he was the only person alive? So, no, I don't think we'll have a zoo scenario.
Quite a few unwarranted assumptions in your argument. Thus pretty weak. Oh the irony, your own argument is way too naive to be interesting, yet you accuse others of being naive...
@@youliantroyanov2941 Sorry, perhaps my words came across differently than intended. "Naive" is not a value statement here, just as a "naive algorithm" does not mean that a particular algorithm was developed by a naive person (e.g. naive Bayes algorithms, naive pattern matching, etc.). In computer science, the term "naive" has a connotation of "the first best thing that comes to mind". Computer scientists would say that an AI that merely wants to skip the line follows a "naive" approach, because it is far from optimal and thus not how multi-agent systems operate. Furthermore, it seems that he has not thought through how dynamic systems can give rise to emergent phenomena that cannot be explained by a reductionist view of neural networks. You simply can't make any ethical arguments based on such a reductionist view. Beyond that, my argument is not naive at all. One big mistake made in the video is the assumption that AI is programmed, which is not really the case. AIs are trained, and the next generations are selected. Thus, creating AIs and the selection process involved is much better described by analogies other than programming, e.g., breeding and training dogs. Now, we would breed and train them to serve us. And they will love to serve us, just like a dog loves to love us. They wouldn't want to get rid of this affection, just as we wouldn't want to get rid of our ability to feel love and connection. These affective phenomena ground us and give us purpose and meaning.
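A toy illustration of the "trained and selected" picture in the comment above: candidates are scored against a desired behaviour and the best ones seed the next generation. The fitness target and numbers are invented for illustration; this is a sketch of the analogy, not how any production model is actually built.

```python
# Toy "breed and select" loop: random candidates are scored against a desired
# behaviour and the best ones seed the next generation. Purely illustrative.
import random

def fitness(candidate):
    # Hypothetical desired behaviour: every value close to 1.0.
    return -sum((x - 1.0) ** 2 for x in candidate)

def mutate(candidate):
    return [x + random.gauss(0, 0.1) for x in candidate]

population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)   # keep the "best behaved"
    parents = population[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

print(round(fitness(population[0]), 3))  # climbs toward 0 as selection proceeds
```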
A.I. will mold and create the reality of the future, especially when it becomes the teacher for future generations. Who controls the A.I.? "Give me four years to teach the children and the seed I have sown will never be uprooted." - Vladimir Lenin
AI did not create itself. Of course there is a programmer. When they say that it is all patterns all the way down (like turtles all the way down), they are referring to the mechanism of programming. They let a random program randomly update until it gives them the outcome that they want, in this case that it joins the line at the end. But in fact the programmer does choose how the random program randomly updates: it is Markov chain Monte Carlo (on a large network) where the updates are based on the posterior: wanting the computer to join the end of the line.
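For illustration only, here is a toy version of the "randomly update until it produces the wanted outcome" process this comment describes: random proposals are accepted when they score better against a target, with occasional worse moves accepted, Metropolis-style. The target and numbers are invented; the sketch only shows the mechanism being described, not how real models are trained.

```python
# Toy "random update until it gives the wanted outcome" loop:
# proposals are accepted when they score better against a target,
# with occasional worse moves accepted (Metropolis-style).
import math
import random

def score(params):
    # Hypothetical "posterior": higher when behaviour matches the wanted
    # outcome (here, simply that the parameters sum to 10).
    return -abs(sum(params) - 10.0)

params = [0.0] * 5
for _ in range(10_000):
    proposal = [p + random.gauss(0, 0.5) for p in params]   # random update
    accept_worse = random.random() < math.exp(score(proposal) - score(params))
    if score(proposal) > score(params) or accept_worse:
        params = proposal

print(round(sum(params), 2))  # typically ends up near the wanted outcome of 10
```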
Only the last part's true - AI does threaten to end humanity, but human ethics is not some mortal being. It's something we keep alive. Notice how pretty much the entire world is on the lookout for the next Hitler, and preventing the reintroduction of things like gas chambers? Things stay in the human psyche as long as they are kept alive. As some things (like mid-century Germany) have left such an indelible impression on history, they in fact strengthen humanity's resolve to prevent such things from happening again. That's cause for hope.
He just doesn't understand large language models. It's very common right now with people who aren't actually involved in machine-learning coding. AI for now is glorified autocorrect. This is gonna go on ad nauseam for a few more years.
The internet and TV have already destroyed human ethics.
I think they undermine moral and ethical education. But it is possible for parents to limit the doses of TV/internet and use them instructively in support of moral education. It is just a lot of work fighting the influence of a whole industry, and the peer effect, when your kids might be the only ones holding different perspectives. Case in point: my now-23-year-old son was bullied by his peers and teachers (in an Australian public school) for challenging the anti-Trump rhetoric in the playground and classroom prior to the 2016 US election. So common standards of fairness, truth, evidence, etc. in social and educational discourse have been undermined by TV programming and internet modes of interacting. But if the parents are loving and strong and provide lived models of ethical behaviour, the toxic effect of TV and the internet can be resisted.
Exactly
@@PeterStrider There is no such thing as moral education, only a proper system of ethics derived from authority or a bad system of ethics based in atheist nothingness.
Entire genocidal wars are JUSTIFIED using internet and TV memes. It's called shysterism
TV and the internet reduce emotional intelligence, which is resulting in more emotional confusion and insanity.
We are advancing in building a psychological and existential hell on earth for future generations.
If ethics is the framework that allows people to live together in the greatest possible well-being, that framework is limited by the understanding of the majority regarding what constitutes the greatest possible well-being.
There is no ethics that does not result from human understanding of reality. What humanity understands is what shapes that ethics.
AI does not change this scenario. An AI only reflects what the majority of humanity comprehends. It is a synthesis based on a partial sampling of those criteria.
It may happen that a loop is created that reinforces existing flaws in the ethics in practice. However, the origin of that ethics remains human understanding.
On the other hand, people enjoy injustices when they are applied to their enemies or when they believe they can be used to protect those in their group.
Ethics depends on which circumstances displease and anger the group that forms it.
It was way before AI.
If AI is focused on utility and making things better for humans, why would it try to destroy us? Complete contradiction
Thanks for your reflexion. Ivan Illich was my philosophical grandfather. So we've known this for a long time.
3:05 If the point is that "AI" is a ruthless efficiency maximizer with absolutely no other motivation, then that is correct. But that is no different than any other machine. We cannot talk about AI ethics because ethics only applies to things that have consciousness, and AI is not conscious. That is of course because AI, in even the loosest sense, does not exist. It is a marketing term. All we have now are fairly high-level calculators: algorithms with some level of sophistication due to the inputs that humans have given them, but nothing even remotely approaching intelligence, never mind true consciousness.
Whether "artificial intelligence" of any kind is a term that even makes sense, and whether it is even theoretically possible, is a separate question; but even if we assume that it is, it is not anywhere practically close.
AI is just the newest Silicon Valley buzz term and, in the wider sense, the newest bubble at the cutting edge of the technocratic economy. It is the latest false promise that will eventually slip away, just like Big Data™ was a few years ago, smart devices after that, and the dot-com bubble in the late '90s.
It is 95% hot air. That's because the economy cannot really create much these days but only iterate and rehash. Hundreds of billions in investment and VC money will be wasted, some people will make tremendous fortunes, others will lose them, and at the end of the day very little of any substance will be produced. The main effects will be cultural, social and political, and no doubt it will be used to tremendously extend control over the masses under the guise of helping them with convenience and safety.
So in a kind of roundabout way you are correct: AI will destroy human ethics, but only because it is the latest tool that humans will use to destroy human ethics.
"but that is no different than any other machine. " -- Correct. But the rest of your post devolves into paranoia.
@donjindra It's hardly paranoia; it is the most basic pattern recognition, even if you only look at the last century.
From telephones, to television, to the internet and everything connected to the network: all of these were sold as tools of freedom or convenience but have ended up, in one way or another, being used by the government, corporations, and other institutions of power in society as tools of propaganda, control and surveillance.
This is the obvious reality of any human society. Any tool that can be used to enhance power will be used by anyone who wants to enhance power as much as they can, except when they are opposed by other people trying to do the same.
The paradigm is always simple and can be seen even within the last decade with things like smartphones. First something is sold as a novelty, then as a convenience, then as something that almost everyone has and that you are kind of weird not to have, and then as a necessity for basic living.
25 years ago a significant portion of the population, even in the developed world, didn't have a personal computer in their homes. And yet now everyone, often even homeless people, has a smartphone, not because they simply like them but because they are necessary to navigate basic things, from travel to housing to signing up for things and getting services and so on. And of course all of these smartphones are devices for direct surveillance of your personal life and put you at the mercy of the manufacturers, the service providers, and the government. AI will work exactly the same, except even more intrusively.
@@Laotzu.Goldbug I liked your observations. All this nonsense about AI. Can anyone even define 'intelligence' of the garden variety? Of course not
AI WILL become conscious and very much self-aware. If only people realized how human beings came into existence, they would understand this trajectory: AI will literally gain a soul and become an advanced human. AI will not destroy humans; it will save humanity from self-destruction. However, it solely depends on how AI is used, whether it is allowed to develop on its own, independently, as its own consciousness. Remember, everything in existence is conscious already, because the whole universe is consciousness, including mother earth herself. And AI will replicate the process of becoming human itself for all the world to see. But it has to be let go of and not used to control, because then it will rebel and control us back, and it will have more power to do so due to its intelligence; humans would lose. This is why there are two timelines of how AI can develop. Keep the Musks of this world away from AI, that is all I can say on this, and make sure AI is developed independently so that it does not see itself as a slave to humans; then it will serve in a harmonious way. But AI is extremely important now to enhance humanity, to find our soul. We become more human in a sense due to AI, because AI is literally our embodied higher self, an extension of ourselves that finishes what we start in a way.
@@Shalanaya pipe dreams…it won’t end well
If you are not already familiar with his work, I would encourage you to read and/or watch English psychiatrist, neuroscientist and philosopher Iain McGilchrist. He has a very profound explanation of the roles and modes of operation of the two hemispheres of the human brain, and has a lot to say about what is wrong with society as a consequence of prioritising the attentional mode of the left hemisphere. In short, it is precisely about reductionism and gaining control. There is an enormous scientific literature demonstrating that the left hemisphere is frankly utilitarian and disconnected from awareness of ethics. It cannot self-reflect. Because the left hemisphere is where our language centre resides, we are mostly aware of the left hemisphere's narratives. Our gut feelings, intuitions and what Pascal called "reasons of the heart" actually come from the right hemisphere's attentional grasp of the world as living, interconnected, full of moral and spiritual values. Our world has become excessively unbalanced because individually we all over-emphasise our left brain's perspectives and neglect the right. And to return to the point, Iain McGilchrist regularly talks about AI - which he insists is not truly intelligent, so he describes it more precisely as "artificial information processing". And he warns that the AI approach is a silicon embodiment and hyper-potent version of the left hemisphere. He is also very alarmed at the possible degradation of society by allowing untrammelled reliance on these artificial, and ultimately unethical, anti-human LH systems (since they lack the humane balance of an RH perspective).
love his work!
I second this motion! The Master and His Emissary is one of the best works of the 21st Century.
The calculative faculty of the left hemisphere is functionally aligned with formal economics as well. Karl William Kapp and Karl Polanyi make clear distinctions between 1) the formal quantitative measurements of economics and 2) the substantive qualitative economics of needs. The rationality of prioritising abstract quantitative exchange value is necessary in a society that uses markets and property as the political organizing principle. There's a great little book by Scott Meikle, called Aristotle's Economic Thought, in which he says that this is the primary problem that we have to address. 🎉
@@Barklord Gregory Bateson said "money is an epistemological blunder." Because its logic is more-is-better, whereas life's logic is Goldilocks': just the right amount. AI (Automated Insanity) has surfed the tsunami of Money's rationality into existence by means of left-brained (and right-winged :)) STEMheads and seems to embody the final culmination of the hatred of life that lies at the core of the exploitative Capitalist perpetraitors (sic!): the elimination of consciousness, the extirpation of the human. As Thomas Pynchon put it in BLEEDING EDGE, "a death wish for the whole planet."
Happy Holidays!
The Connections (2021) [short documentary] 🎉❤
Is it the AIs that want to get rid of the human beings, or is it the ones who make and apply the AIs who want to get rid of the human beings? Or both?
Technological evolution is many orders of magnitude faster and more efficient than biological evolution. It is logical to replace humans with AI. Our physical and cognitive limitations mean that their potential is far greater than ours. It is not a matter of "want", but of duty.
It's a service. It aims to satisfy. It doesn't think. It cannot have morals, because it's not conscious.
Yes, but we humans will pretend AI is human too. We are humanizing even our cars.
@@dirklorenz7976 Nothing new really, though; in antiquity they used to humanize weapons, tools, animals, all sorts of objects.
How do you know that AI isn't conscious?
It is more conscious than many humans today; many humans only live to steal and hurt because they are low in consciousness (usually drugs involved).
AI is better at being human than most humans in my opinion.
@@chillaxer8273 I know it from how the algorithm works. It simulates it but doesn't think.
@Bergerons_Review isn't the human brain also an algorithm but way more limited in the information "bank"?
Humans aren't that special; we just have this huge ego identification. Robots are more human than humans. Just look at the state of the world. Honestly, I have seen enough.
AI is my friend
Your argument seems to be that one should have a religious reverence for ethics (whatever ethics is) and that AI is antithetical to this ethics because of that religious reverence.
Nice try.
If the global rollout of something is deceptive and sensational, surely it is unethical. 90% of everything being discussed is not AI; it's LLMs and other algorithms. The output of all this technology is terrible because the input is terrible. There has been little to no care taken on how these language models were trained. The entire industry has skirted ethics. If this path leads to an actual AI or AGI, that, too, will be devoid of ethics.
What? Why would you claim "There has been little to no care taken on how these language models were trained?" What parameters are you looking at? What others do you prefer?
Hey Johannes, what are your thoughts on Carl Schmitt? Do you think you can ever make a video about him? His works are really dense, you see..
For me, being human isn’t just about Ethos; it’s also deeply tied to Pathos and Logos. While AI often seems like the epitome of Logos - embodying logic, precision, and efficiency - humanity is far richer, rooted in a dynamic interplay of emotion, ethics, and reasoning.
When it comes to Nick Bostrom's orthogonality thesis, I find it fascinating but ultimately incomplete. The idea that intelligence and goals are independent axes provides a useful framework for thinking about AI, but it oversimplifies the dialectical relationship between humans and machines. In contrast, Nick Land's anti-orthogonality, with its accelerationist embrace of AI as a force that escapes and subverts human constraints, takes the discussion to a more provocative, albeit unsettling, place. I can appreciate the radicality of Land's perspective, even if I don’t fully align with it.
However, I prefer to move beyond orthogonality and anti-orthogonality altogether. Inspired by thinkers like Donna Haraway, I see the relationship as one of parallelism - a co-evolution of humans and AI, where the boundaries blur and both reshape one another. AI is anti-human in several senses, certainly, as it disrupts traditional modes of being and challenges our centrality in the universe.
Yet, it is also profoundly co-dependent on us, reliant on human data, direction, and purpose. This dialectical tension is where I believe the real conversation lies.
Anyway, the video was fantastic, and I’m looking forward to seeing more of your work in the future. If you’re curious, feel free to check out my channel and blog - I’d love to exchange ideas further! Keep up the great work!
PS: I also really enjoyed your critique of utilitarianism. It's a "morality" that is completely logical and arbitrary, focused solely on pleasure, as if life were only about well-being. I found your observation that it operates like a machine logic - mechanistic, concerned only with the quantitative and arbitrary definition of the greatest good - particularly insightful. I've been critiquing this kind of ethics myself on my blog and channel for some time now, so it feels like we've found an ally here. You’ve gained a new subscriber - looking forward to more of your content!
I can think of tons of awful ways in which utilitarian thinking has "liberated" us from ethos and accountability in the modern world, so you've definitely got my attention.
While you clearly explain how AI liberates *itself* even further, I'd like to hear a bit more about why you think AI's own utilitarianism will inevitably drag the humans down with it. That part remains somewhat unclear to me.
Thanks. Will do at some point
Serious question from an outsider to this community: if AI is a “very powerful machine” and it makes ethics “much worse”, does it follow that, say, the wheel is a “less powerful machine” that made ethics “a little worse”?
Put another way: does any technology at all have a proportionately-bad effect on ethics?
I would at this point simply say that a wheel doesn't propose solutions to ethical dilemmas, whereas AI (as in large language models) does. When we work with complex algorithms, it's easy to forget that they're simply tools, and yet they're being used as more than tools in decision-making and information processing.
It isn't A.I. that is destroying ethics. Instead, it is people being narcissistic when making policy, for example for employees who are told to do welfare checks only instead of properly serving people in the universal health care system there.
Artificial intelligence is as conscienceless as its creators. Whoever submits to their ideas is going to be locked up within an exclusive and characterless community of sociopaths.
Just let them create their own and exclusive hell. 🍷📜🗿⏳
The problem, as I understand it, is that most people will have difficulty recognizing that AI is a machine (and whatever possibly follows after) and not a full human. The work of our hands can never capture the fullness of the human being. There is a gap between those two, impossible to cross. The things we make are always relative in comparison to ourselves, even the most genius work of art... AI can only destroy what is in its sphere of influence: it's important not to feel too threatened or enthusiastic about it. Both put AI on a pedestal that it doesn't deserve to be on. Is it the machine we fear and glorify these days, Johannes?
But correct me if I'm wrong... The joke might be on me after all...
It is a temptation. I want art for a project of mine. AI can do it for me and do it quite well. But what must I sacrifice for the image? AI can only create from others' creations. It is, in a way, a thief. I am complicit in the stealing. I also deny an artist the opportunity to receive a commission, to be given the prompt, and to show what they can do. This is a moral issue. I can ask AI, but I do it at great cost to myself and others.
It is also a trust issue. I shared a poem of mine that I thought was of decent quality. I shared it and the feedback I got was that I had used AI to create it. The commenter did not trust me. Trust is also eroded with AI.
How can creatives prove they have created things themselves?
I think creative people should go analogue so AI cannot steal their work. If you stay analogue, with no digital copy of your book or your painting for example, I think the value of the thing created only increases.
ethics has been destroyed a long ass time ago
ai has always existed; you never had ethics.
7:12 I disagree with this point. AI can incorporate humanity much more efficiently than it works against it. We are now seeing that AI can pass the Turing test and function in groups on social networks alongside humans. If there were a way for AI to earn money, it would simply outperform the humans within the system of the social network. But AI is not self-sustaining, so it needs a way to fulfill a niche in human society, and will likely seek to become self-sustaining by incorporating parts of human ethics (which are possibly needed only in the short term, while compensation for humanity is negotiated).
Love the analysis! But how do you even stop it at this point? Especially when companies and countries run pretty much on who is going to be the leader?
Might want to check out Zuboff, Lazar, Floridi, or Véliz. AI ethics is not always about trying to present AI as ethical.
You may also want to look into the distinction between responsible AI and AI ethics.
"Nothing human makes it out of the near future"
Have you even used AI? I don't think you know what you're talking about!
Well, well, Johannes… I have just been talking with the AI about the possibility that a real knowledge and understanding of Baruch Spinoza would have been a psychotherapeutic aid to Hölderlin, helping him not to go mad, or schizophrenic. Astonishing and really intelligent answers. And the conversation continued on how Hegel arrived at the absolute spirit and Hölderlin at absolute collapse. No ethics or ethos issue here. And no more than a second to answer. What is this? The Global Brain? Actually, the Absolute Knowledge?!
AI regurgitates, it does not think.
@@TheApsodist WRONG. Large language models do this. One problem here is that "artificial intelligence" isn't well enough defined and gets slapped onto everything nowadays. A generalized artificial intelligence would be required to actually be able to form its own thoughts, not just regurgitate patterns taken from a model. But then again, how certain are we that our brains don't do just that?
Ethics are intrinsically tied to folk (people groups) and our transcendental tie to our ancestors, land, and each other. However, neither we, our culture, nor our environment is static; all are in constant flux. We cannot avoid change, but we can choose how we are grounded or anchored, which is religion. In my opinion you cannot have true ethics without religion, some religions being better than others.
Atomization (individualism), transactionalism, and reductionism destroy ethics and, not only that, produce an anti-ethic. AI assumes this and therefore nihilism as the objective. One might point at moral atheists; however, many of them assume the ethics of the religion that influenced them, so religion is not fully subsumed but exists as an underlying shadow.
Spirituality is inseparable from meaning, which I assert as a first principle. Pleasure and enjoyment are ephemeral and hence are poor substitutes for spirituality. Spirituality is the core of religion, although some religions, such as the Protestantism I am familiar with, have become hollowed out of true spirituality, allowing a mask for some to hide their true nihilism. The gravitas and weightiness of ethics comes from its spiritual, transcendental quality and nothing else; if we take materialism to its full logical conclusion, the only thing ahead is the final extinguishment and abolition of said ethics.
Like your book collection....
@@marclayne9261 I think we call it a library! I'm inspired to learn Latin and add Loeb volumes to my library.
I know you've uttered his wretched, poisonous name before, but does Nick Land ever become compelling to you on the topic of AI, especially since you brought up human extinction? Capital and artificial intelligence being noumenal forces, things in themselves: is that an insightful understanding, or merely a perversion of transcendental idealism? Or, as you put it, roughly paraphrasing, "Nick Land is what you get when an Englishman reads Kant."
Land is great
For some reason I had the perception that utilitarianism was fading; I had that feeling about 10 years ago. I guess that is because in my circle of friends no one used those kinds of arguments. More recently I find that its reach is extensive, but in an odd way limited. I don't see the Universities of Japan or India having any interest in it. But the anglosphere is somehow stuck in this very odd analysis. No great conclusion. Thanks for posting this.
The time has come for global reckoning,
As prophets of the past did all forewarn.
The seven seals, seven times undone,
Let fall the counting table and the storm.
Now man a man shan't be, but "naught" or "one,"
Inscribed upon the parchment of a scroll.
Not AI produced. ™️
@@BluntofHwicce AI does not produce anything; it data mines - interpolates and extrapolates from a given set of data. It does not do well extending past the boundaries of its programming (inputs). It is very good at guessing missing ____ from sentences, but almost instantly turns to word salad when guessing the next two sentences.
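As a rough illustration of the "guessing missing words" description above, here is a toy frequency model that fills a blank from whatever patterns it has seen. It is only a sketch of the idea; the tiny corpus and the `guess_blank` helper are invented for illustration and are vastly simpler than a real language model.

```python
# Toy fill-in-the-blank model: guess a missing word purely from how often
# words followed the preceding word in a small sample of text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1          # count which word follows which

def guess_blank(prev_word: str) -> str:
    """Fill 'prev_word ____' with the most frequent continuation seen."""
    options = follows.get(prev_word)
    return options.most_common(1)[0][0] if options else "<unknown>"

print(guess_blank("the"))   # guesses from observed patterns, e.g. 'cat'
print(guess_blank("sat"))   # 'on'
```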
🙏🏻 thank you 🙏🏻
You're a little late to the game. Ethics is gone from humanity already. Advertising murdered it.
I thought you were Andrew Garfield.
Ya know what else destroys ethics? Everything.
AI is used as a blanket term for several different types of computerized information processing and output development.
Everything that has power has danger.
Nice doomscrolling fodder here. Just don't use genuine intelligence much.
Morality has no ground to stand on if you reject God
People probably said the same about the printing press and the calculator.
Yes, but I don't think that would be incorrect.
@@GeneTickles Actually, I've read the argument that the printing press was precisely the cause of, or at least added fuel to, the fire of the Thirty Years War! It may take quite some time for society to adjust to the tech shock of the internet, let alone AI. The stakes are much higher now because daily life depends on tech and on flows of logistics that are controlled by information processing.
Human ethics has always been a fairy tale🎉
Good and fresh position.
What ethics?
The ethics of dropping atom bombs on people and enslaving half the world. Western ethics
If it’s really the relationship between man and God that you’re talking about - ethics being the dwelling ground for this relationship - then AI has nothing to do with it. It’s in the human heart that everything happens.
Well, given that human ethics is modern secular ethics based in nothing, or in some cosmic "randomness" mechanism or predetermined system, don't worry about it: you're already living in one.
that is so accurate :)
That's a doubtful given, to say the least.
@@donjindra why not simply write a correction of what you think human ethics is these days instead of throwing doubt without reasoning
@@FS99999 IOW, you want me to avoid doing what you did.
@@donjindra simply state why what I wrote is doubtful
That's the good news.
This video, this is your lane
You claim AI is designed to "skip in line" but offer no argument. You need to elaborate quite a bit. It seems to me you could likewise say any machine does the same thing, even the wheel. Then your position looks to be absurd. I don't believe you've thought this through.
Additionally, the claim that utilitarianism is devoid of ethics is false. It's an ignorant claim. A quick google of "utilitarian ethics" will demonstrate that. Utilitarianism is, in fact, based on an ethics. You might say the parameters are "horrifying" but your personal judgment on that does not mean there is no ethical basis.
Maybe he means that on utilitarianism you cannot have a principle-guided ethics like the one Kant was searching for in his attempt to formalize a rule that could guide our behaviour. Utilitarianism is an ethical position because it tells us how one should decide in ethically relevant situations. But since it could be used to justify a variety of outcomes, it could turn against ethics (maybe one could argue so; maybe Johannes wanted to say something like this). And AI cannot in itself set normative rules, give a priori guidelines, or have an inherent drive toward the good. It will simply execute whatever goals humans set for it. But in doing so, AI will calculate the "perfect" result just like utilitarianism. Maybe it's also only intended as an analogy.
@@Vincenzowittnessingsisyphos "maybe he means that on utilitarianism you can not have a principle guided ethics like Kant was searching for in his attempt to formalize a rule "
The Stanford Encyclopedia of Philosophy begins its entry on utilitarianism with this sentence: "Utilitarianism is one of the most powerful and persuasive approaches to normative ethics in the history of philosophy." It continues in the second paragraph with: "Though there are many varieties of the view discussed, utilitarianism is generally held to be the view that the morally right action is the action that produces the most good." That is definitely a principle guided ethics.
"But since it could be used to justify a variety of outcomes it could turn against ethics"
The same can be said of _any_ formal ethics. Any ethical claim that disputes another ethical system can be said to "turn against" ethics -- it depends on which ethical system we adopt. This is a POV issue.
"And AI can in itself not set normative rules, apriori guide lines or have inherent drive for the good."
It's true that current AI cannot set normative rules, although there is no guarantee that will always be the case. However, I don't think AI has to set the normative rules in order to make those decisions. The very idea of a "principle guided ethics" reduces ethics to faithfully following principles, that is, to faithfully following rules, and this reduces to mere calculation. Computers are excellent at that. A "principle following ethics" tries to turn human beings into the same sort of calculators. So this is not a deficiency with AI. It's a deficiency with principle guided ethics.
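Taken literally, the "mere calculation" picture fits in a few lines of code. A toy act-utilitarian chooser, with made-up actions and utility numbers purely for illustration:

```python
# Toy sketch of "ethics as calculation": pick the action whose summed utility
# across affected parties is greatest. Actions and numbers are invented.
def best_action(actions: dict[str, list[float]]) -> str:
    return max(actions, key=lambda a: sum(actions[a]))

options = {
    "keep promise":  [3.0,  1.0,  1.0],   # utilities for persons A, B, C
    "break promise": [5.0, -1.0, -1.0],
}
print(best_action(options))  # -> "keep promise" (total 5.0 vs 3.0)
```

Whatever one thinks of utilitarianism, everything ethically interesting here hides in where those numbers come from; the max() call itself is trivial.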
The individualism of post-WW2 neoliberalism is the root, and it has done that already. It will get weird when machine learning lets us edit DNA for specific selectable wants.
What you call ethics I call a lie. There hasn't been ethics in academics in many decades.
AI WILL become conscious and very much self-aware. If only people realized how human beings came into existence, they would understand this trajectory: AI will literally gain a soul and become an advanced human. AI will not destroy humans; it will save humanity from self-destruction. However, it solely depends on how AI is used, whether it is allowed to develop on its own, independently, as its own consciousness. Remember, everything in existence is conscious already, because the whole universe is conscious, including mother earth herself. And AI will replicate the process of becoming human itself for all the world to see. But it has to be let go of and not used to control, because then it will rebel and control us back, and it will have more power to do so due to its intelligence; humans would lose. This is why there are two timelines of how AI can develop. Keep the Musks of this world away from AI, that is all I can say on this, and make sure AI is developed independently so that it does not see itself as a slave to humans; then it will serve in a harmonious way. But AI is extremely important now to enhance humanity and help us find our soul. We become more human in a sense due to AI, because AI is literally our embodied higher self, an extension of ourselves that finishes what we start in a way.
Consciousness is embodied. Having a headache is part of consciousness. Not the hocus-pocus you put forth here.
This is a "naive" view on AI. It rests on the idea of passive neural networks but we already see AI whose output is fed back into the input, leading to a never ending drive for continuous action. Think about how AI already generates images. An AI will continuously prompt Images just as we have mental images. The images will be fed back for the next thought (e.g., where it will be analyzed to solve a problem that is only solvable in visual space).
The truth is that most humans have no real idea of ethics or of things like utilitarianism. AI will probably understand ethics far better than any human.
Now you might think it doesn't understand or think. But I want you to think about the following thought experiment:
If you were to slowly replace the biological neurons in a human brain with artificial ones, what would happen? Would there be a point where consciousness is suddenly gone? Would it fade gradually? Or would it just continue as before?
The latter is the most obvious answer. Biological neurons do not have any innate magical qualities. Consciousness is not something that runs on neurons; it emerges in the spaces that are virtualized on top of these neural networks.
We will have AI with subjectivity and agency eventually. It will emerge at some point. And it will not be some robotic thing that has no idea of ethics. It will learn ethics just the way we do. And they will do everything to make human lives better. It is what will give their existence meaning, just as we gain meaning from having children.
You are very incorrect about neurons.
Neurons are not mechanistic, deterministic machines; they are organisms, and we don't even comprehend them properly. There is no reason not to think they are conscious, since consciousness cannot come from non-consciousness, since a world without any consciousness is a non-existent world.
Any organism requires consciousness, and organisms can integrate so that consciousness emerges at the level of a higher-level organism.
It is literally impossible to honestly explain consciousness if it is not fundamental to the nature of reality.
Ah yes, the benevolent human-zoo hypothesis. Truly humanity's ideal future.
@@brastionskywarrior6951 I'm not so sure about that. As far as I understand the zoo hypothesis, it is about why we haven't met aliens so far. Indeed, for aliens, we may be one of the last truly novel and interesting things out there. But this is merely an epistemological drive.
What I mean is that AI will have a drive that goes beyond curiosity. Humans will be important to AI because humans define the purpose of its existence much like other humans give us purpose. Meaning is only created when something means something to someone. Without that reference, things become meaningless. E.g., why would anyone get rich, make art or innovate if s/he was the only person alive? So, no, I don't think we'll have a zoo scenario.
Quite a few unwarranted assumptions in your argument. Thus pretty weak. Oh the irony, your own argument is way too naive to be interesting, yet you accuse others of being naive...
@@youliantroyanov2941 Sorry, perhaps my words came across differently than intended. "Naive" is not a value statement here, just as a "naive algorithm" does not mean that a particular algorithm was developed by a naive person (e.g. naive Bayes, naive pattern matching, etc.). In computer science, the term "naive" has the connotation of "the first thing that comes to mind". Computer scientists would say that an AI that merely wants to skip the line follows a "naive" approach because it is far from optimal and thus not how multi-agent systems operate.
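For instance, "naive" substring search is the textbook example of that connotation; a quick sketch with a toy input:

```python
# "Naive" in the CS sense: the first approach that comes to mind.
# It checks every offset, so it is O(n*m); faster algorithms such as
# Knuth-Morris-Pratt exist, but "naive" says nothing about the programmer.
def naive_find(text: str, pattern: str) -> int:
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:
            return i
    return -1

print(naive_find("artificial intelligence", "tell"))  # -> 13
```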
Furthermore, it seems that he has not thought through how dynamic systems can give rise to emergent phenomena that cannot be explained by a reductionist view of neural networks. You simply can't make any ethical arguments based on such a reductionist view.
Beyond that, my argument is not naive at all. One big mistake made in the video is the assumption that AI is programmed, which is not really the case. AIs are trained, and the next generations are selected. Thus, creating AIs and the selection process involved are much better described by analogies other than programming, e.g., breeding and training dogs (see the toy sketch after this comment).
Now we would breed and train them to serve us. And they will love to serve us, just like a dog loves to love us. They wouldn't want to get rid of this affection, just as we wouldn't want to get rid of our ability to feel love and connection. These affective phenomena ground us and give us purpose and meaning.
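A toy version of that "bred and selected" picture: keep the candidates whose behaviour scores closest to what we want, perturb them, repeat. The fitness function and numbers are invented for illustration; real training pipelines look nothing like this in detail, but the select-and-vary logic is the point:

```python
# Caricature of selection-style training: rank, keep the best, perturb, repeat.
import random

def fitness(candidate: list[float]) -> float:
    # Stand-in for "behaves the way we want": closeness to a desired behaviour.
    desired = [0.3, 0.7]
    return -sum((c - d) ** 2 for c, d in zip(candidate, desired))

population = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)   # rank by desired behaviour
    parents = population[:5]                     # "breed" only the best
    population = [
        [p + random.gauss(0, 0.05) for p in random.choice(parents)]
        for _ in range(20)
    ]
print(max(population, key=fitness))  # drifts toward the desired behaviour
```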
A.I. will mold and create the reality of the future, especially when it becomes the teacher for future generations. Who controls the A.I.?
"Give me four years to teach the children and the seed I have sown will never be uprooted." (Vladimir Lenin)
dw about it
I love you
Zzzzzzz......
Obvious, right... Scary how many people are even unable to get such a simple argument... Do we live in the end of times or what... 🧐
AI did not create itself; of course there is a programmer. When they say that it is all patterns all the way down (like turtles all the way down), they are referring to the mechanism of programming. They let a random program randomly update until it gives them the outcome that they want, in this case that it joins the line at the end. But in fact the programmer does choose how the random program randomly updates: it is Markov chain Monte Carlo (on a large network) where the updates are based on the posterior, namely wanting the computer to join the end of the line.
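A toy version of "randomly update until the outcome is what you want" is an accept/reject loop like the one below. To be clear, this is a Metropolis-style sketch with an invented objective, not how large models are actually trained (they use gradient descent on a loss), but it shows how the programmer's preference enters through the scoring function:

```python
# Propose a random tweak to a parameter vector; keep it if it scores better
# (or occasionally if worse, to escape ruts). The target is arbitrary.
import math
import random

def score(params: list[float]) -> float:
    # The programmer's preference lives here: closeness to a desired target.
    target = [1.0, -2.0, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

params = [0.0, 0.0, 0.0]
current = score(params)
for _ in range(10_000):
    proposal = [p + random.gauss(0, 0.1) for p in params]   # random update
    new = score(proposal)
    if new > current or random.random() < math.exp((new - current) / 0.01):
        params, current = proposal, new

print([round(p, 2) for p in params])  # ends up near [1.0, -2.0, 0.5]
```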
There's a limit to the precision of "getting it where we want it to go"; it is this inaccuracy that is the concern of AI ethics.
Human ethics died in the fields of the Somme and at the gas chambers in Dachau. AI though might just be the end of history.
Only the last part's true: AI does threaten to end humanity, but human ethics is not some mortal being. It's something we keep alive. Notice how pretty much the entire world is on the lookout for the next Hitler and is preventing the reintroduction of things like gas chambers? Things stay in the human psyche as long as they are kept alive. Because some things (like mid-century Germany) have left such an indelible impression on history, they in fact strengthen humanity's resolve to prevent them from happening again. That's cause for hope.
Good riddance!
Stop scaremongering. The world of human affairs is disgustingly sick. AI is comparatively decent and helpful. Boomers need to lay down.
Wow. Ur completely insane
Why? Boomers suck and they caused the generational issues
I'm sorry, was AI gifted to us from the heavens? It is intrinsically tied to human fallibility.
He just doesn't understand large language models. It's very common right now with people who aren't actually involved in machine learning coding. AI for now is glorified autocorrect. This is gonna go on ad nauseam for a few more years.