Who's Liable for AI Misinformation With Chatbots Like ChatGPT? | WSJ Tech News Briefing

  • Published Nov 25, 2024

Comments • 68

  • @jonneye
    @jonneye 1 year ago +6

    Determining liability for AI misinformation in chatbots like ChatGPT is a complex issue, as it involves multiple stakeholders such as developers, platform providers, and end-users. Legislation and guidelines are needed to clarify responsibilities and implement accountability measures, ensuring ethical AI practices and mitigating the spread of misinformation.

    • @finchbevdale2069
      @finchbevdale2069 1 year ago

      It's not complicated. The owners of the platform/tech are responsible. If not, then it's all over.

  • @bulgna
    @bulgna 1 year ago +9

    Why are people going to AI for information? That's not what this is for! As far as I'm aware, or at least how I use it, the software helps with writing and such; it's not trained on truth. I may be wrong and OpenAI may be advertising it as an information source, but I don't know if it's fair to blame those companies for their product being misused.

  • @RainerBrunotte
    @RainerBrunotte 1 year ago +1

    Thank you for shining a light on this important issue. Not many people are talking about this yet, sadly.

  • @SteveMoore-n1v
    @SteveMoore-n1v 1 year ago +23

    People have to realize that AI is not 100% accurate and can also be quite delusional. I know because my team works on Tammy AI, and on many occasions the AI just pretends it has the right answer. There is no way to know unless you fact-check all the time. We do think the accuracy will improve over time, though.

  • @Toam99
    @Toam99 1 year ago +13

    Blaming AI creators for AI-driven fake news/info isn't the best idea since they can't fully control how their tech is used. Take ChatGPT, for example. It learns from loads of good and bad examples to get better, but it can't catch all the false info out there. Luckily, there's stuff like fine-tuning that helps AI act right in specific situations. It's like a mini crash-course to make 'em more accurate and lower the chance of spreading bogus info.
    Plus, AI peeps are already tackling the fake news problem. Lots of tech companies are using fact-checkers or teaming up with outside fact-checking groups to make their platforms more legit.
    So, let's not blame AI creators for everything. Instead, let's focus on making AI smarter and working with fact-checkers to cut down on all the false info flying around.

    • @bulgna
      @bulgna 1 year ago +3

      I totally understand where you're coming from, and I don't think we can hold people who create tools responsible for the way said tools are used. But I kinda like the precedent it sets. Those kinds of decisions will create an incentive for those companies to be responsible and have good quality control, like OpenAI does with ChatGPT. Kind of an ends-justify-the-means thing.
      Edit: on the other hand, ChatGPT was not made to give information.

    • @johnmknox
      @johnmknox 1 year ago

      They can control it. The AI creators themselves choose what they regard as "reputable" sources for the AI to use. As much as you try to get them off the hook and absolve them of any liability for the consequences, you won't be able to, because they are just as guilty as the sources they use in spreading such fake news and misinformation.

    • @Toam99
      @Toam99 1 year ago

      @@bulgna Yeah, the ends justify the means, and I like this precedent too. It's a necessity... We will have to figure out how to deal with this later, but for now I like how things are going. Hopefully OpenAI's ChatGPT will be very accurate soon. Some people are going to use it for bad purposes, of course, but some will also use it for good... I think it's a necessary tradeoff, and we can't blame or stop this because of a few bad apples.

  • @arjaygee
    @arjaygee 1 year ago +2

    ChatGPT has given me loads of incorrect information, so this doesn't surprise me at all.

    • @greenmachatea
      @greenmachatea 1 year ago

      Agree. Haha. When you say "you're incorrect" to ChatGPT, it apologizes and then gives another BS answer. And then it says it doesn't want to continue the conversation. Happens to some of mine too.

    • @arjaygee
      @arjaygee 1 year ago +1

      @@greenmachatea I have told ChatGPT it is incorrect and given it the right information. Then, when I ask the same question again, it gives me the same wrong answer. Apparently it is incapable of "learning" from user input.

  • @alexander15551
    @alexander15551 1 year ago +3

    I don’t see how the AI creators would be liable unless there was a piece of code that clearly and explicitly altered the AI results in a bad way

    • @johnmknox
      @johnmknox 1 year ago

      The AI creators are the ones choosing the "reputable" sources for the AI to use, so they are at least half responsible for that, especially if they know those sources promote fake news and prostitute themselves to world governments, big pharma, the CCP, and the globalist elites.

    • @jensenraylight8011
      @jensenraylight8011 1 year ago

      So, should you be thrown into prison if ChatGPT accuses you of murder or any other criminal activity?
      After all, the creators shouldn't be liable and responsible for their own creation.
      Therefore, your truth is worthless, like trash,
      and ChatGPT's truth will be the most factual truth.

    • @alexander15551
      @alexander15551 1 year ago +1

      @@jensenraylight8011 why would you get thrown into prison if an AI accused you of something?

    • @jensenraylight8011
      @jensenraylight8011 1 year ago

      @@alexander15551 maybe you should answer my question first.
      Don't dodge my question with another trivial question.

    • @alexander15551
      @alexander15551 1 year ago +1

      @@jensenraylight8011 The answer to your question is NO, you shouldn't be thrown in prison if an AI accused you of something. And there is no reason why you would be.

  • @official_ashhh
    @official_ashhh 1 year ago

    The Data Protection Act is there to prevent breaches of confidentiality.

    • @official_ashhh
      @official_ashhh 1 year ago

      ChatGPT and AI apps would make room for more plagiarism in higher education institutions.

  • @matthewkeating-od6rl
    @matthewkeating-od6rl 1 year ago

    great vid

  • @LaudvekkysGrooveLab
    @LaudvekkysGrooveLab 1 year ago

    love zoe

  • @shadowofpain8144
    @shadowofpain8144 6 months ago

    Until they can fix the wildly incorrect information, it is nothing more than a toy.

  • @karlisern2475
    @karlisern2475 1 year ago +1

    I suppose one can try suing for anything, but realistically anyone doing so probably does not have a firm understanding of what AI is and/or how it works. A better framing is that ChatGPT is not intended to be a fact-based software tool, but a creative tool whose outputs are more akin to opinions. As such, there's really no more need to hold a ChatGPT response liable than there is to hold liable the response of a crazy homeless person off the street, i.e., most people would not consider suing the street person.

    • @two-sense
      @two-sense 1 year ago

      Terrible comparison. The homeless person has no money. Draining large corporations of their cash is what makes lawyers rich. There will be never-ending lawsuits.

  • @e.thomas2475
    @e.thomas2475 1 year ago

    Misinformation should be illegal, but the people who would regulate such things can't be trusted.

  • @marcusbarry
    @marcusbarry 1 year ago

    What about AI firms being held liable for the recommendations their bots make? For instance, say somebody wronged me, I ask ChatGPT what my response should be, it says I should murder them, and I go out and do it. I don't think anyone would hold a social media platform to blame if a user responded to me and suggested this.
    This is an extreme example given just to make a point, but milder ones are applicable as well. Has this been discussed anywhere?

  • @romandevivo1163
    @romandevivo1163 1 year ago

    Interestingly, we are so concerned about potential misinformation spread by AI when, at the same time, we don't seem capable of holding accountable the politicians and lawmakers who lie to the people they are supposed to serve. Currently, lying to the American people is not a crime. Wouldn't it make sense to hold public servants liable for the misinformation they spread?

  • @importantname
    @importantname 1 year ago +1

    The software is designed to make things up. For a human, that would be called lying, or fabrication, or deceit.

  • @falconJB
    @falconJB 1 year ago

    Why would you assume that anything ChatGPT says is factually correct?

  • @Viviko
    @Viviko 1 year ago

    Who’s liable for misinformation? What about who’s liable for being gullible and not doing their own due diligence?

  • @CHMichael
    @CHMichael 1 year ago

    Yourself - don't believe everything you hear and read.
    You don't blame Google when you find misinformation on an indexed page.

  • @eanerickson8915
    @eanerickson8915 1 year ago

    Americans always need to know who they can sue!

  • @SorminaESar
    @SorminaESar 1 year ago +1

    Zoe, may I ask which law Section 230 is part of, and what it means? The big question for you, Zoe: do you think all the countries around the world should adopt it or enact something similar? The US legal system is so different from Indonesia's; even though Indonesian law has adopted jurisprudence as one of its sources of law, Indonesian judges are not strictly bound to follow an earlier judge's decision in the same kind of case.

    • @arjaygee
      @arjaygee 1 year ago +1

      47 U.S. Code § 230 [ Title 47 of United States Code, Section 230 ]. Section 230 has been used to protect providers of Internet and other interactive services from (1) being held liable for third party content (e.g., a tweet posted by a Twitter user) unless the third party content violated federal criminal law; and (2) being held liable when they remove third party content (e.g., deleting a tweet that violates the provider's terms of service).

    • @SorminaESar
      @SorminaESar 1 year ago +1

      @@arjaygee thanks so much

    • @SorminaESar
      @SorminaESar 1 year ago

      I have studied it, but Zoe, you should read it in full, together with what comes before it.

  • @thomascooperhopewelljr109
    @thomascooperhopewelljr109 1 year ago +1

    Did Anderson Cooper ever have contact with bin Laden? 1:25

    • @thomascooperhopewelljr109
      @thomascooperhopewelljr109 1 year ago

      And with a polygraph examination ever came up come up in a voice stress analysis with GPT chat inside Thomas, Dale, Hopewell, Junior Anderson, Hayes, Cooper’s real husband and the father of Wyatt Morgan Cooper and could you test test from dictation? 1:35

    • @thomascooperhopewelljr109
      @thomascooperhopewelljr109 1 year ago

      And I want to sue OpenAI also for a detected in that he had my husband's diaphragm under investigation for 10 years prior to his murder 1:47

    • @thomascooperhopewelljr109
      @thomascooperhopewelljr109 1 year ago

      And they’ve been doing that for 10 years inside me. Also, please keep this for your guys. His personal records of they are lying and they’re making so much money off of Stocks but it’s automated like I’m suing abrams New York for a super computer for one building and Burt which is President Biden on the keyboard brightness, I’m suing them for $500 trillion, and that they all reside for their jobs and lose the retirement. 2:01

  • @auro1986
    @auro1986 1 year ago

    Who? The engineer who programmed the artificial intelligence.

  • @L_Christ_BR
    @L_Christ_BR 1 year ago +2

    I'm impressed with the technology used by ChatGPT-4! It's fascinating how Artificial Intelligence has evolved in recent years, allowing machines to understand and respond to our questions in an almost human way. However, it's important to remember that, like any technology, AI can also be dangerous. The ability to learn and evolve quickly means machines can become unpredictable and make decisions that can be harmful to human beings. It's crucial that AI developers take safety and ethics into account in their creations, so that we can enjoy the benefits of the technology without compromising our safety - Text created by AI

    • @Kappzy
      @Kappzy 1 year ago +2

      I am impressed with the technology used by ChatGPT-4! It's fascinating how Artificial Intelligence has evolved in recent years, allowing machines to understand and answer our questions in an almost human way. However, it is important to remember that, like any technology, AI can also be dangerous. The ability to learn and evolve quickly means that machines can become unpredictable and make decisions that can be detrimental to humans. It is crucial that AI developers consider safety and ethics in their creations so that we can enjoy the benefits of technology without compromising our security - Text created by AI

  • @TK-zx8og
    @TK-zx8og 1 year ago

    Zoe, why didn’t you go to college?

  • @thehandsomekinglakky8772
    @thehandsomekinglakky8772 1 year ago +1

    Is the Magic Cylinder animation real or an edit?

    • @SorminaESar
      @SorminaESar 1 year ago +1

      I like it👍🙏😊

  • @Itsmarkyoung
    @Itsmarkyoung 1 year ago

    Zoe always says “particularly” as “perticurly” lol

  • @aleemahmed6201
    @aleemahmed6201 1 year ago

    Brian Hood needs to get a life.

  • @simontemplar404
    @simontemplar404 1 year ago +1

    Who was liable for Tucker Carlson's lies?

  • @DarwinMars-fe5bt
    @DarwinMars-fe5bt 1 year ago +1

    1st Amendment. Unconstitutional. Disliked.

  • @chhewee
    @chhewee 1 year ago +1

    the programmers of course. 😊

  • @Kappzy
    @Kappzy 1 year ago +2

    The laws
    First Law
    A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    Second Law
    A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    Third Law
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    Zeroth Law
    A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

    • @Kappzy
      @Kappzy 1 year ago +2

      a Fourth Law, under which a Robot must be able to identify itself to the public ("symmetrical identification")
      a Fifth Law, dictating that a Robot must be able to explain to the public its decision making process ("algorithmic transparency").

    • @Kappzy
      @Kappzy 1 year ago +1

      Subdivision 1.
      Whoever intentionally advises, encourages, or assists another in taking the other's own life may be sentenced to imprisonment for not more than 15 years or to payment of a fine of not more than $30,000, or both.

    • @GrumpDog
      @GrumpDog 1 year ago +1

      Why do I keep seeing this posted all over articles like this, as if it's some answer to the AI problem? Such laws are a terrible idea; Asimov himself discusses that in a few interviews I've seen. His stories draw attention to the flaws in logic such laws would likely result in.

  • @kristensorensen2219
    @kristensorensen2219 1 year ago

    #453👍😤🤔This is 🌰s!!

  • @adeel5059
    @adeel5059 1 year ago +1

    The creators of AI will be found liable

    • @GrumpDog
      @GrumpDog 1 year ago

      That's asking for trouble. AI, being in essence the idea of packageable intelligence, will always be something that clever people will be able to manipulate into saying things its designers didn't intend. And holding those who design the frameworks of AI responsible for the mistakes or randomness AI can produce is like holding a parent responsible for a mistake their grown-up offspring makes years later.

  • @NewCreationInChrist896
    @NewCreationInChrist896 1 year ago +1

    Romans 10:9-10🙏

  • @dasfahrer8187
    @dasfahrer8187 1 year ago

    ChatGPT is getting as bad as Wikipedia.