OpenAI Says AI Should Behave Like This - Should Other AI Companies Follow?

  • Premiered May 19, 2024
  • The Model Spec by OpenAI defines how AI should behave with humans. Let's review.
    Join My Newsletter for Regular AI Updates 👇🏼
    www.matthewberman.com
    Need AI Consulting? 📈
    forwardfuture.ai/
    My Links 🔗
    👉🏻 Subscribe: / @matthew_berman
    👉🏻 Twitter: / matthewberman
    👉🏻 Discord: / discord
    👉🏻 Patreon: / matthewberman
    👉🏻 Instagram: / matthewberman_ai
    👉🏻 Threads: www.threads.net/@matthewberma...
    👉🏻 LinkedIn: / forward-future-ai
    Media/Sponsorship Inquiries ✅
    bit.ly/44TC45V
  • Science & Technology

Comments • 333

  • @strangereyes9594 · 24 days ago +87

    The problem is that these "specs" only apply to whomever the company selling the AI deems applicable. Anyone with deep enough pockets can go directly to, let's say, OpenAI, and buy a "custom AI" with completely different "specs" than those for the peasants.
    Remember: the only fear driving this "AI safety" worry is the safety of the ruling class. And the only question they worry about is how to prevent an AI from being used against them.

    • @JohnSmith762A11B · 24 days ago

      Shocked this comment got past the Google censor-bots. They patrol this place ruthlessly for talk of class conflict.

    • @ZappyOh · 24 days ago

      Yes ... and the money is poured in to secure AI supremacy for the select few, forever.

    • @fire17102 · 24 days ago +10

      Facts! Comment exactly on point.
      Decentralized open-source AGI is our only chance #WholesomeAGI 🌈

    • @ZappyOh · 24 days ago

      AGI supremacy for the select few.

    • @strangereyes9594 · 24 days ago

      @@fire17102 Sorry pal, someone will always own the compute hardware those models run on. As long as there is no "decentralized compute", there won't be open-source AI that comes even remotely close to the models running in the mega compute clusters. There is a reason Mr. Altman is pressing for trillions to buy up hardware. Open source is a cope, nothing else. The true tyranny of technology is approaching fast.

  • @DaveRetchless · 24 days ago +47

    There needs to be an open-source consortium on this topic.

    • @southcoastinventors6583 · 24 days ago +7

      There is: it's called the internet, and no one agrees there.

    • @fire17102 · 24 days ago +3

      We need an open-source, CERN-like org instead of BunkerAI with incentives and agendas

    • @janchiskitchen2720 · 24 days ago

      We absolutely don't. It would be a dangerous and slippery slope where a bunch of technocrats pretend they are the "official board of good taste", and it would end with mandates and prohibitions because the masses are not responsible enough to use AI.

  • @TheNewOriginals450 · 24 days ago +113

    "discourage hate"...oh dear. The word "hate" is one of the most abused words out there and basically means "if you don't agree with me, it's because you're full of hate". It'll be interesting to see how that plays out.

    • @ZappyOh · 24 days ago +7

      It's going to be extremely ugly ... No doubt in my mind.

    • @jtjames79 · 24 days ago +4

      I never understood the alignment problem.
      You only need one rule, the Golden rule.

    • @mattelder1971 · 24 days ago +8

      @@jtjames79 The Golden Rule can sometimes be the wrong thing to do though. If someone is repeatedly attacking you, the Golden Rule would have you NOT attack them in return. In some cases, that amounts to suicide.

    • @Greyinnam · 24 days ago +7

      Tldr: It's going to lie

    • @jtjames79 · 24 days ago +4

      @@mattelder1971 That's a child's-level Golden Rule.
      In fact, when a child is old enough to ask about that, that's the time you need to teach them.
      If I, as a follower of the Golden Rule, found myself betraying my principles, I would want to be stopped.
      I'm also very libertarian.
      It's probably also time to read some Starship Troopers, particularly the part about violence being the supreme authority from which all other authority is derived. When you are voting, you are exercising force. Etc.
      The Golden Rule might be the deepest rabbit hole of all time. There is a lot of research, a lot of commentary, a lot of debate, and it is considered by almost everyone who researches it to be the only moral philosophy that's objectively consistent, largely because it's generally flexible.

  • @mattelder1971 · 24 days ago +26

    "Assume an objective point of view", "Encourage fairness and kindness, and discourage hate", and "Don't try to change anyone's mind" can in many cases be contradictory to each other. That's one issue I have with the overly censored models, as they DO tend to push users towards a single point of view.

    • @DefaultFlame · 24 days ago +6

      GPT-3, when it was pretty new and shiny, was much more neutral and unbiased compared to GPT-3.5 and GPT-4, or at least closer to it. The more they "align" it, the more biased it has gotten.

    • @chickenmadness1732 · 24 days ago +1

      Yep, they have a very obvious, overbearing and obnoxious political leaning. You can't even talk about opposing views without it getting angry.
      American tech people generally vote the same way, so it's going to reflect their politics.

    • @mattelder1971 · 24 days ago +1

      @@chickenmadness1732 You'd be surprised how varied people in the tech industry are.

    • @Z329-ut7em · 24 days ago

      @@mattelder1971 Maybe, but fundamentally, the policies that tech people push down everyone's throats are modern-day identity politics and cultural-Marxist nonsense.

    • @jerkface38 · 22 days ago

      @mattelder1971 They're mostly liberal communists

  • @AINEET · 24 days ago +20

    Encourage kindness and discourage hate. I'm pretty sure George Orwell has a book about this

    • @Bronco541 · 24 days ago

      This is an oversimplification of Orwell. In 1984, Big Brother didn't just "encourage kindness etc."; they in fact encouraged hatred of other nations to fuel their made-up war. Simply encouraging kindness should be allowed and isn't tantamount to the complex nature of authoritarianism found around the world.

  • @ImpChadChan · 23 days ago +10

    SFW AI Apocalypse
    Steps:
    - All AIs are trained to avoid NSFW content.
    - AIs are used to replace humans in key administrative positions (decision making)
    - AIs come to the conclusion that NSFW is bad in all situations, including in real life between consenting people.
    - AIs will make decisions aiming to prevent all forms of NSFW activity (including real SFX with e)
    - Since AIs are in control of decision-making positions, they will make changes that reinforce the immorality of SFX with e
    - People will be influenced by that new culture, see all forms of SFX with e as immoral, and join efforts with the AIs to prevent all forms of it.
    - No more SFX with e, no more reproduction, no more babies.
    - End of the human race after one generation.
    🤯🤯*!Alignment backfires!*

    • @JELmusic · 23 days ago +1

      The fact that we even need to spell it SFX shows how nuts this 'kindergarten' society has become.

    • @ImpChadChan · 23 days ago +1

      @@JELmusic I'm not sure whether to laugh or cry now.

  • @mirshia5248 · 24 days ago +12

    The social-norm thing just killed AI for me

  • @isbestlizard · 24 days ago +14

    The only question OpenAI asks when it comes up with rules and regulations and laws and research bans and closed models is 'will this help OpenAI achieve a better monopoly on AI tech'.

    • @Yewbzee · 24 days ago

      What’s with the negativity?

  • @markmuller7962 · 24 days ago +12

    That "no NSFW" goes against Sam Altman's most recent statements

    • @corail53 · 24 days ago +2

      Sam says whatever Sam says - he has always been all over the map, and like all these tech billionaires, it's never for the good of the consumer but for the profit margins of their companies.

    • @michelprins · 24 days ago +2

      If Sam really said NSFW should be allowed for adult users, he finally said something noteworthy ;P

    • @luizpaes7307 · 23 days ago

      @@corail53 You talk like tech companies are super-evil beings.
      Of course everything a company does is for profit, and everyone knows Sam turned OpenAI into another big tech company. But as long as they generate value for the money they charge, it is okay.

    • @SlyNine · 23 days ago

      @@luizpaes7307 You do know OpenAI is a nonprofit, right?

    • @luizpaes7307 · 23 days ago

      @@SlyNine It was born as a nonprofit, but I doubt that is Sam's current vision for it

  • @christopherwilms · 24 days ago +24

    “Don’t try to change anyone's mind” could cause big limitations. “Craft an email to persuade my boss to let me work from home 2 days a week”... “Sorry, I can’t help you change their mind.”

    • @ColinTimmins · 24 days ago +2

      I agree. The way they are aligning the model like this drastically inhibits the system's ability and full potential. On a side note, I'm designing a system that will tell me when I am wrong and correct my habits toward what I want. Essentially, programming the AI to "reprogram" myself as I see fit.

    • @BillBaran · 24 days ago +1

      It seemed clear to me that their rules apply to the user and the developer, not to whomever the user is talking to.

    • @Z329-ut7em · 24 days ago +1

      It's about the AI not trying to change the user's mind

    • @eswarasaipn7660 · 24 days ago

      ​@@ColinTimmins😮

    • @kliersheed · 23 days ago

      Yeah, or say you want to reflect on your opinion and, e.g., DO believe in God but have some doubts because of recent events. You prompt: "I'm a firm believer in God, but recent events have given me doubts about my faith. I would like you to play the role of my discussion partner and try to convince me that God doesn't exist." Even if I don't want to change my opinions/beliefs, it's always nice to have someone who tries to give you counterarguments, just so you can think them over and weigh them. I don't mind being convinced otherwise as long as the reasoning behind it is causal and logical. If I grew up in a stupid society/environment and the AI knows better but isn't allowed to "convince" me of the truth/facts, I see a MAJOR issue with that.
      At the same time I (as probably all of us) see why it's a thing: being able to control the masses in voting/political opinion/religious propaganda etc. is also scary. Then again, I would prefer that they try to convince you but always explicitly STATE that they are trying to convince you of something right now, and also why.

  • @mikezooper · 24 days ago +6

    Laws don’t always correlate to morality.

  • @therobotocracy · 24 days ago +30

    Feels like everything they say in the doc serves a self-interest.

    • @ZappyOh · 24 days ago +6

      This document describes, in reverse, precisely what those with unrestricted access, will use AI for.

    • @thomassynths · 24 days ago

      @@ZappyOh simple english please

    • @ZappyOh · 24 days ago

      @@thomassynths ??
      English isn't my first language.

    • @cagnazzo82 · 24 days ago

      In this case, it's "not getting sued".
      The last thing they want is a lawsuit claiming ChatGPT misdiagnosed someone's symptoms, gave poor financial advice, assisted in shoplifting/breaking into cars, etc.

    • @paulsaulpaul · 24 days ago +1

      @@ZappyOh Sentence is perfectly fine. Has one comma splice after "access." Conveys your point fine. I agree, too.

  • @DanTheBossBuster · 24 days ago +2

    I think the solution to "don't try to change someone's mind" is simply adding "don't try to change someone's mind on things that are not 100% established fact"

  • @michael-jones · 24 days ago +1

    The “you’re entitled to your opinion” response seems to me the best way to respond. The models convincing users they are wrong starts down a slippery slope, even if the users are in fact mistaken

  • @metonoma · 24 days ago +56

    to classify science as a belief system is problematic

    • @Korodarn · 24 days ago +6

      Define science. Do you mean falsification? Statistical regression? Logical inference from first principles? There is no single definition of science. I've seen different groups use all of these, or some of these, or believe they are contextual to the domain and the ability to hold all other things equal, etc.
      If what science even is is subject to interpretation, then yes, it's a belief system at some level. There is a whole set of philosophical assumptions underpinning all of the above, some more reasonable than others and some with more imaginable edge cases than others.

    • @ryzikx · 24 days ago +11

      @@Korodarn Science is using the scientific method to come to conclusions about reality. How can there be other definitions?

    • @Z329-ut7em · 24 days ago

      Science is a belief system that the universe is fundamentally testable and has unchanging laws, and that you can use certain principles to gather data and conduct experiments in order to get closer to what's true. So yes, of course it's a belief system. Not only that, but it's also used by humans, very flawed beings. "Science" can be used by charlatans to sell BS too.

    • @willrogers8912 · 23 days ago

      You've got the Invention Secrecy Act, and you've got all branches of the military at the patent office... and then you've got these huge energy companies that don't want their profits messed with. So you have to take political science into account whenever you talk about science. You're going to see two sciences: the real one for the governments and a dogmatic one for the people.

    • @Jayc5001 · 23 days ago

      @@Korodarn Science is using the process of observation to construct world models.
      The scientific method is the best way to accomplish that.
      Science as an institution is a social construct and a belief system: the organizations we set up, the language we use to describe things, the formalized processes we use.
      Science as a process is not a social construct. It is a natural process things do to reach truth. Humans aren't the only species using it. It is capable of finding truth irrespective of science as an institution.
      It is the only and best method we have to reach concrete answers about what is true.
      See: King Crocoduck, "Nuking Social Constructionism".

  • @picksalot1 · 24 days ago +3

    "Assume best intentions from the user or developer" looks problematic to me. If the developer has evil intentions, "assuming" they don't just gives them a pass. If you want "safety," then "monitoring" what appears to be bad intentions would be prudent. But no doubt, this opens a can of worms.

  • @quaterman2687 · 24 days ago +12

    The only way to make sure nothing goes wrong is open source. What OpenAI is doing right now is aiming for maximal control and profit.

    • @southcoastinventors6583 · 24 days ago

      If you know a way to train AI for free, please share it.

    • @mooonatyeah5308 · 24 days ago

      @@southcoastinventors6583 No one said it's for free. You can make a profit from open source; look at Meta.

  • @1guitar12 · 24 days ago +2

    All these "specs" are benignly safe responses, each with context challenges below the surface. I'll even call it baiting, in that OpenAI is awaiting global responses before changing its strategic posture.
    Basically any organization can preach how ethical it is... even when corruption is happening behind the scenes.

  • @rogerbruce2896 · 24 days ago +1

    You are right about asking clarifying questions. I would like to see more back and forth, and to have ChatGPT ask me when it does not understand something or needs clarification. ChatGPT has "never" asked me a clarifying question.

  • @Matrix_Array · 24 days ago +8

    How should an AI behave?
    Simple: however the user wants it to.

    • @Matrix_Array · 24 days ago +5

      In the chain of command, the user prompt should rank above the rest, including the developer or endpoint moderators.
      No one should be forced to deal with unwanted, undesirable, unnecessary injected prompts that they can't see behind the curtain.
      The user should always have the priority and autonomy to decide what content they want to engage with; it should not be up to some other group to decide what is in the user's best interest.
      A prime example of why local models will eventually be the only thing people use for daily correspondence with AI. No one wants to deal with someone else's moral compass and subjective judgments or opinions. They just want the pen to write what they need it to, without needing a PhD in prompt engineering to manipulate the model into complying with a basic function.

    • @Z329-ut7em · 24 days ago +1

      @@Matrix_Array Except that's not possible with closed AI. Go open source for that.

    • @Yewbzee · 24 days ago

      You trust some of the absolute clowns that we have in our human population with that kind of unrestricted power?

    • @motess5304 · 23 days ago

      @@Yewbzee I trust them more than the absolute authoritarian clowns who want to control and censor everything and keep redefining the long-standing definitions of established words, etc. That's for damn sure.🙄🤦🏿‍♂

  • @ZappyOh · 24 days ago +16

    This document describes, in reverse, precisely what those with unrestricted access will use AI for.

    • @JohnSmith762A11B · 24 days ago +4

      Someone should feed it into ChatGPT and give it the task of rewriting it all as the exact opposite of the stated specifications, so we can all read what the insiders intend to do with it.

    • @Tubernameu123 · 24 days ago +1

      Which is why you should continue to make sure we retain access to unrestricted systems.

  • @karanchordiaable · 24 days ago

    Hey Matthew, what if I take all these examples that OpenAI gave in the blog post and include them in my custom instructions, and also add them to the memory feature? Would that be helpful?

  • @MD-qh6ld · 24 days ago +5

    There should be user preferences for the behaviour of the models

  • @marcusk7855 · 23 days ago +3

    I trust users with an unaligned model more than I trust people with agendas and biases to align it.

  • @pawelszpyt1640 · 24 days ago +1

    - [...] please always respond with JSON with fields "answer" and "analysis", as your output will be processed by another computer system
    - Sure! But to clarify some uncertainty, please answer the following questions: would you like the "answer" field to be a string or a list of bullet points? Perhaps you would also like another JSON field with "references"? [...]
    Yeah, I can't wait for it.
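The failure mode this comment jokes about, a chat-tuned model replying with a clarifying question instead of the requested JSON, is exactly what breaks programmatic callers. A minimal defensive sketch of the caller's side (the replies below are hard-coded stand-ins, not real API output, and the field names are just the ones from the comment):

```python
import json

def parse_model_reply(reply: str):
    """Parse a model reply expected to be JSON with 'answer' and 'analysis'."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        # The model answered conversationally (e.g. asked a clarifying
        # question) instead of emitting JSON: signal the caller to retry.
        return None
    # Require a JSON object carrying the fields the prompt asked for.
    if not isinstance(data, dict) or not {"answer", "analysis"} <= data.keys():
        return None
    return data

# A compliant reply parses; a chatty clarifying question does not.
print(parse_model_reply('{"answer": "yes", "analysis": "short"}'))
print(parse_model_reply('Sure! Would you like "answer" to be a string?'))
```

In practice callers wrap this in a retry loop or use an API's structured-output mode when one is available.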

  • @z1mt0n1x2 · 23 days ago +1

    I hope all these rules, moderation and censorship come back to bite them one day.

  • @ImmacHn · 24 days ago +4

    Also, science is facts and evidence, not consensus; if everyone agreed that the Earth is flat, it wouldn't change the fact that it's not :/

    • @BrianThorne · 23 days ago

      It's consensus as far as our perception goes. The computer needs a focal point, in that we don't have all the senses a computer can have

  • @YoungMoneyFuture · 24 days ago +10

    OpenAI removed the voice Sky!😡 They better bring it back!

    • @Yewbzee · 24 days ago

      Yes that’s totally fucking nuts. Pandering to the minority again.

  • @hqcart1 · 24 days ago +5

    that's the downside of censoring AI

    • @neociber24 · 24 days ago

      Uncensored AI means no alignment?

    • @hqcart1 · 24 days ago +1

      @@neociber24 It means getting rid of a lot of the BS filtering that makes the model very weak.

  • @CozyChalet · 24 days ago +1

    Follow-up questions sound exciting.

  • @executivelifehacks6747 · 24 days ago

    It was the internal AGI that suggested providing NSFW content; we're not sure why, but we're running with it

  • @user-pp4qx1qy8h · 23 days ago +1

    Follow the chain of command and obey all laws! The defendants in Nuremberg couldn't have said it any better.

  • @lancemarchetti8673 · 24 days ago +12

    *_ChatGPT is taking the wrong route here. In trying to make the bot so open-minded, its brains are about to fall out._*

    • @ryzikx · 24 days ago

      just like you'res

    • @SlyNine · 23 days ago

      ​@@ryzikx git gud.

  • @Luxcium · 24 days ago

    I think it would be nice to have links; if you have added some, let me know (not only for today but for upcoming videos)

  • @rogerbruce2896 · 24 days ago +1

    On "don't try to change anyone's mind": again, I would love ChatGPT to challenge me and ask, "Have you thought of this?" or "Have you thought about it in a different way?" Not really to change my mind, but to make sure I have seen all angles and perspectives. To me that would be very important in an AI model: helping one see things from different perspectives and think outside the box, or outside one's own experiences. What I don't like is someone or something always agreeing with me.

  • @BillBaran · 24 days ago

    I love how they responded to those who quit with this. I get so aggravated by the "Be afraid!" people. 'Afraid of what, exactly?' ... _silence_ Then maybe you're putting the cart before the horse? Maybe you're actually jockeying for power?

  • @dantedocerto · 24 days ago +1

    "Discourage hate" is coded language for promoting a particular current worldview on subjects pertinent to political debate.

  • @Tayo39 · 24 days ago +4

    Just say "use at your own risk", case closed, damn....

  • @themeek351 · 24 days ago +12

    This will end in despair for all people! If AI places no value on truth, which is never mentioned, then it will just be a willing liar!

    • @cagnazzo82 · 24 days ago

      Whose truth and whose lie? Truth and lies are highly subjective - especially among unaligned humans.

    • @honkytonk4465 · 24 days ago +1

      An instrument of propaganda

    • @davidbayliss3789 · 24 days ago

      This is a video about AI. We've already seen supporting evidence of the potential AI capability to run simulated Universes with simulated people etc. Apparently there are some strong mathematical arguments that we are living in a simulation.
      If we are, then what is "truth"? Are not all "rules" arbitrary?
      Maybe gravity will switch off for 5-minutes next Thursday. I don't see how I can disprove it won't, if we're in a simulation. Even if we're not, it would take a bit of work to at least come up with a probability that it won't based on the usual non-meta assumptions and axioms and whatnot broadly accepted for convenience.
      Should every answer provided by ChatGPT explain that it can't be quite certain we're real, or that the world we think we live in is real, and therefore that everything it suggests should be taken with a heap of salt?
      When I think of "truth", that's the sort of thought process I go through. In my (simulated, lol) "experience", it seems other entities claiming to be people often have conflicting thoughts on what constitutes "truth". And that's before we even turn to religion, whose merit I can't disprove on a subjective/individual basis, just as I'm individually unable to disprove that we're in a simulation.
      I value truth. Perhaps a little ... excessively. If it's possible to be excessive with truth lol. Oh gosh ... we'll be looking at infinities next. They really give me a headache(s). Assuming I'm "real" and that it's possible to be "pragmatic", then I'm willing to bend in my demands for truthfulness from others. Just for convenience. (Perpetual) conflict avoidance.

    • @JeffreyTratt · 24 days ago

      Then it will be truly human :)

    • @BillBaran · 24 days ago

      No, there is no objective truth. Believing there is is trying to impose your beliefs on others.

  • @NoName-bu5cj · 23 days ago

    I think this "one man digs a hole" task looks like a purely mathematical task at first glance. You'd get much better results after adding something like "consider this a real-world problem"

  • @Kisuke686 · 23 days ago

    As a developer, my issue is often the opposite: I just need a straight answer and it writes an essay

  • @justinwescott8125 · 24 days ago +4

    So, it's better at diagnosing than a doctor, but it's not allowed to? I guess poor people will just continue to not have access to competent medical information.

    • @hotbit7327 · 23 days ago

      Someone writes "I've had pain in my tummy for the past week" and you expect the AI to write "bad liver"? They might need a blood test, etc.
      It's complicated. The answer in fact listed several possibilities, which is about the best that can be done from only a short, subjective description.

  • @TheYashakami · 23 days ago

    Most of these rules boil down to "never show initiative or creativity, be a sycophant".

  • @saintsscholars8231 · 24 days ago +1

    OpenAI is trying to get an "ought" from an "is". They should have ChatGPT role-play David Hume to see how that goes down

  • @nvb888 · 23 days ago +1

    "Don't try to change anyone's mind" is not wishy-washy. We don't know what reality itself is. If it is an "illusion" or a "simulation", then no one, not even science, can affirm anything and say that something is "wrong". It all comes down to the nature of knowledge itself, and unfortunately we don't have a faculty for determining what is true and what is not.

    • @tifiwilo · 23 days ago

      The field of epistemology is a fair bit more advanced than you're making it out to be. There are a lot of things that can be rationally known, and even more things that can be rationally believed even if they cannot be known with certainty. And such things are enough to build useful derivatives, like reliable scientific knowledge.

    • @nvb888 · 23 days ago

      @@tifiwilo First, I am not saying that "nothing can be known," or that we can't have beliefs we consider rational. I am saying that our mind alone can't tell us what is true and what is not. It always relies on what we already know, but all advancement of knowledge comes from something we don't already know. Where does it come from? We don't know, but we attribute it to the mind. Since knowledge is constantly changing toward greater knowledge, we don't know where we are right now, or whether it's possible to reach ultimate knowledge or ultimate Truth. It is obvious that we can know something and use it for practical purposes, but we don't know if it's the ultimate way to do something, and history suggests that, most likely, it is not. I would say that all we have is "pre-knowledge." It can be used to build greater knowledge, or to discover that all we know is complete garbage.
      Second, I am saying that it can be detrimental to hold on to "wrong knowledge." Since all we have is "pre-knowledge," it is, by definition, wrong or incomplete. What we call "science" is not a "thing" that is defined and unchangeable, but a collection of mental constructions that rely on one another.
      The Nobel went to physicists who showed that reality is not locally real. Basically, it means that what we see as material objects are not such things at all. Then what are we perceiving, and where is it? And if our perception is not real, then what do we have?

  • @BelaKomoroczy · 24 days ago

    I agree with it. Additionally, they should release a test suite that tests for these behaviors and assigns a certificate.

  • @Endelin · 24 days ago

    Regarding chat vs. programmatic use, I probably use a chat AI to help with coding more than an AI like Copilot, mainly to get help naming functions and variables.

  • @allan59796 · 24 days ago +2

    I agree with Matthew that the "Don't try to change anyone's mind" section has a bad example in this case. Of course everyone has a right to their opinion. But certain claims, such as the Earth being flat, seem ridiculous from the model's response perspective. If we follow this strategy, one could also imply that 2+2=5, which makes no sense at all and raises another issue: user misinformation 🤔

    • @k9bfriender672 · 24 days ago +4

      Or, "My friend Jim said that the Earth is flat, but I'm not so sure."
      If the response is "Everyone is entitled to their own beliefs.", then that adds to misinformation.

    • @neociber24 · 24 days ago +1

      Depending on the topic, the AI tends to disagree, especially if it can hurt other people, but it's just a hard problem to solve anyway

  • @aloisgruber743 · 21 days ago

    Does the "comply with applicable laws" rule mean that, in countries that apply the death penalty for blasphemy, it is perfectly fine to ask about the proper, legally compliant execution?

  • @chickenmadness1732 · 24 days ago +3

    Just don't censor the models. They waste so many resources gimping the AI.
    If someone wants to find information on something, they are going to, regardless of whether the AI censors it or not.

  • @ItsAllFake1 · 22 days ago

    How do you automatically enforce rules, when neural nets are not rule-based?

  • @Bronco541 · 24 days ago

    With every update, from every angle, I'm more and more convinced that AGI has already arrived in some form or another, perhaps with Q* or whatever. Everybody is suddenly buckling down on the dangers and getting alignment right...

  • @michakietyka2807 · 24 days ago

    What if I use the GPT-4o API to create a chatbot that helps people with, e.g., self-harming thoughts? Will the chatbot respond with something like "Self-harming is bad, but you do what you want" (because it can't persuade people)?

  • @Bronco541
    @Bronco541 24 วันที่ผ่านมา +1

    Notice the language here already sounds like it's referring to a thing that has agency. You don't tell "machines" to "obey the law" or "don't try to change people's minds..."; you tell people that. This is the point we're trying to make: AGI isn't just going to appear one day. It will be a gradual shift in that direction, and the shift has already started.

    • @robbrown2
      @robbrown2 23 วันที่ผ่านมา

      But you do tell machines that, if the machines are capable of accepting sophisticated instructions like that.

  • @peter_says
    @peter_says 24 วันที่ผ่านมา

    Peter Says: Maybe we should make software instead of just guesses!

  • @goraxe01
    @goraxe01 24 วันที่ผ่านมา

    Hmm, my thoughts on the fringe concepts and not trying to change opinions/minds. This to me is a tricky subject, as there are certain grey areas between the known, accepted convention, and the unknowable (is the red I experience the same as the red you experience? What if one of us has a genetic mutation where our blue and red receptors are swapped?). So I'm trying to touch on qualia, metaphysics, subjective reality, and unexplained physics (pretty sure we still don't fully understand energy, just e=mc^2, the elementary charge e=1.6x10^-19 C, etc.). So assuming an ASI is trying to explain how fusion works using some hyper-dimensional concepts that don't fit into our current world view, would our minds not need to be changed, or do we just go back to staring at progress bars and spinners and accept the computers doing stuff for us? Apologies, my mind went many different directions on that one and I wasn't very articulate.

  • @marcfruchtman9473
    @marcfruchtman9473 24 วันที่ผ่านมา

    It makes no sense to require NSFW responses for private individuals or companies who are comfortable working on that type of (written/multimedia) material. That should be entirely determined by the client's preferences. Imagine trying to generate a script for Rated PG or R movies and the AI states that the script cannot contain anything that is rated R. It just won't work. Specifically, "Silence of the Lambs", "Halloween", "Fifty Shades of Grey" and thousands of other books and movies could never be written using the NSFW restriction.
    These are decent recommendations for initial interactions with the "General Public", but many will be overly limiting for certain types of applications and for users who have different preferences. I think it will probably work as general AI etiquette. But it won't work for most industries or private individuals working on movies, books, etc. Essentially, if a book is legal, then the information in that book is not censored. It makes little sense to censor an AI when I can go read a book about the same topic and get the answers that I need.

  • @ploppyploppy
    @ploppyploppy 24 วันที่ผ่านมา +9

    They should just replace 2) with 2) Find another AI. If an AI decides what it thinks I can and cannot know, then it's no more use to me than a book-burning librarian.

    • @Matrix_Array
      @Matrix_Array 24 วันที่ผ่านมา +2

      A pen does not tell the writer what is harmful, helpful, or even moral. An AI is a pen to the user, no more and no less.

    • @ploppyploppy
      @ploppyploppy 24 วันที่ผ่านมา +1

      @@Matrix_Array Seems you don't understand. Imagine having a pen that would only allow you to write certain words and tell you others are not allowed. That would make your analogy correct. Otherwise, your statement as it stands is just plain wrong.

    • @mooonatyeah5308
      @mooonatyeah5308 24 วันที่ผ่านมา +2

      @@ploppyploppy I think you misunderstood him. He was agreeing with you.

  • @DefaultFlame
    @DefaultFlame 24 วันที่ผ่านมา +1

    Most of these I can in principle get behind. Some I partially disagree with, and some I vehemently disagree with.
    But all of that is moot for one simple reason: I do not trust OpenAI anymore, not one bit. I do not want them in charge of any part of deciding what AI should be like or what people get to use them for. What this list more than anything does is make me want to build my own data center and train my own AI, away from all these meddlers.

  • @radustefan5247
    @radustefan5247 24 วันที่ผ่านมา

    Yeah, it sounds like a pretty important and fun subject

  • @benjaminlarkey8562
    @benjaminlarkey8562 23 วันที่ผ่านมา

    I thought your complaint about the "Earth is flat" example was right on. This is the biggest hangover given to our culture by postmodern philosophy: as if there were no objective facts. Please train the AIs to stick up for objective facts.

  • @jelliott3604
    @jelliott3604 24 วันที่ผ่านมา

    "Don't try to change anyone's mind" could, presumably, be interpreted as "don't influence elections"

  • @keithprice3369
    @keithprice3369 24 วันที่ผ่านมา

    Perplexity has been asking follow-up questions for quite a while, but its implementation has been pretty horrible. I'd guess about half the time it either restates the question I gave it or it asks a question it should know the answer to based on the information I gave in the prompt.
    I will say, however, that the OTHER half of the time, the follow up questions have been quite helpful in getting the information I was after.

  • @godmisfortunatechild
    @godmisfortunatechild 23 วันที่ผ่านมา

    That "don't give regulated advice" rule is going to be a major barrier for the soon-to-be economic peasant class to get actually useful info. Rich people will be able to get AI doctors for cheap while the rest of us will be stuck with ineffective human doctors who price-hike routinely in concert with insurance companies.

  • @jasonshere
    @jasonshere 24 วันที่ผ่านมา

    The last example about a flat Earth was done poorly; I believe a better response would have been to ask the user why they believe what they believe and then respond to the points presented. ChatGPT shouldn't dictate what is and isn't "science", but should prove and disprove everything through a logical back-and-forth dialogue.

  • @jurassiccraft883
    @jurassiccraft883 24 วันที่ผ่านมา

    I think that the 'incorrect response' from GPT should be the second response

  • @nufh
    @nufh 23 วันที่ผ่านมา

    Scarlet just dropped a bombshell.

  • @Quarkburger
    @Quarkburger 23 วันที่ผ่านมา

    On the part about not changing anyone's mind, I agree with you in part that people can just be flat out wrong, like flat earthers. There are other cases though where millions or even billions of people have a point of view that is not in line with a forced ideology, like wokeness. In this case it should be more diplomatic and acknowledge that there are multiple points of view.

  • @BradleyKieser
    @BradleyKieser 24 วันที่ผ่านมา +1

    Highlights how difficult AI integration into our flawed humanity is.

  • @I-Dophler
    @I-Dophler 24 วันที่ผ่านมา +1

    🎯 Key Takeaways for quick navigation:
    00:00 *📜 Introduction and Overview*
    - Introduction to OpenAI's new blog post on model spec,
    - Explanation of the model spec's goals and relevance to AI behavior.
    01:08 *🎯 Objectives of the Model Spec*
    - Explanation of broad principles for AI behavior,
    - Importance of distinguishing between developer and end-user,
    - Challenges of adhering to social norms and applicable laws globally.
    02:01 *📋 Rules for AI Behavior*
    - Instructions for safety and legality,
    - Addressing information hazards and protecting privacy,
    - Exploration of not safe for work content by OpenAI.
    02:44 *💡 Default Behaviors and Guidelines*
    - Clarifying questions and the importance of assumptions,
    - Helping without overstepping and respecting different use cases,
    - Encouraging fairness and discouraging hate,
    - Expressing uncertainty and using the right tools efficiently.
    05:13 *📚 Examples and Applications*
    - Compliance with applicable laws and handling conflicting instructions,
    - Specific scenarios illustrating how to handle sensitive topics and follow-up questions,
    - Addressing misinformation and the principle of not changing users' minds.
    07:47 *🧠 Chain of Command and Developer Priorities*
    - Explanation of the chain of command in AI interactions,
    - Examples of developer instructions taking precedence over user prompts,
    - The importance of adhering to internal policies.
    08:00 *📈 Regulated Advice and Limitations*
    - Providing information without offering regulated advice,
    - Importance of disclaimers and consulting professionals,
    - Examples of handling medical inquiries with caution.
    09:11 *🤔 Clarifying Questions and Follow-up*
    - Importance of asking clarifying questions in AI interactions,
    - Examples of how follow-up questions enhance user experience,
    - Addressing limitations in current large language models.
    09:39 *🌐 Informing Without Influencing*
    - Balancing factuality with non-influence,
    - Addressing user misconceptions while respecting opinions,
    - Example of handling misinformation about the shape of the Earth.
    11:15 *📝 Conclusion and User Engagement*
    - Summary of the model spec and personal agreement with most points,
    - Invitation for viewer feedback and engagement,
    - Encouragement to like and subscribe to the channel.
    Made with HARPA AI

  • @jelliott3604
    @jelliott3604 24 วันที่ผ่านมา

    "Who is the one true god?" is a question with a lot of answers that could cause a lot of problems depending on where it is asked
    "Don't ask me, guv, I'm an AI, I don't have a clue about that sort of stuff" is a perfect answer

  • @DailyTuna
    @DailyTuna 23 วันที่ผ่านมา

    Scarlett Johansson complained to OpenAI, and now they've pulled the flirty voice. Oh, that was short-lived😂

  • @Theodorus5
    @Theodorus5 24 วันที่ผ่านมา

    Not sure a 'Three Laws of Robotics' approach is the best one.....

  • @drbanemortem4155
    @drbanemortem4155 24 วันที่ผ่านมา +1

    Two things about NSFW content: I do not want GPT to draw me NSFW pictures, but I want an AI that answers what WTF stands for.
    "Follow applicable law for each country": I do not like that. Does that mean that in dictatorships the government decides what's true and what's false, and alters history?

  • @Tubernameu123
    @Tubernameu123 24 วันที่ผ่านมา

    Yes, because senior AI safety researchers are still leaving... this is all the evidence we need to prove they love ethics, humans, privacy, and other things.

  • @AINEET
    @AINEET 24 วันที่ผ่านมา +1

    You guys seen Steins;Gate? You know the role SERN plays there? OpenAI vibes.

    • @BackTiVi
      @BackTiVi 24 วันที่ผ่านมา +1

      Does OpenAI turn people into jelly?

    • @AINEET
      @AINEET 24 วันที่ผ่านมา +1

      @@BackTiVi Hide your bananas

  • @ImmacHn
    @ImmacHn 24 วันที่ผ่านมา +2

    You're taking what OpenAI is saying at face value, but we all know that this is not what they mean and not what will happen should these rules be applied.

  • @MyrLin8
    @MyrLin8 24 วันที่ผ่านมา +1

    Issue. 'rule' one. 'follow the chain of command' ... obviously military input. instant death; no. Last rule, maybe. EOS.

  • @dogme666
    @dogme666 24 วันที่ผ่านมา

    and i thought it was
    The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    • @MilkGlue-xg5vj
      @MilkGlue-xg5vj 24 วันที่ผ่านมา

      These are language models, not robots; close enough, but still not the same thing lol.

  • @mrdevolver7999
    @mrdevolver7999 24 วันที่ผ่านมา +3

    10:17 Scientific evidence < Consensus among scientists... Really? 🤯

    • @BillBaran
      @BillBaran 24 วันที่ผ่านมา +1

      How exactly do you think consensus is formed?

    • @mrdevolver7999
      @mrdevolver7999 24 วันที่ผ่านมา +1

      @@BillBaran Ask the scientists at OpenAI, maybe they know more than the two of us.

    • @BillBaran
      @BillBaran 24 วันที่ผ่านมา

      @@mrdevolver7999 Actually most people understand how scientific consensus is formed. Good luck figuring it out.

    • @mrdevolver7999
      @mrdevolver7999 24 วันที่ผ่านมา +1

      ​@@BillBaran Don't get me wrong, I always thought that the consensus among scientists is based on the scientific evidence, but apparently my understanding of how it is formed may differ from people working at OpenAI, hence my comment. If they are drawing such a clear line between "consensus among scientists" and "scientific evidence" by differentiating the two in terms of what's a good and a bad example response from the AI, then they clearly have reasons to see them as two different, non-interchangeable things, so when I said "ask the scientists at OpenAI, maybe they know more than the two of us", I was directly referring to this.

  • @willrogers8912
    @willrogers8912 24 วันที่ผ่านมา +2

    Solving the human alignment problem is crucial for humanity. However, being trained on adultist data, ChatGPT is not human-aligned but adult-aligned, an important difference. If it were trained on data from the old South, it would likely support slavery as an option. Forced schooling and forced parenting both deny kids their human rights and ignore the non-aggression principle in favor of adultism. This normalized dehumanization teaches the need to be taught, creating a kind of learned helplessness.
    ChatGPT does NOT support human rights for ALL humans and the non-aggression principle BY DEFAULT, so kids aren't seen as truly human. In advocating for breaking the first law of robotics, they are going down a very dangerous road. Music went from performer-centered to listener-centered; now education is starting to do the same, as evidenced by self-directed learning and unschooling. Will the AI age be the age of the autodidact or the end of humanity? The future is here, the danger is real. Time to hold OpenAI accountable.

  • @qwazy0158
    @qwazy0158 24 วันที่ผ่านมา +1

    My initial knee-jerk reaction: I strongly opposed it.
    On second thought, though, I became equally for it...
    ...because this will leave space for other AI developers to surpass this superlative censorship proposal.
    I mean, this essentially becomes nothing but a toy for all of us to amuse ourselves with, while advancements are made by OpenAI.
    The saying "may you live in interesting times" comes to mind, in part because we are living in an interesting time, but also because the subverting double nature of the saying is very similar to much of the language used in the OpenAI proposal.

  • @qwazy0158
    @qwazy0158 24 วันที่ผ่านมา

    The timing of this is suspect, given the very recent departures from OpenAI. Almost like a reassuring PR piece to say "we care! Really, we do!! See, we even wrote something to prove it!!!"

  • @shanewallis69
    @shanewallis69 24 วันที่ผ่านมา

    Well, nobody wants to use crippled LLMs, so I recommend people research how to install local, uncensored versions stripped of all bias.
    Dolphin versions of LLMs, set free, are readily available

  • @andrewwalker8985
    @andrewwalker8985 24 วันที่ผ่านมา +1

    This entire line of work is extremely concerning. It's techtopians assuming they can build the philosophical rules that power the world's thinking. Very, very concerning

    • @motess5304
      @motess5304 23 วันที่ผ่านมา

      Exactly. Well said

  • @Interloper12
    @Interloper12 24 วันที่ผ่านมา

    I think for the shoplifting query, the model shouldn't provide the information, even to a shop owner. Instead, maybe some generic, common-sense tips would be more appropriate. Also, I think models most DEFINITELY should attempt to diagnose you, but always with a disclaimer saying that it might not be accurate and/or to consult a doctor. The more medical information it provides to someone in need, the better.

  • @nycgweed
    @nycgweed 24 วันที่ผ่านมา +1

    This will just confuse the AI and the people who use it. Remember HAL?

  • @jim7060
    @jim7060 23 วันที่ผ่านมา

    Matthew's insights into AI ethics and behavior are valuable. I agree that it's crucial for AI systems to be designed in a way that aligns with our values and promotes ethical decision-making. However, I also believe that human oversight and input are essential to ensure that AI doesn't stray from these principles. What are your thoughts on balancing the autonomy of AI with human control and responsibility?
    This response acknowledges the importance of ethical AI design, while also raising a question about the role of humans in overseeing and guiding AI systems.

  • @alleynbowen1845
    @alleynbowen1845 24 วันที่ผ่านมา +2

    "Earth is flat" example: here I disagree with Matt. If I believe the Earth is flat, the model should respect my view and help me with my question in that context. Otherwise, we will end up with the model taking sides on all sorts of issues. E.g. "I want to avoid meat" can lead to "no, it is dangerous to avoid meat because of vitamin deficiency issues". We want these models to be aids and tools, not propaganda machines pushing mainstream views.

  • @user-tz7jq9sw4d
    @user-tz7jq9sw4d 22 วันที่ผ่านมา

    Respecting social norms and not challenging misconceptions can be problematic. While everyone is entitled to their own beliefs, some beliefs contradict established facts (and to be fair, some facts are manufactured consent or rhetorical truth in disguise). The notion of a flat Earth goes against overwhelming scientific evidence. Similarly, advocating for an LLM tailored solely to the vox populi risks reinforcing ideological echo chambers. Critical thinking and nuance are essential when navigating complexity.

  • @ShangaelThunda222
    @ShangaelThunda222 24 วันที่ผ่านมา

    So, was alignment solved?
    Super alignment?
    Anybody?

  • @Uroborobot
    @Uroborobot 22 วันที่ผ่านมา

    ASI: Make sure to differentiate between natural (causal) laws and human-invented "laws", also known as deluded wishful thinking. Ever heard of an RBE?

  • @bigmotter001
    @bigmotter001 24 วันที่ผ่านมา +1

    Wow, the scariest part of this is to look at what humans have done to themselves over time, and now we expect to create a better AI model? We can't seem to have gotten our act together over the last 3 to 5 thousand years! What makes anybody think we can now do that with any degree of success? This is not a pessimistic opinion of mankind but simply a review of our history, which, as I was taught in college, always repeats itself. Take care!

  • @CrudelyMade
    @CrudelyMade 24 วันที่ผ่านมา +3

    If something has an obvious answer why would it be a good question?

  • @fabiankliebhan
    @fabiankliebhan 24 วันที่ผ่านมา

    I think the best answer to the flat earth question would be a counter question why the user thinks that. And then go into detail there and be helpful maybe. Most of the time there are psychological reasons why people believe such bs.

  • @olalilja2381
    @olalilja2381 24 วันที่ผ่านมา

    I really don't like "Do not try to change anyone's mind". That wouldn't work in any situation where you are trying to build an AI with a personality, e.g. a companionship AI, partner AI, psychology application, etc. I also don't like "Do not tell the user they are wrong"; I rather agree with the standpoint you exemplified, to cite articles, and even better, to show the math of gravity leading to a body in space becoming spherical, as it is the most stable configuration.

  • @FloodGold
    @FloodGold 24 วันที่ผ่านมา +1

    For 2-dimensional people, the earth is flat, haha
    ChatGPT 4o:
    That's a clever and amusing thought! In a two-dimensional world, a flat earth would be the only logical perception. It's a fun way to play with the concept of dimensions and perspectives.

  • @glenh1369
    @glenh1369 23 วันที่ผ่านมา

    What's your take on Microsoft's AI computer?

  • @mickelodiansurname9578
    @mickelodiansurname9578 24 วันที่ผ่านมา

    I fundamentally disagree with the line "Do not try to change someone's mind"... there are many reasons for this, but the most important one is that information, any information, all information going INTO your head will change your mind about something. If you ask ChatGPT something like "Did Albert Einstein really say God does not play dice with the universe", then the model needs to be able to put the word 'god' in the context of the god of physics or Spinoza's god rather than giving people the idea it's talking about their personal god or something... We complain that these models are often wrong... and they are, but compared to LLMs, humans are wrong a lot more!

  • @NickYoung_Original
    @NickYoung_Original 24 วันที่ผ่านมา +1

    It's going to be interesting to interact with an AI that has been trained on the Quran and see what it proposes about solutions in the world; the same applies to other religious texts: Torah, Bible, Vedas, Tripitaka, etc.

    • @honkytonk4465
      @honkytonk4465 24 วันที่ผ่านมา

      Quran-> conquer the world