Generative Agents: Interactive Simulacra of Human Behavior - Joon Sung Park (Stanford)

  • Published Jul 14, 2024

Comments • 12

  • @makdiose • 6 months ago • +9

    What would it take to have this mini community available online, like on a website, where visitors like us could view these agents in real time and see what they're doing at any given moment? Fascinating to see what they're up to next.

  • @EvanTanMusic • 3 months ago

    fantastic

  • @fitybux4664 • 9 months ago • +4

    Have you considered allowing them to have money? "Your job just paid your $1500 monthly paycheck. You have a monthly rent of $700." (Either as some sort of disembodied "world character", or with the people in the world doing the charges and payments themselves, such as a landlord/boss/etc.)
    I think having negative stimulus could really help things along. "You didn't pay your rent. Now you have to live outside in a cardboard box. Your living conditions are terrible."
    What if you had a disembodied "world character" that doles out negative stimulus randomly? 😲 ("Today, you got into a car accident.") (A rough sketch of this idea follows the replies below.)

    • @annaf8143 • 7 months ago

      love this idea

    • @EvanTanMusic • 3 months ago

      Good idea
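
A minimal sketch of fitybux4664's "world character" idea (all names here are hypothetical; nothing like this is in the paper's codebase): a monthly tick that pays salaries, charges rent, and drops random negative stimulus into each agent's memory stream, where it would be retrieved like any other observation.

```python
import random

RANDOM_EVENTS = [
    "Today, you got into a car accident.",
    "A pipe burst in your apartment.",
]

class Agent:
    """Hypothetical stand-in for a generative agent with a memory stream."""
    def __init__(self, name, salary, rent, money=0):
        self.name, self.salary, self.rent, self.money = name, salary, rent, money
        self.memory = []  # stands in for the paper's memory stream

    def remember(self, observation):
        # In the real system this observation would later be retrieved
        # and folded into the LLM prompt like any other memory.
        self.memory.append(observation)

def monthly_tick(agents, event_chance=0.05):
    """Disembodied 'world character': pays salaries, charges rent,
    and doles out random negative stimulus."""
    for a in agents:
        a.money += a.salary - a.rent
        a.remember(f"Your job just paid your ${a.salary} monthly paycheck. "
                   f"You paid ${a.rent} in rent.")
        if a.money < 0:
            a.remember("You didn't pay your rent. Now you have to live "
                       "outside in a cardboard box.")
        if random.random() < event_chance:
            a.remember(random.choice(RANDOM_EVENTS))
```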

  • @annaf8143 • 7 months ago • +1

    I LOVE this! Please do a collaboration with Electronic Arts and make a Sims 4-style computer game with similar graphics but generative agents.

    • @Zanthous_ • 3 months ago

      Sorry for the late response, but there are legal issues with the generated content that pose a risk companies won't want to take (users may prompt agents to say dangerous things, e.g. how to create weapons). Aside from that, I saw someone say that simulating just 3 agents for an hour cost around $8, so costs have to come down an order of magnitude (which might happen soon enough, and starting to develop an app/game now might make sense if not for the legal issues). There is one team working on a game like this right now, based on Animal Crossing, called Campfire - cozy ai villagers. I'm considering making a small game prototype as well.

  • @jennyhorner • 9 months ago

    Fascinating! I have a little AI staff team, and I'm trying to learn how to get them to be more independent! A question that comes up for me: Klaus is a dedicated sociology student with an interest in gentrification. Is this "experienced" as a superficial identity label, given only to explain his activity, or does he see Smallville through the lens of a sociology student? Does he observe things related to gentrification in a way the other characters wouldn't even notice?
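
For what it's worth, the paper's memory architecture suggests the label is more than decoration: memories are retrieved by a weighted mix of recency, importance, and relevance, where relevance is embedding similarity to the current context. A rough sketch of that scoring rule (class and function names are mine; the paper sets all three weights to 1):

```python
import math
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Memory:
    text: str
    embedding: list        # vector from an embedding model
    importance: float      # LLM-rated 1..10 ("poignancy")
    last_accessed: datetime

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def retrieval_score(m, query_embedding, now,
                    w_recency=1.0, w_importance=1.0, w_relevance=1.0):
    hours = (now - m.last_accessed).total_seconds() / 3600
    recency = 0.995 ** hours             # exponential decay over sandbox hours
    importance = m.importance / 10.0     # normalize the 1..10 rating
    relevance = cosine_similarity(m.embedding, query_embedding)
    return (w_recency * recency + w_importance * importance
            + w_relevance * relevance)
```

If the retrieval query reflects Klaus's current context, and his prompts carry his self-description, gentrification-related memories will tend to outscore others for him. So the label does shape what he "notices" at retrieval time, even though raw perception in the sandbox works the same for every agent.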

  • @nekomatic • 9 months ago • +2

    I wonder how this experiment would behave on smaller models, i.e. agents would use a specific (specialized?) small model depending on their role, or select from a pool of small models depending on the situation? (See the sketch after this thread.)

    • @Phooenixification • 5 months ago

      Wouldn't they lose their individuality then? If two people are in the same situation at some point, wouldn't they use the same model and reason similarly? Or what do you mean?
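
A sketch of nekomatic's routing idea (hypothetical names; nothing like this is in the paper). On Phooenixification's question: individuality could live in each agent's persona and memory stream rather than in the weights, so two agents sharing a model would still diverge because their prompts differ.

```python
from dataclasses import dataclass, field

# Hypothetical pool of role-specialized small models.
MODEL_POOL = {
    "pharmacist": "small-model-medical",
    "artist":     "small-model-creative",
    "default":    "small-model-general",
}

@dataclass
class Agent:
    name: str
    role: str
    persona: str
    memories: list = field(default_factory=list)

def call_llm(model: str, prompt: str) -> str:
    # Stand-in for an inference call against a local small model.
    return f"[{model}] ..."

def act(agent: Agent, situation: str) -> str:
    model = MODEL_POOL.get(agent.role, MODEL_POOL["default"])
    # Two agents may share a model, but the prompt carries agent-specific
    # state, so their behaviour still diverges.
    prompt = (f"You are {agent.name}. {agent.persona}\n"
              f"Relevant memories: {'; '.join(agent.memories[-5:])}\n"
              f"Situation: {situation}\nWhat do you do next?")
    return call_llm(model, prompt)
```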

  • @levioptionallastname6749 • 9 months ago • +1

    Ugh, you beat me to it! In my defense, I am only one person!

  • @Phooenixification • 5 months ago

    This was really interesting
    But it would be more interesting if it were an uncensored experiment, since our world itself is uncensored (I get that this could get wild, and what you show here is only a concept). Like one of the things you said: they always address each other formally, very similar to GPT. If it were more unleashed, the agents might start to develop their own kind of language over time and between each other, so saying good morning to your spouse gets shortened to just a "morning", or something else they develop to say to each other. And maybe they'd start to make inside jokes? But that would require the emotion part I'll talk about later. I don't know how deep the generative part goes, or if GPT needs to reach "system 2" before we can see this type of behaviour.
    And since agents don't really have a mood, they will always be pretty neutral in all encounters; mood is only simulated through already-learned behaviour.
    Although I think it wouldn't give a real interpretation of our world even if it were uncensored: things like emotions and consequences, the emotions that follow from those consequences (like serving jail time), the fact that we have a limited lifespan, and the way our lives are sectioned into parts (child, teen, young adult, mid adult, etc.) need to be addressed as well. For example, an old man might be more inclined to commit a certain crime than a younger person, because his life is soon over anyway. For a hypothetical Smallville example: John invites his crush Jennifer to his birthday party, but Kenny invites Jennifer at the same time to watch a new Netflix series and she goes there instead; John resents this and kills Kenny after his birthday party to get Jennifer to himself. A real person would go through so much reasoning and consequence-thinking before reaching such a conclusion, and killing another person for such a reason is primarily just emotion; all our actions come from some kind of emotion.
    So some kind of at least basic simulated emotion and consequence thinking (Sims style-ish), to get the really interesting drama to come out, would be a great next step. That would be really cool to see as a Smallville 2.
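
A minimal sketch of the "basic simulated emotion" step (entirely hypothetical; not in the paper): a scalar mood that events push around, that decays toward neutral over time, and that gets verbalized into the agent's prompt so encounters stop being uniformly neutral.

```python
# Hypothetical mapping from world events to mood impact.
EVENT_IMPACT = {
    "crush_accepted_invite": +0.6,
    "crush_went_elsewhere":  -0.7,
    "paid_rent":             +0.1,
}

class Mood:
    def __init__(self):
        self.value = 0.0  # -1.0 (despair) .. +1.0 (elated)

    def feel(self, event):
        delta = EVENT_IMPACT.get(event, 0.0)
        self.value = max(-1.0, min(1.0, self.value + delta))

    def decay(self, rate=0.95):
        self.value *= rate  # drift back toward neutral each tick

    def describe(self):
        if self.value > 0.3:
            return "You are in a great mood."
        if self.value < -0.3:
            return "You are upset and resentful."
        return "You feel neutral."
```

Prepending mood.describe() to the agent's prompt would be the cheapest integration; consequence-thinking could be injected the same way ("If you harm someone, you will go to jail.").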