Can AI Make the BEST Guild Wars 2 Build | MMO vs Chat GPT AI

  • Published Jan 12, 2025

Comments • 40

  • @Aranimda • 20 days ago • +9

    Looks like ChatGPT was trained before relics came into the game.

    • @defurious • 19 days ago

      Doesn't it say its data set stopped in September 2018 or '19? I don't know when relics were added, since I only started playing late last year.

  • @Buddyguard1488 • 20 days ago • +2

    The Johnny Sins teacher got me😂

  • @melinnamba • 20 days ago • +6

    ChatGPT is a language model. It can only generate language, imitating what humans have said on the internet. It is not capable of actually understanding anything that it says. Comparing it to a randomly chosen answer on a multiple-choice test is not too far off. I am actually surprised it got so many terms right.
    Much more interesting than the question of whether ChatGPT is capable of build crafting is whether anyone has bothered trying to train an AI model for that specific purpose, and how well such an AI would do.

    • @simex909 • 20 days ago

      That's what I've been told, but it's not always my experience with LLMs. I can ask it a novel word problem that no one has ever typed before and it can solve it. Something like "If I have 872572 1972 Ford Cutlasses, and someone says they have twice as many 1972 Ford Cutlasses as me, how many 1972 Ford Cutlasses do they have?" It has to distinguish that 872572 is a quantity and 1972 is part of the description of the thing. It has to know the formula and calculate the answer for doubling something. I don't think it found a forum post of someone asking what twice 872572 is, and definitely not in relation to Ford Cutlasses. If it told me what 2x2 or 2x64 is, that could easily just be language prediction, but not a long string that is unlikely to have been discussed before.
      Can you explain to me how that is only language prediction? I'm not saying you're wrong; it's much more likely I don't understand. I can understand how it could have a conversation with you based on the prediction/probability explanation, but it doesn't just converse. It doesn't just say what a person would say. I deliberately gave it a problem that no one would say. How does it give me the exact answer to the question, and not some amalgamation of the collective answers to all the math word problems in its training data, or the most common answer to similarly structured word problems?

    • @melinnamba • 20 days ago

      @simex909 Well, all you need is some really basic pattern recognition to figure out that your question isn't exactly something that has never been asked before. Sure, the exact combination of words might be new, but it's just a word problem. Most math problems are framed as word problems. It's trained to recognize different types of conversations, so it can recognize that you just posed a word problem and make its predictions from there. It doesn't need to understand what a Ford Cutlass is, or that 872572 is a quantity. All it needs to know is how word problems are structured and how it needs to structure a proper response.
      I don't know if they have found a way to fix this by now, but in the early days it was really easy to convince ChatGPT that 2+2 equals 5, because it didn't actually understand why 2+2=4. It just knew that "2+2=" is usually followed by 4.
      I am no expert on machine learning and AIs, but so far I have not seen any language model produce anything, faulty or not, that could not be explained by predictions based on pattern recognition. No understanding needed.

    • @simex909 • 20 days ago

      @melinnamba I can see how it could know commonly used problems like 2 + 2 = 4 or 10 x 10 = 100, because these are likely to be present in the training data. So when it sees 10 x 10 it recognizes that as part of a pattern and completes the pattern. It doesn't know it's doing math; it's just predicting what the next characters will be based on the examples it's seen.
      But that explanation falls apart when we're using large numbers that are unlikely to show up in the training data. I tried even larger numbers, on the off chance that it actually does have examples of 872572 x 2 = 1745144 in its training data. It solved for extremely large numbers.
      The only way it could be recognizing that as a pattern is if it's recognizing the relationship between the original number and the result when multiplying by 2 and extrapolating. If that's what it's doing, then in my opinion it does understand. It knows what the relationship is, what the semiotic symbol for that relationship is, and which numbers in my question the answer is supposed to be related to.
      Sorry. I really do need to ask an expert about this. Typing this out helped me think it through though. Thanks.

    • @melinnamba • 20 days ago

      @simex909 If you think about it, there really isn't much functional difference between letters and digits. Technically both are written language, symbols used to convey information. We conceptualize those as two different things, but I wouldn't be surprised if a language model like ChatGPT processes both in the same way: as written language.
      I do fantasy world building for tabletop RPGs. ChatGPT can correctly apply declension to fantasy words I make up for my fantasy world. It can apply learned patterns to new combinations of symbols. I suspect that's how it "does math". It doesn't actually understand why the symbols (digits or letters) have to be reassembled in a certain way; it just follows the pattern in which they should be reassembled.
      And when those patterns break down, that's when ChatGPT makes glaring mistakes, like recommending relics that don't exist but read like they could exist. It probably doesn't have enough training data yet to know which words don't fit the pattern.
      Understanding is really important for our human brains to learn something; an AI doesn't have that same need. It can be really difficult to wrap our brains around how much understanding can be simulated by simply following patterns.

    • @simex909 • 19 days ago

      @melinnamba I understand the thing about numbers and letters being treated the same; that's what I was trying to express in the first paragraph of my previous reply. Prefixes and suffixes are plentiful in its training data, so it's no surprise that it can modify made-up words. That pattern is present in the training data. In your example it's creating a sequence from the next most probable characters according to its model. The model is just the probability that a character is next in a sequence. You can do that for words and language, not because they're letters, but because the patterns are more regular, sequential, and have less variation than numbers. Every sequence of digits is a valid number, but the same can't be said for letters and words. This makes it possible to predict language. It doesn't involve operations. Since every sequence of digits is valid, the next most likely character shouldn't lead to anything meaningful to humans, and it certainly wouldn't be the result of a specific operation unless the LLM was trained on that specific problem.
      It looks at text and builds a probabilistic model of what the next most likely character is based on its observations. You can predict words that way because those words are present in the training. It has seen other declensions, and the pattern in the sequence is largely the same. By the same token, the possible outcomes of multiplying by 2 don't follow a pattern that can be predicted sequentially. Even if I gave it the first few digits of the answer, the rest of that pattern isn't present in the training data unless the specific problem is in there. The only other way to get to the correct answer to a novel problem is to do the mathematical procedure.
      That's why I'm making the distinction. I understand it's all characters to the AI. The problem is that the syntax for language is easily predictable with a sequential probabilistic model, while the syntax for math is much more complicated. Math involves relationships which can't be predicted this way. Ratios, for instance.
      If I give you the first 3 digits of a 6-digit answer to a math problem, you can't predict the next digits based on the first three. You'd have to know what the operation was. If the AI doesn't understand mathematical operators, how does it predict a number that isn't present in the training? It should just give me something that appears to be the answer, one which looks like other answers. But it gives me the correct answer. You don't solve novel math problems by predicting the next most likely character. You can only do that with problems it has access to.
      Language syntax and math syntax are wildly different. One can be predicted based on sequential patterns. If I see "happenin_" I can easily predict that the next character is "g", because I've seen other words that end with "ing". If I see "768465042_", how could I possibly predict the next character without knowing the mathematical operation?
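
      The distinction being debated in this exchange can be illustrated with a toy example. Below is a minimal sketch in Python of a character-level n-gram predictor (nothing like ChatGPT's actual architecture, and the tiny training "corpus" is made up purely for illustration): it can complete patterns it has already seen, such as "2+2=" or "happenin", but it has no mechanism for computing the answer to an unseen multiplication.

      from collections import defaultdict, Counter

      # Toy character-level n-gram "language model": it only learns which character
      # tends to follow a given context string in the training text.
      # There is no arithmetic anywhere in this code.
      def train(corpus, context_len=4):
          counts = defaultdict(Counter)
          for text in corpus:
              padded = " " * context_len + text
              for i in range(context_len, len(padded)):
                  context = padded[i - context_len:i]
                  counts[context][padded[i]] += 1
          return counts

      def predict_next(counts, prompt, context_len=4):
          context = (" " * context_len + prompt)[-context_len:]
          if context not in counts:
              return None  # never seen this context, so nothing to complete
          # pure pattern completion: pick the most frequent continuation
          return counts[context].most_common(1)[0][0]

      # Made-up miniature training data
      corpus = ["2+2=4", "2+2=4 is basic math", "it is happening", "what is happening"]
      model = train(corpus)

      print(predict_next(model, "2+2="))       # '4'  -- memorized from the corpus
      print(predict_next(model, "happenin"))   # 'g'  -- the pattern was seen before
      print(predict_next(model, "872572*2="))  # None -- no stored pattern, and no way to compute one

      A real LLM works on subword tokens with attention over long contexts and billions of parameters, so whether its multi-digit arithmetic comes from memorization, learned digit-by-digit procedures, or something closer to understanding is genuinely debated; the toy above only shows the pattern-completion baseline both commenters are reasoning from.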

  • @Jx6661 • 20 days ago • +1

    Good job, AI. You did make a "can't do anything" core Guardian build. Now give me a top 10 of the best gold farms.

  • @ch.k.3377 • 20 days ago • +2

    I was at that point too... the AI would need full access to the information, such as the damage calculated from the stats as well as the damage formula for each ability and talent, in order to construct a useful build... basically like WoW's SimulationCraft.
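
    For reference, this is the kind of data such a tool would need: each skill's coefficient plus the character's stats, fed through the game's damage math. Below is a minimal sketch using the commonly cited GW2 direct-strike formula (weapon strength x Power x skill coefficient / target armor); the function name and every number are illustrative placeholders, not real game data.

    # Minimal sketch of the per-hit calculation a build simulator would repeat
    # across a full rotation. Formula: the commonly cited GW2 direct-strike formula,
    #   damage = weapon_strength * power * coefficient / target_armor
    # All values below are illustrative placeholders, not actual game data.
    def strike_damage(weapon_strength, power, coefficient, target_armor,
                      crit_chance=0.0, crit_multiplier=1.5):
        base = weapon_strength * power * coefficient / target_armor
        # average damage once critical hits are factored in
        return base * (1.0 + crit_chance * (crit_multiplier - 1.0))

    # Hypothetical example values
    print(strike_damage(weapon_strength=1000, power=2500,
                        coefficient=1.0, target_armor=2597,
                        crit_chance=0.6, crit_multiplier=2.2))

    Layer boon uptimes, trait modifiers, and condition ticks on top of that per-hit number across a whole rotation and you arrive at roughly what SimulationCraft does for WoW, which is why access to the raw formulas matters far more here than language fluency.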

  • @dyno_97 • 20 days ago • +1

    I have actually tried this a lot previously, but it seems the AI does not have data from the latest version of the game (maybe only up to the EoD elite specs?), so it definitely is not able to provide a fully accurate guide. But it is possible to try to make something fun. I do like prompting it to make lore-accurate new character IGNs, ideas, backstories, roles, themes and whatnot, so maybe it's more suited to roleplay. In the end you can't expect ChatGPT to produce a fully accurate meta build, but you can use it for fun stuff!

    • @XDrakePhoenixX • 17 days ago • +1

      "can use it for fun stuffs."
      That's my view on this AI stuff in general. Is it fun? Indeed! Is it useful? Eh, milage will vary. Should you rely on it. Hell to the no.

    • @dyno_97 • 17 days ago

      @ indeed!

  • @TheDemethar • 4 days ago

    Things will get so wild when build enthusiasts start training their own algorithms with current build information. I don't like LLMs and believe they're a detriment to society in general, but this is where they should shine.

  • @Knifecore1 • 20 days ago • +2

    I would love to see another class!

  • @zencmd6762 • 19 days ago

    Try the same experiment with Claude AI; it'll probably give the results you are looking for.

  • @meisenology • 20 days ago • +8

    AI made me leave this comment

  • @evres0224 • 20 days ago • +1

    Damn bro you went deep this vid!

  • @TazzyPhizzle • 20 days ago • +2

    Cool video idea was a fun watch

  • @chavian0 • 20 days ago • +4

    So it basically gave you exactly what it was asked for - a bit of everything. 🤣

  • @arkadia7849 • 20 days ago

    Supreme Justice was replaced by Permeating Wrath in 2016 lol

  • @est9662 • 20 days ago

    It's a relief that this is not working. How would your experience change in WvW or Conquest? 😱

    • @Evan4K • 8 days ago

      I think it would probably be just as bad, if not worse. I doubt the AI would understand how to make a build that interacts well with a WvW squad, or how to make use of proper skill combos for Conquest PvP.

  • @albertagungbillisupangkat6584 • 20 days ago

    Wow, the AI hasn't read the relics update.

  • @DeanCalaway • 20 days ago

    Just look at a Formula 1 AI race, we'll be fine.

  • @elocfreidon • 19 days ago • +1

    This AI's made-up relics are a lot better than what the devs keep failing to come up with. The devs keep finding ways to get barely 5% more damage from some insane, worthless setup.

    • @Evan4K • 8 days ago

      I noticed that too. I was reading some of them thinking, "Hmm, that actually sounds pretty good."

  • @LucashhLima • 20 days ago

    Low-intensity DPS: maximize auto-attack and DPS.

  • @junglediff9546 • 20 days ago

    Finally, what I asked for.

  • @biomagic8959 • 20 days ago

    Love how people are dooming that we'll get AGI next year 😂😂

  • @fargonthebrave • 20 days ago

    AI goes against the GW2 terms of service.

  • @jorgeroman9916 • 20 days ago

    YOU CAN SOLO HoT HERO POINT CHAMPIONS WITH A MEDIOCRE BUILD???!!!! Oh, my Grenth, I feel so WEAK.

  • @Timrogers361 • 20 days ago

    Under 23s group 😅

  • @dfg12382 • 20 days ago

    No, the AI build is trash, as anyone can tell after 5 seconds. There's no synergy in the choices, which is proof that the AI doesn't understand what its goal is with this task.