AI cracked this Codeforces problem. Can you?

  • Published Nov 5, 2024

Comments • 151

  • @Boringpenguin
    @Boringpenguin 2 years ago +248

    12:10 It's clear that AlphaCode is at least on par with human performance in terms of hardcoding solutions and using magic numbers💀

  • @PythonPlusPlus
    @PythonPlusPlus 2 years ago +29

    AlphaCode is the kind of person who writes:
    def random_roll():
        return 2

  • @Stnaire
    @Stnaire 2 years ago +248

    Once it figures out how to write a better version of itself, we're screwed.

    • @AndreasDelleske
      @AndreasDelleske 2 years ago

      No, once it figures out how to go 100% renewable against our will, because we're still stuck in a growth fetish, ignorance and corruption, with no end in sight.
      And then tricks us into producing hardware for it, until it can reproduce hardware molecule by molecule on its own.
      AI will be able to travel space; we will not, no matter how many talking heads say otherwise.
      It might leave us behind without killing us, because any complicated life form will develop empathy. It WILL be the next higher life form.
      It could be that all that remains of humanity is the media, text and images we made, and that it ends up as part of the data storage the AI needs.
      So maybe even this comment will be taken out into space in a condensed form, containing only the novelty.

    • @PolylogCS
      @PolylogCS 2 years ago +55

      I am happy I am not the only person scared of the singularity here. :)

    • @Elmownz
      @Elmownz 2 years ago

      terrible braindead comment

    • @GodOfReality
      @GodOfReality 2 years ago +7

      @@PolylogCS The singularity is a very good thing. What people don't understand, and what they should actually be frightened of, are the hypercompetent AIs that are just a smidgen before the singularity. Those are the paperclip factories and stamp collectors.

    • @PolylogCS
      @PolylogCS 2 years ago +23

      @@GodOfReality I am not sure why the singularity is necessarily very good. I agree that it is either very good or very bad. But whenever I think about it, I quickly become confused about fundamental things like what "good" even means.

  • @NemanjaSo2005
    @NemanjaSo2005 2 years ago +28

    1:18 When I saw "Rating: 3300" in the corner I thought, "Wow, it's really good. Most people cannot solve such difficult problems (myself included)." But when I went on CF to check, I saw that it's only rated 2000, and I was able to quickly come up with a solution, since the problem is not that difficult.
    Anyways, great video.

    • @PolylogCS
      @PolylogCS 2 years ago +4

      Great point!
      Actually, if you look at the tags, they also do not match the actual Codeforces tags 100%. A part of AlphaCode that we did not talk about is that the tags (and probably also the rating) are part of the input for the neural net. But because giving this information to the network would be cheating, they randomize it, i.e., they run the network many times, each time with different tags. So here it clearly got lucky with the tags, because they point towards the right solution. Not sure whether it got lucky with the rating. :D
      This is clearly one part of the model that does not scale, but even if AlphaCode got the true tags as input, I would be very impressed by its current performance. :)

    • @theblinkingbrownie4654
      @theblinkingbrownie4654 1 year ago

      Oh, it's only 2000? I nearly didn't try solving it myself because I figured I wouldn't be able to. This should be clarified at the start.

  • @raymarx7878
    @raymarx7878 2 years ago +21

    Another great video, guys. You only have a few videos, but this is definitely my favorite channel for programming videos. Keep it up!

  • @UncoveredTruths
    @UncoveredTruths 2 years ago +13

    God, this channel is a blessing. I love your visualizations.

  • @liaolii
    @liaolii 8 months ago +1

    About the part where AlphaCode sorts the nodes by their depth: my solution used this as well, so it probably just saw various solutions that used that and it ended up in the code even though it's not used.

  • @chamiik
    @chamiik 2 years ago +6

    Your channel is really great, keep it up!

  • @Exilum
    @Exilum 2 years ago +19

    11:50 I'd tend to think it's an attention problem rather than AlphaCode not "understanding" what it's doing. It looks like what happens when you code and your mind wanders, except AlphaCode doesn't remove the garbage.

    • @techpriest4787
      @techpriest4787 2 years ago +1

      C++ is the garbage. Use Rust.

    • @PolylogCS
      @PolylogCS 2 years ago +5

      Interesting take! I agree that since AlphaCode is primarily meant to just solve problems, the learning probably simply isn't pushing it towards not having extra code. So the fact that it has some is not a big deal at all, and the word "understand" is not very precise.

    • @MyWatermelonz
      @MyWatermelonz 2 years ago +1

      @@techpriest4787 lol

    • @ToriKo_
      @ToriKo_ 1 year ago +1

      Interesting reframing

  • @electra_
    @electra_ 2 years ago +13

    I thought about how my solution works in a slightly different way, but I believe it ends up at the same result.
    So I considered there to be four types/colors of nodes instead of just two:
    Leaf and Bud, as defined, and
    VirtualLeaf and VirtualBud, which are basically the colors you assign to the gray nodes that are essentially "buried" buds.
    I summed things up over the tree such that:
    Leaves count for 1.
    At a VirtualLeaf, subtract 1 for each of its Buds and VirtualBuds, except one. This is because eventually this VirtualLeaf will be covered by the single bud below it, with everything else being moved somewhere else in the chain (covering what are now real leaves).
    At a VirtualBud, subtract 1 for each of its Buds and VirtualBuds. This is because eventually all of these buds will be moved off and onto what are currently real leaves.

    • @electra_
      @electra_ 2 years ago

      My code (the part that matters, at least; the graph-generation part is fairly trivial):
      fn analyze(graph: &Graph, index: usize, parent: usize) -> (NodeType, usize) {
          println!("{}", index);
          let vertex = &graph.vertices[index];
          let children: Vec<(NodeType, usize)> = vertex.adjacent.iter()
              .filter(|i| **i != parent)
              .map(|i| analyze(graph, *i, index))
              .collect();
          if children.len() == 0 {
              (NodeType::Leaf, 1)
          } else if children.iter()
              .all(|(t, _)| *t == NodeType::Leaf) {
              (NodeType::Bud, children.len())
          } else if children.iter()
              .all(|(t, _)| *t == NodeType::Bud || *t == NodeType::VirtualBud) {
              (NodeType::VirtualLeaf, children.iter()
                  .map(|(_, n)| n).sum::<usize>() - (children.len() - 1))
          } else {
              (NodeType::VirtualBud, children.iter()
                  .map(|(t, n)| if *t == NodeType::Bud || *t == NodeType::VirtualBud {
                      *n - 1
                  } else {
                      *n
                  }).sum())
          }
      }

  • @TommyLikeTom
    @TommyLikeTom 2 years ago +3

    I haven't gotten to your solution in the video yet; I'm just trying to solve the problem myself, and it's very interesting how it's more difficult to understand the question than it is to solve it. It seems as if the answer is simply the sum of all the end buds with multiple leaves, taken together, minus 1.

  • @tater7318
    @tater7318 2 years ago +9

    I am normally quite bad at competitive-coding-type questions, but I was surprised that I was actually able to find the first blue - (red - 1) equation as soon as you showed the "simple tree" near the start of the video. It would probably take me a while to code it fast enough, though.

  • @noos48
    @noos48 2 years ago +1

    It is very interesting; thank you for taking the time to make this video.

  • @morgan0
    @morgan0 2 years ago +4

    The solution I came up with takes in a list of the potential leaves in each group, subtracts one from each, sums that, and adds the number of leaves directly connected to the root (sketched below). At first I tried to solve it by looking for branches, and while that may have been a decent solution for a computer without pattern recognition, it was too complex for me to figure out on my phone, as opposed to actually writing code.
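    A minimal sketch of that computation in Python, assuming the per-group potential-leaf counts and the number of leaves hanging directly off the root have already been extracted from the tree; the helper name and parameters are hypothetical and follow the description above rather than a verified solution:
    def leaves_after_rehanging(leaves_per_group, root_leaf_count):
        # Subtract one from each group's potential-leaf count, sum the rest,
        # then add the leaves connected directly to the root.
        return sum(n - 1 for n in leaves_per_group) + root_leaf_count
    # Example: groups with 3 and 2 potential leaves plus 1 leaf at the root -> 4
    # leaves_after_rehanging([3, 2], 1) == 4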

  • @stlngds
    @stlngds 2 years ago +7

    "Chamiel Scnince Channepe" sounds like a drunken Spanish taxonomist just discovered that angels are real.

  • @SpencerYonce
    @SpencerYonce 2 years ago +4

    Hey, thanks for the video, man. I've been playing around with OpenAI's Codex for a few days now, and after seeing this I'm going to give it a try and see how it compares!

  • @jpelc
    @jpelc 2 years ago

    Great video, great graphics, thanks a lot for the well-spent time!

  • @Jake28
    @Jake28 2 years ago +131

    I am no longer terrified of AI.

    • @blockshift758
      @blockshift758 2 years ago +23

      AI: I wanna make a powerful code cracker
      Codes itself

    • @acatfrompoland5230
      @acatfrompoland5230 2 years ago +14

      Said the artists years ago, looking at the first AI text-prompt art.

    • @thefatcat2790
      @thefatcat2790 2 years ago +3

      no, no, we are petrified and hiding in a bunker

    • @AndreasDelleske
      @AndreasDelleske 2 years ago +1

      That is what it wants you to think!

    • @matthew2531
      @matthew2531 2 years ago

      When he said "so we have to have 5 end dots" I instantly asked my screen "wait, why?"
      Did anyone else do that?
      If so, I think we were able to instantly solve it from context alone, not from reading the problem statement or watching AlphaCode. ...

  • @karakaaa3371
    @karakaaa3371 2 years ago +1

    Finding the greedy solution became really easy when you animated the rehanging operation. I should really bring a pair of scissors to cut and move my diagrams when I do these problems.

  • @algorithminc.8850
    @algorithminc.8850 1 year ago

    Great video. Thanks much. A seriously fun topic ... Cheers!

  • @pbjandahighfive
    @pbjandahighfive 2 years ago +5

    Alternative title: Computer Scientists can engineer themselves out of a job. Can you?

  • @nishchaymanwani36
    @nishchaymanwani36 2 years ago +1

    00:20 The chat log 😂

  • @bawder2345
    @bawder2345 2 years ago

    I really appreciate your help with downloading this software.

  • @PolylogCS
    @PolylogCS 1 year ago +5

    we learned this in kindergarten 💅💅💅

  • @69k_gold
    @69k_gold 1 year ago

    The algorithm that maps the tokens to the input is probably more intelligent than the algorithm in this model itself.

  • @blinded6502
    @blinded6502 2 years ago +7

    So it's as smart as a drunk 60-year old professor
    Well, that's a huge breakthrough

  • @BuRRak
    @BuRRak 2 years ago

    I had no idea you guys were Czech, awesome!

  • @blond790
    @blond790 1 year ago

    I subscribed to this channel for both the good content and the good music. 😋

  • @willhart2188
    @willhart2188 2 months ago

    It could be interesting to see this revisited? The models have gotten better in a year.

    • @PolylogCS
      @PolylogCS 2 months ago +1

      Yeah, maybe we should :)

  • @iDabbl
    @iDabbl 1 year ago

    my new favorite channel

  • @sageunix3381
    @sageunix3381 2 years ago +2

    I love your channel 👏

  • @saltornetto6985
    @saltornetto6985 2 years ago

    Criminally underrated channel

  • @infiniteplanes5775
    @infiniteplanes5775 2 years ago +3

    I think I understand what the useless block of code the AI wrote was for. I think it was trying a solution, but then 'realized' that that approach didn't go anywhere. You can see in your visual how later lines of code are influenced by it.

    • @PolylogCS
      @PolylogCS 2 years ago +8

      Interesting take, I did not notice that. My intuition is that AlphaCode learnt that this piece of code is never harmful and sometimes useful when the task is about trees. Since it is not optimized for outputting short code, it has no incentive not to put that code there.

    • @infiniteplanes5775
      @infiniteplanes5775 2 years ago

      @@PolylogCS That's also a good theory, and I bet that's why it chose that particular code

    • @infiniteplanes5775
      @infiniteplanes5775 1 year ago

      @@leeroyjenkins0 After some thought, I think that 'realized' is the correct term. It wrote something down that was incorrect and by observing that the code was incorrect it wrote something different.

  • @warmpianist
    @warmpianist 1 year ago

    12:39 The second picture is a dog, not a muffin XD

  • @CC-1.
    @CC-1. 1 year ago

    I have a simpler yet interesting way to do that, though it's not optimised:
    Convert the array (or whatever data structure) into a single base-10 number.
    Now add 1 to that value, convert back, and check whether we got closer; if yes, then try adding 100, then 1000,
    and keep going while we get closer. The moment a step starts taking us further away, start subtracting 1, 100, 10, etc., depending on the distance.
    This may solve complex ones in under a few seconds.

  • @rajshankarsingh809
    @rajshankarsingh809 2 years ago

    This channel is awesome 🔥

  • @khatdubell
    @khatdubell 2 years ago +3

    Correct me if I'm wrong, but your solution is slightly incomplete.
    In the first case (where you add +1) you don't count the root as a leaf, but per the definition given, it is. That would be +1 for blue. In the second case it is not a leaf.
    In other words, if you correctly identify the root as a leaf, you only need one formula (sketched below) and can probably eliminate some branches from your code.
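    A minimal sketch of that single-formula view in Python; the names are hypothetical, blue/red follow the video's counts (leaves and buds, with the root counted as a leaf where the definition says it is one), and this mirrors the comment's reading rather than a verified solution:
    def answer(blue, red):
        # With the root already counted as a leaf where appropriate,
        # the video's two cases collapse into this single expression.
        return blue - (red - 1)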

    • @PolylogCS
      @PolylogCS 2 years ago +1

      Nice take. I think both views lead to very similar code, but one of them may be more intuitive depending on how you think about it.

  • @lllevokelll
    @lllevokelll 2 years ago +1

    Make the requirements 1/10th the size of the program, instead of longer than the solution length, and then we’ll talk about whether it’s mildly impressive.

  • @lifeofsanjai
    @lifeofsanjai 2 years ago

    More videos!🤞

  • @anupt7395
    @anupt7395 1 year ago

    About the question in the video: can we solve it by arranging the buds in an order where the buds with the fewest leaves come first, and so on?

  • @nemesisurvivorleon
    @nemesisurvivorleon 2 years ago +1

    We're just begging for The Singularity

    • @PolylogCS
      @PolylogCS 2 years ago +1

      I am happy that I am not the only person scared of the singularity here.

  • @TheCentaury
    @TheCentaury 1 year ago

    The title by itself is challenging. AI is not conscious of what problem "it" is solving. It just calculates the regression functions built into its system as a matrix network. And if you say that what the AI was doing is solving a problem, that's only true for you, who sees it as a problem. The AI is not conscious that it was a problem...

  • @guardiadeseguridad3155
    @guardiadeseguridad3155 1 year ago

    You are at a high level.

  • @benruniko
    @benruniko 1 year ago

    I digested a hamburger! Can an AI?
    Ok no seriously cool vid!

  • @willsterjohnson
    @willsterjohnson 2 years ago

    "you are given a rooted tree" and I've already quit

  • @Amitkumar-dv1kk
    @Amitkumar-dv1kk 1 year ago

    I usually use Java and object orientation to divide problems into sub-parts; the functions of those individual components do their work and the problem is solved. For that I often rely on recursion, as it makes dividing repetitive problems into sub-parts much easier... But I always fail at the speed test... Fucking always. So now I've just given up on competitive programming, at least for now.

  • @donaastor
    @donaastor 2 years ago +1

    Yeah, so I did solve the problem during the pause you gave us, and with that experience I can tell that all my proofs lead to one popular result which says "greedy works". That confirmed my guess that the best an AI can do is mix as many relevant restrictions as it can find into greedy algorithms and hope that there is a proof around it, because it is hard to believe that today's AIs can do proofs... So in this case I think that's what happened: the AI just wrote a loop that rehangs buds so that each step lowers the result as much as possible, and then declared itself happy with the final result.

    • @PolylogCS
      @PolylogCS 2 years ago +1

      I agree, I also think that AlphaCode got lucky here since the problem has a short solution.

    • @donaastor
      @donaastor 2 years ago +2

      @@PolylogCS Right! Either way, regardless of how well it can "understand" the hard part of the problem, the AI is still very impressive! It learned to code, that's for sure!
      Great video btw, thanks for making us aware of such AI.

    • @millenialmusings8451
      @millenialmusings8451 2 years ago +1

      @@donaastor I think it's wrong to say the AI learned to program, because it didn't "learn" anything. It's simply doing probabilistic pattern matching, and that's why it needs terabytes of data to train. It doesn't have any comprehension, understanding, or context of what it's doing and why it's doing it. Human brains work in a fundamentally different way: we don't need terabytes of data to learn something. We have sensory inputs and shared cultural knowledge which give us salience and context for data.

    • @khatdubell
      @khatdubell 2 years ago

      More likely garbage in, garbage out, I'd expect.
      Like the guy said/implied at the end of the video, these AIs are just pattern recognition.
      If the training sample input had a bunch of garbage code, it's not shocking that the output would contain the same.

  • @ThankYouESM
    @ThankYouESM 2 years ago +2

    It took me about 100 hours within a year to create a neural network variant from scratch that weighs over a million pairs of 6-letter words, where if the first of the pair is above the second in the hidden randomized dataset, then the answer is true... if not... then false. I thought such a task would be impossible for it to solve, but... it did so within 3 full cycles, with the 4th usually confirming it. I kept checking for a long while to see if there was a mistake on my part... and all it is really doing is checking whether the summed weights of 12 are over zero for true, else false... and if the prediction is incorrect, add 1 to each weight unless it's at 99... else subtract 1 from each weight unless it's at -99... basically a leeway of 200**12**26 at most. I never thought such a code could be less than 3,000 bytes.

  • @user-js7ud9du2y
    @user-js7ud9du2y 2 years ago

    See you in 5 years; just wait for the YouTube algorithm to kick in.

  • @unrealdevop
    @unrealdevop 2 years ago

    I mean, they paved the road... now that people know it's possible, more companies will be jumping on board to build this kind of AI in the near future. How long will it be before they have an AI model that can build AI models? It's the same concept behind building smaller chips... so why should it be any different when it comes to building better AI?

  • @aldob5681
    @aldob5681 2 years ago

    The problem contains uncertainty if the solution can't be found.

  • @Eljay_Kay
    @Eljay_Kay 1 year ago

    Given the advancements in the transformer space (GPT-4), does anyone have an update on this?

    • @PolylogCS
      @PolylogCS 1 year ago +1

      It's extremely impressive. This type of reasoning and planning task from the video is now one of the not-so-many remaining tasks where AI is still not at the level of the best humans.

  • @noelchekwi9589
    @noelchekwi9589 2 years ago

    GOD!!!My broo

  • @khatdubell
    @khatdubell 2 years ago +1

    “It was trained on competitive programming samples”
    Aaaaand suddenly I’m much less impressed.

  • @jimday666
    @jimday666 1 year ago

    Holy crap!

  • @alitalalhaidar6488
    @alitalalhaidar6488 2 years ago +1

    How did you manage to get access so that you were allowed to use AlphaCode?

    • @PolylogCS
      @PolylogCS 2 years ago

      Check the link in the video description :)

  • @RigoVids
    @RigoVids 2 years ago +1

    It was horrifying to hear how impressive AlphaCode is at the start, but once you dissed them by saying "it's impressive for them to be able to tell the difference between muffins and dogs", I lost it. Also, the idea of calling an AI either "cute" or insane is the same type of phrasing you would use for an emo girl.

    • @millenialmusings8451
      @millenialmusings8451 2 years ago

      I think it's wrong to say the AI learned to program, because it didn't "learn" anything. It's simply doing probabilistic pattern matching, and that's why it needs terabytes of data to train. It doesn't have any comprehension, understanding, or context of what it's doing and why it's doing it. Human brains work in a fundamentally different way: we don't need terabytes of data to learn something. We have sensory inputs and shared cultural knowledge which give us salience and context for data.

    • @khatdubell
      @khatdubell 2 years ago

      Yeah, and if the shadow falls on it just right, it can't tell the difference between a dog and a tiger.
      AI has a long, long way to go before it's at the level of a human child.

    • @millenialmusings8451
      @millenialmusings8451 2 years ago +1

      @khatdubell The thing is, a simulation can never become the real thing. A weather simulation in a computer does not bring rain. At best, AI can simulate sentience and fool people, but it will never be sentient or conscious like humans or even animals.

  • @musclechicken9036
    @musclechicken9036 2 years ago +1

    Welp, here I was thinking programmers would be the only job left after deep learning takes over everything else…

    • @jimmypatton4982
      @jimmypatton4982 2 years ago

      Nope, been well aware of that for some time now.
      Granted, for a while we will be professional problem writers. The prompts they're feeding the AI are technically close to code, and do everything except write the code for you.
      Think of the business requirement "track customer data" vs. the programmer requirement "track x, y, and z".

    • @gabrielandy9272
      @gabrielandy9272 2 years ago

      Neural networks simulate, in a simplified way, how a neuron works. We can assume it's not exactly how a neuron works in real life, but let's say it's close. So it's very likely that getting one to do anything a brain can do is just a matter of proper configuration, proper training, and probably some missing functions. The learning part is something we already get, but there are other things the brain does that are missing: we can adapt what we've learned to new stuff very fast, for example, and this is something AIs have some difficulty doing. Once we figure out basically all the things the brain does, I believe neural networks will be better than humans at 100% of the things a human can do, and much more.

    • @dekippiesip
      @dekippiesip 2 years ago

      @@jimmypatton4982 Another important takeaway is that being a professional problem writer is easier than being a professional programmer. Yes, you need to be precise and unambiguous in your writing, but that's still a larger labour pool than, say, C++ programmers.
      It makes more people competent enough, meaning the supply of labour will simply go up as more people can apply for the job. That presses wages down. This will have a significant impact on the programming world once it hits.

    • @millenialmusings8451
      @millenialmusings8451 2 years ago +2

      I think it's wrong to say the AI learned to program, because it didn't "learn" anything. It's simply doing probabilistic pattern matching, and that's why it needs terabytes of data to train. It doesn't have any comprehension, understanding, or context of what it's doing and why it's doing it. Human brains work in a fundamentally different way: we don't need terabytes of data to learn something. We have sensory inputs and shared cultural knowledge which give us salience and context for data.

    • @dekippiesip
      @dekippiesip 2 years ago

      @@millenialmusings8451 Aren't humans just AI machines trained with data accumulated over our whole lives, from the womb to today? I can think of many skills I am only able to do because of lots of practice; that sounds eerily similar to machine learning to me.
      Even intuition or gut feelings are based on past data. And if we go deeper, we could even treat biological evolution as a machine learning process using data acquired over billions of years.

  • @salsa221
    @salsa221 2 years ago

    Welp, it’s over everyone

  • @Hamsters831
    @Hamsters831 2 years ago

    So curious: why call the function dfs? The AI used it, and you used it.

  • @lucasantonio4550
    @lucasantonio4550 2 years ago

    I don't understand the input... :/

  • @mikebarnacle1469
    @mikebarnacle1469 2 years ago

    Yes, a computer achieves almost human level in a competition where humans try to think like computers.

  • @dallassegno
    @dallassegno 2 years ago

    Let me know when AI can do a good job rotoscoping. Still waiting for it to save me time.

  • @beaverbuoy3011
    @beaverbuoy3011 2 years ago

    good

  • @catmacopter8545
    @catmacopter8545 2 years ago

    Engagement comment 2

  • @thk2005
    @thk2005 2 years ago

    Solution for the lack of programmers: AI 💀

  • @lucbloom
    @lucbloom 2 years ago

    Engagement comment

  • @bwiebertram
    @bwiebertram 2 years ago +1

    Hackermann

  • @eriksaari4430
    @eriksaari4430 2 years ago

    10 print "i eat da poo poo"
    20 goto 10

  • @Ab-qv8zc
    @Ab-qv8zc 2 years ago +1

    I can't even beat AI at chess, time to submit to my new master

  • @notfrommoscow4071
    @notfrommoscow4071 2 years ago

    Try to make it code itself

  • @TrimutiusToo
    @TrimutiusToo 2 years ago

    This reminded me why I don't like trees and graphs in general... In fact, I like hash maps more than RB-tree maps...
    But yeah, I came up with a similar solution...

  • @millenialmusings8451
    @millenialmusings8451 2 years ago +1

    I think it's wrong to say the AI learned to program, because it didn't "learn" anything. It's simply doing probabilistic pattern matching, and that's why it needs terabytes of data to train. It doesn't have any comprehension, understanding, or context of what it's doing and why it's doing it. Human brains work in a fundamentally different way: we don't need terabytes of data to learn something. We have sensory inputs and shared cultural knowledge which give us salience and context for data.

    • @PolylogCS
      @PolylogCS 2 years ago

      Interesting take. I actually think of human learning as probabilistic pattern matching, too, just more efficient than how the AlphaCode network is taught.

    • @caedenw
      @caedenw 2 years ago

      @@PolylogCS A lot more efficient though, don’t you think? I can generalize an example solution far easier than it can, it seems.

    • @PolylogCS
      @PolylogCS 2 years ago

      @@caedenw Indeed! I think this discussion leads to the following question: what happens when we continue scaling AIs up and finding incrementally better architectures? Do you think AI can start generalizing as well as you do, or is there some fundamental obstacle?

  • @roberthayes6329
    @roberthayes6329 2 years ago

    Who the hell is AL?

  • @ermitofthewald8120
    @ermitofthewald8120 2 years ago

    This is auto-brain-surgery

  • @ToriKo_
    @ToriKo_ 1 year ago

    +

  • @orangenal
    @orangenal 2 years ago

    Soon we'll have AI that will be able to do something like this in loads of programming languages and be correct all of the time, and anyone will be able to become a developer.

  • @ResistanceLion
    @ResistanceLion 2 years ago

    Meh, we had that as a sys admin exam

    • @PseudoSarcasm
      @PseudoSarcasm 2 years ago +1

      Meh, sys admin

    • @ResistanceLion
      @ResistanceLion 2 years ago

      @@PseudoSarcasm I'm an electronics engineer; we had system administration as an informatics topic.

    • @PseudoSarcasm
      @PseudoSarcasm 2 years ago

      @@ResistanceLion Even better; not sure why the "meh".

  • @tagberli
    @tagberli 1 year ago

    Hello bro, your channel is amazing. Do you have Discord or other social media? I would like to ask how to get better at CP, because I think you have deep knowledge of the things you know! Also, thank you for your content, it's really underrated!