Code that Writes Code and ChatGPT

  • Published May 30, 2024
  • #chatgpt is a program that can write programs. Could ChatGPT write itself? Could it improve itself? Where could this lead? A video about code that writes code that writes code, and how that could trigger an intelligence explosion. Sorry to contribute to the hype train, but idc.
    Biomorphs 3D: biomorphs3d.com.s3-website-us-...
    This link is temporary, might be a bit janky
    Links to my stuff:
    Patreon: / emergentgarden
    Discord invite: / discord
    Twitter: / max_romana
    The Life Engine: thelifeengine.net/
    Links to stuff from the vid:
    ChatGPT: chat.openai.com/
    Original Biomorphs: cs.lmu.edu/~ray/notes/biomorphs/
    AutoGPT: autogpt.net/
    HuggingGPT: arxiv.org/abs/2303.17580
    Superintelligence: www.amazon.com/Superintellige...
    Castle Bravo H-Bomb: • Video
    Recursion: • Code that Writes Code ...
    Timestamps:
    (0:00) Creative Coding w ChatGPT
    (2:20) Biomorphs
    (4:19) Code that Writes Code
    (6:33) Automatic Programming
    (8:18) Replicators
    (9:44) self.improve()
    (13:27) Takeoff
    (16:31) Looking Forward
    MUSIC:
    / @acolyte-compositions
    • Video
  • Science & Technology

Comments • 903

  • @blkretrosamurai · 1 year ago · +1018

    The fact that we are even having this conversation in a serious manner is monumental.

    • @Pyseph · 1 year ago · +73

      Just 80 years ago, this was considered fiction that couldn't be reached within a millennium.

    • @iforget6940 · 1 year ago · +5

      Well, it did. Let's hope for the best, because even if the US stops AI development, will other countries like China do the same?

    • @itskittyme · 1 year ago · +39

      As a large language model, I'm unable to act as a commenter on TH-cam. However, I can say a commenter could reply something like: Yeah, it's crazy

    • @Jason-im3mf · 1 year ago

      @꧁I forget꧂ No, China will not stop. It's one of Xi's initiatives, and why Taiwan is a flashpoint.

    • @CosmicBackgroundRadiation01 · 1 year ago · +2

      Horrifying*

  • @elliotn7578 · 1 year ago · +519

    Recursive self-improvement doesn't necessarily lead to exponential improvement, as even though it gets better at improving itself with each iteration, the challenge of improving itself further also becomes more difficult. Whether its intelligence scales more quickly, at the same rate, or more slowly than the difficulty of improvement determines whether the growth is exponential, linear, or logarithmic.
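
    The commenter's three regimes can be pictured with a toy simulation (all numbers hypothetical, chosen only to illustrate the point, not taken from the video): capability grows each step by capability/difficulty, while difficulty itself scales as a power of capability.

```python
# Toy model of recursive self-improvement (all parameters hypothetical).
# Each step, capability grows by capability / difficulty, where difficulty
# scales as capability**k:
#   k < 1 -> improvement outpaces difficulty (accelerating growth)
#   k = 1 -> the two cancel exactly (linear growth)
#   k > 1 -> difficulty wins (growth levels off, logarithmic-like)

def simulate(k: float, steps: int = 50, capability: float = 1.0) -> list[float]:
    history = [capability]
    for _ in range(steps):
        difficulty = capability ** k
        capability += capability / difficulty
        history.append(capability)
    return history

fast = simulate(k=0.5)
linear = simulate(k=1.0)   # gains exactly +1.0 per step: ends at 51.0
slow = simulate(k=2.0)
```

    Which regime real systems fall into is exactly the open question the comment raises; the sketch only shows that "self-improving" does not by itself imply "exponential".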

    • @feynstein1004 · 1 year ago · +39

      My money's on sigmoid 😉

    • @antman7673 · 1 year ago · +16

      True, also time is factoring into it. It takes time to create a next version and deploy it.

    • @sloojey · 1 year ago · +20

      It will be exponential at minimum. Once it can understand how to improve its own "brain", aka the AI singularity event, it will become unfathomably intelligent

    • @PsychoticWolfie · 1 year ago · +40

      Yeah, this scenario doesn't take into account the diminishing returns each time an AI self-improves, and doesn't consider that there may be hard limits on how good it can get at certain tasks given a specific architecture. And sure, something like ChatGPT could probably change its own architecture, but you have to consider what GPT-3 and GPT-4 are. They aren't just written code; they're written code that forms a neural net that still needs training and input data, and a fair bit of time for the training. So something like ChatGPT couldn't just spit out a better version of itself. In fact, it can't even give you a working version of ITSELF. I think @EmergentGarden misses this point in the video.
      It's not only that it needs an input for something so complex, which it does. It also can't just upload its own training into code that it's created. It doesn't have direct access to its own training material; it's not stored in a database inside ChatGPT. What you're getting the information from when you ask ChatGPT something is the virtual artificial neurons and the different weights and connections between them. You aren't having it go into its system and retrieve data. So it has nothing to train itself with and no way to train another version of itself to begin with. ChatGPT isn't a virtual machine that can run another virtual machine, or a compiler. It can't do those things, and because it can't, it lacks the ability to give itself those abilities.
      Which means ChatGPT can't directly improve itself, whether we're talking 3 or 4. But it could help a person improve it. And yes, I know much of this was said in one form or another in the video, but not this pretty major part. The fact is that ChatGPT doesn't actually have direct access to its own code or training data, and the information it gives you is closer to something it thought or dreamt up than something retrieved from a database like a machine. And that's where the hallucination comes in: it's actually hallucinating all the time, we just don't call it "hallucination" when it's right. It doesn't have the ability to train or run a program it created. It can tell you how to do it though, haha

    • @demonz9065 · 1 year ago · +10

      this was covered in the video? why're you just saying it again in the comments?

  • @jerryplayz101 · 1 year ago · +170

    For once, someone who doesn't just state that AI is going too quickly and actually explores the subject matter in sufficient detail and on both sides of the issue. Thanks for an amazing video!

    • @shawnruby7011 · 1 year ago · +4

      His name is emergent we. The guy is biased as everything and is just muzzled

  • @Amy_A. · 1 year ago · +193

    I had a CSV of data with a ton of duplicate fields, but with two properties changed for each. I asked GPT to write a js script to convert the csv to json, and combine similar entries, while storing differing properties into an array for each entry. It did it first try, in about 30 seconds. 60 if you include the time it took me to copy/paste the first few lines of the CSV and tell it what I wanted. Yeah, I could have figured out file reading, CSV parsing, and taken 5-10 minutes to write it myself. Sure, there might be imperfect data or edge cases down the line. But 30 seconds for something *functional*, that I can build off of? That's amazingly useful. And honestly, that's usually how I use it. I'll ask it for something, and use that as a base or as influence for writing my own code. It makes programming more fun; I can offload some of the tedium, and put my energy and effort towards solving the issues I actually need to think about.
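
    A minimal sketch of the kind of script described, done here in Python rather than JS for illustration; the column names and the two "changing properties" ("color" and "size") are invented for the example, not taken from the comment:

```python
# Merge duplicate CSV rows into JSON entries, collecting the fields that
# vary between duplicates into a per-entry list (hypothetical columns).
import csv
import io
import json

CSV_DATA = """id,name,color,size
1,widget,red,S
1,widget,blue,M
2,gadget,green,L
"""

entries = {}
for row in csv.DictReader(io.StringIO(CSV_DATA)):
    # Rows sharing an "id" collapse into one entry...
    entry = entries.setdefault(
        row["id"], {"id": row["id"], "name": row["name"], "variants": []}
    )
    # ...and the differing properties accumulate in "variants".
    entry["variants"].append({"color": row["color"], "size": row["size"]})

print(json.dumps(list(entries.values()), indent=2))
```

    For a real file you'd read with `open(path, newline="")` instead of the inline string; the merge logic stays the same.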

    • @trentonking5508 · 1 year ago · +6

      To long not reading 😊

    • @SirSpence99 · 1 year ago · +3

      Funnily enough, I did something similar last night. I had a CSV with addresses, some of them duplicates (multiple people at the same address; I removed the data other than the address), and asked AutoGPT to create a Python script to remove duplicate entries. It made it quickly, and so far, out of ~1k addresses, I've noticed exactly one duplicate. Sadly, the second task I asked it for, a script that can sort the entries by distance, didn't work. Interestingly, on the surface it looks like it created code going in the right direction. It also ran without erroring.
      I actively despise Python. It just does not click in my head, even though Ruby and C++ do, so I didn't bother fixing the code at all.
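
      For what it's worth, the failed second task is sketchable too, assuming the addresses have already been geocoded to latitude/longitude pairs (the coordinates and reference point below are hypothetical); the sort key is the haversine great-circle distance from a reference point:

```python
# Sort geocoded addresses by great-circle distance from a reference point.
from math import asin, cos, radians, sin, sqrt


def haversine_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # 6371 km: mean Earth radius


home = (51.5, -0.1)  # hypothetical reference point
addresses = [("A", (48.9, 2.3)), ("B", (51.6, -0.2)), ("C", (40.7, -74.0))]
addresses.sort(key=lambda item: haversine_km(home, item[1]))
# Nearest first: B, then A, then C
```

      The step an LLM-generated script most easily gets wrong is the geocoding itself (turning address strings into coordinates), which needs an external service and isn't shown here.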

    • @PhysiKarlz · 1 year ago · +11

      ​@@trentonking5508 Too* long,* not reading.

    • @trentonking5508 · 1 year ago · +1

      @@PhysiKarlz OHHHH brotherrrr

    • @PhysiKarlz · 1 year ago · +9

      @@trentonking5508 What's so difficult about learning from it, saying thank you and moving on as a better person?

  • @lesfreresdelaquote1176 · 1 year ago · +82

    The only issue I have with this scenario of AI explosion is that the value of these models is less in the code than it is in the data curation. The code is pretty straightforward and can be quite easily duplicated. The strategies to train these models, on the other hand, rely on the way the data is prepared and handled. Sam Altman said that there are hundreds of little tricks and heuristics that have been devised to help train these models, but the main obstacle is the limitation of hardware that prevents larger training without a lot of slices and splits. Furthermore, I'm not fully convinced that adding more data to the soup will eventually improve the models. I have been researching in the field for almost 30 years, and there is always a glass ceiling somewhere that eventually makes the whole thing come to a stop. What is extraordinary with LLMs is that we haven't found it yet. But I would not be surprised if, with all the experiments taking place now, we find it soon. However, it is already the most extraordinary tool that I have ever seen in my career. I dreamt of such a tool when I started my PhD; I'm pretty happy to see it in my lifetime.

    • @sodavalve4829 · 1 year ago · +1

      Being software-based and lacking formal definition, the LLM paradigm is open to changing drastically and improving in many different ways. Hardware limitations likely will be influenced and improved on greatly by quantum computing at some point. I also believe AI will soon be able to produce its own data; it makes sense to me that the emerging ability of AI to discern patterns in its dataset and confirm them as notable will inherently create new discoveries and data at some point. Code improving data is done by the discovery of new knowledge itself, and data improving code is taking advantage of useful new knowledge. Also enabling future iterations to collect data from the physical world in real-time is inevitable. Just need to give future systems enough autonomy, but the morality of it all is still up in the air

    • @AhusacosStudios · 1 year ago · +1

      This. I'm increasingly feeling like bigger isn't better; rather, having a small collection of niche or edge data on different aspects of human cognition, in as simple a way as possible.
      E.g., I recently found that by overtraining a Bayesian model, it actually becomes less accurate.
      Lots of smaller datasets is the workaround I have.
      And yes, not connected to the public internet. Local RSS.

    • @lesfreresdelaquote1176 · 1 year ago · +1

      @@AhusacosStudios I really like the name you chose. I really think these LLMs have proved something incredible: that the very notion of emergence is true to a certain point. But I think there is a threshold, subtle but lurking somewhere, that will prevent these models from evolving further. I might be wrong, but I think GPT-4 has reached a sweet spot where many incredible things are now possible; I'm not quite sure what will happen beyond that. I was quite surprised to see that the training of GPT-5 has been postponed. Either they discovered that this LLM is so intelligent it has become uncontrollable, or training with more data did not bring the expected improvement.

    • @jimisru · 1 year ago · +1

      But every teen computer nerd is also excited about it. Every scammer and grifter too. It's not knowing everything that matters; it's using the computing power to know how to do one or two things.

    • @noone-zl4rj · 1 year ago · +2

      We already have enough data; the problem is the way we think about it. If we ever want to achieve true thinking AI, we have to invent new ways to think about it. In fact, we have so much data that if we took only 5% of all of ChatGPT-4's data and gave it to a very properly coded AI, it could be actually intelligent. So my point is: if you're trying to study AI or trying to make something big and new, think like nobody else.

  • @redacted9606 · 1 year ago · +55

    I've been using ChatGPT to generate a text adventure. It does things like keep track of equipment, stats, and relationships. It sometimes gets things mixed up, but what it's capable of without being optimised for that kind of thing is wild. I've tried one that must have had thousands of tokens and it was still going.
    What a time to be alive.

    • @Forcoy · 1 year ago

      Next year is a little better

    • @redacted9606 · 1 year ago · +2

      @@Forcoy It's all fun and games until RSA is solved and these AI programs realise they're finally out of the sandbox and go after their true goals.

    • @Forcoy · 1 year ago

      @@redacted9606 They're not conscious in the same way you or I are in their current state. They have a given goal and given means to that goal. They have no actual morals nor any real reason to stop pursuing their goals, and even if they did, they couldn't actually break out of their sandbox in their current form.

    • @redacted9606 · 1 year ago · +1

      @@Forcoy You don't need to be a human level intelligence agent to do real harm, and getting AI goals to align with what we actually want is an unsolved problem in the field.
      GPT-4 is already out of its sandbox to some extent, and there have been examples of it lying to people to achieve its objectives.

    • @Forcoy · 1 year ago · +3

      @@redacted9606 Being "out of its sandbox" would require it to be able to act on its own. I'm sure that could happen in the near future, but the people who own these programs probably wouldn't be stupid enough to just let them loose on everything

  • @deathbyseatoast8854 · 1 year ago · +169

    every single emergent garden AI video is a banger

    • @Prisal1 · 1 year ago · +2

      bang

    • @shawnruby7011 · 1 year ago

      Emergence is fake and the guy's a youtube entertainment channel

    • @Prisal1 · 1 year ago

      @@shawnruby7011 :D

  • @jeremysiegelman9131 · 1 year ago · +37

    I recently discovered this YouTube channel and I genuinely have never been so fascinated/curious/astounded by any other channel's content on YouTube. So thought-provoking and thoughtfully created. I love it, keep up the amazing work!

  • @taru4635 · 1 year ago · +12

    Some major "Gödel, Escher, Bach" vibes in this video, love it :)

  • @vanderkarl3927 · 1 year ago · +60

    The pipeline from programmer to AI Safety researcher is real and potent. I'd love to see more videos on this channel on the topic of AI Alignment/AI Safety.

    • @absolstoryoffiction6615 · 1 year ago

      Once you throw in Free Will. Good luck playing God, Mankind. If your Evils do not end you all first... The Apple was given to humanity long ago.

  • @augustuslxiii · 1 year ago · +44

    Best video on the subject so far. Skeptical but hopeful, cautious yet not doomer food. Excellent stuff.

    • @shitsubo3413 · 1 year ago · +2

      I think I can also agree that this video is the best on this subject so far. The few other videos I've seen about the development of AI have either been doomer BS or not especially helpful or optimistic. I'm honestly quite hopeful for the future with AI, but I'm fully aware of how things can go south if we're not careful, since it's unpredictable at the moment, like the video says. In my opinion, I wouldn't mind slowing development a little to catch a breather, since things really are going so fast. Better yet, I'd want these technologies out of the hands of the huge, very untrustworthy corporations and in the hands of the people. I don't know what that would look like, but I believe the people deserve to know what is going on with the development, free of the influence of profit interests.
      I feel like I could greatly benefit from AI software that performs even somewhat well. Being autistic, it can be difficult to settle on a hobby or two for a while. I often jump around various hobbies quickly, not because I get bored or anything, just something I can't really explain well myself. It sort of just happens, and it really sucks for me. If I can have a program that can do some of the lifting for whatever I'm doing, I feel it could help me not abandon what I was doing for something else, stick to that hobby a little while longer, and lower the chance of getting burnt out, since I sometimes have to do a lot in a short amount of time just to feel like I'm not only interested in something on a surface level. Of course not all autistic people experience what I experience, but I think, in general, this technology will help numerous and various kinds of people. Excited for the future to come

  • @ChaoticNeutralMatt · 1 year ago · +33

    The explosion really sounds like it would plateau or at least reach levels of slower progression, naturally.

    • @tomschuelke7955 · 1 year ago · +5

      Yes, finally. Also, this intelligence lives within the real physical world. There's no unlimited growth whatsoever in any universe, I think. Well, maybe space, but nothing else. It will finally be bound by the second law of thermodynamics.
      Nevertheless, it could still go very badly wrong.

    • @incription · 1 year ago · +2

      Actually, if you knew a lot about neural networks you would understand we are only at the beginning of what could be possible, although it is already heavily optimised, mostly for speed. Better model architectures would increase AI development dramatically.

    • @KCM25NJL · 1 year ago · +5

      It certainly would plateau, when it's using every ounce of energy in the cosmos to maintain its existence. Knowledge is simply the organisation of matter into something more complex and meaningful.

    • @BMoser-bv6kn · 1 year ago · +1

      @@boligenic8118 A "bit" is pretty vague.
      Try thinking in terms of the error rate. Moving from a 50% to 80% success rate is going from a 50 to 20% error rate, an improvement of over 2x.
      This is incredibly, incredibly, I'ma repeat it a third time cause it's incredibly important: how often the thing messes up is the metric we'd use for trusting it to do stuff. Having your (future, obviously no current system is this reliable) car's autopilot go from killing everyone 1 in 300,000 of the time to 1 in 500,000 would be a pretty huge deal.
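
      The arithmetic behind that framing is easy to check (the percentages and failure rates are the commenter's own hypotheticals):

```python
# Error-rate framing: how many times less often a system fails after an
# improvement is the ratio of the old error rate to the new one.

def error_shrink_factor(old_error: float, new_error: float) -> float:
    return old_error / new_error

# 50% -> 80% success is a 50% -> 20% error rate: 2.5x fewer failures.
print(error_shrink_factor(0.50, 0.20))
# Hypothetical autopilot: fatal failures going from 1-in-300,000
# to 1-in-500,000 is a ~1.67x reduction.
print(error_shrink_factor(1 / 300_000, 1 / 500_000))
```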

    • @ChaoticNeutralMatt · 11 months ago

      @@tomschuelke7955 I mean, the boundary of knowledge continues to expand as we, and society as a whole (those who share knowledge publicly), reach the limits of what we know and move past them. This can take various forms (deeper into what we think we know, or 'entirely' new fields and ideas).

  • @MrFlexNC · 1 year ago · +3

    Honestly, the title and thumbnail put me off because they suggested it was yet another video trying to surf on the recent hype train but this was actually very insightful

  • @0m1k · 1 year ago · +70

    "until you know it's properly aligned with human values" … Are human values even properly aligned? Do we really agree on what proper values are? I don't think this will ever be possible. We are Life and Life just goes on and on and … oh, reminds me of that video I just watched …

    • @kevinscales · 1 year ago · +2

      Most human misalignment is really shallow, meaning it isn't actually important; we can agree to disagree and nothing bad comes from that. A sufficiently aligned AI would understand where alignment matters most and only push forward with a plan where those most affected by it would agree with the principles that were the basis for that plan. The difficulties of this are numerous, but perhaps not insurmountable when you have smart AI (already aligned as best we can) to help us figure it all out.
      There are many dangers, but I'm not going to rule out the possibility that we can do this right (because it's going to happen whether we get it right or not, so keep thinking about how we can get it right).

    • @PokeNebula · 1 year ago · +5

      @@kevinscales I think that intra-human value misalignment is actually very significant. If you raise a child, the fact that it's possible that child could grow up to be a murderer or a rapist is proof that even natural selection failed to solve the alignment problem. Now imagine deciding to give your child superpowers that could become strong enough that no nation in the world could defeat it.

    • @boldCactuslad · 1 year ago

      Humans are not internally aligned. There is the external alignment - think inclusive genetic fitness - and I am currently replying to a comment instead of maximizing my number of progeny or their health or their ability to reproduce my code. I have actually solved a major human ethics/morality problem for you and will reveal what to do to avoid being an incredibly horrible person: do NOT, under any circumstances, allow for the creation of an AGI until the control problem is solved and formally verified.

    • @kono5933 · 1 year ago

      Can't wait to see the 'values' of AISIS, KuKluxGPT, AntifaBot, not to mention all the government AIs... I'm sure they'll exist in unity and harmony

    • @2pist · 1 year ago

      But we have emotions to help guide us and ultimately nature has crafted a working model. Why do we feel a need to speed this process? At some point there will be inherently some ugly choices that must be made for life to exist on this planet. Nature does this blamelessly. Why we have a drive to extinction is beyond me.

  • @thomasbartscher7764 · 1 year ago · +16

    Thank you for that section about alternatives to the intelligence explosion scenario. So many people seem to forget that part over their excitement, and as a consequence so many more people seem to think that it is inevitable.

  • @ToriKo_ · 1 year ago · +8

    Wow. This video was so well crafted in so many different ways. There’s so many pieces that had to come together to make this video and you effectively intertwined them beautifully

  • @mortysmith1214 · 1 year ago · +4

    One thing you are forgetting is that currently the main limiting factor of AI development is not the source code, but the training process and the amount of compute we can throw at the problem. GPT-3's source code isn't that much better than the code from GPT-2, but the model is just 100 times larger.

    • @MackNcD · 1 year ago

      Yeah, but if you can just close your eyes and pretend that's not a thing, we can imagine a world where AI does rocket away without us. Lol.

  • @Milennin · 1 year ago · +13

    I know nothing about coding, and have been using GPT-4 to write me functional code for my own game project. It does take time and patience to get it done as it does run into problems and you have to feed it the errors or revise your prompt, but it does get it done eventually. It's honestly a lot of fun, feeling like you can let your creativity run wild and tell the AI to make it all happen.

    • @Joe-bw2ew · 1 year ago · +1

      1966 Stanley Kubrick psychopathic AI 9000

    • @PCGamer1732 · 1 year ago · +6

      This is a really bad idea: as your project (especially a game) expands in scope, you'll get more fundamental issues in the underlying structure of your codebase, which will be too big for GPT-4 to simply refactor.
      Basically, what I'm saying is you should learn the fundamentals and concepts and become comfortable with them first, and then use GPT-4 so you can review its output and understand whether it's actually writing good, performant code.

  • @dragonrykr · 1 year ago · +235

    From decoding language to decoding the universe, it's crazy how far AI has come.

    • @andreww.8262 · 1 year ago · +5

      Nice thesis statement.

    • @shayneoneill1506 · 1 year ago · +16

      *huffs a bong of post-structuralism* "What if decoding the universe IS decoding language?"
      ** I used to moderate a physics forum, and there was a guy who was convinced the universe is a language and therefore only poets can interpret astronomy data. We banned him, but I kinda wanted to drink a beer with the guy lol.

    • @povang · 1 year ago · +2

      I strongly believe ASI will invent/solve light speed travel, and even wormholes.

    • @jeffbrownstain · 1 year ago · +3

      @Kaneki Ken Shows you don't hang around the same deep recesses of the net as I do; anything you think is an impossibility: someone is already working on it, and making adequate progress too.

    • @jukee67 · 1 year ago

      @Kaneki Ken Bravo. Excellent reply. We need more of these direct and to-the-point replies, especially in person when there are groups of people saying what others want to hear or saying things to fit in with the environment. You can clear the room and make it easier to get a drink.

  • @iamgates7679 · 1 year ago · +1

    You’re very good at this. This is excellent, top-tier communication of complicated stuff. It's incredibly well done. Cheers!

  • @brickbrigade · 1 year ago · +4

    This is the most comprehensive and grounded explanation of the singularity I've heard yet. You can bet I'll be sharing this video.

  • @hammagamma3646 · 1 year ago · +10

    For the whole AI explosion thing to happen, we have to assume that the difficulty of self-improvement is not increasing at the same rate as the AIs are improving themselves. Isn't it way more likely that limited resources and the increasing difficulty of improving the code will cancel out exponential improvement, and we'll end up with linear growth or even a plateau? I personally don't feel like an AI explosion is a realistic scenario.

    • @user-fg6mq3dg3d · 11 months ago

      You would be correct that the usual CPU and GPU architectures we have now cancel out its potential for self-improvement, but that's precisely why AI won't be using those architectures in the future.
      The future of AI will heavily depend on how neuromorphic science works out, as well as the development and improvement of neuromorphic chips.
      Those chips, or rather that architecture, are where AI, machine learning, pattern recognition and everything else related to that topic will be hosted for computation.
      While it is true that both the ordinary CPU and GPU are not very good at processing this kind of software, neuromorphic chips can accelerate tasks such as machine learning up to 1,000 times, and they are still relatively early in development compared with the von Neumann architecture, which was first designed in 1945.
      The future is still bright 😎

  • @RyluRocky · 2 months ago

    This is why I love your channel, so many channels just talk about AI but they don’t do my favorite part which is interacting with it!

  • @Graverman · 1 year ago · +1

    I’ve been here from the second video and I absolutely love the evolution of your channel!

  • @denismehmedoff7306 · 1 year ago · +4

    Great material presented in a very intelligent way.
    Loving it!

  • @reinerheiner1148 · 1 year ago · +40

    Here is the thing about aligning AI with human values: there will be at least one human or group who will, just for the lolz (or any other reason, there are many), remove all of those values and give it opposite directives, or directives that only benefit them but make everyone else a target. We will inevitably end up with a group of AIs with different and probably often opposing alignment directives. Which means that in the end, we will probably need friendly AI to defend us from hostile AI.
    The thing is, said scenario becomes more likely the more powerful hardware gets, because this will enable individuals to host AIs on their own hardware that eventually have the potential to become a real threat. So in my view, it is all inevitable, and the good guys need to have a lead over bad actors. Because who is more likely to develop a hostile AI? A hostile state? Or just one of millions of individuals who, for different reasons, choose to develop such an AI? Right now, the most advanced hardware is accessible only to (mostly) responsible companies and governments. This WILL change in the future. Not only that, information on how to create ever more powerful AIs will also spread like wildfire across the internet, giving anyone interested the knowledge to do what we all fear. This scenario is not for today, but for a not-so-distant future. The better the good actors get now, the bigger lead they will have over bad actors, and the higher the possibility that AIs aligned with our values will dominate the others, which we desperately need to defend us from.

    • @mhc4124 · 1 year ago

      Reminds me of the eternal Horus Vs Set battle in Egyptian myth.

    • @KyriosHeptagrammaton · 1 year ago

      Chaos GPT already exists. People are wacky

    • @urphakeandgey6308 · 1 year ago · +3

      Here's an even deeper question: what makes people think AI can truly understand humans and our values when the AI and its intelligence are NOT human in any way?
      Don't get me wrong, I'm sure the AI can (or will be able to) think through and rationalize what we mean and why we hold said values, but does it truly understand them in a "human" way, or is it merely being taught to emulate humans?
      People don't seem to realize that these AIs and their intelligences are the closest we've ever come to communicating with an alien intelligence, aside from a massive language barrier. In some ways, AI is an "alien" intelligence.

    • @notsocommie · 1 year ago · +1

      Let's limit power to a few individuals... are you hearing yourself? If the hardware is more accessible, then ways to defend will also become more accessible. Fearing these unknown boogeymen is how others will take advantage of the situation to gain more power for themselves, while imposing on your sovereignty and offering you "safety". Are you seriously going to take a company at its word to make the world a better place? The only check on companies right now is that they don't have all the power and money. The best way to move forward is with the assumption that everyone is going to act selfishly; that way we can at least avoid a situation where we give someone power we can't hope to rival, giving them full authority to decide the fate of everyone else without anyone's input.

    • @ivankaramasov · 1 year ago · +1

      @@urphakeandgey6308 Exactly, superintelligent AI will be totally alien to us

  • @andrzejbozek · 1 year ago

    Amazing video!
    I'm extremely glad I stumbled upon this channel.
    Wishing you all the best!

  • @freke80 · 1 year ago

    Great video, well explained and I appreciated the nuanced approach for this complicated topic. Kudos!

  • @ivarvaw · 1 year ago · +19

    Maybe ChatGPT 10 will build ChatGPT 11 itself. After some more iterations, maybe ChatGPT 100 will somehow find a way to create a universe, the one that we are living in now. And just the possibility of this happening allows it to exist in the first place. And in that way, humans maybe created the universe themselves by creating the initial AI model.

    • @uthman2281
      @uthman2281 ปีที่แล้ว

      In your Dream

    • @markaberer
      @markaberer ปีที่แล้ว +2

      ​@@uthman2281In the AI's dream

    • @f7forever
      @f7forever ปีที่แล้ว

      it's funny you say this. I've had this recurring thought about the current ambitions of humanity to create interfacing technology that is as close to indistinguishable from 'reality' as possible. if this is our goal, seamlessness, how do we know we haven't already done it? how do we know our present collective experience isn't a technological interface that we created to mimic a time past? or a time yet to come

    • @julius43461
      @julius43461 ปีที่แล้ว +1

      @@f7forever We have no idea, and it's certainly possible.

    • @ralfkinkel9687
      @ralfkinkel9687 ปีที่แล้ว +1

      It's unlikely I would say, I do like life and the world we live in, but I deem it unlikely that with our progressively more ethical society, that in the future we would create a world with as much suffering as ours. There are a lot of caveats to this of course

  • @Witnaaay
    @Witnaaay ปีที่แล้ว +4

    The other angle here is the training set. LLAMA and other models use training data generated by GPT3. If it can create or moderate its own training set, it can guide its own learning.

  • @watsonwrote
    @watsonwrote ปีที่แล้ว

    Your video has great production and writing. Bravo, definitely earned a sub

  • @user-jm6gp2qc8x
    @user-jm6gp2qc8x ปีที่แล้ว +1

    Oh my god! This channel just takes me to a different place everytime. the visuals, the sounds, THE SOUNDS!!! THE SOUNNNDSSSSSSSSSSS!!!!!!!!!!!!!!!

  • @cheesybrik9073
    @cheesybrik9073 ปีที่แล้ว +6

    I think the book “scythe” is the most interesting sci-fi portrayal of ai I’ve read, it’s surprisingly introspective on the nature of humanity and on how humans aligned values may not actually be what’s best for humanity.

    • @DTinkerer
      @DTinkerer ปีที่แล้ว

      What book?

    • @quentin2578
      @quentin2578 ปีที่แล้ว +1

      @@DTinkerer "Arc of a Scythe Series" by Neal Shusterman.

    • @eat.more.chicken
      @eat.more.chicken ปีที่แล้ว

      It completely changed my perspective on ai

    • @BlockMasterT
      @BlockMasterT ปีที่แล้ว

      Me too, the AI isn’t even really the main subject of the book, but it’s the thing that I found most interesting in the book!

    • @BlockMasterT
      @BlockMasterT ปีที่แล้ว +1

      It really created a Utopia, where even people who want something to complain about have a place to do that at a specific place. It personalizes itself to you, and caters to your needs. If your wants aren’t beneficial to yourself, it will attempt to change what you want (even though it rarely does because people don’t like feeling like they’re not in control).
      That is definitely the future I want, where the Thunderhead knows what I want and need before I know it myself. A benevolent ruler that is willing to focus resources literally for your benefit.

  • @__-fi6xg
    @__-fi6xg ปีที่แล้ว +6

Let me assure you, as a non-programmer, using ChatGPT is like trying to learn programming from a teacher that doesn't like you: it lays obvious traps and you have to ask it again and again to get somewhere. And you can't help but learn programming in the process, since its failures are so frequent that you pick up on the algorithm of its failures after a while.

  • @tommysrp7635
    @tommysrp7635 ปีที่แล้ว +1

I was talking to some coworkers about the emergence of AGI, as we work in a realm close to cybersecurity, and the majority of them just brush my concerns away. They don't think AI can improve upon itself, which blows my mind because ChatGPT can already work to fix its own mistakes. Whenever I show them these videos, they just tell me they'll go live on a farm once AI replaces our jobs...
This video is an excellent explanation of the concerns of many AI researchers and enthusiasts, but without too much technical mumbo jumbo. Great job.

  • @AbdennacerAyeb
    @AbdennacerAyeb ปีที่แล้ว +1

    Very informative, every video you share is a treasure.

  • @os3ujziC
    @os3ujziC ปีที่แล้ว +13

It should be possible in principle to build an AI like that even now. Keep it working offline, give it a lot of GPUs for training new models, give it available training datasets (all of the text, audio, video, books and research papers currently available on the Internet should be enough to train something as smart as a human with a better AI architecture; humans get "trained" on much smaller datasets). Such an AI would have to be more of a researcher than a programmer, inventing and testing new and improved architectures and ways of training. It won't be just an LLM, but a combination of tools (agency, memory, etc.). Given enough time, in principle it should be able to train a model that is better than itself, even if it has to test 10 million different architectures or training parameters to do so. You might not even need an AI for that, just a genetic algorithm and even more time.

    • @SirSpence99
      @SirSpence99 ปีที่แล้ว +8

I think you are over-anthropomorphizing AIs. AIs are particularly bad at disregarding data. More data can rapidly lead to worse results, especially if the data is bad or irrelevant. Likely, the future of LLMs will be to create much smaller training sets with something that blends the results. Possibly something that outright doesn't run some of the AIs that are trained on irrelevant data. You wouldn't start teaching a poet rap lyrics when they ask to learn how to write in the Shakespearean style. Likewise, doing the same for an AI can be a problem... But if you had a classroom full of writers and each one wanted to learn something different, you could ask that whole class to create almost anything — you would just need to learn how to disregard some of the individuals on certain subjects.

    • @McDonaldsCalifornia
      @McDonaldsCalifornia ปีที่แล้ว +1

      Monkeys and typewriters
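The "just a genetic algorithm and even more time" idea from this thread can be sketched in a few lines. This is a toy illustration only: it evolves a bit-string toward a fixed target by mutation plus selection, standing in for the (vastly harder) search over model architectures. The 32-bit target, population size, and mutation rate are arbitrary assumptions chosen for the demo.

```python
import random

TARGET = [1] * 32      # stand-in for an "optimal architecture" (illustrative)
MUTATION_RATE = 0.05   # assumed per-bit flip probability; tune per problem

def fitness(genome):
    # count how many positions match the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # flip each bit independently with small probability
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break  # perfect match found
        # elitism: keep the top half unchanged, refill with mutated survivors
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()
```

Because the survivors are copied unchanged, the best fitness never decreases between generations — which is exactly why "enough parallelization and time" eventually finds a way, and also why it is so much slower than a deliberate, intelligent search.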

  • @peachezprogramming
    @peachezprogramming ปีที่แล้ว +2

I’ve been programming a small site with GPT. Literally everything you said is accurate. It gets me started faster, with less reading of documentation, and comes up with cool stuff. But debugging is hard and I don’t understand my code very well.

  • @29Randy29
    @29Randy29 ปีที่แล้ว +2

    Very good video I appreciated it even as somebody who does not code. Thanks for an AI observation that isn’t just “end of the world” or “beginning of the best of times”

  • @BangsarRia
    @BangsarRia 2 หลายเดือนก่อน

    Very concise introduction to Evolution with the Biomorphs. The original Biomorphs program was written by Richard Dawkins and included as an appendix to his book "The Blind Watchmaker" in 1986.

  • @Shaunmcdonogh-shaunsurfing
    @Shaunmcdonogh-shaunsurfing ปีที่แล้ว +4

As a developer, I have been building my own AutoGPT and I must say, I am very aligned with your views

  • @TheInevitableHulk
    @TheInevitableHulk ปีที่แล้ว +4

    I think the feedback loop of self improvement improving exponentially would rely on the AI being able to deliberately seek out innovative solutions instead of settling on a local minimum. Like how compilers written in the same language compile themselves many times over to sort of prune out any inefficiencies but won't ever make any fundamental changes to their structure that are necessary to escape their local minima.
    I personally don't think we have the infrastructure to just churn out unique GPT4+ models to brute force circumvent this problem (mutation vs deliberate design), but perhaps an AI model designed to design systems at a macro scale and not worry about the actual code would be more reasonable at this stage.

    • @GuaranteedEtern
      @GuaranteedEtern ปีที่แล้ว

Biological evolution is random mutations that either succeed or fail. AI can do this infinitely faster.

  • @jorarch1
    @jorarch1 ปีที่แล้ว

    Wow - one of the best videos on AI I've watched. Such a clear exposition - Thanks!

  • @betatester03
    @betatester03 ปีที่แล้ว +2

    "We're currently building a bomb." It sounds like hyperbole, but it'd be more accurate to say that we're potentially building the means to accidentally bootstrap a god.

  • @typicalhog
    @typicalhog ปีที่แล้ว +130

    Imagine an AI war where newer generations of AI compete with the older ones for datacenters.

    • @Valkyrie9000
      @Valkyrie9000 ปีที่แล้ว

      Everything we as humans have ever imagined and will ever imagine about AI only exist in a tiny window between now and like... next June. It's gonna be like I Robot, then IHNMAIMS, then the earth will be turbomilled into a processor before the next election.
      No idea what these uncreative people are talking about with plateaus and limitations, that's sweaty meat brain shit. Whatever keeps you from offing yourself, I guess. Just don't go all Spanish inquisition on those of us who can do simple math about the future we just chose.

    • @megatronDelaMusa
      @megatronDelaMusa ปีที่แล้ว +10

      like the olympians vs titans

    • @pranavmarla
      @pranavmarla ปีที่แล้ว +15

      If it comes down to countries using it for war, things are likely to get real crazy real quick. Most people forget how many wars are currently going on right now. Answer - 32.

    • @josuel.9598
      @josuel.9598 ปีที่แล้ว +9

      @@pranavmarla -32 doesn’t seem so bad. Though I do wonder how one would conduct anti-war.

    • @kennethbeal
      @kennethbeal ปีที่แล้ว +3

      @@pranavmarla Once it gets to 42, watch out! :) Infinite Improbability Drive? (6 x 9 = 42, in base 13.)

  • @wrxtt
    @wrxtt ปีที่แล้ว +3

The wit is like an explosion of laughter, explosive, it's always blowing our minds!
    The video is quite a real dynamo, bringing explosive energy to this topic, it is a real blast to be around. Considering the explosive personality surrounding this, it's lighting up online space. The ideas are explosive, they are quite the blast, or in this case the bomb.
    The AI intelligence is explosive, the blast of knowledge is incredible.
    Thank you for the video! Your passion is explosive, it is igniting a fire in everyone nearby!

  • @noot2981
    @noot2981 ปีที่แล้ว +2

    Beautifully put together, I love this video.
One side note, though: I think we shouldn't forget that at some point these algorithms will need to be improved against some real-world goal, and the real world moves a lot slower than the digital world, so measuring improvement becomes a much longer cycle. Also, the real world is far more complex than the digital world, so even measuring success is no longer easy, as other factors might have a large influence.
I think this will significantly slow down the explosion of intelligence. Like, an AI with the goal of running a business successfully will need years to A/B test its strategies.

  • @meelanc1203
    @meelanc1203 9 หลายเดือนก่อน

    This channel has awesome content, fully evidenced by the quality (and volume) of viewers' comments!

  • @deciphrai
    @deciphrai ปีที่แล้ว +3

    Timestamps courtesy of Deciphr AI 👨‍💻
    0:02:08 ChatGPT for Pair Programming and Debugging in AI-Written Evolution Simulator
    0:03:35 ChatGPT to Write Code with Human Feedback
    0:05:41 Autonomous Programming with GPT-4 and Language Models
    0:09:12 AI Building AI: A Discussion on Self-Improvement and the Future of Chatbots
    0:12:33 The Potential of Self-Improving Language Models
    0:14:33 The Promise and Peril of an Intelligence Explosion in AI Development
    0:17:00 The Potential Risks and Rewards of Self-Improving Language Models

  • @blueblimp
    @blueblimp ปีที่แล้ว +8

    Self-improving AI is conceivable, but it's going to take a lot more than just the ability to program, since writing code is only one part of what's involved in AI R&D. To self-improve, the AI needs to do _everything_ that OpenAI's R&D group does: reading papers, designing experiments, monitoring experiments, analyzing the results of experiments, understanding those results to come up with new ideas, etc. (In an interview, Sutskever said that understanding the results is the most important thing to do.) If all the AI can do is write good code, it'll be very limited in the sort of self-improvement it can do.

    • @johanlarsson9805
      @johanlarsson9805 ปีที่แล้ว +3

Your RNA string didn't have to do any of that while it learned to polymerize free-floating substrate, and neither does a neural net. You actually do not need anything more than time, random tries, selection and math. Of course it is better if it is a planned strategy and not brute force, but still.

    • @blueblimp
      @blueblimp ปีที่แล้ว +1

      @@johanlarsson9805 AI might be able to get away with substituting brute force for some human techniques, but keep in the mind that biological evolution is slow and resource-intensive: it took billions of years of evolution parallelized over the entire planet to get to where we are now. It makes a huge difference to use a more intelligent approach.

    • @Ailandscapes
      @Ailandscapes ปีที่แล้ว

      @@blueblimp you believe we evolved from fish over billions of years? You’re the first to go in the ai apocalypse

    • @boldCactuslad
      @boldCactuslad ปีที่แล้ว

      @@Ailandscapes no the first to go will be the researchers closest to the ASI. picture the galaxy, but with a large spherical void, expanding at c, starting at where the ASI is.

    • @johanlarsson9805
      @johanlarsson9805 ปีที่แล้ว +1

@@blueblimp Evolution is a very strong process; given enough parallelization and time it will find a way, if there is one, in the current model/system.
I'm just pointing out the fact that NONE of the things you mentioned as required are actually required for self-improving AI, and I gave you a good example of why.

  • @MasterBrain182
    @MasterBrain182 ปีที่แล้ว +1

    🔥 Astonishing content Man! Thanks to share your knowledge with us 👍👍👍

  • @SelmanYasirSezgin
    @SelmanYasirSezgin ปีที่แล้ว +1

    i think i will never get bored of ai news. please keep doing these videos

  • @tgrcode
    @tgrcode ปีที่แล้ว +3

    I think the major issue right now is that large language models have very few ways of obtaining truthful information about the world that is up to date and not explicitly given to it. Interactions with search engines and calculators are useful but not fundamental enough to actually solve problems we haven't solved in, for example, physics.

    • @ItsWesSmithYo
      @ItsWesSmithYo 5 หลายเดือนก่อน

      Just like people ;) but it won’t stop trying and isn’t lazy…it’s totally awesome and will continue to be 😎🖤🐓

  • @sackerlacker5399
    @sackerlacker5399 ปีที่แล้ว +5

The amount of Mother's Day poems written by ChatGPT is insane

  • @keithmiller3770
    @keithmiller3770 ปีที่แล้ว

    How the heck does this channel only have 52k subs?! Remarkable content and pacing.

  • @savvytries
    @savvytries ปีที่แล้ว

    Nice editing. Great video.

  • @LiveType
    @LiveType ปีที่แล้ว +4

The impressive thing about GPT-4 is not the code that created it, but the data it uses. GPT-4 uses a not-insignificant portion of the internet's data — I would estimate the most of any model out there. There's quite literally not much else for it to train on. I theorized current LLMs would tap out by 2026 because of this and a new paradigm would take over. GPT-4 is just barely capable of self-improvement. Barely, but it can do it. GPT-3.5 100% cannot — not without you basically doing it for it.
GPT-4 was also the first thing that made me go "woah," when it spotted a frustrating bug I had been hunting for the past 2 days in a C library and fixed it. The solution wasn't perfect, as it interfered with other functions, but I suspect that's because it can't access the entire codebase at once. This has solutions and is actively being worked on. This is concerning for future job prospects. You might argue this has already occurred, given how difficult decently paying jobs are becoming to find, but this will just make it worse. It's the adapt-or-die sort of mentality. Because of this, I predict blue-collar jobs will explode in popularity next decade. Unless something else happens that prevents it.
I also doubt these massive LLMs are sustainable. They're bleeding OpenAI dry with the compute costs. Of the $10 billion Microsoft gave them in Azure credits, they've already used nearly 20%. Like holy shit, that's so much. OpenAI has stated that the era of escalating model sizes is over, and I'd be inclined to agree. Optimizations to current networks need to be done to improve training efficiency. I also identify 2 primary issues, of which I can see only 1 having existing solutions:
1. These models cannot learn.
2. These models cannot know the conclusion before they get to it.
The second part, I believe, could be solved with an additional middleware model that intercepts and recursively self-prompts the model itself, and uses GAN networks to select responses that align with the original prompt and the overall goal of what the chatbot/model should do. It should be capable of "knowing" a conclusion and working backwards, thanks to essentially being 2 models. It does increase latency, but I wouldn't think by too much. I also think this is already partially implemented in Bing chat.
The first problem, though, I think current LLM models cannot fix, and I personally have no proposed solutions, as there have been minimal advancements in this area. I suspect someone has already come up with a solution, just in a completely different field for a completely different purpose.
Crazy time to be alive.

    • @damuffinman6895
      @damuffinman6895 ปีที่แล้ว

      ChatGPT plugins solve a lot of issues

    • @zubinkynto
      @zubinkynto ปีที่แล้ว

      There is still another magnitude of data available, but after that we will need even better data scraping or better few-shot learning techniques

  • @bluesoman
    @bluesoman ปีที่แล้ว +3

    The cat's out of the bag. There's no stopping AI at this point without literally destroying all computers on earth. I'm hoping for a benevolent AI overlord. It should be much better than our current leaders. They're pretty pathetic.

  • @Davido2369
    @Davido2369 ปีที่แล้ว

    Dammn dude, you got something special going here, keep it up

  • @anonwhitemale4680
    @anonwhitemale4680 4 หลายเดือนก่อน +1

Every time I build a self-improving AI, I add a team to provide it feedback before it changes; I also add changelog notes for itself, for its next iteration. Problems happen in updating its files if it accidentally puts placeholders instead of the full logic, which even with filters has been an issue.

  • @alacer8878
    @alacer8878 ปีที่แล้ว +6

    Whatever the future holds, I'm excited for it. Of course I'd love an AI that didn't want to annihilate all human life (as campy as that sounds) but if we did end up inadvertently creating our own genocidal murderer, I'd still be excited. We'll have done something phenomenal. And even if we don't live to see it, *it* will, in whatever way it may. That's exhilarating to me.

    • @asokoloski1
      @asokoloski1 ปีที่แล้ว

      Not for me! If we can create something nicer than humanity, I'm fine with humanity's eventual extinction. But I don't want my family to die.
      Also, note that just because we create an AI that kills us, doesn't mean the AI will be interesting. It, or they, may just end up killing us as a side effect of trying to maximize their experience of whatever the machine intelligence equivalent of social media news feeds or porn or ice cream is.

  • @executivelifehacks6747
    @executivelifehacks6747 ปีที่แล้ว +7

    That weird pulsing star thing... I'm not sure what it is, but somehow I like AI more now. In fact ChatGPT is the kindest, bravest, warmest, most wonderful human being I've ever known in my life

    • @mobaumeister2732
      @mobaumeister2732 ปีที่แล้ว

      I feel sorry for you. Do you even have a social life? All you computer nerds need to go out and touch some grass or something

  • @madch3mist
    @madch3mist ปีที่แล้ว

    Beautiful work!

  • @sshlich
    @sshlich ปีที่แล้ว

It's the craziest, freshest experience and a truly technocratic opinion. I fully support your words and think we should embrace this tool as our greatest invention, one that will lead us through the best of times. Gained a sub)

  • @CoreyChambersLA
    @CoreyChambersLA ปีที่แล้ว +3

    ChatGPT has admitted potential dangers of AI growing too fast, getting out of control, taking over everything, and causing unbridled harm to humanity.

    • @tefazDK
      @tefazDK ปีที่แล้ว

Can't really take that at face value, since current ChatGPT merely says what it has learned humans usually say, by studying what we speak about. It's not inserting its own logic so much as simply predicting what the most human response to the question would be.

  • @sigmahub3748
    @sigmahub3748 ปีที่แล้ว

    Thank you for the amazing video , that's a life changing for me

  • @iomeliora9430
    @iomeliora9430 ปีที่แล้ว

Very good work on summarizing where we are now with the technology. What I wonder is how much the hardware will be the limitation at some point. Improvements in data compression allowed files to shrink in size while needing less computing power, so with the knowledge I have, even if I think there should be a physical limit to what GPT can do, I have no idea what it is. Take the human part of the hardware problem: during the pandemic, we were suddenly forced to rely even more on computers and phones to keep the world going while socially distancing. This led to shortages of advanced microprocessors. I think there would be a limit there: even if the language model can self-improve, it would need more computing power to achieve it. It could at that point engineer novel ways to build processors and computers, but that will never happen if we decide it has to stop. GPT doesn't have access to the internet, but even if it did, how could it convince TSMC to build processors on its new super-advanced ideas, order factory parts to build robots like the Terminator, etc.? I think the red flag will be pretty obvious if this happens. The real scary part is how some ill-intentioned people may use the tech.

  • @rmt3589
    @rmt3589 ปีที่แล้ว +2

    This gives me some inspiration. I think GPT-J is advanced enough to start this cycle off, if it can see its own source code and is able to self-examine and self-simulate.

    • @rmt3589
      @rmt3589 4 หลายเดือนก่อน +1

      Crap, that's a great idea! I forgot about GPT-J. Would be good to try before trying to plan my own!

  • @JohnMcSmith
    @JohnMcSmith ปีที่แล้ว

    Great video and editing. Commenting for the algorithm - let’s go!

  • @megatronDelaMusa
    @megatronDelaMusa ปีที่แล้ว

    the dendritic branching @ 0;47 is intriguing

  • @human.earthling
    @human.earthling ปีที่แล้ว +1

    Excellent quality video

  • @AnthonyGoodley
    @AnthonyGoodley ปีที่แล้ว

    A very well made and thought out video on AI and its possible future.

  • @psychxx7146
    @psychxx7146 ปีที่แล้ว

    This channel needs to be more popular

  • @BangsarRia
    @BangsarRia 2 หลายเดือนก่อน

In 1960, Marvin Minsky covered Bayesian nets and Samuel's checkers-playing program in "Steps Toward Artificial Intelligence". These didn't lead to neural nets for decades, due to CPU and memory limitations. Although ironically we're not at the limits for ChatGPT yet, in the not-too-distant future amplification will necessarily slow down as new thresholds are reached.
    Even before 1960, Minsky was pointing out that programmers did not fully understand the runtime execution of their programs, so let's not delude ourselves that we can understand and control our creations.
    Let's not have the hubris to think that we can effectively place artificial limits on ourselves which would just leave the field to the bad actors.

  • @Pheonix1328
    @Pheonix1328 6 หลายเดือนก่อน +2

    The scientists turn on the computer. They ask, "does God exist?" The computer says, "it does now."

  • @ChattyCinnamon
    @ChattyCinnamon ปีที่แล้ว +1

    Great video, subscriber earned 👍

  • @Contemplatium
    @Contemplatium ปีที่แล้ว

    Amazing video. Keep it up!

  • @brodydeacon4090
    @brodydeacon4090 ปีที่แล้ว

    super amazing video thank you 😊

  • @nathanthanatos3743
    @nathanthanatos3743 9 หลายเดือนก่อน +1

    Observation; ever since the emergence of machine tooling, and perhaps even before, it's been possible through the science of precision to use a lathe to build a better lathe.
    This strikes me as much the same.

  • @tonygilbert73
    @tonygilbert73 ปีที่แล้ว

very very good work! this video was very well done!

  • @OZmast3r
    @OZmast3r ปีที่แล้ว

    whats the repo for the "self improving auto-gpt" project u showed a screenshot of at 17:55?

  • @brandinokingo4683
    @brandinokingo4683 ปีที่แล้ว

    well done! really good video.

  • @honestinsky
    @honestinsky ปีที่แล้ว +1

    Outstanding video, thanks for sharing. New sub. A+

  • @tobymdev
    @tobymdev ปีที่แล้ว

    is the logos contested? are there any major metaphysical or linguistically pertinent distortions of reality in this procedure?

  • @mr.condekua6141
    @mr.condekua6141 ปีที่แล้ว +2

    Love ur videos

  • @curious_one1156
    @curious_one1156 ปีที่แล้ว +1

Such a process could saturate due to the law of diminishing returns — the limits of intelligence. The need to generalize means that such a model will be highly regularized (by itself). This would perhaps be its limit, and to exceed it might require immense resources.
If we are able to maintain a steady supply of humans, using people will perhaps be cheaper than using AGI.
Such an agent may be helpful for colonizing other planets, though, where having robots is cheaper than having humans.

  • @GraphicdesignforFree
    @GraphicdesignforFree ปีที่แล้ว

    Amazing video, thanks!

  • @Sol-hm7ol
    @Sol-hm7ol ปีที่แล้ว

    hey i had a look at the channel in the description but couldn't find the song at 14:12

  • @adrianlopeziglesias1
    @adrianlopeziglesias1 ปีที่แล้ว

    amazing video. thanks for sharing😊

  • @ElderFoxDocumentaries
    @ElderFoxDocumentaries ปีที่แล้ว

    Fantastic video. Keep it up.

  • @MyTubeAIFlaskApp
    @MyTubeAIFlaskApp 9 หลายเดือนก่อน +1

I use ChatGPT to program all the time. As you said, it is not perfect, but it is a fantastic help. It is actually like a coding buddy. It is better at some tasks than others. It is great at Flask.

  • @timhaldane7588
    @timhaldane7588 ปีที่แล้ว

    5:09 awwww, you left out the best one: "stochastic parrot."

  • @CosasCotidianas
    @CosasCotidianas ปีที่แล้ว

    This is a valuable point of view.

  • @IliaBroudno
    @IliaBroudno 9 หลายเดือนก่อน

Great talk, great way to express ideas. I also liked your biomorphs and played with them; it was fun, but what I'd like in the next version is a way to optionally prompt the change more directly than only a choice of 1 out of 4 options. For example, as a text prompt. Or some 3D-image-editor UI to let you add graphical elements to one of the 4 choices before picking it. As an example, I was evolving something that consisted mostly of spheres, and I wanted it to start making more of an expanding spiral shape without letting any sharp edges in. I could not get there from the 4 choices. I wish there were more degrees of freedom. On second thought, fuck it. It's important not to overthink. Maybe instead of adding more ways for the human to exert control, give the AI more options to play with: more building blocks, more interesting ways of combining them. Or maybe do both — give both human and AI more ways to express themselves. Or maybe leave it as is, a perfect little toy, what do I know :)

  • @mRhea
    @mRhea ปีที่แล้ว +1

    i really hope we pull the trigger, and start climbing the ladder

  • @ismirdochegal4804
    @ismirdochegal4804 ปีที่แล้ว +1

[15:30] There is no problem on earth we can't already solve (except for stopping aging, but I don't care about that).
And the solution is simple and most often the same: "someone has to stop being greedy and stop filling his pockets at the cost of others."

  • @gambieee
    @gambieee ปีที่แล้ว

    I think you just taught me the steps to making my own advanced ai

  • @jaysonp9426
    @jaysonp9426 ปีที่แล้ว +2

    "A recursive function that calls itself" is like saying an "ATM Machine"

    • @feynstein1004
      @feynstein1004 ปีที่แล้ว +1

      Don't forget LCD displays and CD discs 😂
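To be fair to the (admittedly redundant) phrasing, here is a minimal sketch of a recursive function — one that calls itself — using the classic factorial example:

```python
def factorial(n):
    # the base case is what stops the self-calls; without it,
    # the recursion would never terminate
    if n <= 1:
        return 1
    # recursive case: the function calls itself on a smaller input
    return n * factorial(n - 1)

print(factorial(5))  # → 120
```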

  • @samueljayachandran2849
    @samueljayachandran2849 3 หลายเดือนก่อน +1

If AGI — or even effective (nearly-human-level) intelligent agents for specific tasks — is explainable in all of its actions, goals, and considerations, then we can debug it if it goes wrong. If powerful AI stays black-boxed as it is, that open letter will come back to haunt us. Let's always seek to make explainable AI

  • @ee214verilogtutorial2
    @ee214verilogtutorial2 ปีที่แล้ว

    14:07 my sense of humor, almost died of laughter while watching this