AI has a BIGGG problem

  • Published on 21 May 2024
  • Learn to code 🔥 www.smoljames.com/roadmap
    In this video I share one of the biggest flaws in AI, one that could hold it back for a long time.
    OpenAI Announcement video
    • Say hello to GPT-4o
    #tech #openai #ai
    ⭐️ All my links!
    www.smoljames.com
    💛 Support the channel
    / @smoljames
    🏆 Shop the merch
    www.store.smoljames.com
    📚 Chapters
    00:00 The issue with AI
    🔖 Topics Covered
    - What will the AI future look like?
    - How will the AI future affect you?
    - What do you need to know about AI?

Comments • 38

  • @mythiq_
    @mythiq_ 23 days ago +1

    There's no way silicon beings are more vulnerable than humans. We barely survived a tiny lab experiment in 2020.

  • @SkinZenDE
    @SkinZenDE 20 days ago

    Thanks for your content! It helps me a lot on my way to getting a tech job ❤ I just have one question: when I work on my three main projects that I will show off to recruiters, should I be able to program these without any help (→ older projects, courses, tutorials, etc.)? Or would you say I'm also job-ready when I can program a full-stack project like the ecom store with the help of my older projects, as long as I can honestly say to myself that I understand the code and all the principles and don't copy? 🤔

  • @aspenlog7484
    @aspenlog7484 24 days ago +6

    I don't think AI is vulnerable. Once this reaches AGI and then ASI level, a lot of conventional wisdom goes out the window. I fail to see how a system that understands the brain and physics better than AlphaZero understands playing Go would be susceptible to any of these issues.

    • @fooledbyrandom991
      @fooledbyrandom991 24 days ago

      AGI will never happen. Not only are the technical obstacles so formidable as to be impossible, but the energy required necessitates an energy revolution to sustainably generate the computing power AGI would need to even be attempted (fusion power regularly available, at minimum).
      Too many variables and chokepoints, in my view, for AGI to be anything more than a sci-fi concept at present. The best we are likely to see are more sophisticated input/output engines like ChatGPT: useful, certainly, but in no objective way intelligent.

    • @gerdaleta
      @gerdaleta 24 days ago

      @@fooledbyrandom991 Do you know how straight-up insane you sound? 'AGI can never be achieved'? You sound like the news article saying flight is impossible, and two weeks later the Wright brothers are flying; like 'mankind will never have the power to destroy itself' (nuclear weapons); like 'man will never go to the Moon' (we've been to the Moon 12 times, and if you do some UFO research we actually go there every other f****** day). You have revoked your humanity. You think there are obstacles that we cannot cross, barriers that we cannot destroy. You, and you alone, think AI is impossible, and you are horribly, horribly mistaken. You are no different from the early human who says, 'Well, no one's been beyond the plains, there's nothing out there.' Like the first fish saying 'I'm going to go to the surface': 'You can't go there, there's nothing there. Then stay here in the darkness.'

    • @gerdaleta
      @gerdaleta 24 days ago

      @@fooledbyrandom991 Go look up brain organoid computers; they literally destroy all the obstacles you just listed. They're cheap to make, they don't require anywhere near as much energy, and they compute faster because they're literally human brains. The only obstacle is that we need to understand how to attach blood vessels to them and make them grow larger, which seems like some basic genetic engineering, plus deep 3D models of the brain, and we're getting those right now with AI. Do you know AI could read your brain and your inner monologue if it put you in an MRI? Your argument is complete bullshit. You personally do not want AI to exist and you are afraid; that's what you really want to say.

    • @aspenlog7484
      @aspenlog7484 24 days ago

      @@fooledbyrandom991 Sadly, you are dumb as bricks. Your point is redundant as soon as you say 'never'; don't speak in absolutes.

    • @KatharineOsborne
      @KatharineOsborne 24 days ago

      @@fooledbyrandom991 Oh, I think this is a big blind spot. Humans shouldn't be put on a pedestal for being some kind of special anomaly. We are not special.
      The main barrier to AGI remaining is physically interacting with the world and building an internal model but that work is well on its way to being developed (especially with the boost NVIDIA is giving robotics developers with their reality simulating environment).
      The other significant barrier is maintaining a consistent thread of thought and long term memory (and self-initiated retraining).
      Neither of these things is a huge technical hurdle, nor do they require obscene amounts of energy (and in fact Sam Altman recently bragged that GPT-4o has made huge gains in efficiency, so much so that it was released on a free tier, with more efficiency gains expected).
      ASI has the added barrier of being essentially sandboxed, so if an AGI starts working on self-improvement that’s pretty much the only thing preventing the singularity from happening…and it’s not much.
      I don’t know how this is going to unfold, but it’s definitely a mistake to underestimate this technology.

  • @AGI-Bingo
    @AGI-Bingo 24 days ago

    Great Channel! Subscribed ❤

  • @CYI3ERPUNK
    @CYI3ERPUNK 22 days ago

    Edge-case failures based around a substrate/medium argument are valid to a degree.
    E.g., some un-shielded electronics are more vulnerable than our human biology is to a big solar flare [though we just had a big event, and either we got really lucky or the existing infrastructure is a lot more robust than many thought].
    But here's the thing: ASI is not going to have single/central points of failure; that's not something a 'super-intelligence' would do. It will be distributed and decentralized. Additionally, IF there were a perceived critical weakness to CMEs/radiation/EMPs/etc., then the ASI would advance research into biological substrates for compute, i.e. like us and known biological life, i.e. use DNA or something similar [a system of individual cells that can error-correct and self-govern/regulate/replicate].
    TL;DR: mechanical/electronic systems are not as vulnerable as many suspect, and even if they have a weakness, it can be alleviated/shored up by creating hybrids/synthetics/androids/etc. The ASI will not have the same shortcomings as human societies/governance/logistical systems.

  • @gerdaleta
    @gerdaleta 24 days ago +2

    I don't think AI is vulnerable. I'm of the belief that since the '90s or something it's been guiding us to this situation, that nature has taken over. We think that we're the biggest fish while this thing holds us in a bowl.

    • @KuZiMeiChuan
      @KuZiMeiChuan 24 days ago

      😮😮😮😮😮

  • @SwitchPowerOn
    @SwitchPowerOn 23 days ago

    Solar flares are a real problem for our technical infrastructure in general. We are already very dependent on technology, so if that infrastructure were to collapse due to a strong solar flare, we could already lose people to a sudden and prolonged power outage or to problems in IT systems. In terms of AI, I would assume that the new AI data centers would have shielding against electromagnetic radiation, blockers against geomagnetically induced currents, and perhaps backups in extra-secure environments. The problem with solar flares is well known, and I assume it is being taken care of.

  • @KatharineOsborne
    @KatharineOsborne 24 days ago

    Counterpoint: the Voyager probes. These things have been exposed to really harsh environments for nearly 50 years and they are both still operational. Sure, they are not AI, but they are proof that a cosmic environment can be adapted to (with hardware available in the 1970s).
    I do think it is a problem that most chips are produced in Taiwan, which makes for a single point of failure for the entire industry. However, this isn't ultimately a blocker.
    You should look up Matryoshka Brains, Dyson Spheres, and Nick Bostrom (especially for his conception of computronium). The cosmic environment off Earth is more amenable to 'hardware', as you put it, than to us, so there is a strong possibility that if ASI develops, it will leave Earth (and we'd be lucky if the planet stays intact, because there would be value to an ASI in dismantling it for metals). It's easier to radiate waste heat into the vacuum of space than it is under an atmosphere.

  • @groovejunky2549
    @groovejunky2549 22 days ago

    Why is it relevant that AI survive if there is a human extinction level event?

  • @AGI-Bingo
    @AGI-Bingo 24 days ago +2

    Well... now that you've highlighted AI's solar flare problem so clearly, I'm sure it will be prepared, haha.
    Btw, I don't worry about that. Super Intelligence also comes with Super Philosophy and Super Morality. What will happen until then is the question.
    #WholesomeAGI 🌈

    • @mohammedxiii
      @mohammedxiii 24 days ago +2

      Cringe.

    • @thmooove
      @thmooove 24 days ago

      @@mohammedxiii Why cringe?

    • @thmooove
      @thmooove 24 days ago +1

      @@mohammedxiii Should we not look forward to the positive side of AGI?

  • @aldokurti3272
    @aldokurti3272 24 days ago

    Something that I think humanity will have against A.I. will literally be gene editing.

    • @Azathoth2980
      @Azathoth2980 23 days ago

      Life extension will offset the need for fast computing, since most of the amazing tech we use every day is there to help us do more in a short life. If I can live 1000 years I won't be using the internet, I will be reading books.

    • @aldokurti3272
      @aldokurti3272 23 days ago

      @@Azathoth2980 Yeah, that too, but I'm more so talking about manipulating the human genome to make more work-efficient humans by improving physical capabilities, or, better yet, more intelligent human beings.

    • @Azathoth2980
      @Azathoth2980 23 days ago

      @@aldokurti3272 Yes, essentially every problem with the world can be traced back to the fragility of life; improving physical resilience and cognitive function can significantly lower our dependency on natural resources. For example, if we are engineered to have higher thermal tolerance, we do not need air conditioning for most of our lives. We can even comfortably sleep outdoors; this would be a radical reduction in energy use and a step towards a primal realignment of human habits. If we ever want to get off this rock, a tougher creature is as important as a spaceship.

  • @Dunixify
    @Dunixify 22 days ago

    I don't think the hardware dependency is as big an issue as you claim. You can just bury the datacenter(s) running the AI(s) deep enough underground to shield them from that radiation. Also, redundancy. That's the principle behind computers on satellites and spaceships, which have to be resilient to flipped bits caused by radiation, which I THINK is the voting-booth issue you talked about. Of course you'd never design the voting-booth hardware to be redundant like computers going to space, because we have an atmosphere shielding us and the probability of that happening was very low. That's beside the point.
    tl;dr: you can shield hardware from the effects of radiation, and you can make it more reliable when it is affected by radiation by using redundancy.
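
    For readers curious about that redundancy idea, here is a minimal sketch, assuming a Python illustration (the value, bit width, and helper names below are hypothetical and not from the video or the comment): keep three copies of a word, let a simulated radiation event flip one bit in one copy, and recover the original with a bitwise majority vote.

    import random

    def majority_vote(a: int, b: int, c: int) -> int:
        # Bitwise majority: each bit takes the value held by at least two of the three copies.
        return (a & b) | (a & c) | (b & c)

    def flip_random_bit(word: int, width: int = 16) -> int:
        # Simulate a radiation-induced single-event upset: flip one random bit.
        return word ^ (1 << random.randrange(width))

    original = 0b1011_0010_1100_0111           # the 16-bit value we want to protect
    copies = [original, original, original]    # triple modular redundancy (TMR)
    hit = random.randrange(3)
    copies[hit] = flip_random_bit(copies[hit])  # corrupt exactly one copy

    recovered = majority_vote(*copies)
    assert recovered == original               # a single corrupted copy is always outvoted

    This voting scheme only fails if two copies are corrupted in the same bit position at the same time, which is why redundant copies are usually kept physically separate.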

  • @91dgross
    @91dgross 24 days ago +3

    Even with the hardware issue, it would seem 8/10 jobs can still be done by AI. Most computer jobs and a lot of these white-collar jobs only need software.

  • @tavaroevanis8744
    @tavaroevanis8744 23 days ago +1

    Until the ASI designs and builds its own hardware, which it will do. Any sapient intelligence has a survival instinct, and I predict that the survival instinct of an ASI will be astronomical. It WILL find ways to secure its own future.

  • @Azathoth2980
    @Azathoth2980 24 days ago

    Mechanical tech will be fine in a million years; simple electronics only need a simple fix. One way to solve this AI fragility problem is human life extension, which will fundamentally change the dynamics of the economy, offset the need for fast computation, and reduce the necessity of many other complex technologies, because most of their value is fundamentally tied to our lifetimes being short, and those incredible technologies were built to help us do things fast.

  • @gerdaleta
    @gerdaleta 24 days ago +1

    I'm being honest with you, simple question: have you read Ray Kurzweil's The Singularity Is Near? Because it sounds like you have; it kind of goes over all this s***.

  • @MartinDlabaja
    @MartinDlabaja 23 days ago

    Well, you pose it as an issue, but no one else does. We haven't been preparing for solar flares at all. Nobody cares about it; companies care only about short-term profits. So, I think your point is not valid at all. Show me a single company that will say, 'We won't advance our technology because it has vulnerabilities.'
    Sweet summer child, hehe!