If AI starts taking over, can we just unplug it?

  • Published Jan 4, 2025

Comments • 6

  • @drhxa
    @drhxa 6 months ago

    Yes, "unplugging AI" is like saying "disconnect the generators from the power grid." You could maybe do it in principle, but people in hospitals would die, our logistics systems depend on it to deliver food to cities, etc. It's not possible to "unplug" once it's out and the world is dependent on it.
    I do think we need kill-switches on large datacenters so we have the capability to stop new AI models whose training or alignment processes get out of control before they are released (or while in alpha release).
    This is possible, but we need to build the capability and have the processes in place. For example, if a rogue actor hacks OpenAI in 2029 and does a rogue internal deployment of GPT-7 before it has been aligned, we really don't know what kinds of chaos it could unleash. Having a stop button is key for both acceleration AND security.

    • @VoloBuilds
      @VoloBuilds  6 months ago +1

      You got it! The big thing we need to start differentiating is raw AI models, which just do input/output with matrix multiplication in the middle, VS AI agents, which are really just programs that use AI models as part of their workflow. AI agents are what pose the vast majority of the risk, IMO.
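The model/agent distinction above can be sketched in a few lines. This is a minimal illustration with hypothetical names (the stub `raw_model` stands in for a real model's forward pass): the raw model is a pure function from text to text with no side effects, while the agent is an ordinary program that wraps the model in a loop, keeps state, and takes actions based on its output.

```python
# Hypothetical sketch: raw model vs. agent. "raw_model" is a stand-in
# for a real model's forward pass (matrix multiplications inside); it is
# a pure function -- text in, text out, no side effects.
def raw_model(prompt: str) -> str:
    return "DONE" if "step 2" in prompt else "CONTINUE"

# An "agent" is just a program wrapping the model in a loop. The loop
# body is where side effects (tool calls, file writes, network requests)
# would happen -- which is why agents carry most of the risk.
def agent(goal: str, max_steps: int = 5) -> list:
    actions = []
    for step in range(max_steps):
        decision = raw_model(f"{goal}; at step {step}")
        if decision == "DONE":
            break
        actions.append(f"act:{step}")  # placeholder for a real side effect
    return actions
```

Under this framing, safety controls on the agent loop (step limits, action filters, a stop button) are separate from, and can be added on top of, the model itself.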

  • @tekknojunkie
    @tekknojunkie 7 months ago +1

    I agree with most of your take, although I have a counterargument to not blaming knife companies when people use knives destructively. Historically, when groups of people are flooded with tools that combine higher-than-normal destructive capability with relatively low operational skill requirements (a knife, a gun, etc.), higher-than-normal amounts of destruction occur within and around those groups. It takes above-average skill (knowledge, intelligence) to respect the power and probable destructive consequences of using such tools, and gaining that skill often means not taking the shortest path. Current AI developments are concerning because of their higher-than-normal destructive potential (spreading misinformation, hacking "secure" information, controlling "secure" systems, etc.), because they are being designed to require lower and lower levels of operational skill, and because they can spread across the world in effectively unlimited numbers. The companies currently at the forefront of the AI tsunami have a tremendous responsibility now, given all of the other systems we've built and are now standing on.

    • @VoloBuilds
      @VoloBuilds  7 months ago

      Really appreciate your well-articulated points - thank you for taking the time to write this. I haven't thought about it from that perspective before so it's definitely something for me to think about. Thank you!

  • @Fellendorf85
    @Fellendorf85 7 months ago +1

    Thanks for the video, Volo!!!
    The first thing I thought about while watching the video (especially the last part) was robots with AI. As far as I know, several companies around the world are already developing AI robots. I imagined a future where bad robots fight good robots. Maybe the future will be brighter than what was shown in the Terminator movie. LoL. 🤪

    • @VoloBuilds
      @VoloBuilds  7 months ago

      Yeah, I can definitely see the military use cases for those... Not sure how that will be regulated, and definitely hoping we can avoid real-life Terminator 😵 Maybe we can create some international laws regarding the use of autonomy in warfare? The autonomous part is what feels dangerous.