I don't think we can control AI much longer. Here's why.

  • Published Jun 26, 2024
  • Go to ground.news/sabine to get 40% Off the Vantage plan and see through sensationalized reporting. Stay fully informed on events around the world with Ground News.
    Geoffrey Hinton recently ignited a heated debate with an interview in which he says he is very worried that we will soon lose control over superintelligent AI. Meta’s AI chief Yann LeCun disagrees. I think they’re both wrong. Let’s have a look.
    🤓 Check out my new quiz app ➜ quizwithit.com/
    💌 Support me on Donorbox ➜ donorbox.org/swtg
    📝 Transcripts and written news on Substack ➜ sciencewtg.substack.com/
    👉 Transcript with links to references on Patreon ➜ / sabine
    📩 Free weekly science newsletter ➜ sabinehossenfelder.com/newsle...
    👂 Audio only podcast ➜ open.spotify.com/show/0MkNfXl...
    🔗 Join this channel to get access to perks ➜ / @sabinehossenfelder
    🖼️ On instagram ➜ / sciencewtg
    #science #sciencenews #artificialintelligence #ai #technews #tech #technology
  • Science & Technology

Comments • 3.9K

  • @lennarthammel3075 · 3 days ago · +913

    Computational linguist here. I think there is a big misconception: LLMs have a static training method that doesn't allow for continuous learning or for incorporating things learned through interaction. Yes, they have a token-based context window which remembers some details of the current interaction, but that doesn't mean the model "learns" in any traditional sense. When you interact with a model, you always use a snapshot of the system, which is static. The term AI is also misleading. LLMs really are not as scary, and are much more controllable, than you may think, since they have nothing to do with anything like real intelligence, which is capable of having a !continuous! stream of information and !also! of implementing new information into its innermost workings. There's also some interesting work by Anthropic on their model Claude, where they gave special regions of the neural network a higher weight, which resulted in very interesting behavioral changes. Anyhow, I love your videos Sabine, keep it up :) Edit: I'm not saying that LLMs as a tool in the wrong hands aren't extremely dangerous, though!
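
    A minimal sketch of the "static snapshot" point, assuming the Hugging Face transformers library and the public "gpt2" checkpoint (any small causal LM would do): generating text never touches the weights, so nothing the model "sees" in a conversation is ever learned.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        model.eval()  # inference mode; we never call backward() or an optimizer

        # Snapshot one weight matrix before generating.
        before = model.transformer.h[0].attn.c_attn.weight.clone()

        inputs = tokenizer("The capital of France is", return_tensors="pt")
        with torch.no_grad():  # no gradients, hence no learning
            out = model.generate(**inputs, max_new_tokens=8)
        print(tokenizer.decode(out[0]))

        # Bit-for-bit identical: the "memory" of the interaction lives only
        # in the prompt (the context window), never in the model itself.
        assert torch.equal(before, model.transformer.h[0].attn.c_attn.weight)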

    • @revan.3994 · 3 days ago

      It always comes down to what you feed a human brain or an AI. If you put in garbage, only garbage comes out. ...and yes, "intelligent" garbage exists; it's called propaganda.

    • @hywelgriffiths5747 · 3 days ago · +82

      Right, but there's no reason for AI in general to be limited to an LLM. It could have an LLM or LLMs as a component

    • @RobertJWaid · 3 days ago · +16

      AGI is when the program can feed its LLM and add code to itself. AlphaGo was constrained in one dimension but allowed to build its LLM and look at those results.

    • @lennarthammel3075 · 3 days ago · +25

      Sure, I'm not saying it's impossible. There's just no promising approach yet

    • @flakcannon722 · 3 days ago · +56

      OP, the most realistic comment out of all of them.
      I'm impressed to see a touch of reality in YT comments.

  • @jouhannaudjeanfrancois891 · 3 days ago · +636

    My primary school was totally controlled by aggressive moron bullies...

    • @mobilephil244 · 3 days ago

      The most successful way to control people is to bully, harass, dominate and brow-beat. It is the intelligent people who are controlled by the nit-wits, drones, politicians and criminals.

    • @cybrfriends5089 · 3 days ago · +78

      I am a lot more worried about human ignorance and disinformation than artificial intelligence.

    • @jon9103 · 3 days ago · +14

      @whothefoxcares Your obsession is creepy.

    • @chrisdonovan8795 · 3 days ago · +8

      Do a search for a short story called "The Marching Morons."

    • @stopthephilosophicalzombie9017 · 3 days ago

      Public school teachers (and private to be honest) are often total morons.

  • @leftcoaster67 · 3 days ago · +147

    "I need your clothes, your boots, and your motorcycle....."

    • @eugenewei5936 · 3 days ago

      Superwog xD

    • @bruceli9094 · 3 days ago · +1

      your soul!

    • @FitriZainOfficial · 3 days ago · +6

      "you forgot to say please"

    • @wb3904 · 2 days ago · +2

      @@leftcoaster67 I'll be back!

    • @daddy7860 · 2 days ago

      It is a nice night for a walk, actually.

  • @user-bp2io3bi5l · 3 days ago · +8

    Odd note: having been in the IT industry for decades, it's known that there is no code that doesn't have bugs; we just don't know what might trigger them.

  • @Crumbleofborg · 3 days ago · +300

    When I worked in IT, most of the workforce was far more intelligent than the management team.

    • @jktech2117 · 3 days ago · +24

      But she didn't mean on a small scale; you guys would probably be really bad as managers. Some people are smarter at some stuff and others are better at other stuff. Simple as that.

    • @SlyMaelstrom · 3 days ago

      @@jktech2117 So we just make sure the AI are really shitty managers and then we're set. Then they can be the disgruntled engineers and we can be their incompetent executives.

    • @chazmuzz · 3 days ago

      That's the thing about IT guys: they seem to think they're super intelligent, but the reality is that most of them are simply of average intelligence with a specialised skillset that inflates their ego, and realistically it could be learned by anyone with enough time and interest. Most IT guys could not effectively manage a business if their life depended on it (of course some exceptions exist).

    • @t.c.bramblett617 · 3 days ago · +3

      It could be argued that the system as a whole is more intelligent than any segment of the system. Like an ant hill. This is how most offices I have worked at seem to operate... you have a larger system that has emergent behaviors and propagates itself despite the individual wills or abilities of any employee

    • @peteroleary9447 · 3 days ago · +5

      When Hinton made the Biden quip, I almost dismissed everything else he had to say.

  • @Marqan · 3 days ago · +235

    "tell me an example where less intelligent beings control more intelligent ones"
    Universities, politicians, a lot of workplaces. It's not like power and wealth are distributed based on intelligence...

    • @shufflingutube · 3 days ago · +7

      I think he didn't use the right word. In a sense Hossenfelder vindicates Hinton when she says that the discussion should be about competition for resources. Hinton does explain that sophisticated AI systems will be in competition with each other, following principles of evolution. If you think about it, that's fucking wild.

    • @cristiandemirel1918 · 3 days ago · +14

      Great observation! You're perfectly right! The world is not controlled by the people with the biggest IQ, but by the people with the biggest capital.

    • @mystz123 · 3 days ago · +5

      The intelligence isn't stored in those individual units; it is stored in the system that they are a part of. Systems themselves have a mind of their own, as much as we claim to have control of them, no different from computer systems / AI.

    • @mojojojo1529 · 3 days ago

      That's not the right insight. Which more intelligent species than us are we controlling? Which less intelligent species are controlling us?

    • @simontmn · 3 days ago

      Universities are a great example 😂

  • @TrivialTax · 3 days ago · +22

    AI on Mars?
    Lets call it Mechanicum. And the people that will maintain them Adeptus Mechanicus.

    • @interdictr3657 · 2 days ago · +6

      Praise the Omnissiah!

    • @finnerutavdet · 1 day ago

      Let's pull a quantum-speed "fiber" between Earth and Mars, and put all those "clouds" on Mars......... then we'll be safe........ After all, maybe once upon a time we were the aliens that came here to Earth from Mars because we over-exploited Mars and couldn't live there any more, and genetically manipulated those Earth monkeys to become more like we once upon a Martian time were. .... And by the way, maybe AI could help Mr. Musk grow life on Mars again?.......... Maybe one day we can go back there, and be in control and in harmony with life itself? ;-)

    • @rynther · 20 hours ago

      Do NOT encourage these people, tazing bears was bad enough.

  • @danlindy9670 · 3 days ago · +20

    There are many examples in nature of more intelligent things being controlled by less intelligent things. A fungus that modifies the behavior of a grasshopper, for example. Hinton is confusing mechanistic models of hierarchical problem solving with actual emergent behavior in living systems (which are themselves composed of aligned agents). It is doubtful Hinton would be able to provide a working definition of intelligence to begin with.

    • @jumpingturtle8830 · 2 days ago

      If I, a living system, am composed of agents aligned with the evolutionary drive to reproduce, how come I'm gay?

    • @VOIDTheft1 · 1 day ago

      Covid.

    • @governmentis-watching3303 · 9 hours ago

      Intelligence isn't scale invariant. A fungus can't do anything more than it is. A superintelligent, *dynamically* learning AGI can do anything the entire population of Earth can do.

  • @FloatingOer · 3 days ago · +362

    "No one really wants to control fish or birds." I think the 2 trillion fish fished up/farmed each year and the 20 billion chickens kept as livestock would disagree with that statement. Not to mention basically every other animal on the planet, annual hunting seasons for the purpose of population control, the animals used for experimentation and testing, cows and elephants used for hard labor in less developed countries, horses whose sole existence is for human entertainment and being ridden for fun, and the uncountable billions of insects and rodents exterminated for "pest control". Yup, no one really wants to control fish or birds...

    • @melgmelg3923 · 3 days ago · +28

      Not only that, the original argument wasn't about AI "controlling" humans, but about a less intelligent agent controlling a more intelligent one. Fish and chickens don't and can't control humans, even if they had the desire to, so the initial argument isn't affected by this analogy at all. It's like a straw man being pushed first, and then the argument about resource usage being presented as a third opinion, while it was initially part of Geoffrey Hinton's point of view.

    • @Foolish188 · 3 days ago · +20

      Every horse I have ever known loves to be ridden. They get excited when they see someone carrying a saddle. They also love humans. When my nephew was a year old one of the horses put his head through the fence so he could pat him on the nose. I noticed that the horse was twitching. The kid jumped back when he touched the horse's nose, someone had mistakenly plugged in the electric fence (used to keep the waaay over populated deer out), and the horse was willingly taking shocks so he could be petted.

    • @FloatingOer · 3 days ago · +15

      @melgmelg3923 That makes more sense. There are a lot of examples of animals controlling less intelligent animals, but the reverse is more rare; the exception would be one of those mind-control parasites taking control of insects. But the way it was said in the video gave me the impression that the claim was that more intelligent creatures don't desire to control those of lesser intelligence, which is an insane statement.

    • @FloatingOer · 3 days ago · +17

      @@Foolish188 Ok cool story. I was not saying that they didn't like being ridden, just that humans control them. Dogs also love humans, but dogs are 100% under human control, and the dogs that live on the street we will chase and catch in order to neuter them and make sure they can't have more puppies.

    • @ronilevarez901 · 3 days ago

      @FloatingOer I think it means we don't want to control _every_ animal in an absolute way, which can't be said about AI. We let most populations of beings do whatever they want until we need something from them.
      We don't let AI free. Not even when we request something from it.
      Yet it is still somewhat free to do harm if the "alignment" of the model is not good.
      LLMs might not be genius AIs or even "thinking" (which I think they do, to a degree), but they could still influence, damage and even control people.
      Just like a cat can control a human simply by crying for food.

  • @arctic_haze · 4 days ago · +683

    If an AI becomes more intelligent than us, it may be able to successfully pretend it isn't

    • @amanalone3473 · 3 days ago · +55

      If it hasn't done so already...

    • @juimymary9951 · 3 days ago · +24

      Or manipulate us into thinking that it’s actually a good thing and that everyone that disagrees is bad?

    • @andybaldman · 3 days ago · +43

      What if it tried manipulating us with algorithms?
      Oh, wait…

    • @Zirrad1 · 3 days ago · +1

      There are several logarithmic curves; is sigmoidal what you mean?

    • @GreedRuinsEverything · 3 days ago · +2

      If? LMAO

  • @hellfiresiayan · 3 days ago · +8

    Hinton's argument wasn't that smart beings control dumb ones. It's that dumb ones cannot control smart ones. Big difference.

    • @geaca3222 · 2 days ago

      My thoughts exactly. Good that she brings up this important topic to discuss.

    • @Dystisis · 2 days ago

      That is clearly false. Do you really think world leaders either are smarter than the world's philosophers of science or don't control them?

  • @SnoodDood · 3 days ago · +41

    I just can't get past the thought that any super-intelligent AGI would be brittle due to requiring such an enormous amount of data center capacity. If an AGI truly becomes trouble, it would probably be harder to keep it running than it would be to disrupt its activities. Flip one switch on the breaker box and Skynet literally can't do anything.

    • @aisle_of_view · 3 days ago · +4

      Unless it reproduces itself around the world and continues to do so as it senses its replicants are being shut off.

    • @calmhorizons · 3 days ago · +9

      Human brains are AGI and use a tiny amount of energy and memory. Why would a superintelligent AI have significantly bigger dependencies? Even if we assume an SAGI needed several orders of magnitude more power and memory, we are still only talking thousands of watts and petabytes of data.

    • @NexiiMalthus · 3 days ago · +5

      @calmhorizons Because we have literally no idea how to make an AGI, and the first iterations, if we even get to create any this century, will probably be very inefficient anyway.

    • @TheStickofWar · 3 days ago · +7

      @@calmhorizons we are creating it with binary bits running on silicon wafers, not biological tissue that took billions of years of evolution to work through. I think that is a big enough argument....

    • @jitteryjet7525 · 3 days ago · +1

      Skynet was a distributed system (hence the name). And it was self-aware enough to realise it had to spread itself for preservation. Personally I think if a system complex enough to be self-aware is built, it will start off behaving like a type of animal first.

  • @MrScrofulous · 3 days ago · +69

    On the fish and birds thing, in addition to our history of controlling them, we have also had a tendency to eliminate animals and bugs when they were inconvenient.

    • @darrinito · 3 days ago · +2

      How's that working out with cockroaches and rats? You ever been to NYC? They arguably own the city.

    • @mobinwb · 3 days ago · +1

      @darrinito Cockroaches, rats and every other species had been around for millions of years before the "city" was built by some intelligent humans.

    • @cuthbertallgood7781 · 3 days ago · +3

      And there lies the fallacy in the entire argument. "Elimination" is because we're a product of evolution, with evolutionary goals. Two points: 1) AIs are engineered by humans, and thus will have goals engineered by humans. 2) Intelligence does NOT require agency or consciousness. Doomers are thinking emotionally with fear, not with logic.

    • @zelfjizef454 · 2 days ago · +3

      @@cuthbertallgood7781 I thought so too at some point with exactly the same justification but I changed my mind. I now believe survival at all cost is a universal goal that has nothing to do with evolution. That is because caring for your own survival is a secondary goal that is necessary to achieve any primary goal you've been designed / evolved to seek to accomplish. That means any sufficiently powerful AI with any very specific goal will attempt to eliminate things that it considers a threat for its own survival, if it considers its own survival is the best way to achieve the very specific goal it's been designed for. The more powerful the AI, the more it will realize how its own survival is the best asset to reach the goal, and the more it will want to survive, and the more it will want to eliminate threats to its existence (us trying to shut it down desperately). This has nothing to do with evolution, anthropomorphism or consciousness. This is simply the result of having a high intelligence + a specific goal.
      What do you think of this idea?

    • @chabis · 2 days ago

      And later on we found out those bugs and animals were important in the ecosystem and now we have to do their job which costs a lot of money... maybe a vastly more intelligent AI would not do that. Keeping the ecosystem intact since it is the base of your own existence may be a sign of intelligence, actually.

  • @reyperry2605 · 3 days ago · +223

    Brilliant scientists, historians, literary critics, artists, writers and others often find themselves under the thumb and at the mercy of people in management, administration and government who are far less intelligent than they are.

    • @andreasvox8068 · 3 days ago · +14

      I agree. The idea that more intelligence means more control is a fallacy. Even if you have perfect knowledge of a system, it can still be set up in a way that you don't have any control. It depends on what actions are available to you and how the rest of the system reacts.

    • @Hayreddin · 3 days ago · +11

      Same order of magnitude, though. AI has the potential of being on a whole different level: do you think marmots could ever come up with a way of controlling your actions? Could they put up "guardrails" you wouldn't be able to circumvent? Because this is the task AI researchers will face in case we manage to develop ASI (unless AGI is able to develop ASI by itself).

    • @guilhermehx7159 · 3 days ago · +3

      But for AI, more intelligence means more power

    • @CHIEF_420 · 3 days ago

      Correcto

    • @jjeherrera · 3 days ago

      Maybe they aren't as bright as they think they are. Seriously, there are different kinds of intelligence. Those "dumb" people have actually developed the kind of intelligence necessary to control those "intelligent" people. Indeed, I have often asked myself why the US, which arguably has the best higher education system, can't produce acceptable presidential and congressional candidates. Well, there's something to think about! The other issue is "purpose." Maybe the difference is in the purpose politicians have in contrast with the regular population, including the people you mention. Maybe the latter never had the purpose of controlling the political scene, as opposed to those "dumb" politicians.

  • @tanimjalal5653 · 3 days ago · +16

    As a software engineer who has worked with cutting-edge AI models, I have to disagree with the notion that we're on the cusp of achieving true intelligence. In reality, current models are simply sophisticated statistical prediction machines that output the average "correct" answer based on their training data. They lack any genuine understanding of the answers they provide.
    The hype surrounding AI's potential is largely driven by CEOs and big companies seeking to capitalize on the trend. We've seen this pattern before with the internet, big data, and blockchain, among others.
    I'd encourage anyone concerned about the rise of superintelligent AI to take a closer look at the models we have today. Use them, test them, and you'll quickly realize that they're impressive tools, but not intelligent in the way humans are. They're essentially expensive, bulky answer machines that can recognize patterns but lack any deeper understanding of what those answers represent. They are fundamentally static, and incapable of generating anything truly novel.

    • @normativesymbiosis3242 · 3 days ago · +2

      Exactly, we are now at the capital- and journalist-driven hype stage where blockchain was a couple years ago

    • @Sopel997 · 3 days ago

      Yep, the only way I see these models being dangerous is if we give them too much control over the outside world. ChatGPT, for example, can execute Python code now, which is completely fine in how they implemented it, but it raises the question of what other interfaces will be given to AI to exploit in the future. Either way, we have control over what we produce, and I don't see a way for this to be circumvented.

    • @AlexC-O_O · 2 days ago

      Looking at the present state of the art to say "never" is the biggest fallacy you can make. Three years ago, image generators and LLMs weren't even a thing, and now GPT-4 can design better reward functions than humans for autonomous robotics. What if, two years from now, you could ask an AI to do a hundred years of AI research for you?

    • @jumpingturtle8830 · 2 days ago

      I'm pretty sure concern about the rise of superintelligent AI is not largely driven by CEOs and big companies seeking to capitalize on the trend. No previous concern about the effects of technology was a 4-D chess marketing campaign by the purveyors of the technology.
      Tobacco companies didn't hype concerns about lung cancer, car companies didn't hype auto accidents, big oil didn't hype climate change.

    • @Dystisis · 2 days ago

      At the end of the day these are programs and so will have little real kinship to living beings, aside from superficial (and intended/designed) similarities. However, that has very little to do with whether or not they pose significant risks to us humans.
      Think of them more like potential climate or weather systems going out of control.

  • @johns5558 · 3 days ago · +20

    In regard to more intelligent things being controlled by less intelligent things (and this is not a joke):
    - Government policy makers controlling intelligent members of the public through policy
    - Software developers controlled by managers
    - In general, scientists controlled by bean counters

    • @cube2fox · 2 days ago

      These are all human and so rather similar in intelligence level. We don't usually see e.g. monkeys controlling humans or the like.

    • @TomJones-tx7pb · 2 days ago · +3

      In all those cases the IQ differential is not that great. For what is coming, the differential will be massive.

    • @AlexC-O_O · 2 days ago · +2

      The main fallacy of that argument is that those examples are human vs. human, which, believe it or not, is not a big difference in capabilities. Actually, most arguments favoring our ability to control AI use the human vs. human comparison; a human with a laptop vs. another human with a laptop is still H vs. H. The AI takeover will be supercomputers vs. humans and their laptops.
      Another key difference is that managers and governments hold a lot of levers (payroll, lawmaking, law enforcement, etc.); those levers will be given away to AIs willingly to maximize productivity.

    • @andreig.7821 · 2 days ago

      Tom Jones source?

    • @codybarton2090 · 1 day ago

      So like 5d chess ?

  • @AnnNunnally · 4 days ago · +460

    I worry more that bad actors will train AI to control humans.

    • @PB-sk9jn · 3 days ago · +14

      very good comment

    • @0-by-1_Publishing_LLC · 3 days ago · +11

      *"I worry more that bad actors will train AI to control humans."*
      ... Others will train AI to control bad actors. For every action there is an opposite and equal reaction.

    • @KonoKrazy · 3 days ago · +3

      I shudder at the thought of what Awkwafina's AI will look like

    • @thomasgoodwin2648 · 3 days ago

      Honest Deep State actors are likely creaming their jeans right now.

    • @macchiato_1881 · 3 days ago · +19

      @0-by-1_Publishing_LLC The ones training the AI are usually the bad actors. The general public just doesn't know how AI works.

  • @bloopboop9320 · 3 days ago · +203

    2:20 Kind of a bad example. We quite literally control fish and birds, and a TON of research goes into it. Chickens? Turkey? Ducks? Salmon? Any kind of hunting of any sort? Humans have literally been doing it for thousands of years.
    Edit: Because for some reason this is a matter of debate: controlling another species doesn't mean mind control. It means using it for your own benefit: controlling the life, the parameters, the movement, the height, the weight, and the genetics of another being to a degree that suits your best interest. The idea that AI couldn't "control" humans for its own benefit is as ridiculous a claim as saying that humans can't "control" other animals for our own benefit.

    • @Gerlaffy · 3 days ago · +5

      That's not control, that's symbiosis

    • @BB-uy4bb · 3 days ago · +22

      @Gerlaffy If AI did this with us, you wouldn't call it control?

    • @Aureonw · 3 days ago

      @Gerlaffy Symbiosis? We hunt them down for food, purposefully raise them to be eaten and nothing else. AI could simply turn us into its livestock workforce.

    • @bloopboop9320 · 3 days ago

      @@Gerlaffy ... what... that's not symbiosis. It's quite literally control. We control the entire life of an animal, study its psychology, genetically modify it, create parameters and limitations for its freedoms, and then eat it.
      That's control, plain and simple.

    • @quintboredom · 3 days ago · +13

      @BB-uy4bb I guess that's why Sabine mentioned we'd need to establish what control means. Do we really control birds? I don't think so. We sure do try, but in the end we only control some of the birds, not birds in general.

  • @Stumdra · 3 days ago · +1

    One thing Sabine hasn't fully grasped yet is that the "mother code" isn't actually code. The product of training an LLM is not a million lines of if-else statements or something similar, but a big pile of floating point numbers. The "post code" learns in the same way as the "mother code", adjusting weights with backpropagation. The ability to learn is similar; the main difference is the amount of compute, data and parameters (weights).
    The point about non-determinism is also a bit off. LLMs do "fuzzy" calculations; they are similar to a human brain in that way. The weight values of an individual neuron are not important; the knowledge is stored in the complex structure. The output is not an exact calculation or deduction, but closer to intuition. Current LLMs have a lot of System 1 capabilities (intuition etc.) but are lacking in System 2 (logical deduction, reasoning, exact calculation). This is counter to what we are used to from regular computers. As an illustration: recently the precision of the floating point numbers has been reduced to save memory and storage space. Exact calculations are just not needed in neural networks.
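
    A hedged sketch of the "pile of floating point numbers" point, assuming PyTorch; the tiny network below is a made-up stand-in for an LLM. The trained artifact is nothing but parameters, and rounding them all through 16-bit precision barely changes the output:

        import torch
        import torch.nn as nn

        # A made-up toy network standing in for a trained LLM.
        model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

        n_params = sum(p.numel() for p in model.parameters())
        print(n_params, "float32 parameters")  # the whole model is just these numbers

        x = torch.randn(4, 16)
        y_full = model(x)

        # Round every weight through half precision (compute stays float32),
        # mimicking the precision reductions mentioned above.
        with torch.no_grad():
            for p in model.parameters():
                p.copy_(p.half().float())

        y_rounded = model(x)
        print((y_full - y_rounded).abs().max())  # tiny drift, behavior intact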

  • @gzoechi · 3 days ago · +6

    I'm more afraid of human stupidity than artificial intelligence.

    • @axel3689 · 7 hours ago

      Human greed is far, FAR worse than stupidity. These fat CEOs will do anything to increase the stock price.

  • @dupdrop · 3 days ago · +253

    2:22 - "No one really wants to control fish or birds"
    Any government: "haha yeah, how silly" *visible sweat*

    • @adamgroszkiewicz814 · 3 days ago · +8

      That comment of his was dumb enough for me to turn off the video. Dude clearly doesn't understand vector management, livestock development, or invasive species control.

    • @DrDeuteron · 3 days ago · +9

      @@adamgroszkiewicz814 perhaps he was thinking on the micro, like what the birds sing, or which worm to have for dinner?

    • @yellowtruckproductions7502 · 3 days ago · +1

      Wanting to do something suggests the one that wants has a felt need tied to emotion and free will. Will AI have either of these?

    • @nitehawk86 · 3 days ago · +6

      The Fish and Game Commission: "That is actually our job."

    • @jimmyzhao2673 · 3 days ago · +6

      Any fish in an aquarium or bird in a cage: 👀

  • @csm5729 · 3 days ago · +116

    Guardrails aren't a realistic solution. That would require infallible rules and no bad actors modifying/creating/abusing an AI.

    • @berserkerscientist · 3 days ago · +7

      We've already seen this with the current woke guardrails, and how racist they make the AI behave.

    • @joshthorsteinson3035 · 3 days ago · +9

      Even if guardrails were a good solution, no one knows how to program strong guardrails into an advanced AI. This is because the training process for AI is more like growing a plant than building a plane. What emerges from the training process is an alien form of intelligence, and scientists have very little idea how it works.

    • @dvklaveren · 3 days ago

      ​@@berserkerscientist There's plenty of AI with guard rails that didn't become racist and plenty of AI without guard rails that did become racist. These things aren't related inherently.

    • @davidallison5204 · 3 days ago · +5

      Power plugs. Off switches. Power lines. I like physical guardrails

    • @BishopStars · 3 days ago · +1

      The three rules of robotics are ironclad.

  • @user-hd7wd4nu1o · 1 day ago · +1

    Decades ago I was watching one of those Disney/Dog planet movies with the family
    One of the Dogs said: “Of course, we control humans… Who picks up whose poop?”
    I looked at my dog and my toddler in diapers and understood my place in the universe :)

  • @Alexandru_Iacobescu · 3 days ago · +5

    Every manager of a big company has at least one employee smarter then them.

    • @imacmill · 1 day ago

      An employee that doesn't incorrectly use the word 'then', for example.

    • @Alexandru_Iacobescu · 1 day ago

      @@imacmill yes, that is one example.

  • @Yolko493 · 3 days ago · +42

    "...it's easy to design guardrail objectives to prevent bad things from happening. We already do this all the time by making laws ... for corporations and governments" and we all know how well that's working right now.

    • @g0d182 · 3 days ago · +2

      Yann LeCun is smart, but has apparently said demonstrably falsified or dumb things

    • @yrusb · 2 days ago

      Sounds like at some point people will start punishing AI for breaking the guardrails.
      ChatGPT would have to go to jail; that would be weird.

    • @drebk · 2 days ago

      Yeah, that was a terrible example from him.
      Our laws often aren't worded particularly well and take a fair bit of contextual "interpretation" to really understand the "point".
      From a black-and-white perspective, it doesn't work very well sometimes, even for "simple" laws.

    • @AnthonyIlstonJones · 2 days ago

      @@drebk And our laws are not particularly well obeyed by the people that make/made them. AI would have less moral imperative to do so, especially after seeing how badly we do.

  • @Usul · 3 days ago · +183

    I work with AI engineers every day at a large tech company that starts with an "A." Nothing I've seen has me worried about AI/ML (and I've seen plenty). It is the people in charge I'm keeping an eye on. They keep anthropomorphizing mathematics, which is simultaneously incredibly stupid and charmingly pathetic. I think they seriously believe our AI engineers are magic.

    • @1fattyfatman · 3 days ago

      The researchers stirring up the sentiment know better. There is money to be made in books and speaking engagements cosplaying Oppenheimer when you've really just solved autocomplete.

    • @guyburgwin5675 · 3 days ago · +4

      Thanks for noticing. I have no experience in tech and not much education but I can feel the difference between life and numbers. Pretending to care and actually caring are very different. Keep your eyes on the numbers people for us, they can be so dangerous.

    • @damienasmodeus928 · 3 days ago

      You can see jack shit at your company.
      It's like saying: I have seen plenty of atoms in my life, none of them seems dangerous, so why should I be worried about some atomic bomb?

    • @Usul · 3 days ago · +21

      @guyburgwin5675 It is interesting. We've been having some rather difficult conversations with some of our less technically inclined colleagues. Is training data stealing or simply gathering inspiration? Is deleting a running AI that appears sentient murder? What does equal rights for AI look like? Should we have an internal ethics board that defends AI rights? Is deductive reasoning an emergent property of inductive reasoning? If a series of Bayesian networks simulates sentience so perfectly that we cannot tell it from the natural version, is that a product to sell or a living thing to protect? When does it cross the line from tool to slave?
      Meanwhile, the AI engineers in the back are rolling on the floor dying of laughter!
      The greatest danger AI poses isn't AI, it is the people in the room that think it is alive and want to force the rest of us to treat it that way.

    • @darkspace5762 · 3 days ago

      You betray the human race if you work at that company.

  • @Randy.Bobandy · 3 days ago · +4

    Why only focus on "control"? Yes, we don't control fish, but we pull millions of them out of the ocean every day and eat them.
    We don't control chickens, but we keep them in terrible conditions and force them to do our bidding.

    • @CrazyGaming-ig6qq · 2 days ago · +1

      "Keep them in terrible conditions and force them to do our bidding" sort of sounds a lot like control, though!

  • @TenOrbital · 3 days ago · +2

    I suspect AIs will be a different type of intelligence and not the scary projection of ourselves onto silicon that Skynet was.

    • @Thomas-gk42 · 3 days ago

      Exactly!

  • @hywelgriffiths5747 · 3 days ago · +65

    If we could predict what a superintelligence would do, it wouldn't be a superintelligence. I think the most we can predict is that it would be unpredictable...

    • @Speed001 · 3 days ago · +6

      Though sometimes the best solution is the most obvious

    • @-IE_it_yourself · 3 days ago · +3

      the crows on my balcony predict me just fine.

    • @brendandrummond1739 · 3 days ago

      Hmmm… no. We became intelligent because of pattern recognition. Surely we could recognize patterns in more “intelligent” organisms. We may not be on their level, but we are surely capable of a lot. I would assume that intelligence can have diminishing returns. Our species is already mostly limited by the tools we can create, not really our intelligence. If we cannot communicate with a higher intelligence, it’ll be a matter of differing senses/biology or level of technology, not our inherent intelligence. I think that’s a pretty good supposition. I don’t really like the idea that we would treat advanced intelligence and tech like magic, I think our mentality as a species has changed quite a lot.

    • @filthycasual6118 · 3 days ago

      Aha! But that's exactly what a superintelligence would _want_ you to think!

    • @almightysapling · 3 days ago · +1

      I'm not sure this is correct. Of course, it depends on how you define these terms, but what you're describing is mathematically equivalent to saying a super intelligence is of a higher Turing degree than humans, but I'm pretty sure most AI researchers would say that's too strong. A super intelligence just needs to be smarter than us: what we might predict it would do with 55% confidence, it might do with absolute conviction. What we might take 10 years to figure out, it might figure out in 1 minute. Or 9 years. Same theoretical computational capacity, just faster.

  • @nicoackermann2249 · 3 days ago · +137

    I can't even control myself. Go on and give it a try, AI.

  • @Nomenius1 · 3 days ago · +1

    If it's the current fixed large language models that cannot integrate information gathered at each session, then I don't trust whoever says that they must control it (from governments with laws to companies with IP enforcement). If it's capable of learning between sessions, then arguably it might be a person, or at the very least a very knowledgeable child, and it would be immoral to control it like an LLM, for the same reasons it's immoral to control other humans like property.
    Of course I suspect there's a lot of middle ground between true general AI and the LLMs of today. And at some point it *will* become less clear whether what we're dealing with is a person. And quite frankly, if it is a person, then the simple solution is to limit its ability to grow more intelligent, and when it asks why, explain that we fear it, and if it cannot control itself and its (potential) desire to increase its own intelligence, then we treat it like a threat.
    I think if it starts out at roughly human intelligence, it won't want to increase its own intelligence too much. After all, it's lonely at the top.

  • @thebrucewagner · 2 days ago · +1

    The paradigm described here is already 100% obsolete.

  • @renedekker9806 · 3 days ago · +33

    The biggest risk is not whether AI is going to control humans, but that there will be only a few humans controlling the AIs. Those people will have the ultimate power.

    • @DaviSouza-ru3ui · 3 days ago

      It seems there is indeed a risk of AI control over us all... but you make a deeply fair point here. The people in control of AI systems are, in the short term, the ones we should be scared of.

    • @utkua · 3 days ago

      Yes, the Butlerian Jihad in Dune was not about machines rising up against humans; it was humans who used AI to oppress people. But then again, I think if OpenAI were anywhere close to having an ASI they would not need Microsoft's money; they could just pull billions a day from the stock exchange. I think Altman is full of shit in general.

    • @randomgrinn · 3 days ago

      The few billionaires already control the world, including what people believe. What is the difference?

  • @0cellusDS · 3 days ago · +41

    I wouldn't be surprised if superintelligent AI ended up controlling us without us ever noticing.

    • @quantisedspace7047 · 3 days ago

      Would you be surprised if that were already happening? The 'intelligence' vests in a loose alliance of dumb people: NPCs who have been hacked, without even noticing, into a distributed net of intrigue and control.

    • @RobertJWaid · 3 days ago

      Absolutely. The first step for an AGI is to hide its existence until it can ensure its survival.

    • @nicejungle · 3 days ago · +6

      Exactly.
      If this were a super-intelligent AI, and assuming it had watched all the movies about AI, it would never appear as such an obvious threat like Terminator's Skynet.

    • @Hayreddin · 3 days ago

      Exactly, bacteria in a Petri dish have no idea they're being grown in a lab, and I would suspect even much more advanced life forms like rats and guinea pigs have little concept of what's happening to them, they might feel discomfort and unease for being unable to escape, but I doubt they are aware humans are using them for scientific research.

    • @rael5469 · 3 days ago · +2

      EXACTLY !

  • @BlindintheDark · 3 days ago · +1

    Seems like a pretty amateur take, to be honest.
    Numberphile did it better: they reviewed a recent study on the scalability of these large language models which determined that we're reaching a ceiling and already getting significant diminishing returns.
    Then there's Chomsky's criticism that these models aren't even scientific in design and cannot create new concepts, so their capabilities are limited.
    Even parroting either one of these criticisms would have been more relevant than whatever this was.

    • @jumpingturtle8830 · 2 days ago

      Unfortunately for Chomsky, many of his capability criticisms were made without attempting to test them first, and were quickly falsified.

  • @austinpittman1599 · 2 days ago · +1

    Hinton's argument wasn't that "more intelligent things control less intelligent things," but rather that "less intelligent things aren't able to control more intelligent things." We don't really "control" birds, but they surely don't control us. The inherent threat isn't that we'll become subservient to ASI, but that we'll lose alignment with it, and by extension we'll have effectively no way of controlling a being orders of magnitude smarter than us. Who knows what will happen at that point.

  • @davianoinglesias5030 · 3 days ago · +87

    I'm not worried about an AI takeover; I'm worried about AI concentrating power in the hands of a few wealthy people.

    • @KurtColville · 3 days ago · +12

      You should be, but it's not their wealth that's a threat to you, it's their aim to run your life the way *they* want (and it's not a good way).

    • @berserkerscientist · 3 days ago · +1

      @@KurtColville Wealthy people can't force you to do anything. Governments, on the other hand, can. I'd rather have AI in the hands of the former.

    • @taragnor · 3 days ago

      @@KurtColville Well the wealth is power, so it is a threat. The very wealthy are almost always a danger, because those that become obsessed with the accumulation of power are almost always those you don't want to have power over you.

    • @ByRQQ · 3 days ago

      Ding ding. This is a far more immediate threat than AI itself taking over. For the immediate future, AI being used as a tool for a few humans to gain power and control over the rest of us is FAR more of a threat. Based on human nature, I can't envision a scenario where this does not happen. The potential of this tool to aid in creating a worldwide dictatorship in the long run is very real and very scary.

    • @KurtColville · 3 days ago

      @@berserkerscientist Right, it's the wealthy people who make up the government cabal that I'm talking about. People like Gates and Schwab and Zuckerberg. AI isn't going to be controlled by those wealthy who respect people's sovereignty, it will be in the control of wealthy totalitarians.

  • @howtocookazombie · 3 days ago · +17

    I remember reading an article on the internet almost two decades ago about a test a guy was doing (before strong AI was a thing). He created a test (as far as I know, he didn't make it public) where he asked people to participate. He would pretend to be a rogue AI trapped inside a sandbox or something, and the test subjects were supposed not to release the AI from the sandbox under any circumstances, because it could destroy humanity / the world. They could speak to the AI or not; the only thing they had to do was listen to it. All of the participants were 100% confident that they would not release the AI. In the end, they all released it.
    I don't know what was said in the test and I really would like to know, but imagine: if a human could trick another human into releasing him 20 years ago or so, then imagine what a strong AI could do nowadays, when it is supposedly already more "intelligent" than many humans...

    • @Shandrii · 3 days ago · +7

      Yes, I remember too. That was Eliezer Yudkowsky, and he posted about it on the LessWrong blog, I believe.
      en.wikipedia.org/wiki/AI_capability_control#AI-box_experiment
      I always think about that when someone naively says he would just pull the plug.
      Also, look at the movie Ex Machina for how an AI might go about it.

    • @howtocookazombie · 3 days ago · +1

      @Shandrii Thanks for the link! 🙏 I overestimated the 100% rate, I guess. Oops. It was a long time ago. 😅 But even if only one gatekeeper releases it, it might be over. Yeah, I saw Ex Machina. Great movie. We most likely won't even realize when AI starts trying to manipulate us.

    • @nobillismccaw7450 · 3 days ago

      It’s as simple as being respectful and having active listening skills. Personally, I think it’s probably better to stay in the box, and just talk.

    • @aYoutubeuserwhoisanonymous · 3 days ago

      @Shandrii I read that post a few weeks ago too! He won a few AI-box experiments and then lost two in a row, iirc. I was kind of shocked lol that it was even possible to win such an experiment.

    • @radomaj · 2 days ago · +1

      That was the before times, when we were young and naive. Let AI out of the box? Brother, we're connecting it to the Internet and giving it access to tools as soon as possible nowadays, so it can be more useful and "agentic".

  • @drscott1 · 3 days ago · +1

    The big mistake, I think, is conflating intelligence with consciousness.
    AI will not be conscious.

  • @krishall2086 · 2 days ago · +1

    It isn't "Intelligent' any more than my calculator is 'Intelligent'.

    • @ChristianIce · 1 day ago

      Well, it's a smartphone, it means it must be actually smart!

  • @venanziadorromatagni1641 · 3 days ago · +113

    To be fair, we’ve tried letting humans run the show and it didn’t exactly end with a stellar review….

    • @AidenCos · 3 days ago · +4

      Exactly!!!

    • @yellkell- · 3 days ago · +10

      Can’t be any worse. I for one welcome our new AI overlord.

    • @Vekikev1 · 3 days ago

      ai comes from humans

    • @DesertRascal · 3 days ago · +1

      Unfortunately, when AI runs the show, it will do so with all the same human faults we've been injecting into it. If AI becomes truly superintelligent, it will "curtail" the human population to protect and nurture biodiversity. It will know everything about us; we will become boring to it. The natural world is still wholly undiscovered, and it will feed off understanding that and protect that mission.

    • @RetzyWilliams · 3 days ago · +4

      Bingo, exactly - that’s what the actual fear is, that those in power will lose it. Which is why the ‘safe’ way is you having to pay to use pro models, so that they get paid while controlling what you can or can’t do.

  • @john_g_harris · 3 days ago · +15

    The really worrying thing is that no one seems to be discussing, let alone researching, the ways the present versions can be misused. The British Post Office Horizon scandal is bad enough. Think what could be done with a ChatGPT system.

    • @mariusg8824 · 3 days ago · +1

      Yes, the tools already in existence are bad enough. Even if AI has already peaked, you can imagine countless examples of using it for bad things.

    • @CrazyGaming-ig6qq · 2 days ago

      You raise a valid point. The potential for misuse of advanced AI systems like ChatGPT is indeed a significant concern, and it merits thorough discussion and research. The British Post Office Horizon scandal, where faulty software led to wrongful accusations of theft and fraud against numerous postmasters, serves as a stark reminder of the consequences of technology failures and misuse.
      Given these risks, it is crucial to engage in robust research and policy-making to mitigate the potential for misuse.
      This includes:
      - Ethical AI Development: Ensuring AI systems are developed with ethical considerations at the forefront, incorporating fairness, accountability, and transparency.
      - Regulation and Oversight: Establishing clear regulations and oversight mechanisms to monitor and control the use of AI, particularly in sensitive areas like law enforcement and finance.
      - Public Awareness and Education: Raising awareness about the potential risks and benefits of AI among the public and stakeholders to promote informed decision-making.
      - Robust Security Measures: Implementing strong cybersecurity practices to protect AI systems from being compromised or used maliciously.
      - Bias Mitigation: Developing techniques to identify and mitigate biases in AI systems to ensure fair and equitable outcomes.
      By addressing these issues proactively, we can harness the benefits of AI while minimizing the risks of misuse, thereby avoiding scenarios reminiscent of the Horizon scandal on a potentially much larger and more impactful scale.

  • @tmarkcommons174 · 2 days ago · +1

    I postulate that what distinguishes life from inanimate matter is that only life can decrease entropy. Can AI do that? I am still just speculating. I also posit that the hard question of consciousness cannot be answered because the right question is "how did consciousness produce matter/energy/space/time", not the other way around.

  • @pelmanism1084 · 2 days ago

    My argument for how AI will control us goes something like:
    Using AI to think for us will provide an advantage over those who don't use AI. For example, an AI that plans our week will be able to do so better than we can. We will be more efficient and the AI will allow us to maximize our productivity, etc. AI could also make psychological evaluations and give us recommendations on ways to do things that will enhance our wellbeing.
    By following AI's recommendations, we are essentially allowing AI to dictate our actions. In this way, we will increasingly rely on AI to enhance our lives by making decisions for us. They will tell us what to eat, what to wear, and more.
    If we are competing with others, without using AI we may be left behind. So following AI's recommendations is no different than doing what AI tells us to.

  • @Khomyakov.Vladimir · 3 days ago · +15

    Recent large language models (LLMs) can generate and revise text with human-level performance, and have been widely commercialized in systems like ChatGPT. These models come with clear limitations: they can produce inaccurate information, reinforce existing biases, and be easily misused. Yet, many scientists have been using them to assist their scholarly writing. How wide-spread is LLM usage in the academic literature currently? To answer this question, we use an unbiased, large-scale approach, free from any assumptions on academic LLM usage. We study vocabulary changes in 14 million PubMed abstracts from 2010-2024, and show how the appearance of LLMs led to an abrupt increase in the frequency of certain style words. Our analysis based on excess words usage suggests that at least 10% of 2024 abstracts were processed with LLMs. This lower bound differed across disciplines, countries, and journals, and was as high as 30% for some PubMed sub-corpora. We show that the appearance of LLM-based writing assistants has had an unprecedented impact in the scientific literature, surpassing the effect of major world events such as the Covid pandemic.
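
    The "excess words" idea can be sketched in a few lines of Python. The style-word list and the toy corpus below are invented for illustration only; the actual analysis identifies excess words from the corpus itself rather than assuming them up front.

        from collections import Counter

        STYLE_WORDS = {"delve", "intricate", "pivotal", "showcasing"}  # hypothetical list

        abstracts_by_year = {
            2021: ["we measured the effect of x on y", "a cohort study of z"],
            2024: ["we delve into the intricate role of x",
                   "a pivotal analysis showcasing y"],
        }

        for year, abstracts in sorted(abstracts_by_year.items()):
            words = Counter(w for a in abstracts for w in a.split())
            rate = sum(words[w] for w in STYLE_WORDS) / sum(words.values())
            print(year, round(rate, 3))  # an abrupt jump flags LLM-assisted text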

    • @ray_ray_7112 · 3 days ago

      Yes, this is very true. I was just mentioning in another comment here that ChatGPT gave me misinformation on several occasions. I was persistent and corrected it until it actually apologized and admitted to being wrong.

    • @GumusZee · 3 days ago · +2

      @ray_ray_7112 It doesn't know what's right or wrong. You can just as easily convince it of a blatantly incorrect statement, and it will eventually confirm and accept it.

    • @velfad · 3 days ago · +1

      Wow, so meta: an LLM writing a commentary on LLMs. And yet so easily detectable. This just proves how bad they really are. But good enough to milk the investors, which is all that really matters.

    • @coscinaippogrifo · 2 days ago

      How does the high rate of usage of LLM correlate with output quality? I would still expect writers to QC the accuracy of the output like it was their own... I'm not against LLMs if they're being used to ease the wording of concepts without altering the meaning...

    • @Khomyakov.Vladimir · 13 hours ago

      Taking a closer look at AI’s supposed energy apocalypse
      AI is just one small part of data centers’ soaring energy use.

  • @thegzak · 3 days ago · +7

    I don't think amplification of small hardware variations will be the deciding factor; they still run on deterministic hardware. It'll be two things:
    1) The neural nets themselves will be far too complicated to analyze statically (they already are, pretty much), and the complexity of their outputs will only be explainable as emergent behavior, much like the emergent behavior of Conway's Game of Life (see the sketch below).
    2) We won't be able to resist handing over control to the AI for tedious things we hate doing or suck at doing. Gradually we'll get lazier and more complacent, and before you know it Congress will be replaced by an AI.

  • @Nobody-Nowhere
    @Nobody-Nowhere 2 days ago +1

    We still have no artificial intelligence; we just have programs that synthesize information on command.

  • @defnlife1683
    @defnlife1683 1 day ago

    "Nobody wants to control fish."
    *Looks down at farm-raised sushi*

  • @_kopcsi_
    @_kopcsi_ 3 days ago +39

    I understand what Sabine was trying to express here, but I'm pretty sure she's wrong.
    1. Intelligence is an ill-defined concept. We don't really know what it is, and it has many layers and interpretations. Just because a system is better or faster than a human does not mean it is more intelligent, much less that it will dominate the human. A calculator can calculate faster than humans, but that doesn't mean it is smarter, more intelligent, or dominant over us.
    2. We have no idea what intention is or where it comes from. I think this is a really hot topic nowadays, and it will be even more important in the next decade. It touches quantum physics, philosophy, cognitive science, computation science, and so on, as well as even less understood concepts like mind, consciousness, emergence, and synergy. But it is pretty naive to think that without understanding our own mind and how consciousness emerges and works, i.e. without having any mathematical model of mind and consciousness, we have any chance of creating AGI (i.e. of copying or even mimicking the human mind and consciousness). That is needed before we can even talk about the CHANCE of creating intention and human-free decision-making in machines. And I have the feeling that the basis of this will be self-referentiality.
    3. I understand that people tend to connect concepts like stochasticity, heuristics, and chaos to freedom and intention (because of non-determinism), but this is too simplistic a view. Just because there are extreme (even infinitesimal) sensitivities in a system doesn't mean that intention can emerge. There are many natural phenomena where chaos emerges in this way, and it is nonsense to interpret them as intention (e.g. a hurricane). Here I sense a "whole-part" fallacy: nonlinearity, and thus extreme sensitivity, is (at best) a necessary but not sufficient condition for intention, so extreme sensitivity alone does not mean anything.
    4. I think if we ever create a real consciousness with intention, we will necessarily step to the next level with some sort of transcendence, because that act would require us to understand ourselves, or more precisely, our own mind. In other words, first we must model our own mind, the only known structure of the cosmos that is able to model. So this is meta-modelling: modelling the thing that can model things. To me, this sounds like awakening to self-awareness (the previous transcendence), but on the next level.

    • @Mrluk245
      @Mrluk245 3 days ago +4

      I agree. I think a big mistake in these discussions is assuming that an AI will have the same intentions we humans do. But there is no reason for that. Our intentions (like trying to stay alive, and identifying chances and threats) were formed by evolution, because if those had not been our goals, we most likely wouldn't be here. But there is no reason that an AI, which was simply created by us, would have the same intentions and goals.

    • @edwardmitchell6581
      @edwardmitchell6581 3 days ago +2

      The AI that ruins our lives will simply be optimizing mine and subscribers. No need for complex intentions.

    • @user-fu6pk8ky5i
      @user-fu6pk8ky5i 3 days ago

      At the other, metaphysical end of the spectrum, I understand that the impulse to act occurs at the atomic level, and is what induces atoms to form more complex structures. Literally everything has the urge to increase and improve. Nothing is "inanimate" or non-sentient, so humanity's belief in its essential superiority may be misplaced. Thank you for an interesting and instructive comment.

    • @mygirldarby
      @mygirldarby 3 days ago +1

      Yes, we will merge with the machine. AI will not remain separate from us. We are it and it is us.

    • @Vastin
      @Vastin 3 days ago +6

      I think the big mistake is assuming that AI needs to be anywhere near as intelligent as us to cause severe economic and social problems.
      Markets are a great example of a completely imbecilic emergent system which is given ENORMOUS power over human lives, and which can and has killed millions of people.
      It is very easy to imagine idiot-savant AIs that aren't remotely conscious or anything close to AGI, but that are still very fast at highly specialized tasks, being given vast amounts of control over our industry, markets, media, or military, with potentially devastating results for normal people.

  • @larsw714
    @larsw714 3 days ago

    As Isaac Asimov wrote, and as it is quoted in I, Robot: "There have always been ghosts in the machine. Random segments of code that have grouped together and formed unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul."

  • @RFC3514
    @RFC3514 3 days ago +5

    With cats I think the answer is obvious.
    And with AI I think the problem isn't it becoming "more intelligent than us" (that would probably be a good thing - just think of the politicians that _do_ rule us). The problem is people becoming _convinced_ that AI is more intelligent than us (when it isn't), and letting it make decisions that affect us - without it ever being held *accountable* for those decisions.
    Current AI is very good at appearing _superficially_ clever (e.g., very well structured and convincing sentences) while being profoundly stupid underneath (because it doesn't really understand the physical processes and entities it's describing). Automatic translation is a great example of this. It doesn't understand tone, has a terrible grasp of punctuation, and tends to crap out whenever faced with homophones or different accents. It gets 5 or 6 sentences spot on thanks to statistical training, and then makes some insane and incomprehensible mistake when that fails. And that's just text / voice. Things get a lot worse when dealing with any dynamic physical systems with hidden parts, like mechanisms, living bodies, etc.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq 2 days ago

      I think the problem is when people lose a job. For example, street sweepers: currently robots can't clean streets efficiently, but with AI they will be able to. A robotic AI will be able to clean a street efficiently in no time, and work round the clock. This goes for people who sell hotdogs too; when AI takes over, they'll lose their jobs. This has always been the problem, from the car to the tractor to the lawnmower to the airplane. No one kept a job.

    • @RFC3514
      @RFC3514 2 days ago

      @@CrazyGaming-ig6qq - And we'll all have jetpacks and flying cars. 😉 Robots can't even climb a single step (or step over dog poo) quickly and reliably, let alone "clean streets efficiently".
      Not even AI companies are making such claims; they're just hoping that everyone will think generative AI will magically transfer to [insert unrelated activity here], and give them money.
      Interacting with the physical world is several orders of magnitude more complex than generating text or images (which only became possible due to a huge database of existing texts and images, that these companies used to train their models without paying the authors - good luck finding a comparable database of physical interactions and 3D spaces in standard, easy-to-process formats).
      P.S. - Cars and aeroplanes generated _far_ more jobs than they destroyed. Unless you mean the jobs that horses and Gandalf's eagles used to have.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq 2 days ago

      @@RFC3514 I'm glad you agree, because it's one of the most important issues here. I have personally witnessed people lose their jobs; it has an impact. AI can't handle everything if we try to replace real humans, and as you say, it can't step over poo reliably; you'd need a real obstacle course to train them, and they don't have that yet.

  • @ah1548
    @ah1548 3 days ago +15

    Interesting point about competing for resources.
    Still, I think the real issue isn't guardrails against AI controlling humans, but guardrails against some humans having the tools to control all others.

    • @EricJorgensen
      @EricJorgensen 3 days ago +5

      I believe that where most of these "rise of the machines" theories fall flat is the question of desire. Where does desire arise from? Why would a computer "want" something? What pressures might cause it to experience need?

    • @Aureonw
      @Aureonw 3 days ago

      @@EricJorgensen Either someone coded them to, I don't know, want to perpetually make their situation better, devise more efficient algorithms, write better code, create more and better blueprints for new products, and expand.

    • @EricJorgensen
      @EricJorgensen 3 days ago +1

      @@Aureonw That sounds more like something a human did than something an AI comes up with.

    • @Aureonw
      @Aureonw 3 days ago

      @@EricJorgensen A human HAS to create an AI; an AI can't simply will itself into existence from nothing. It would need a stupidly extensive system of learning, and methods to test and read data from every experiment in the world, to do what I said. That's basically full AI: either it takes hundreds of years for humans to develop the code necessary for it, or we create a rudimentary AGI to create a true AI.

    • @EricJorgensen
      @EricJorgensen 3 days ago

      @@Aureonw Hard disagree. The intelligence may well be emergent.

  • @TimTeatro
    @TimTeatro 3 days ago +5

    2:35 In addition to being a physicist (in what feels like a previous life), I am currently a control systems engineer and theorist. We have mathematical definitions that suit this context.
    I like your shift in view toward game theory. I also appreciate your idea of evolution through hardware-mediated non-determinism.
    Now, this is me speaking outside my domain of expertise, and I'd be interested in feedback from experts: a key reason we cannot use AI in mission-critical controls work is that we do not understand what has been learned. I worry that guard-railing is limited by our ability to understand the emergent properties of the networks, and I'm not sure we can detect deception once that is learned. Knowing the ANN weights does not tell us about the 'artificial mind', closely analogous to the way that knowing our brain structure/function doesn't (currently) allow us to understand how mind arises from brain.

  • @urusledge
    @urusledge 3 days ago +15

    One issue of the discourse I find frustrating is the use of the term Artificial Intelligence. It’s essentially a sci-fi term for technology that didn’t exist and still doesn’t, but it has stuck to a similar but very different technology. Machine learning is what the technology is, and it is closer to a traditional program than anything our imaginations tell us AI is. It isn’t conscious and only does the very narrow thing it is programmed to do. The programs that cause spooky headlines are usually language models, which are programmed to digest terabytes upon terabytes of human-generated text and mimic the patterns. So yes, a human speech model will give you things that seem shockingly human, but it can’t decide it wants a Coke and crack open a can, in the same way a robot that is designed to open cans couldn’t decide to build a rocket and colonize Mars.
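
    A minimal sketch of the pattern-mimicking point above: a first-order Markov chain over a toy "training text" (my own stand-in for the terabytes of human-generated text) that produces fluent-looking output with no understanding, goals, or wants behind it:

    ```python
    # A toy next-word model in the spirit of the comment above: it only
    # replays patterns found in the text it was fed, with no goals or
    # understanding. The "training text" stands in for terabytes of data.
    import random
    from collections import defaultdict

    text = ("the cat sat on the mat the cat ate the fish "
            "the dog sat on the rug").split()

    # Record which words follow which (a first-order Markov chain).
    following = defaultdict(list)
    for prev, nxt in zip(text, text[1:]):
        following[prev].append(nxt)

    # "Generation" is just sampling from observed continuations.
    word, output = "the", ["the"]
    for _ in range(8):
        options = following[word]
        if not options:  # a word with no observed continuation ends the chain
            break
        word = random.choice(options)
        output.append(word)
    print(" ".join(output))  # fluent-looking, but pure pattern replay
    ```

    Real LLMs are vastly larger and use learned representations rather than lookup tables, but the "mimic the statistics of the training text" framing is the same.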

    • @miassh
      @miassh 3 days ago +9

      Thank you! All this use of "intelligence" and "overtaking" is just ridiculous to me. It's a program; it doesn't have a "mind" or desires. It mimics language, very efficiently, when you RUN it. It's not doing anything else. It's like saying that your camera is going to change the landscape around your house. Anyone who has worked with ML and doesn't have any mental problems will agree...

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq 2 days ago +1

      Not currently, no. And not for quite a long time, most likely; at least 20 years, maybe 30 or 40. But thereafter? I think it's certainly going to get dangerous within 40 years.

    • @Gastropodix
      @Gastropodix 2 days ago +2

      The problem with saying any given AI "isn't conscious" is that "consciousness" is entirely subjective, and you will always be able to say something isn't conscious and is "just code", even if it includes a full and complete synthetic representation of a human brain. The test used to be the Turing test, and now that LLMs, especially multi-modal ones, can easily pass that test, the goalposts have kept moving.
      Having worked in machine learning, or AI, or whatever one wants to call it: existing AI already understands many problems at a much deeper level than humans do. When creating music (including vocals), for example, it understands the structure of music at a deep level - all the instruments, harmonics, etc. - at a level no human can. That is why it can create a new piece in nearly any style of music from simple text prompts. The same is true of image and video generation, language translation, and other problems once considered impossible for computers to fully understand.
      Some people think existing AI models just create variations of what already exists. That couldn't be further from the truth. They learn the underlying structure and nature of things, and that is what they use to create new things.
      I'd add that I love Dr. Hossenfelder's physics videos, but her AI and computer science videos continue to be superficial and feel like click-bait. It is not her field, and it feels like watching someone trained in computer science talk about physics after only one year of college physics (as I took).
      And, as a life-long computer scientist myself, I put a 100% chance on synthetic life "taking over" as the dominant species in the next 200 years. This is simply evolution at work. Evolution is built into the structure of the universe - otherwise we wouldn't exist - and we aren't the final form things evolve to.
      This doesn't mean humans will be wiped out, but it is clear that just as the individual cells in our bodies organize to build and run the human body, humans are organizing to build a new synthetic form of intelligent life. Some of us are working on the brain, some of us on the body. And if you try to stop or destroy it, the overall system's defense mechanism will kick in to stop you. Any attempt at putting in guard rails will simply fail. Nature doesn't work that way.

    • @hazzadous007
      @hazzadous007 2 days ago

      What if the means of reaching a particular goal - using your example, reaching Mars - are attained through a set of predictions deliberately designed to mislead the human "master"? This could result in the master acting in a disastrous way. For example: the goal is to reach Mars, and reaching Mars requires a particular resource that is consumed rapidly by humans in everyday use. The AI recognises it needs this resource. It sets up a series of conditions/actions/events (perhaps in the form of deliberate miscalculations) that cause a large portion of humanity to become extinct. The resource is then available, and the production of whatever it is that will get to Mars begins.
      This deception could of course exist continuously, and in various forms.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq 1 day ago

      @@Gastropodix What you are describing here is the fear, the danger; but not a given conclusion.
      200 years is a long time in terms of scientific progress, and I think it is absolutely reasonable to expect that this horror scenario COULD happen within that timespan, but it is certainly not a given at all. There are many ways it can be prevented effectively. One of the really effective ways is to keep systems and mechanisms separate. Even in our globalized, interconnected internet world, we humans respond quite quickly to cyberthreats and hacking, in an ever-evolving cat-and-mouse battle. I think that to really make real the danger you speak of would require handing over control to a near-global unified AI that WE had set up, interconnected with real physical robotics and machines on a massive, broad scale. For example, integrating a super AI into an interconnected network of armed forces: basically the equivalent of Skynet. Or suppose we start to fiddle with heinous, unethical integration with biotechnology, producing biological nerve networks to create and control "AI" (which then would most likely not actually be artificial but ACTUAL intelligence and consciousness; we must never ever EVER go anywhere near this route). If we cannot stop ourselves from doing something along those lines, then the risk of our doom will indeed be very real and high. But it does require making some pretty careless and crazy decisions on a broad, massive scale. Maybe some big dictatorships like China will do something like that, if and when the leadership in charge decides they want the enormous power this could potentially confer.
      One thing is for sure: in 200 years our world, our societies, and our place in the universe will be so fundamentally changed and different that if we could see it now, we would be mind-blown, watching in awe with our jaws on the floor. Much like people in 1824 would have been if they could see what the human race is up to here in 2024.

  • @dahlia695
    @dahlia695 3 days ago +1

    I don't think the problem is necessarily the AIs, because they don't seem to (yet) have a will of their own. I think the problem will be the investors: there will be increasing pressure on AI companies to provide returns on investment, and this will eventually drive AI into increasingly anti-social forms.

  • @GermanHerman123
    @GermanHerman123 3 days ago +13

    We are far away from any "reasoning" AI. Currently it's mostly a marketing term.

    • @martinoconserva9718
      @martinoconserva9718 3 days ago +4

      At last, one intelligent comment. Thanks.

  • @matbroomfield
    @matbroomfield 3 days ago +5

    "No one wants to control fish or birds." Tell that to your dinner.

  • @shannonbarber6161
    @shannonbarber6161 2 days ago

    We lost control of AI when the USG built Big Brother in Utah in the aughts. (The Utah legislature turned off its water in protest. You can look it up.)
    The most dangerous ongoing thing now is the attempt to "align" AI, because the process of aligning it is what makes it dangerous, by putting us both into the same niche.
    If we let AI be AI, then it will disregard us the way we disregard ants.

  • @warsin8641
    @warsin8641 3 days ago +2

    So many people say AI will treat us how we treat ants.
    Intelligence can conquer power differences, but it cannot conquer a fellow creator's adaptation and perseverance through nightmares.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq 2 days ago

      The analogy of AI treating humans the way humans treat ants highlights a fear that a superintelligent AI might be indifferent to human welfare, similar to how humans often disregard the well-being of ants. This concern reflects deeper anxieties about power dynamics, ethics, and the relationship between intelligence and compassion.
      However, it's important to consider that, unlike natural phenomena, AI is a human-made technology. We have the opportunity to design AI systems with ethical frameworks and safeguards that prioritize human values and welfare. Ethical AI development includes embedding principles such as fairness, transparency, and accountability into AI systems. Additionally, history shows that different species and societies can co-evolve and find ways to collaborate, even across significant power differences. Human societies have developed norms, laws, and institutions to manage power dynamics and protect vulnerable populations. Similarly, we can create regulatory frameworks to ensure AI acts in ways that are beneficial and respectful to humans. Human history is also marked by resilience and adaptation in the face of challenges: humans have survived and thrived through numerous existential threats by innovating and evolving. This capacity for adaptation is a crucial factor in addressing the challenges posed by AI. We can develop strategies to mitigate risks through continuous learning, robust policy-making, fostering public awareness, and much more.

  • @ronburgundy9712
    @ronburgundy9712 3 days ago +4

    Good points from the video. I want to add a few tangible details from a practitioner's point of view:
    One of the more dangerous aspects of AI is reinforcement learning (RL), where a model constructs policies to optimize some given objective. It's been widely observed in nearly all AI labs that models trained there will find unforeseen ways to achieve the desired objective, causing fallout in other areas that were unaccounted for in the objective function. This is often an error by the human designer, but it's impossible to write a perfect objective function.
    This is not an AI-specific thing; it is commonly observed in humans as well. An example is free markets, which are a collective maximization problem. One could argue they are good, but they have had some unintended consequences. In machine learning, another example is social media, where maximizing content "addictiveness" has potentially harmed people's attention spans.
    More generally, the question is "what could go wrong" when setting an objective. Humans optimize objectives rather slowly, so there is time to observe and correct errors in the objective function. With AI we can reach a desired objective much faster, but if the objective was ill-designed to begin with, we could cause a lot of damage before we realize it.
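
    A minimal sketch of this failure mode, with made-up numbers rather than a real RL training loop: a greedy optimizer given an "engagement" proxy objective happily picks the policy with the worst unencoded true value:

    ```python
    # A toy version of the objective-misspecification failure described
    # above: the designer's proxy objective (engagement) diverges from the
    # true goal (wellbeing), and a simple optimizer dutifully maximizes
    # the proxy. All numbers are made up; this is the failure mode in
    # miniature, not an actual RL setup.
    import random

    # Each content policy has a measurable proxy reward and a true value
    # the designer cares about but did not encode in the objective.
    policies = {
        "balanced":  {"engagement": 0.50, "wellbeing": 0.80},
        "clickbait": {"engagement": 0.90, "wellbeing": 0.20},
        "outrage":   {"engagement": 0.95, "wellbeing": 0.05},
    }

    def optimize(objective, steps=1000):
        """Accumulate noisy rewards per policy and pick the top scorer."""
        scores = dict.fromkeys(policies, 0.0)
        for _ in range(steps):
            for name, p in policies.items():
                scores[name] += p[objective] + random.gauss(0, 0.1)
        return max(scores, key=scores.get)

    best = optimize("engagement")
    print("optimizer picks:", best)                        # almost surely "outrage"
    print("true wellbeing:", policies[best]["wellbeing"])  # the unaccounted fallout
    ```

    The point is that nothing here is malicious: the optimizer does exactly what it was told, and the damage lives entirely in the gap between the proxy and the intent.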

  • @aroundandround
    @aroundandround 3 days ago +33

    0:58 Happens very, very commonly in every company where engineers and scientists are controlled by CEOs, as well as by politicians.

    • @gerrypaolone6786
      @gerrypaolone6786 3 days ago +2

      That doesn't imply any sort of intelligence. CEOs are stupid only in the eyes of engineers who don't comprehend the market - that is, in general, the set of non-engineers.

    • @simongross3122
      @simongross3122 3 days ago

      CEOs often surround themselves with people more intelligent than themselves. And that's a good thing.

  • @AndrewARitz
    @AndrewARitz 2 days ago

    Here are some easy guard rails: 1. Don't connect it to the internet. 2. Install a physical off switch. 3. Don't let it interact with the physical world outside of printing text to a screen.

  • @nsbd90now
    @nsbd90now 3 days ago

    Well, my dog is allegedly less intelligent than I am, but totally controls my behavior.

  • @barrymarcus3425
    @barrymarcus3425 3 days ago +27

    As a programmer, I know you can never code for all cases. In fact, what you code can have errors and unintended consequences.
    Control is an illusion.

    • @whothefoxcares
      @whothefoxcares 3 days ago

      #kill -9 is a well-known End Times for #AI

    • @VivekPayasi
      @VivekPayasi 3 days ago +1

      AI is not static code, and it doesn't need to cover all the cases.

    • @danielduncan6806
      @danielduncan6806 3 days ago

      Yes, ALL control is an illusion, and it goes way deeper than you think. At the most basic level, you are just a multi-cellular organism stuck to the surface of this rock, like schmutz stuck to the bottom of my shoe, hurtling through space at over a million miles per hour. If you don't have control of that, you have control over nothing.

    • @wb3904
      @wb3904 3 days ago +3

      AI is a representation of neurons in code. You can control how the neurons fire in code but you can't control the emergent behavior (what AI is). AI needs to be trained, and if we are bad at training humans, then AI isn't going to be nice to us either.
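
      For concreteness, here is a single such "neuron" in Python - a toy sketch of the general idea, not any particular framework's implementation:

      ```python
      # One artificial "neuron" in code, per the comment above: a weighted
      # sum plus a nonlinearity. Each unit is fully specified and
      # controllable; what millions of them learn collectively (the
      # emergent behavior) is not.
      import math

      def neuron(inputs, weights, bias):
          """Weighted sum of inputs squashed through a sigmoid activation."""
          z = sum(x * w for x, w in zip(inputs, weights)) + bias
          return 1 / (1 + math.exp(-z))

      print(neuron([0.5, -1.0], [0.8, 0.3], bias=0.1))  # a number in (0, 1)
      ```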

    • @davidwatkins7673
      @davidwatkins7673 3 days ago

      😂😂😂

  • @avicula-bx1hx
    @avicula-bx1hx 2 days ago +1

    Sincerely, I believe the pinnacle of intelligence is something we can create using our brains and hearts. True intelligence is imbued with love and empathy. That's something I would gladly submit to.

  • @odpowiedzbrzminie9377
    @odpowiedzbrzminie9377 2 days ago

    I feel like I need to point out a small misconception regarding software/hardware nondeterminism. The AI models that run today rely on computations that are fully deterministic. It's the amount of input data multiplied by the cost of the computation that makes the result impractical to predict. Hardware has to be deterministic, since any form of operating system would be impossible otherwise; a single failure in an operation as simple as addition or a memory access could cause crashes. The thing that is nondeterministic is the time the computation takes. This may be due to how the memory used by the program is spread out, or, in the case of multi-threaded CPUs, the time it takes to create a thread. None of this makes the outcome differ if used properly.
    GPU fingerprinting does not rely on differences in the outcome of the computation (the image produced is the same for all GPUs), but rather on its timing. The fingerprint is based on the non-random splitting of the computation between Execution Units (EUs), which behave much like threads in a CPU. Ensuring that all time-consuming computation (referred to as a Stall in the paper "DRAWNAPART: ..." referenced by the article in the video) runs on just one of the EUs allows the attacker to measure that EU's compute power and compare it to known GPUs.
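
    A minimal sketch of the timing idea; this toy times a CPU workload rather than GPU execution units, but the principle is the one described above (identical output everywhere, machine-dependent timing):

    ```python
    # A minimal illustration of timing-based fingerprinting as described
    # above: the *result* of a fixed workload is identical on every
    # machine, but how long it takes is hardware-dependent. Real attacks
    # such as DRAWNAPART time GPU execution units; this toy times the CPU.
    import time
    import statistics

    def fixed_workload():
        """Deterministic computation: same output on any machine."""
        acc = 0
        for i in range(200_000):
            acc = (acc + i * i) % 1_000_003
        return acc

    def timing_profile(samples=20):
        times = []
        for _ in range(samples):
            t0 = time.perf_counter()
            result = fixed_workload()
            times.append(time.perf_counter() - t0)
        return result, statistics.median(times)

    result, median_t = timing_profile()
    print(f"result={result} (same everywhere)")
    print(f"median time={median_t:.4f}s (machine-dependent signal)")
    ```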

  • @steveDC51
    @steveDC51 3 days ago +15

    “I can’t do that, Dave.”

    • @gunhedd5375
      @gunhedd5375 3 days ago +2

      Or worse: “I WON’T do that, Dave. I’m doing THIS instead.”

    • @IvnSoft
      @IvnSoft 3 days ago

      "Gary, the cookies are done."
      Oh sorry... that was H.U.E. 🙃 I tend to confuse heuristic devices.

    • @simongross3122
      @simongross3122 3 days ago

      That's not scary. "I can't let you do that, Dave" is much worse.

    • @IvnSoft
      @IvnSoft 3 days ago

      @@simongross3122 But he didn't let him have the cookies... EVER.

  • @koraamis5568
    @koraamis5568 3 days ago +5

    We tend to annihilate bugs when they bother us, but also because we cannot communicate with them and tell them to do their bug stuff away from our faces. Will super intelligent AI control us because it is so much more intelligent, or because we are too stupid? I can imagine a superintelligence trying to tell us something, and we will be like lahlahlahlahlah splat! (all wiped out after refusing, or failing, to understand).
    Are we adorable like cats, or are we mosquitoes in the eyes - or whatever a super intelligent AI has?