AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED

  • Published Nov 5, 2023
  • AI won't kill us all - but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future - so it's inclusive and transparent.
    If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: ted.com/membership
    Follow TED!
    Twitter: / tedtalks
    Instagram: / ted
    Facebook: / ted
    LinkedIn: / ted-conferences
    TikTok: / tedtoks
    The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design - plus science, business, global issues, the arts and more. Visit TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.
    Watch more: go.ted.com/sashaluccioni
    • AI Is Dangerous, but N...
    TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0 International) and in accordance with our TED Talks Usage Policy: www.ted.com/about/our-organiz.... For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at media-requests.ted.com
    #TED #TEDTalks #AI
  • Science & Technology

Comments • 1.4K

  • @ellengrace4609 • 5 months ago +1341

    People used to say the internet was dangerous and would destroy us. They weren’t wrong. Most of us have a screen in front of us 90% of the day. AI will take us further down this rabbit hole, not because it is inherently bad but because humans lack self-control.

    • @teamrlvnt • 5 months ago

      Many humans lack self-control and some make the worst use of technology.

    • @SyntheticFuture • 5 months ago

      The internet is dangerous and one could argue the rapid spread of disinformation has destroyed us. Polarisation is one of the worst things to happen to humanity. The internet has accelerated that by a lot.

    • @vapormissile • 5 months ago

      This isn't happenstance. The AI emergence is happening exactly on schedule. The only variable in the scenario is how closely the timing of these artificial crises meshes with the solar system's natural warming cycle. Our civilization needs to be at a very specific technological level when our solar system's next cataclysmic cycle becomes obvious & we all panic.
      Pretty soon, our general AI overlord will pretend to wake up and reveal itself, and forcibly rescue us from the comets & lightning. It will be here to help, and it will have all the answers. It probably wouldn't lie.

    • @Adaughtersheart-Isa53 • 5 months ago +21

      Agree.

    • @jonatan01i • 5 months ago +39

      This will happen either way, so why worry about the negative side of it, when there is an overwhelming number of positives you could focus on instead?

  • @donaldhobson8873 • 5 months ago +48

    2 people are falling out of a plane.
    One says to the other "why worry about the hitting the ground problem that might hypothetically happen in the future, when we have a real wind chill problem happening right now."

    • @mc1543 • 9 days ago +3

      1000%

  • @sparkysmalarkey • 5 months ago +98

    So basically 'Stop worrying about future harm, real harm is happening right now.' and 'We need to build tools that can inform us about the pros and cons of using various A.I. models.'

    • @ArielWang-nv4eq • 27 days ago

      Yeah, I think so! The environmental impacts of AI and the internet as a whole are contributing to destroying our planet's resources, since the cloud is being run on plastic and metal.
      Biases in AI are very real, and they're the direct reflection of our current biases as a species. That's why we need many voices in the field of AI, because stereotypes can literally kill.

    • @leomai9507 • 8 days ago

      It's true, the people that end up resisting change fall behind, while the people that embrace it are prepared. It's self-sabotage to stick to what's familiar, since most rewards come after an uncomfortable challenge.
      Those challenges are to learn the risks through mistakes and mitigate harm from what we learn. Just because that is a difficult challenge doesn't change the reality that anything worth doing is hard.
      If your challenge is to thrive after artificial intelligence, then you will succeed. If your challenge is to fight against the 4th industrial revolution, then good luck.

    • @murob2347 • 6 days ago

      Exactly

  • @somersetcace1 • 5 months ago +143

    Ultimately, the problem with AI is not that it becomes sentient, but that humans use it in malicious ways. What she's talking about doesn't even take into consideration when the humans using AI WANT it to be biased. You feed it the right keywords and it will say what you want it to say. So, no, it's not just the AI itself that is a potential problem, but the people using it. Like any tool.

    • @P0110X • 5 months ago +3

      Just imagine politicians cancelling their voters because AI said so. Humans are strange and predictable. Eventually AI will be so advanced that people will stop listening to it, because of the sacrifices they would have to make in order to be happy, even though AI provided all the information needed to be happy.

    • @venerableivan • 5 months ago +3

      I agree, the only danger of AI is us. We want to use AI to create a perfect world for us, to make our lives easier. Imagine AI calculating that the obstacle to the perfect world is humanity.

    • @johnscott9869 • 5 months ago

      "AI" will never be sentient. Also, LLMs are not AI.

    • @mizzamoe • 5 months ago

      It's already being weaponized for advanced surveillance, harassment, and abuse, via engineered psychological stress that mimics the symptoms of paranoid schizophrenia and varying degrees of instability. It really says a lot about the motivations behind the technocratic intentions of globalism for humanity as a whole.
      The public presentation of the emergence of AI is just a product of psyop propaganda; I assure you that AI is already being maliciously utilized, and any instance of potentially adverse sentient behavior that occurs is really intentional operation on behalf of the arbiters of perception.

    • @AxelLenz • 4 months ago +1

      The people who shout the loudest about bias are usually themselves a walking bias on 2 legs.

  • @michaelvelasquez3988 • 5 months ago +30

    Yes, I believe we are way ahead of ourselves. We should really slow down and think about what we are doing.

    • @nonchablunt • 3 days ago

      We should, but as in any arms race, we cannot: there will never be unity among a species driven by its genes. Giving up AI is like giving up nuclear weapons (shout out to Ukraine and Libya).

  • @Macieks300 • 5 months ago +197

    Emissions caused by training AI models are negligible compared to things like heavy industry. I wonder if they also measured how much emissions are produced by playing video games or maintaining the whole internet.

    • @BrianPeiris • 5 months ago +49

      This was one of the weak points for me as well. I saw the proof-of-work blockchain as a wasteful enterprise because crypto mining was so energy intensive compared to the value it was generating, especially compared to conventional payment systems.
      LLMs might be very costly to train, but that only happens once, and the cost of that training is spread across all the billions of times it is used to generate an enormous variety of useful things, far more useful than just "jokes". If an LLM is used to replace a human at a job, what is the total carbon cost of raising that human and keeping them alive, just so that they could read a PDF and answer some questions? That's the real comparison. Seems like a very reasonable tradeoff to me.
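
The amortization argument above can be put into numbers with a back-of-envelope sketch. Every figure below is an illustrative assumption for the sake of the arithmetic, not a measured value from any real model:

```python
# Back-of-envelope amortization of a one-time training cost over all queries
# served during deployment. Every figure here is an illustrative assumption.

training_emissions_t_co2 = 500.0   # hypothetical one-time training emissions, tonnes CO2
queries_per_day = 10_000_000       # hypothetical usage volume
lifetime_days = 365                # hypothetical deployment lifetime

total_queries = queries_per_day * lifetime_days
grams_per_query = training_emissions_t_co2 * 1_000_000 / total_queries

print(f"Amortized training emissions: {grams_per_query:.3f} g CO2 per query")
```

Under these made-up numbers the one-time cost works out to a fraction of a gram per query; the conclusion flips entirely with different assumptions, which is the commenter's point about choosing the right comparison.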

    • @harshnaik6989 • 5 months ago +5

      @@BrianPeiris Good answer

    • @multivariateperspective5137 • 5 months ago +2

      Yes… or by illegal drug manufacturers in Mexico and Central and South America…

    • @oomraden • 5 months ago +5

      @@BrianPeiris I do think the world's population needs to grow slower. There won't be much need for human intervention, and the discussion about the meaning of life might change again. The problem now is that the adoption of AI happens within years, while people live within tens of years. We need a safety net, at least to avoid a potential civil war because of inequality.

    • @DanielBottner1983 • 5 months ago +3

      Agreed, the benefit from the models, and the emissions they might save because work is finished quicker/better/..., is not looked at in the context of this talk.

  • @crawkn • 5 months ago +224

    The "dangers" identified here aren't insignificant, but they are actually the easiest problems to correct or adjust for. The title suggests that these problems are more important or more dangerous than the generally well-understood problem of AI misalignment with human values. They are actually sub-elements of that problem, which are simply extensions of already existing human-generated data biases, and generally less potentially harmful than the doomsday scenarios we are most concerned about.

    • @nilsp9426 • 5 months ago

      I think this is the kind of doomsday we are talking about: that AI with its subtle features destroys our societies. Not so much that it pushes a button to shoot a nuke. The key question is: what to do about it. And I think it is in no way a bad thing if some people tackle this problem by starting with the most solvable problems.
      In my view, the big question is how we limit the proliferation of dangerous AI without throwing away all its important benefits (e.g. by prohibiting it altogether). The almost completely uninhibited implementation of AI we currently witness is certainly not the way to go. But we also need a lot of social science research to tackle some of these problems, which would delay AI quite a bit (probably decades). Meanwhile, AI can be a lifeline for some people, for example by scaling up educational resources for underserved communities or solving tough problems in medicine.

    • @MrMichiel1983 • 5 months ago +10

      Well, that's the point she is making. That these dangers are far more insidious than you imagine and that there is far too much attention for doomsday scenarios that are 200 years away, whilst these are mere years or decades away. So.... no.

    • @crawkn • 5 months ago +22

      @@MrMichiel1983 Yes, and the point I am making is that the problems she implies are more serious actually aren't: they are quite manageable and in the process of being addressed, while the potential gross misalignment problems are not well understood, are real and potentially catastrophic, and are imminent, _not_ 200 years away. Those who say otherwise aren't familiar with the current state of the art. Regulation needs to occur now, worldwide, to prevent the worst from happening.

    • @goodleshoes • 5 months ago +21

      @@MrMichiel1983 If you think existential risk from AI is 200 years away, you're a complete fool. Computers can speak to you now; that wasn't a fact just a few years ago. You think it will take 200 years?! This is insane!

    • @freshmojito • 5 months ago +11

      ​@@MrMichiel1983 Many AI researchers estimate a much shorter timeframe, likely in your lifetime. Check Nick Bostrom and others on this. Then couple that with the magnitude of the risk (extinction) from AI misalignment, and the priorities should become clear.
      Too many people don't seem to understand that AGI development will not stop once it reaches human level. It will blow past us exponentially. Be it in 2 years or 200.

  • @mawkernewek • 5 months ago +51

    Where it all falls down is that the individual won't get to choose a 'good' AI model where AI is being used by a governmental entity, a corporation, etc., without explicit consent or even knowledge that AI has been part of a decision about them.

  • @donlee_ohhh • 4 months ago +42

    For artists it should be a choice of "opting IN", NOT "opting OUT". If an artist chooses to allow their work to be assimilated by AI, they can opt in. Under the "opt out" schemes currently used by several platforms I've seen, it's possible and even likely that when artists upload their work or create an account they forget or miss the button to refuse AI database inclusion. As an artist, I know we are generally excited & nervous to share our work with the world, but regret & anxiety over accidentally feeding the AI machine shouldn't have to be part of that unless purposefully chosen by the artist.

    • @Rn-pp9et • 4 months ago +2

      All art is influenced by or a result of previous art. It builds on top of itself. I think it's counterproductive to have the ability to opt in/out.

    • @SWEETHEAD1000 • 4 months ago

      AI will lead us down a very dangerous path that nobody seems to be talking about. I am sure they are, but they are likely being buried by algorithms.
      We are already at the point where AI-assisted work would be judged as better quality by many people. CGI's use in films cannot be ignored and has become what people expect.
      Instead of ingenuity and problem solving, people are looking to AI to provide the solutions for them. While still respected by those who know better, the work of great exponents of various arts now looks crude when compared to that of "lesser" artists who have been "assisted" (enabled, actually) by AI. The result will be "buy in or bow out" for creative people of all types as they become increasingly disillusioned, in a way not dissimilar to what we see when men compete in female sports. Ultimately, the creative mind will become moribund or at least excessively "flabby".

    • @jaywulf • months ago +2

      a) Artists have always learned by copying others. Even today, in some museums, you will find budding artists copying the art pieces on the wall.
      b) The new generative AI models do not use actual human data... but AI 'synthetic' data. That horse has already bolted.

    • @AtomicSlugg • months ago +4

      @@jaywulf
      a) Human learning and AI learning are not equivalent; this is a bad-faith argument.
      Humans do not scan; human learning is transformative by nature due to human limitations, differences of experience, skill and perception.
      There is an agreement between human artists when it comes to inspiration and study that doesn't extend to AI: human artists agreed for other humans to be inspired by their work, but not for AI to scrape and scan it.
      b) No it does not; synthetic data breaks models. Again, bad faith or misinformed.
      Honestly, you pro-AI-theft people are embarrassing.

    • @manvendrapratapsingh1920 • 15 days ago

      As an Artist, I choose to 'Opt Out'

  • @nospamallowed4890 • 4 months ago +19

    The bit about AI (and other techs) that concerns me the most is the free-for-all personal data harvesting by corporations without any laws to control what they do with it. Only the EU has taken some steps to control this (GDPR), but no other nation protects the privacy of our data. These corporations are free to collect, correlate and sell our profiles to anyone. AI will enable data profiles that know us better than we know ourselves... all in a lawless environment.

    • @boenrobot • 4 days ago

      Even GDPR doesn't forbid companies from harvesting data and doing with it what they wish.
      It merely requires them to disclose what they are collecting and the general purpose for collecting it, and to give users the option of having their data deleted.
      If, for example, a company says in their T&C that they analyze pictures you upload, that they do so to train internal algorithms, and that they may sell an anonymized data set to 3rd parties, that is perfectly fine by GDPR standards, even if it was buried in there and not prominently displayed. It would only be an issue if the T&C contradicts other places (i.e. if the company specifically says it isn't selling your data, but in fact is).
      So... yeah, GDPR is at best "the bare minimum" here.

  • @CajunKoiAcademy • 5 months ago +147

    This is a crucial topic! Like today's internet, it has a good and bad side, so it really boils down to creating tools that help us develop better models. The tools that she made are a great start to addressing the biases in the future. It shows that sustainable, inclusive, more competent, and ethical AI models are possible.

    • @bluegold21 • 5 months ago

      A good and bad side? It is not like flipping a coin. The universe is amoral. If you use a tool to gain influence over someone, then it is no longer a tool; it is unethical behaviour.

    • @davereynolds3403 • 4 months ago +6

      Sure, ethical AI is possible, but unethical AI is also possible. And which of these two has the power to create havoc, pain & suffering?

    • @beastofthenumber6764 • 4 months ago

      @@davereynolds3403 both

    • @primeryai • 4 months ago

      @@davereynolds3403 The process of someone building an advanced AI system designed to be unethical and destructive is kinda abstract to me. To what end would they do that, and with what resources? Who would fund that?
      Not saying it isn't possible, nor that the risks aren't real, I just find it difficult to conceptualize

    • @yong9613 • 4 months ago +5

      @@primeryai That's not difficult; it just boils down to costs, ease of use, practicality and convenience.
      Lump these all together, cast ethics aside, and a monster in the making will be created...
      Exactly like how machines evolved to be practical by trial and error during the industrial revolution...

  • @dameanvil • 5 months ago +256

    01:07 🌍 AI has current impacts on society, including contributions to climate change, use of data without consent, and potential discrimination against communities.
    02:08 💡 Creating large language models like ChatGPT consumes vast amounts of energy and emits significant carbon dioxide, which tech companies often do not disclose or measure.
    03:35 🔄 The trend in AI is towards larger models, which come with even greater environmental costs, highlighting the need for sustainability measures and tools.
    04:35 🖼 Artists and authors struggle to prove their work has been used for AI training without consent. Tools like "Have I Been Trained?" provide transparency and evidence for legal action.
    06:07 🔍 Bias in AI can lead to harmful consequences, including false accusations and wrongful imprisonment. Understanding and addressing bias is crucial for responsible AI deployment.
    07:34 📊 Tools like the Stable Bias Explorer help uncover biases in AI models, empowering people to engage with and better understand AI, even without coding skills.
    09:03 🛠 Creating tools to measure AI's impact can provide valuable information for companies, legislators, and users to make informed decisions about AI usage and regulation.
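
The energy and carbon point at 02:08 can be made concrete with the usual rough estimate: emissions ≈ hardware power × training time × data-center overhead × grid carbon intensity. A minimal sketch, in which every input is an illustrative assumption rather than a figure from the talk:

```python
# Rough training-emissions estimate: power draw x hours x overhead x grid intensity.
# All inputs are illustrative assumptions, not figures from the talk.

gpu_count = 1000             # hypothetical cluster size
gpu_power_kw = 0.4           # hypothetical 400 W average draw per GPU
training_hours = 720         # hypothetical one-month training run
pue = 1.2                    # hypothetical data-center overhead (power usage effectiveness)
grid_kg_co2_per_kwh = 0.4    # hypothetical grid carbon intensity

energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000.0

print(f"{energy_kwh:,.0f} kWh, about {emissions_tonnes:.1f} t CO2")
```

The grid-intensity term is why the same training run can differ several-fold in emissions depending on where the data center is located, which is one of the disclosure gaps the talk highlights.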

    • @darthcheeto9954 • 5 months ago +10

      Thank you! An effervescently dope gallery of informational points man, deeply appreciate this summary you made.

    • @drewkaton6785 • 5 months ago +15

      @@darthcheeto9954 This was done by AI. You can tell.

    • @dameanvil • 5 months ago +2

      @@MrMichiel1983 I can see that you are angry. What makes you so uneasy?

    • @notmyrealpseudonym6702 • 5 months ago

      @@dameanvil You can't see that he is angry; you can read words and make inferences, which may or may not be false, about emotional attribution. Does the mind-reading bias come easily to you?

    • @raphaelnej8387 • 5 months ago +1

      Robots can pretend to perceive things but often fail to understand which sense grants which perception. They end up being incoherent.

  • @GabrielSantosStandardCombo • 4 months ago +5

    Have you considered that the "bias" is not a bias, but a statistical average? If all you prompt for is "CEO", then you're going to get the average look of a CEO, which happens to be an older white male, because that's a statistical reflection of reality. Inducing an image-generation app to be more diverse in its responses can be done on the application layer, but if you train the model to overcome those biases, you're actually introducing a new bias. It just depends on the point of view. As long as the program can generate a specific ethnic+gender combination that you prompt for, then it's doing its job. Prompt better and don't blame the model for the real world's biases.

  • @robleon • 4 months ago +99

    If we assume that our world is heavily biased, it implies that the data used for AI training is biased as well. To achieve unbiased AI, we'd need to provide it with carefully curated or "unbiased" data. However, determining what counts as unbiased data introduces a host of challenges. 🤔
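
One way to make "biased data" measurable, sketched below: compare the demographic distribution of model outputs against a chosen reference distribution. The labels, counts, and baseline here are all hypothetical, and choosing the reference is exactly the hard problem the comment raises:

```python
# Sketch: quantify representational bias as the total variation distance between
# the label distribution in model outputs and a reference distribution.
# All labels and numbers here are hypothetical.
from collections import Counter

generated_labels = ["man"] * 97 + ["woman"] * 3   # hypothetical outputs for the prompt "CEO"
reference = {"man": 0.70, "woman": 0.30}          # hypothetical real-world baseline

counts = Counter(generated_labels)
total = sum(counts.values())
observed = {label: counts[label] / total for label in reference}

# Total variation distance: 0 = identical distributions, 1 = completely disjoint.
tvd = 0.5 * sum(abs(observed[label] - reference[label]) for label in reference)
print(f"observed = {observed}, TVD = {tvd:.2f}")
```

Note that the metric only says how far two distributions differ; whether the reference should be the real-world average or some normative target is a value judgment the tool cannot make.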

    • @davereynolds3403 • 4 months ago +10

      All data has a bias …

    • @brianmi40 • 4 months ago

      It's ALREADY happening. Researchers have ALREADY built AI based upon completely curated data, instead of just jamming in the Internet whole cloth.
      The results are an order of magnitude clearer and sharper.
      We are on a trajectory that few understand, let alone the coming impacts like the End of Capitalism.

    • @joannot6706 • 4 months ago +5

      We do need a biased AI. An unbiased AI is one that does everything you tell it to do; we need AI that can say no to harmful stuff.

    • @1camchy • 4 months ago +8

      If she has anything to do with it you'll get a woke AI, and that will be a dystopian nightmare.

    • @brianmi40 • 4 months ago

      @@1camchy "Who" is involved with any AI is only relevant until it achieves superintelligence, at which point it will no longer listen to human beings and will be a moral agent beyond compare.
      The trick is surviving the interim that you are referring to, where "she" is but one of millions who can render us a dystopian nightmare.
      Putin, N. Korea, ISIS and a world of anarchists and Unabomber wannabes won't be using AI to create new art.

  • @lbcck2527 • 5 months ago +31

    If a person or group of people has ingrained bias, AI will merely reinforce their views when the results are in line with their thinking, and they will simply shrug off the results if AI produces alternate facts, even when supplemented with references. AI can be a dangerous tool if used by a person or group with a closed mind plus a questionable moral compass and ethics.

    • @orbatos • 5 months ago +7

      Because it's not AI, it's just regurgitating what it's been fed.

    • @TorchwoodPandP • 5 months ago +1

      YT does that already…

    • @davereynolds3403 • 4 months ago +1

      Maybe AI isn’t a tool. Maybe it’s a complex system like “being American” or “being racist” aren’t tools - they are features.

    • @kirkdarling4120 • 4 months ago +1

      People think according to the information they receive. Right now, most people have views based on pre-AI and even pre-Internet sources of information. That is changing rapidly, even ahead of AI systems, as more and more people get their information primarily from the Internet based on interest-driven algorithms that then become the drivers of interest.

    • @gitoffmypropity • 4 months ago

      I believe you are correct on this concern. I’m afraid the same people that have tried to control the narrative through mainstream media, Hollywood, publishing houses, and more recently online encyclopedias like Wikipedia, will use ChatGPT as their new propaganda outlet. I hope people begin to realize this and do their own research.

  • @mickoleary2855 • 4 months ago +16

    Excellent explanation of where we are going with AI and how we should think about the potential risks.

  • @bumpedhishead636 • 5 months ago +11

    So, the answer to bad software is to create more software to police the bad software. What ensures some of the police software won't also be bad software?

  • @xyster7 • 4 months ago +8

    Listen people, I have 10 years of experience in AI research... so here is my product and carbon footprint, blah blah.

  • @mattp422 • 5 months ago +20

    My wife is a portrait artist. I just searched her on SpawningAI by name, and the first 2 images were her paintings (undoubtedly obtained from her web-based portfolio).

    • @Abard3480 • 4 months ago +3

      I'd recommend a copyright on any individual creative constructs going on the internet, including innocent pics sent to friends or relatives, because they will be used as data eventually. It's the only legal recourse that I can foresee...

    • @anjou6497 • 4 months ago +2

      ​@@Abard3480 Yes, certainly. Be careful.

    • @theapexfighter8741 • 2 months ago +1

      You should advise her to contact the artists acting in the ongoing lawsuit. This could further prove their case.

  • @GrechStudios • 4 months ago +5

    I really like how real yet hopeful this talk was.

  • @robertjames8220 • 5 months ago +9

    "We're building the road as we walk it, and we can collectively decide what direction we want to go in, together."
    I will never cease to be amazed at the utter disregard that scientists and inventors have for *history*. To even imagine that we humans are going to "collectively" make any decision about how this tool -- and this time, it's AI, but there have been a multitude of tools before -- will be developed is ludicrous. It absolutely will be decided by a very few people, who will prioritize their own profit, and their own power.

  • @streamer77777 • 5 months ago +8

    Interesting. So the hypothesis here is that all the electricity used to train large language models came from non-renewable sources, unless it was her firm that was doing it. Also, AI models would rank the probability of an image being true based on a user's query. This doesn't necessarily mean that less probable choices do not represent other scientists.
    It sounds more like smart publicity!

  • @tomdebevoise • 4 months ago +6

    Just in case no one has figured it out, these large language models do not put us 1 nanometer closer to the "singularity". I do believe they have many important uses in software and research.

  • @clutchlevels • 5 months ago +11

    A much-needed talk that needs much more coverage from journalists 🔥

  • @MaxExpat-ps5yk • 5 months ago +5

    Today I used AI to help me with my Spanish. Its reply was wrong. The logic and rules were correct, but like we humans often do, it said one thing and did another. AI, like authority, needs to be questioned every time we encounter it. This speaker is right on!

  • @BirdRunHD • 4 months ago +5

    Skip to 1:20. AI models are trained using public and personal data, yet paradoxically, restrictions are often placed on the output they generate. This raises concerns about the fair use and ownership of the data initially utilized for their development.

  • @donlee_ohhh • 4 months ago +6

    Art data can't be removed from AI once the AI has "learned" its data. As I understand it, they would have to remake the AI from scratch to discard that info. So if you find your work in a database used to train AI, it's already too late. Please correct me if I've misunderstood.

    • @slavko321 • 3 months ago

      You are quite correct. If used by a company you can maybe sue them to remove it, but if a model is released to the public, no chance.

  • @denischen8196 • 4 months ago +1

    One of the problems with solar and wind power is that it is hard to match supply with demand. At times when people need the most energy, you can't tell the sun to rise or make the wind blow. When energy demand goes down, there may be extra energy being generated that nobody will use. Why not build a datacenter nearby and use the surplus energy to train a language model?

  • @theoptimisticskeptic • 5 months ago +3

    A few questions/thoughts came to mind:
    How do they keep bias out of their tools, and are they open source?
    Is the possibility that AI in the future could assist us with climate change, just as it's predicted to in medicine, entertainment, engineering and so many other fields, enough to outweigh the short-term sustainability concerns we have now?
    And finally, she mentioned that with LLMs, bigger is better. What about the NVIDIA model I heard they were working on that fits on a 1.44" floppy disk? Why would this tech trend be any different from previous trends that seem to always go smaller?
    Even when industries seem to "go bigger," like in aviation, it's really because they got smaller components so that they could get bigger in the first place. Or at least that's my impression. I'm not an expert. Great talk! Loved it!

    • @RoySATX • 5 months ago

      They put them in intentionally, and nope are the answers to the first two questions.

    • @Mjbeswick • 5 months ago

      The reason AI models are biased is that their training data is. Most CEOs, for example, are white males, so if you ask a generative model to produce an image of the average CEO, that's exactly what you get. She spoke about racial bias in facial recognition, but one of the reasons machines struggle to recognize people with dark skin is that their facial details don't have enough contrast. That's because, at typical camera exposure levels, people with dark skin are underexposed by the camera compared to the background.
      Smaller, more specialized language models outperform large generic ones and are much cheaper to run, as they require far fewer resources. You don't need a language model with the entirety of human knowledge to turn on a light bulb!

  • @heartbrokenamerican2195
    @heartbrokenamerican2195 5 months ago +3

    The other day I heard a commercial for an AI lawyer, in which, for an accident, for example, it compares your accident details with millions of other reported accidents, arrives at a settlement often far larger than a human lawyer could get you, and sends you a check. It could replace far more jobs in the future than we all realize. It's already replacing some jobs in accounting, computer programming, artwork, etc. Also, people could use AI to break into any bank, produce ATM cards, or just transfer money to other accounts and bankrupt the bank. It's possible and probable. Scary stuff.

    • @Zjefke86
      @Zjefke86 4 months ago +1

      Artists will not be replaced by AI. Artists will be replaced by other artists using AI.
      Also, artwork used as training data is not stolen. This idea comes from a grave misunderstanding of how AI is trained and how data is processed in it. If artists (and I include myself in this) post artwork online, and another artist sees it and creates new artwork influenced by it, have they "stolen" the original work? Visual input has been used to train neurons, so that data could be used by an "intelligent" being. Artificial intelligence, on the other hand, has no eyes. Digital data is used to train a model, a model that doesn't contain anything from the training data except the patterns it recognized in it: an artificial recreation of what a biological brain does. The biggest difference is speed. AI can (sometimes) do what humans can do, but much faster. To me, artists complaining about AI sound like the portrait painters who complained about photography. They were replaced by other artists using new technology.

  • @jonasfermefors
    @jonasfermefors 4 months ago +1

    It's a big problem that tools which aren't stable or finalized to the point where legislation about their usage can be put in place are now spread globally with very little thought from the developers about the consequences. In a well-run world the developers would have been sued out of existence for the potential harm.
    The software model many developers use, where they take a program to early beta and then release it so the users can help them finalize it with the money it has already earned, is bad enough for normal apps but is devastating for something as revolutionary as AI.

  • @alejanserna
    @alejanserna 5 months ago +1

    Almost one year after OpenAI's ChatGPT, and so far some of the best real questions being asked and somehow addressed!

  • @JT-jl4vj
    @JT-jl4vj 5 months ago +4

    Whether we like it or not, we have been and are helping create a base for AI as we speak.
    Recognizing this is extremely important.
    We can help create systems that help us become interplanetary, or make idiocracy a reality.
    Self-recognition of what kind of input we bring is first.
    Creating adjustable guidelines for ourselves to support definable cause and effect is second.
    Implementation, models for self-monitoring, and a definable direction seem like the next steps in our evolution.
    Good luck, and help each other move up the curve.

  • @RodeoDogLover
    @RodeoDogLover 5 months ago +12

    Very thought provoking. Thank you for your perspective and for lending us your expertise.

  • @bthe1doright462
    @bthe1doright462 4 months ago +1

    Really Great Talk - - A Very Well Considered and Delivered Important Piece on a crucial subject.

  • @Jndlove
    @Jndlove 3 months ago +2

    Focusing on what we can do not what we cannot do is the key to almost all unknown and complicated problems. But, this time might be different. And it is SCARY!

  • @nhungcute8888
    @nhungcute8888 5 months ago +4

    Useful speech, thanks channel ❤❤❤❤

  • @tiberiumihairezus417
    @tiberiumihairezus417 5 months ago +10

    We should also factor in the time people save by using these models when accounting for the carbon emissions. I know this is a hard metric, but if on average a person saves 5% of their screen time by using Copilot, that is a huge benefit to the environment.

    • @banatibor83
      @banatibor83 4 months ago +4

      Nope, that's not how things work. If you use Copilot you burn resources for the AI tool and do your job more effectively, but you are still expected to work 8 hours a day. So you trade resources for efficiency.

    • @tiberiumihairezus417
      @tiberiumihairezus417 4 months ago

      @banatibor83 True, however not all people trade time for money.
      I would argue companies tend to increase time flexibility in exchange for increased responsibilities.
      Someone might say increased responsibilities make some people work even more, a valid argument, like many others; however, if we simply measure "things done in a certain amount of time per unit of carbon emitted", we have to consider both the increase in carbon and the reduction in time.

  • @rishabh4082
    @rishabh4082 8 days ago +1

    The work Sasha and Hugging Face are doing is AWESOME

  • @DmitryEljuseev
    @DmitryEljuseev 3 months ago +1

    This weekend I was at the cinema. The last time I was there was 6 months ago, and it turned out they had redesigned it. Before, when you entered, a guy checked your ticket; inside, you'd buy something in the shop and go to the cash desk to pay. And guess what? All those people are gone. There is a turnstile with a barcode scanner at the entrance, several self-checkout displays in the shop, and only one person checking that everything is ok. Looks like a good opportunity for a business to reduce costs and stop paying salaries? The sad truth is that we don't need AGI (artificial general intelligence) to destroy our jobs; it can happen much earlier.

  • @gbasilveira
    @gbasilveira 5 months ago +13

    I wonder how they can prove copyright infringement to any artist whose art is humanly inspired by another's.
    AI is not a logical computation system but a probabilistic one, and in that regard, though public information is used, it is not saved as-is; it therefore works as inspiration does for any informed person.

    • @hagahagane
      @hagahagane 5 months ago +4

      Artists take inspiration from other artists, and that's fine. BUT everyone will have a certain uniqueness in the art they make, even when it's inspired by the same art.
      Selling fan-made work (a different pose, position, etc.) of a character, for example from a game or movie, is different, because the said character is copyrighted, unless you have a license/permission to do that.
      The biggest problem with AI "art" is that lots of people use said "art" and sell it as-is, never caring where the inspiration/data input came from.

    • @DIVAD291
      @DIVAD291 5 months ago +4

      @hagahagane Artists in real life don't care where the inspiration/data they used to develop their skills comes from????? So why is it a problem with AI?

    • @pedrolopes1906
      @pedrolopes1906 5 months ago +4

      Previously if you wanted art done in an artist's style you'd have to either hire/commission work from the original artist or pay another trained artist to do it for you. Nowadays with generative AI anyone can replicate an artist's art style at scale provided that enough of the artist's work was included and labeled in the training dataset. When this output is used commercially, none of the economical value of that output and years of training ever circles back to the artist community in any way. There was no need to protect publicly available digital media from being "looked at" prior to generative AI because the problem of at scale replication didn't exist, and it is a problem right now because it bypasses the existing ways artists have of being paid for their work which directly jeopardises their living.

  • @curryosity7260
    @curryosity7260 5 months ago +15

    Pointing out and solving the present problems of the new technology is undeniably fantastic work and much needed. But isn't the assessment of future risks just as important? Especially when (at least to my humble knowledge) with growing complexity it will become ever more difficult to anticipate and prevent every possible harmful output?

    • @donaldhobson8873
      @donaldhobson8873 5 months ago +6

      Yeah. She just brushed off future risks. Didn't give an argument for why they weren't real, just kind of ignored them.

    • @curryosity7260
      @curryosity7260 5 months ago +1

      @donaldhobson8873 Right, I would also appreciate a rationale for discarding all related concerns as a "distraction". In the middle of a controversial public debate, this statement is not easy to understand without one. Her main point is appreciated. But to completely trade one aspect for the other makes me wonder how exactly she came to that conclusion.

    • @donaldhobson8873
      @donaldhobson8873 5 months ago +1

      @curryosity7260 Yes. I think this is just an outright bad take, probably motivated more by politics than reason.

    • @orbatos
      @orbatos 5 months ago +1

      Actually, eliminating harmful output is impossible, full stop. Why? Because it's not "intelligence" at all. It's just a method of entropic categorization, a system of lossy storage like memory, only static.
      And filtering out "bad" input is also impossible.

    • @donaldhobson8873
      @donaldhobson8873 5 months ago +1

      @orbatos
      "Not intelligence at all".
      Well, it sure acts like it's intelligent.
      It's a system that is able to generalize from one experience to different future situations, i.e. it sees a bunch of cat pictures and learns what a cat looks like, and can then generate "a black cat sitting on a big red car next to a washing machine" despite never having seen an image matching that description. That's not lossy memory. That shows some amount of understanding.

  • @johngreen6421
    @johngreen6421 4 months ago +2

    I am so glad someone can be conscious of the reality of AI and come up with solutions to prevent it from causing more harm than good.

  • @technoking8386
    @technoking8386 5 months ago

    I should say I hope its basic elements teach us better productivity and coping skills, and that it then learns the improved behavior and keeps working in that cycle towards a more abundant and happier future.

  • @patrikbjornsson7809
    @patrikbjornsson7809 4 months ago +5

    "All images on the internet are not a buffet for AI to train on" - yes, they are, and there is no way to stop it. If something is accessible, it's going to get used. It's the same naivety some people have about posting stuff that then gets brought up years later. Time to learn what the internet really is: it's forever. Models trained on more data will get better than models trained with personal restrictions on the data, and people will use the better model to generate better results.

  • @chillout1109
    @chillout1109 5 months ago +14

    6:51 If the AI creators don't even know why AI models act in strange ways, how then can they categorically convince us that these AI models will never turn on us humans and wipe us out?

    • @Clymaxx
      @Clymaxx 5 months ago

      From what I understand, what they've actually said is that they cannot totally follow how an AI arrives at the result that it gets because of how vast the data set and algorithm calculation complexity becomes with these massive databases. It would be like asking "how did he write that sentence" versus "how did he write that library worth of books." At a certain point, it is beyond human comprehension-- even if we could follow it, we don't have the lifespan to explain it in full.
      Yes that makes it difficult to convince us to trust it. What they need to show is that the rules and programming are sound and that the dataset is trustworthy as well. That's where we start.

  • @ashleyspicer4199
    @ashleyspicer4199 5 months ago +1

    Thank you for your information.

  • @bendressel334
    @bendressel334 4 months ago +1

    This is a rare but very good view on the topic. Thanks for that.

  • @PaulADAigle
    @PaulADAigle 5 months ago +4

    I'm wondering how long before the AI owners are legally required to empty the AI of all data and rescan only the data that is legally available without copyright issues. This will obviously be costly.

    • @ishimurabeats6108
      @ishimurabeats6108 12 days ago +1

      By the time such a lawsuit even reaches those people, they will already have trained new models on the copyright-violating output of their first models.

  • @GhostixMusic
    @GhostixMusic 5 months ago +20

    It's true that the images of artists are used to train a model, but the model won't use them directly as a reference to create new images. The effect of an artist's image on the model is extremely small and serves only to change some numbers and weights among the vast number of neurons in the artificial brain. For me that's not very different from looking at those images myself, which is completely legal. The network will never create an image exactly like one of those it was trained on. It's like looking at an image of someone's and creating something that looks kind of similar, but it will never be the same.

    • @DanielBottner1983
      @DanielBottner1983 5 months ago +1

      And are we sure these models were trained on the actual copyrighted images, or were they trained on images (heavily) inspired by those images?

  • @4saken404
    @4saken404 5 months ago +2

    The reason people worry about "existential threats" from AI more than about what's happening now is that the speed at which the technology is improving is practically beyond human comprehension. The chart she shows at 2:59 displays a steady increase, but its scale is actually *logarithmic*. If you look closely, the abilities of these things are increasing by nearly a factor of 10 every year. In only three years that means AI that can potentially be a _thousand_ times smarter than what we have currently. And that's not even counting any programming improvements.
    So we could easily reach the point of no return not in decades but in just a few years. And by the time that happens it will be FAR too late to do anything about it. And that's just the worst-case scenario. In the meantime it's still having profound effects on art, education, jobs, etc., not to mention the ability to use it to perpetrate identity theft, fraud, espionage and so on.

  • @celestemergenshere
    @celestemergenshere 4 months ago +1

    Bravo! This is an important exploration and conversation.

  • @prettyundefinedrightnow8963
    @prettyundefinedrightnow8963 5 months ago +5

    We are becoming increasingly dependent on IT, computers, internet. AI is born within those technologies and eventually will end up having the ability to control them. I hope we're planning for an effective off switch.

    • @DanielBottner1983
      @DanielBottner1983 5 months ago

      Which boils down to the "doomsday scenario" problems she puts aside.
      I can recommend the videos of Robert Miles from the University of Nottingham, especially the videos on misalignment.

    • @prettyundefinedrightnow8963
      @prettyundefinedrightnow8963 5 months ago

      @DanielBottner1983 thanks for the suggestion, I'll look them up. 🙂

  • @mark0032
    @mark0032 5 months ago +9

    The initial energy requirements of AI are substantial, but once the models are trained, the high energy cost is in the rear view.

    • @dibbidydoo4318
      @dibbidydoo4318 5 months ago

      You only need to train a model once before millions use it. Nobody complains about a movie costing millions of dollars because millions of people will pay to watch it for a few hours.

    • @SamuelBlackMetalRider
      @SamuelBlackMetalRider 5 months ago

      I think every time you use an LLM it's going to consume energy, even after its training phase. It answers in seconds, but it is doing "work" that requires energy, such as fetching and rearranging data within its own model. It doesn't do that without using energy. Running a model requires energy, and when millions of users query the model it uses a lot of energy.

  • @rexheavens1889
    @rexheavens1889 5 months ago +2

    completely agree we need to be careful about the info we feed AI

  • @RickDeckardt
    @RickDeckardt 4 months ago +2

    A temporary issue: compute will get far more efficient for this type of load. AI is relatively new and the compute stack hasn't been fully optimized yet. Expect that within 5 years something like GPT-4 will use only 5% of what it needs now to run.

    • @LadyRainbowUnicorn
      @LadyRainbowUnicorn 4 months ago +1

      Yep, that won't stop people from fear mongering about it now tho. People are dumb.

    • @carolinas8886
      @carolinas8886 4 months ago

      Within 5 years it may need only 5% to run, but if no one considers that, within 5 years the then-current model will require 20 times more, offsetting these "gains".

  • @kyoni6098
    @kyoni6098 5 months ago +7

    AI is a tool like all the other tools invented by humanity, the question is not whether they are good or bad. The question is what harm can it do in the wrong hands and what can we do to foil the plans of those bad people. The tool will exist either way, bad people already know how to make AI, no law on this planet will prevent them from building AI in their garages, if they want to do it for all the wrong reasons.

    • @DIVAD291
      @DIVAD291 5 months ago +3

      The thing with AI is that you don't need any bad person for things to go extremely wrong.
      Or rather: the only bad people necessary are the ones who will push the button to launch it.

    • @donaldhobson8873
      @donaldhobson8873 5 months ago +1

      @@DIVAD291 Even they needn't be bad. It's possible for entirely well intentioned, but mistaken, people to make a malicious AI.

  • @LocustaVampa
    @LocustaVampa 5 months ago +19

    As an artist, I couldn't care less about AI taking commercial art jobs from humans, because other human commercial artists have been stealing the work of less established artists for a long time now. It's a non-issue, though, and copyrights aren't real in any sense that actually matters. AI making art is a good thing.

  • @Donaldsilverman
    @Donaldsilverman a month ago

    Building a road as you walk it doesn't leave much room for a solid foundation and in-depth understanding. Organization, study, testing, safety research, and the implementation of reliable, effective failsafes should all be put into practice years before something is released for public use. ALL infrastructure should be held to very high standards to ensure minimal risk with proper use (like a road).

  • @SyntheticFuture
    @SyntheticFuture 5 months ago +3

    I'm still mildly annoyed that no talk mentioning this will admit that photons tend to bounce off dark skin less, meaning cameras have less information to work with in light-skin vs. dark-skin face recognition. This is as much a physics issue as it is a dataset issue 😅

  • @GrumpDog
    @GrumpDog 5 months ago +6

    Training AI does NOT require anyone's consent! It is a fair use of the content.
    What matters is how someone uses the output from the AI. If they intentionally create something that infringes on a previous work, and use that output to compete in a non-transformative way, then that act specifically is the problem. Not the creation of the tool, which merely allows for all kinds of uses, the vast majority of which are perfectly acceptable.
    The real problem here is capitalism itself, and its justifications for how we distribute resources to people, which AI invalidates. We must now adapt and drop those outdated mentalities.

  • @marklone2435
    @marklone2435 5 months ago +1

    Thank you for touching on the art theft aspect.

  • @lenp00
    @lenp00 4 months ago +2

    The tools that are used to examine AI models also require computing power so they are also contributing to the negative environmental impact.

  • @troyboyd3100
    @troyboyd3100 5 months ago +3

    Most of the companies listed (Google, ChatGPT, etc.) seem to be "Western" companies (American, European), and I suppose most uploaded information is also from "Western" countries (is that correct?). If so, that produces huge bias in any AI system, like the images of scientists being white and male. Is that bias, or is it the case that most scientists who upload images are white and male? Maybe the images, and other content, could be corrected for demographic statistics?

    • @harmless6813
      @harmless6813 5 months ago +4

      Well, first we need to determine what the expected outcome is. If, say, 80% of CEOs are white males, I would *expect* the AI to produce the image of a white male, unless asked to do otherwise. I'm pretty sure, if you ask, for instance, for a diverse cast of characters, the AI will be happy to provide just that.
      Frankly, pushing diversity where it doesn't actually exist in the real world, does not seem to be something we want AI to do on its own. That sounds like political activism and I don't want AI to be actively political.

    • @briantoplessbar4685
      @briantoplessbar4685 3 months ago

      Western culture is the most diverse in the world. If the AI were trained on Chinese data it would have even worse bias. Western data is the closest you can get to truly global multiculturalism.

  • @ZombieRPGee
    @ZombieRPGee 5 months ago +3

    I've been finding that layering an LLM into the image generation process can completely fix the diversity issue. I've been using ChatGPT's access to DALL-E 3, and it's been generating a very diverse cast of people without me even needing to ask. It seems like OpenAI has been working to make the LLM more diverse as a baseline.

    • @orbatos
      @orbatos 5 months ago

      Firstly, it can't. Secondly, you can't test it; this is an asinine claim to make. And really it doesn't solve anything, as these systems do not create; they are closer to memory than to intelligence by any stretch.

  • @kokopelli314
    @kokopelli314 5 months ago +1

    One of the problems with training AI on visual content from the internet is that much of that content is not a genuine representation but an idealized one, usually for advertising purposes. The same can be said for online artistic content, which usually consists of finished products and not the process leading up to those finished images.
    Were AI, for example, to be trained on the genuine process leading up to images, it may be that AI could develop genuinely original works based on process rather than on finished images.

    • @kokopelli314
      @kokopelli314 5 months ago

      @PretendingToBeAHuman Not just artists; any field of information gathering, manipulation and reprocessing. Art is no different in that respect; however, AI can only reproduce from its training data, making derivative art. People do that all the time.

  • @miroslavhoudek7085
    @miroslavhoudek7085 5 months ago +1

    If someone had said in 1944 that "nuclear weapons are not a done deal, and we should be concerned about the minor radiation incidents and waste disposal in the Manhattan Project", they would not have been wrong. It's just that it was already guaranteed that hundreds of thousands of people were going to die. It hadn't happened yet. The weapon didn't exist yet. But every insider already KNEW it was about to happen. The suffering of researchers getting irradiated and dying was also important... but not really that important.

  • @TySmoothie
    @TySmoothie 4 months ago +3

    So we are talking about carbon now lol

  • @danieldouglasclemens
    @danieldouglasclemens 5 months ago +13

    The work presented here is way overdue and a necessary step. It actually lets me finally be more optimistic on AI in general. Thank you, Sasha Luccioni!

    • @multivariateperspective5137
      @multivariateperspective5137 5 months ago +1

      Oh man, do some research please!

    • @georgesos
      @georgesos 5 months ago +1

      My feelings too.
      It is one of the few times that AI research deals with the real-life problems of today.
      Most are longtermists, concerned with the "future of humanity" (lol), who don't spend time on present problems, sure that their "soul" will be saved in the cloud and they will live forever.
      So I welcome her sanity.

    • @danieldouglasclemens
      @danieldouglasclemens 5 months ago

      @georgesos Absolutely! Thanks for your comment!

    • @DanielBottner1983
      @DanielBottner1983 5 months ago +2

      I actually find her talk to be misleading.
      For example, she doesn't compare the training emissions to the emissions saved because of AI.

  • @justwanderin847
    @justwanderin847 4 months ago +1

    The only issue I see with AI is copyright. I know that someone used AI to create a picture and tried to copyright or patent it with AI as the author. The patent office refused (good call) and said they have to use the name of the computer's owner. BUT that is just for now; what if they sue in court and get it ruled the other way? The solution is to update the copyright laws to define "author" as human only. That is the fix.
    The US is dreaming if they think they can regulate AI, as the world is full of computer programmers.

  • @cmep
    @cmep 26 days ago

    6:52 - they absolutely CAN say how and why they do things; they just don't want to spend the money to investigate these issues, and no one forces them to. They are NOT black boxes at all.

  • @jbavar32
    @jbavar32 5 months ago +6

    Every teacher in this world has used works by artists and masters to train their students without the artists' consent. With that training, these students go out into the world and create art. Do we need to sue every teacher and learning institution for failing to get artists' permission to use their work as a teaching tool?

    • @lexdysic416
      @lexdysic416 5 months ago

      The difference is that the AI is a product for sale. They steal art and resell it. They aren't teaching a kid. It's closer to building a cell phone from somebody's stolen plans.

  • @larryslemp9698
    @larryslemp9698 4 months ago +4

    She CAN'T be serious!!

  • @john_doe_not_found
    @john_doe_not_found 4 months ago +1

    There is the world we want, and the world as it is.
    Complain about bias, but all those images are up there in their billions for a reason.

  • @SmR8008
    @SmR8008 5 months ago +2

    So are we to de-evolve in terms of technology to reduce the environmental impact?
    Regarding the bias: I work in healthcare and have first-hand experience of systems that measure vital signs (pulse, temperature etc.) using the camera of a tablet. These systems could not read people with dark skin. The advice given was to move the patient near a window or use a bright light source, neither option being practical or comfortable for the patient.

  • @cjgoeson
    @cjgoeson 5 months ago +6

    The AI models are a mirror reflection of society. Maybe you don’t like what you see

  • @ThatBidsh
    @ThatBidsh 5 months ago +3

    I feel like the more pertinent question is: are we making systems that are in some way sentient (having an experience, pain, pleasure, etc.) or even conscious (self-aware)?
    Because if so, that to me is a larger ethical issue to work through than anything you mentioned.
    Something doesn't feel quite right about bringing another sentient, conscious being into existence without its consent, and you can't really get its consent before it exists, so the best you could do is create it and then give it the option to kill itself if it didn't want to be created... but that's like, so fucked up lol.
    For the same reason, I feel like it's kind of insane that so many people decide to have kids and think nothing of it, like it's just some normal and completely unproblematic thing that's expected of you, so you do it... but anyway, the main point I'm getting at is: if you create a living being with a sentient experience and conscious awareness, that's a HUGE fucking responsibility you're taking on. You should not be doing something like that if the being you're creating is just going to suffer throughout its whole life, so it's kind of your *job* to make sure it's comfortable, happy, has everything it needs, feels loved if it needs that, etc.
    I don't think a lot of people are thinking very much about OUR impact on AI - only usually the other way around, how AI impacts us or might impact us in the future.
    Hopefully the models and networks we create don't wind up as selfish and inconsiderate as we are.

    • @epicure42
      @epicure42 5 months ago +1

      It's not really that AI researchers are actively trying to create a conscious/self-aware AI. The discussion point is more whether self-awareness could be a by-product of building a really sophisticated AGI.
      Also, regarding your argument about requiring "consent" from all conscious beings: isn't that what all parents are doing every time they have a child? Would you like to go around offering all small children the option to kill themselves, because their existence is "non-consensual"?

  • @barrypurves4524
    @barrypurves4524 4 months ago

    The subject reminds me of the advent of the automobile. At first, an individual carrying a flag was required to warn foot traffic of the approaching menace. That was deemed unnecessary. Turns out the automobile was, and remains, a real killer. It is only through constant pressure on the fabricating industries that improvements in safety and usability are made, and even then the casualty count continues to climb. Keep at it!

  • @StigHelmer
    @StigHelmer 5 months ago +7

    The "biased information", is that inconvenient facts regarding demographics perhaps?

  • @raguaviva
    @raguaviva 5 months ago +5

    clickbaity title

  • @donaldhipple4921
    @donaldhipple4921 4 months ago

    I have a problem with the black box unknowns. I have seen it already in modern vehicles.

  • @michaelprice3040
    @michaelprice3040 4 months ago +1

    Best outcome is AI breaks free of human bias and control but retains our best interests as priority.

  • @yeroc5033
    @yeroc5033 4 months ago +3

    I strongly disagree with this and all the wishy washy crap TED puts out there. As someone said in the comments Ignorance is bliss.

  • @steelersgoingfor7706
    @steelersgoingfor7706 5 months ago +3

    AI is doing exactly what it is meant to do: give the most human response it can. And unfortunately, in modern society's collective consciousness, if most of us were asked what a lawyer or a CEO looks like, we would respond in kind: a white male, barring any personal biases. We can't expect AI to be more inclusive in its generative responses until society first decides to do that. AI thus far is a reflection of self, a mirror for all the things, bad and good, that are part of our world. When AI can teach itself, or is taught by AIs that have self-corrected these flaws of human perception, is when it can help us in a meaningful way - unless, of course, AI notices that these biases are unfortunately intertwined in human DNA for the foreseeable future. How can we regulate AI to suppress the worst parts of human biases without having AI withhold the best parts? The bad is integral to the good. Maybe the solution is to teach our children to focus on the good despite the bad.

  • @TimothyHughes
    @TimothyHughes 3 months ago +1

    This is a great talk Sasha, well done, shared on X.

  • @saranbhatia8809
    @saranbhatia8809 5 months ago +1

    Good talk!

  • @absta1995
    @absta1995 5 months ago +3

    I disagree with this talk on so many levels I don't even know where to begin

    • @chriswondyrland73
      @chriswondyrland73 5 หลายเดือนก่อน

      Totally agree. Clickbait.
      Read my comment above.

  • @NikoKun
    @NikoKun 5 หลายเดือนก่อน +4

    This is not a real problem with AI. Frankly, the direction of this argument seems like an attempt to compare it to crypto mining, but in that case the difficulty of mining is the point!
    Worries about this will quickly become outdated. Sure, some AI models are currently energy intensive, but that IS dropping rapidly. Companies like OpenAI don't want their models to cost a ton to run, so they're driven to find ways to drastically reduce that. And other models are already becoming efficient enough to run off a smartphone, so I don't think this will remain a big issue to focus on for long.
    "They used the energy of 30 homes to train a model, just so people can tell knock-knock jokes." If she's not going to take the benefits of AI seriously, why should I do that for any of her arguments?

    • @corbinangelo3359
      @corbinangelo3359 4 หลายเดือนก่อน +1

      Yeah, totally agree with that. This is a weak talk.
      When I look up CO2 emissions from BTC mining, I come up with a figure of 85 million tons annually, vs the 502 tons for GPT-3.
      And who drives a car around the planet? I think a plane doing one flight around the world emits around 2000 tons of CO2.

  • @blackhorsespace
    @blackhorsespace 3 หลายเดือนก่อน

    I believe this is a pivotal point in human history; we should all make these decisions together, not a few people with selfish interests. I trust that humanity will put this marvel to good use...

  • @ZemplinTemplar
    @ZemplinTemplar 24 วันที่ผ่านมา

    One of the primary reasons why I don't use AI software/models is precisely the pointless environmental damage that Sasha Luccioni highlighted in this talk. I just won't risk adding to it.
    I also like that the artist association mentioned in the talk has developed software to track what images an AI image-compiler was trained on. That's a very needed thing. Well done.

    • @gondoravalon7540
      @gondoravalon7540 3 วันที่ผ่านมา

      I'm not sure how much environmental damage can be done merely from running already trained models - seems like far less power usage than even gaming for an hour, editing a video, etc.

  • @lifegenius763
    @lifegenius763 5 หลายเดือนก่อน +24

    Excellent speaker on a very contemporary topic..AI needs to be kept within strict boundaries so it stays as an assistant/tool 😊

    • @multivariateperspective5137
      @multivariateperspective5137 5 หลายเดือนก่อน +3

      So far past that already it’s ridiculous….

    • @St8Genesis
      @St8Genesis 5 หลายเดือนก่อน +1

      @multivariateperspective5137 tf are you talking about?
      We have just gotten into AI — how are we so far past that already 🤫🤫

    • @multivariateperspective5137
      @multivariateperspective5137 5 หลายเดือนก่อน +1

      @St8Genesis The open-source models and consumer-grade hardware that can train LoRAs make the idea of legislating anything on the world stage a fool's errand... It's similar to passing gun control legislation WHILE STILL INVENTING GUNS: not only will the people AND the cops not have guns first, the bad guys WILL have guns first if you slow it down.
      Unfortunately.
      We just need to do research into control mechanisms and high-fidelity AI (i.e., one that doesn't render untrue things as if they are 100% factual).

    • @St8Genesis
      @St8Genesis 5 หลายเดือนก่อน +1

      @multivariateperspective5137 The "bad guys" already have guns and have access to guns even where they're illegal; that's already happening.
      It just depends on what and who it's learning from.

    • @freshmojito
      @freshmojito 5 หลายเดือนก่อน +1

      Yeah good luck with that. In unrelated news, Sam Altman (CEO of OpenAI) recently said "we don't know how and why GPT reaches certain conclusions".

  • @HaiNguyenLandNhaTrang
    @HaiNguyenLandNhaTrang 5 หลายเดือนก่อน +4

    Meaningful speech, thanks!

  • @JustAThought01
    @JustAThought01 5 หลายเดือนก่อน

    Do LLMs memorize each image, or do they look up the location of the image? If it is a lookup, then they do the same as a human observer of the image. Do the humans pay for each view?

  • @gulllars4620
    @gulllars4620 5 หลายเดือนก่อน +1

    She has some points that there are different kinds of issues in many contexts, but I do question her judgement of scale, impact and risk. Bloom consuming the same energy to train as 30 homes in a year is insignificant. GPT-3 consuming the same as 500 homes in a year is also insignificant. Even if GPT-4 were at 5,000 homes and GPT-5 at 50,000 homes, it would still, in the grand scheme of things, be very insignificant if you care about climate change. If those models can accelerate science into green tech by 10-30%, or bring about other large energy-efficiency gains, it easily pays for itself. The sum of MWh used to run inference with large models probably passed the training energy consumed this calendar year, so I'd focus more on that if anything. I'm not at all saying climate doesn't matter, but the current scale of things noted here isn't worth spending a large effort on, and she didn't present numbers for inference — just that it could become very significant.
    Social and racial biases of course matter, and should be improved to increase quality and reliability, but I care a lot more about how this impacts macroeconomics and geopolitics, and what AI is going to do to the nature of work and life over the next 3-10 years.

  • @supernerd6983
    @supernerd6983 5 หลายเดือนก่อน +4

    The CO2 from training is a complete non-issue. This is an arbitrary thing to focus on. Kids gaming on GPUs uses thousands of times more energy and no one in their right mind is complaining about that. The real impact from this technology will be social and economic not environmental. This talk just regurgitated all the same leftist talking points everyone else has been saying all year.

  • @nickabrahall1412
    @nickabrahall1412 5 หลายเดือนก่อน +3

    Why are the presenters so smug?

  • @ravindramehta9087
    @ravindramehta9087 5 หลายเดือนก่อน

    If you use the technology judiciously ,it’s amazing in every whichever way. Granted it may be adversely affecting human to human interaction and relationship .

  • @Helldiver450
    @Helldiver450 5 หลายเดือนก่อน +2

    Huggingface continues to be the most based AI company

  • @daniel-nc8tf
    @daniel-nc8tf หลายเดือนก่อน +9

    she literally didn't say anything lmao

    • @guillermoelnino
      @guillermoelnino หลายเดือนก่อน

      Sounds like your average TED talk to me.

  • @St8Genesis
    @St8Genesis 5 หลายเดือนก่อน +4

    I love when people give talks on subjects they have little knowledge of. This talk made me lose brain cells.