3 types of bias in AI | Machine learning

  • Published on Sep 28, 2024

Comments • 1.2K

  • @ShaXCwalk
    @ShaXCwalk 7 years ago +30

    But isn't reporting "inappropriate" stuff biased..? It depends on the person what is appropriate and what isn't

  • @suman_b
    @suman_b 7 years ago +17

    I am blown away by the excellent use of graphics in these videos.
    Keep it up!

  • @vladnovetschi
    @vladnovetschi 7 years ago +47

    When you realize that this is about Google censorship.

    • @GrantGryczan
      @GrantGryczan 5 years ago

      What does censorship have to do with this video?

  • @iTzTR00PER
    @iTzTR00PER 5 years ago +19

    This is the scariest thing in the entire world... What gives Google the authority to decide what is and isn't negative, biased or hate speech?
    Alphabet a.k.a. Big Brother/Skynet.

    • @העבד
      @העבד 5 years ago

      You do, by using their services.

    • @TheWunWhoIzEpic
      @TheWunWhoIzEpic 4 years ago

      Luckily this is still a free country, and we still have the vote of the dollar and of where we put our resources and funding. I deleted the FB account I'd had for ten years because I still had that freedom. WE give them that authority, and we can also take it away.

  • @artman40
    @artman40 7 years ago +20

    "Report inappropriate content" is also biased. It only counts the people who think content is offensive, but not those who think the content is not offensive.

  • @ehtishamamin5601
    @ehtishamamin5601 5 years ago +37

    Google: "Technology should be unbiased"
    Also Google: blocks YouTubers for sharing their point of view

  • @Trung4496
    @Trung4496 7 years ago +37

    Finding something offensive is a bias in itself, so this is basically imposing human bias on technology.

  • @Kate-vd3hl
    @Kate-vd3hl 7 years ago +35

    Please please please don't let these machines learn censorship. That's dangerous.

    • @Fuckutube547465
      @Fuckutube547465 7 years ago +1

      Hate to be the one to break it to you, but that's how YouTube already works...

    • @Kate-vd3hl
      @Kate-vd3hl 7 years ago

      ibealec unfortunately.

  • @DrAg0n3250
    @DrAg0n3250 7 years ago +12

    But who decides what is offensive or not? We are all different.

  • @reverendcaptain
    @reverendcaptain 6 years ago +17

    It appears that Google is deciding what is not biased. How can the people at Google be sure that they are not introducing their own bias into this process?

    • @GrantGryczan
      @GrantGryczan 6 years ago

      No, the users are deciding. Did you watch the video? The Google employees have no particular say.

    • @reverendcaptain
      @reverendcaptain 6 years ago +1

      The very act of setting up the system introduces biases no matter how much they try or say they try to avoid it. By trying to eliminate biases, they introduce biases. Who are they to decide what is a good or bad bias?

    • @GrantGryczan
      @GrantGryczan 6 years ago

      @@reverendcaptain They don't decide. They let the machine do it, and then they let the users moderate. Again, this is demonstrated in the video. The employees play no part in the process except to maintain the machine, and they do not give the machine input. By not giving the machine input, they are not introducing any bias. The bias they refer to in the video is created by the users, and the solution to resolve this bias also comes from the users. The employees in particular neither introduce nor try to eliminate bias.

    • @woowooNeedsFaith
      @woowooNeedsFaith 5 years ago

      @Grant Gryczan
      Based on this video, Google provides tools for users to remove opinions or information they don't like. This means that certain groups of people can manipulate what kind of information about them or their interests is available to the public. That does not bode well for any kind of minority view. It leads to exactly the kind of bias they wanted to eliminate in the video. (Caricatured example: take the male-scientist bias. If the majority of viewers did not like female scientists, for whatever reason, they could remove female scientists from the search results entirely.)

    • @woowooNeedsFaith
      @woowooNeedsFaith 5 years ago

      @Grant Gryczan
      Did you delete your reply? I can't see it. Did you realise that my female-scientist example was in parentheses because it is a ridiculous example, and that you need to replace it with some other interest group of your choice? I guess you can imagine interest groups where the majority does not want a minority of the same or an opposing group to gain visibility.

  • @ZoomahZoomah
    @ZoomahZoomah 6 years ago +11

    Introducing a different bias into machine learning by having humans attempt to remove bias from machine learning.

    • @jamesclerkmaxwell2401
      @jamesclerkmaxwell2401 6 years ago +1

      The religion of social justice has compelled its zealots to change the honest AI into a compulsive liar.

    • @GrantGryczan
      @GrantGryczan 5 years ago

      What bias is that?

    • @ZoomahZoomah
      @ZoomahZoomah 5 years ago

      @@GrantGryczan The bias of the individual or individuals making the changes to the results.
      Let's say I take a poll in my town:
      "What's your favourite music genre?"
      If the result of that poll is that most people in my town are fans of acid jazz, then that is the result of the poll.
      If I think acid jazz is terrible and more people should discover the wonderful music of Justin Bieber, then I could change the results to give Justin Bieber more exposure and (hopefully) get more people listening to good music instead of that awful acid jazz that's so popular.
      What I have just done is introduce my own bias into the results.
      This is exactly what Google is doing while claiming they are "un-biasing" the results.

    • @ZoomahZoomah
      @ZoomahZoomah 5 years ago

      @@GrantGryczan The bias of the individual over the raw data.

    • @GrantGryczan
      @GrantGryczan 5 years ago

      @@ZoomahZoomah Refer to the other comment chain: Google isn't doing a thing, and the bias the video refers to has nothing to do with sample ratio accuracy.

  • @tanmaysrivatsa8550
    @tanmaysrivatsa8550 5 years ago +10

    You used simple language, which made it easy for me to understand. Thanks 👍

  • @gwq
    @gwq 7 years ago +4

    When I say "I'm sad" to Google Assistant, it replies "I wish I had arms so I could give you a hug" 😂😂😂

  • @billylardner
    @billylardner 7 years ago +15

    Something that should be noted is that just because most physicists in the past were men doesn't mean there's a bias. It's just a fact. The same goes for women and great teachers, shaping people's lives.

    • @ant3687
      @ant3687 7 years ago +3

      William Lardner The issue appears when the machine is used with this bias, like categorising photos, or if you ask it to show you pictures of physicists. It might not even recognise a female physicist, which is a mistake in the program. A bias might have a good reason to be there, but that doesn't mean it should still have influence.

    • @billylardner
      @billylardner 7 years ago +1

      Antonia Siu I know what you mean, but not all physicists look the same regardless, do they? I agree though, we should avoid bias.

  • @pontusvarghav4566
    @pontusvarghav4566 5 years ago +13

    Offensive facts exist, deal with it, do not ignore them.

  • @CISMD
    @CISMD 7 years ago +9

    Excellent. Now people may understand the recent offensive-chatbot escapades.

  • @thebigsmooth99
    @thebigsmooth99 6 years ago +21

    This is frighteningly Orwellian coming from one of the world's most powerful companies.

  • @ValerianTexeira
    @ValerianTexeira 7 years ago +16

    Bias many times begins with those who think others are biased! It does not occur to them that their "politically correct" ideology is itself biased, which makes them see other views as biased if they do not conform with theirs. And the political blame game begins.

  • @WanKhairilRezaKamaludin
    @WanKhairilRezaKamaludin 7 years ago +14

    Is it really bias?

    • @iLikeTheUDK
      @iLikeTheUDK 7 years ago +3

      Wan Khairil Reza Kamaludin What's "it"?

  • @blan_k4691
    @blan_k4691 5 years ago +18

    To try to modify statistics in order to generalize them into something false is, in fact, biased. Commanding an A.I. system to collect available data correlated with a user's keywords, resulting in correct, specific and factual data, is not biased. Statistics are averaged for practicality, based on variable questions such as "What does a shoe look like?". To call it a bias when fewer images of women physicists appear in image search results is false when the A.I. system is only relaying the factual statistics: they are in fact less common, so they will be less likely to show up due to practicality, not bias. To alter this information would make you biased. Your reasoning in multiple regards, including the shoe example, is false and hypocritical.

    • @facusoi
      @facusoi 5 years ago +1

      I feel like your comment is gonna get deleted

    • @blan_k4691
      @blan_k4691 5 years ago +3

      @@facusoi It doesn't change the fact that this video's reasoning is incorrect. I doubt that it will get deleted.

    • @GrantGryczan
      @GrantGryczan 5 years ago +3

      This is not about result ratios. It's about recognition. They never said or implied that image search results for "physicists" should return equal numbers of men and women. They just said the AI should be able to recognize both a male and a female physicist. To be able to recognize the latter, you need to unbias the data so there are fair samples of both.
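
[Editor's note] The "fair samples" idea in this reply can be sketched as naive random oversampling: duplicating minority-class examples until every class is equally frequent. This is an illustrative sketch only; the function name `rebalance` and the toy data are invented, and real pipelines typically use a library such as imbalanced-learn's `RandomOverSampler`.

```python
import random
from collections import Counter

def rebalance(samples, label_of, seed=0):
    """Naive random oversampling: duplicate minority-class samples
    until every class appears as often as the most common one."""
    rng = random.Random(seed)
    by_class = {}
    for s in samples:
        by_class.setdefault(label_of(s), []).append(s)
    target = max(len(items) for items in by_class.values())
    balanced = []
    for items in by_class.values():
        balanced.extend(items)
        # top the class up with random duplicates of its own members
        balanced.extend(rng.choice(items) for _ in range(target - len(items)))
    return balanced

# A toy photo set with 90 "male physicist" and 10 "female physicist" labels
photos = [("photo", "m")] * 90 + [("photo", "f")] * 10
balanced = rebalance(photos, label_of=lambda s: s[1])
print(Counter(label for _, label in balanced))  # both classes now appear 90 times
```

Oversampling is only one option; collecting more minority-class data or reweighting the loss are common alternatives.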

    • @pickledxu4509
      @pickledxu4509 5 years ago +1

      It's the problem of inference vs. prediction. Statistical inference might show that women are less likely to be physicists, and it might be revealing a *problem* in our society. For example, 100 years ago you could hardly find any Chinese physicists, but would you use that data to predict that a Chinese person is not likely to be a physicist? That prediction would be laughable today, but if AI had existed 100 years ago, it would have made it. The problem is that AI looks for patterns, not theories. And that is the risk in believing that AI/ML is objective.

    • @GrantGryczan
      @GrantGryczan 5 years ago

      @@pickledxu4509 Okay but how is that relevant to this video about AI recognition?

  • @thedevilsadvocate5210
    @thedevilsadvocate5210 6 years ago +6

    When you say this shoe or that shoe, the computer should say "God bless you"

  • @yashbansal1414
    @yashbansal1414 6 years ago +19

    This video is one year old with only around 635+ comments, but all the comments are from one day ago, one week ago...
    Howwwww

    • @05ShadyMcGraDY50
      @05ShadyMcGraDY50 6 years ago +1

      Eric Weinstein was on the Rubin Report and brought this up. The video was released on YouTube Sept 25th.

    • @minecraftminertime
      @minecraftminertime 6 years ago +4

      You have a bias to only look at the most relevant comments, which are biased toward being recent. Your thought about the comments is biased and wrong.

    • @jameyhibberd6659
      @jameyhibberd6659 6 years ago

      THIS VIDEO WAS REMOVED BY YOUTUBE FOR ABOUT SIX MONTHS. IT WAS RELOADED ABOUT 30 DAYS AGO. I DON'T KNOW WHY.

    • @muhammadazisalfaridzianwar3247
      @muhammadazisalfaridzianwar3247 6 years ago

      YouTube recommendations work with machine learning too

    • @Bulbophile
      @Bulbophile 5 years ago

      Sssssss

  • @UlquCiffer
    @UlquCiffer 5 years ago +13

    Why only negative human bias? Shouldn't it eliminate all the bias?

    • @Anton-cv2ti
      @Anton-cv2ti 5 years ago

      I don't understand. Is human bias the only bias?

    • @GrantGryczan
      @GrantGryczan 5 years ago

      For reference, what do you think "bias" means?

    • @linus6718
      @linus6718 5 years ago

      True, we can't let those cats push their agenda on the system, with all those videos of them and whatnot

  • @raphidae
    @raphidae 4 years ago +27

    "we've been working to prevent that technology from perpetuating *negative* human bias".
    Right. So you'll be working to PREVENT *negative* bias, but NOT *positive* bias... Who gets to decide whether a particular bias is, on the whole, negative or positive? And surely you'll be tempted to ENFORCE *positive* bias to socially engineer your "positive" ideals.
    Any bias can be rationalised as a *positive* bias, so the use of this qualifier is legitimately frightening. You purposefully and publicly leave the door open to manipulating your machine algorithms, and by extension your users, based on what "Google" thinks is *positive*.
    We'll get an intersectional affirmative-action AI from Google soon, while Google claims publicly that it won't have a bias. We can see the precursor for that on YouTube already.
    I believe that is actually *evil.*

    • @karina893aa
      @karina893aa 4 years ago +2

      Terrence Koeman "We'll get an intersectional affirmative-action AI from Google soon, while Google will claim publicly that it won't have bias." - Terrence Koeman

    • @raphidae
      @raphidae 4 years ago +1

      @@karina893aa yes you can quote me on that :)

    • @karina893aa
      @karina893aa 4 years ago +1

      Terrence Koeman Man, honestly, that quote is what everyone needs to wake up!!! Thank you!!

  • @bhuvanitha
    @bhuvanitha 4 years ago +4

    Every dataset itself presents a biased idea to the human brain (because it was created by human logic itself)... So as far as I understand, I think we can neglect the bias in almost all cases (but there are still chances of failure)...
    :)I found this satisfying;)

  • @orhankeyvan
    @orhankeyvan 7 years ago +7

    Google: Let's play a game!
    OpenAI: DOTA!

  • @moritzlindner6912
    @moritzlindner6912 7 years ago +4

    2:18 Uhhh, a Westworld reference

  • @cool-as-cucumber
    @cool-as-cucumber 7 years ago +43

    'So that all of us can be part of the conversation' (cough) James Damore (cough) fired for saying something that Google doesn't like (cough)

    • @liamobrien9451
      @liamobrien9451 7 years ago +1

      Vishal Devgire that half of the world's population didn't like, because it was obviously wrong and demeaning to all women

    • @jasondads9509
      @jasondads9509 7 years ago +7

      It wasn't really... most people didn't read what he wrote.

    • @cool-as-cucumber
      @cool-as-cucumber 7 years ago +2

      Exactly! For some people facts don't matter.

    • @patrickmattin9609
      @patrickmattin9609 7 years ago +1

      "I don't find it offensive, so no one else can find it offensive."

    • @cool-as-cucumber
      @cool-as-cucumber 7 years ago

      Rephrasing statements to suit your rhetoric. Typical lefty. @jason dads actually said "it wasn't really... most people didn't read what he wrote". Did you read that doc, sir? It had one and a half pages dedicated to how we can involve more women in tech without discriminating against men.

  • @Xiellion
    @Xiellion 5 years ago +16

    How to limit freedom of speech with extra steps

  • @cutiechaser2006
    @cutiechaser2006 5 years ago +8

    Hey Google, are you suggesting that if humans are to improve themselves, they should be more like brainless machines who are TOLD what to think and how to feel about things?

  • @carlosescobedo6406
    @carlosescobedo6406 3 years ago +7

    The Madness of Crowds brought me here...

  • @TheRishabhkumar
    @TheRishabhkumar 6 years ago +13

    Judging from the comment section, it seems too many random people with no idea of machine learning, let alone weights and biases and how they are incorporated in the learning process, have stumbled upon this video.
    Not every video is meant for your political opinions, people.

    • @GrantGryczan
      @GrantGryczan 5 years ago

      @@claytonwoodcock6942 They don't do that. They let users report results as inappropriate, which are then automatically removed. It has nothing to do with political views.

    • @GrantGryczan
      @GrantGryczan 5 years ago

      ​@@claytonwoodcock6942 This system is not one of censorship; it's a system of AI clarification. Because it was not designed to censor, trying to use it to censor inevitably won't work very well. AIs don't know what's related or unrelated to what. So users correct them when their faulty automatic predictions...are faulty. That's all this is. I don't know why you're associating that with censorship. Removing opinions people don't like wouldn't be very effective. For example, no one is going to go and search "religion" and then report all the results related to Islam as not relevant just because they don't like Islam. It wouldn't be useful anyway, not only because it would happen on all sides of the topic (not just Islam), but also since the search term "religion" appears so often with the term "Islam". The AI would just retrain itself to associate the two. Plus, you'd have to go through thousands to billions of results to do this, since they are so strongly associated already, which is never going to happen. This system only works for relatively small exceptions to accuracy (as is intended), where the neural network doesn't have to change so many connections to correct the biases. All of these factors would apply to any opinionated search topic, not just religion and Islam.

    • @GrantGryczan
      @GrantGryczan 5 years ago

      @@claytonwoodcock6942 Work leaves a number of hours of free time. Not sure how that's relevant to AI bias though.

  • @steevee1945
    @steevee1945 7 years ago +18

    Just because something is "offensive" does not make it untrue or not worth knowing.

    • @seanmaher3371
      @seanmaher3371 7 years ago

      steevee1945 hi

    • @FahadAyaz
      @FahadAyaz 7 years ago

      This "hi" offends me.

  • @eliassuzumura
    @eliassuzumura 4 years ago +2

    The best aesthetic yet in a Google video

  • @johnnybadmen3473
    @johnnybadmen3473 6 years ago +6

    Isn't removing search results based on our input feeding machines our human bias?

    • @GrantGryczan
      @GrantGryczan 6 years ago +2

      That's not the type of bias the video is talking about. The video only refers to bias introduced because of how often something is or is not taken as input by the machine.

    • @Juke172
      @Juke172 5 years ago

      Google search results are biased in many ways: politically, by ad revenue, religion, where you are located, what you have searched for in the past, and other biases. Try the DuckDuckGo search engine with the same search terms, for example, and you'll see what I mean.

    • @GrantGryczan
      @GrantGryczan 5 years ago

      @@Juke172 That's not the type of bias the video is talking about. The video only refers to bias introduced because of how often something is or is not taken as input by the machine.
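
[Editor's note] The "how often something is taken as input" point can be made concrete with a toy sketch. A frequency-based learner's prior simply mirrors its training data, so a skewed sample yields a skewed model. The function name `learned_prior` and the numbers are invented for illustration:

```python
from collections import Counter

def learned_prior(training_labels):
    """The base rates a frequency-based model absorbs from its training data."""
    counts = Counter(training_labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# A hypothetical image crawl that happens to over-represent sneakers
scraped = ["sneaker"] * 80 + ["heel"] * 15 + ["clog"] * 5
prior = learned_prior(scraped)
print(prior["sneaker"])  # 0.8 -- the model now "thinks" shoes are mostly sneakers
```

No one programmed a preference for sneakers; the skew came entirely from what the machine was shown most often.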

    • @Bulbophile
      @Bulbophile 5 years ago

      Grant,ai

    • @GrantGryczan
      @GrantGryczan 5 years ago

      @@Bulbophile Today I learned.

  • @taruchitgoyal3735
    @taruchitgoyal3735 4 years ago +9

    How can machine learning and human bias be used together in creating things that serve people?

    • @BiancaAguglia
      @BiancaAguglia 4 years ago +4

      ML can be a great tool for identifying the kind of human bias that is harmful to others. For example:
      1. the shoe example in this video is a fun insight into what most people first think of when they hear the word shoe. There are many ways to make use of this information, but there's no need to take any action to correct it.
      2. the example with the physicists (that most of them have been men) isn't necessarily what I would call a bias. It's just a reality (albeit an arguably uncomfortable one). Since there's no reason for women not to become physicists should they have a passion for it, we could use the ML findings to make it equally easy for men and women to work in a field they are interested in.
      3. ML algorithms have found many biases in hiring, criminal justice, etc. This is where we have a lot of opportunity to do good. We can use the ML findings to figure out what created those biases in the first place and what we need to do to correct them. For example, if tech companies hire more men than women, is it because women don't have the same kind and level of skills? Then let's make more training available to women who are interested. Is it because women don't feel confident enough in their skills (although their skills are the same as the male candidates')? Then let's show more women role models, and let's make the interviewing process and the job expectations easier to understand, etc. Is it because companies truly see women as inferior to men? Then let's show more examples of successful women candidates, and let's educate about the strengths and abilities of good, skilled women, etc.
      There's a lot of opportunity to use ML to identify human bias and, if needed, do things that would correct that bias going forward and help society become more kind and more fair. 😊
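
[Editor's note] The hiring-audit idea in point 3 above usually starts from a simple disparity metric such as the demographic parity gap: the difference in selection rates between groups. A minimal sketch with hypothetical numbers (the function names are made up; real audits use toolkits such as Fairlearn or AIF360):

```python
def selection_rates(decisions, groups):
    """Fraction of positive decisions (e.g. interview offers) per group."""
    rates = {}
    for g in set(groups):
        picked = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picked) / len(picked)
    return rates

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: 1 = offered an interview, 0 = rejected
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(round(demographic_parity_gap(decisions, groups), 2))  # 0.6: group "a" is favored
```

A large gap does not by itself prove discrimination, but it flags where to look for the causes Bianca describes.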

    • @taruchitgoyal3735
      @taruchitgoyal3735 4 years ago +1

      @@BiancaAguglia
      Thank you for sharing your views.
      I hope to contribute significantly to the things you have shared above.
      Looking forward to learning more from you.

    • @BiancaAguglia
      @BiancaAguglia 4 years ago

      @@taruchitgoyal3735 Thank you also for your kind words. I love meeting and/or talking to people who are trying to make a difference. 😊

    • @michaelfrancis5572
      @michaelfrancis5572 4 years ago +2

      I like your question

    • @michaelfrancis5572
      @michaelfrancis5572 4 years ago +1

      @bianca ....thank you so much for sharing the knowledge. I will be very glad to learn from you and be able to practice and teach others too

  • @bluediamondrake7760
    @bluediamondrake7760 4 years ago +2

    Me: types anything
    Google: Memes

  • @cranberry7601
    @cranberry7601 7 years ago +3

    I pictured a shoe I've never seen at all. Why tho

  • @starrychloe
    @starrychloe 7 years ago +16

    If you want it to be more human, shouldn't you include human bias?

  • @claytonwoodcock6942
    @claytonwoodcock6942 6 years ago +11

    It went from cool and informative to "OH, we are using our understanding of bias to improve censorship". I think you could have stopped the video 30 seconds earlier and people would have been happy, but at least you're honest and gave the real reason you are developing this. I mean, how about this: make a video talking about how human bias affects data collection at the level of science, bias in the results, and how machine learning could be affected. That is far more useful and interesting than "oh, we are preventing negative searches from showing up in the search bar". How useless. There are real problems, due to human bias, that have real consequences and that we need to find ways of detecting and exploring, but no, no, let's focus on preventing someone from being offended by someone else on the internet.

    • @claytonwoodcock6942
      @claytonwoodcock6942 6 years ago +2

      I really hope my sarcasm shines through, because this video just annoyed me. So much potential wasted on fruitless endeavors.

    • @GrantGryczan
      @GrantGryczan 5 years ago

      What are you talking about? What does any of this have to do with censorship?

  • @bryan.w.t
    @bryan.w.t 5 years ago +4

    Well at least you acknowledge the problem

  • @waterpatterns
    @waterpatterns 6 years ago +2

    Great topic. A key factor in considering AI ethics.

  • @youcaio
    @youcaio 5 months ago +5

    ____ THE a w e s o m e video!
    🖤

  • @mountainslopes
    @mountainslopes 7 years ago +4

    Nice, it's the Beme News logo at 0:31

    • @LimitedWard
      @LimitedWard 7 years ago +2

      Ben Altair you mean a Venn diagram?

    • @matej180
      @matej180 7 years ago +1

      LimitedWard Beme News is a channel with a logo like a Venn diagram

    • @mountainslopes
      @mountainslopes 7 years ago

      LimitedWard It's the same colours, fun coincidence

  • @blattimus
    @blattimus 5 years ago +4

    Google's motto:
    Version #1 Don't be evil
    Version #2 Resistance is futile. You will be assimilated.

  • @akilansundaram2181
    @akilansundaram2181 4 years ago +22

    All the people here trying to spread hate: why do you hate Google so much? Why do you even use YouTube and Google Search? You are being a little ungrateful. We have gotten so many answers through Google. Our projects and assignments got easier. Our learning process got easier thanks to tutorials on YouTube. Our navigation got easier thanks to Maps. They have done quite a lot.

    • @crazando
      @crazando 4 years ago +5

      we hate how biased Google is

    • @TheWunWhoIzEpic
      @TheWunWhoIzEpic 4 years ago +8

      Are you grateful to the Soviets or Nazis for inventing rockets and satellites? Don't you owe them some fealty? Criticizing them for their arbitrary bias is in fact in the spirit of creating better technology that is free from excess biased human intervention and exploitation. The fundamental problem is that the milieu of Google and its employees has a blind spot, assuming their social/political and ethical worldview is somehow more unbiased or objective than anyone else's. It's the duty of anyone who recognizes that to challenge that sort of hubris.

    • @sweenyslippers6367
      @sweenyslippers6367 4 years ago

      BRAINWASHED ANARCHY WE WILL RISE

    • @user-gq7sv9tf1m
      @user-gq7sv9tf1m 4 years ago +2

      This is ridiculous.
      Just because Google has done great things to advance tech does not free it from criticism.
      A company like Google being able to push its ideas onto users without them being aware of it isn't something to be taken lightly.

    • @akilansundaram2181
      @akilansundaram2181 4 years ago

      @@user-gq7sv9tf1m Actually, people are free to criticize; nobody will deny that. However, I felt the comments here indicated hate rather than criticism.

  • @AbrarNiazi-lj2dy
    @AbrarNiazi-lj2dy 7 months ago +1

    Nice to see him.

  • @peekeyeseek
    @peekeyeseek 5 years ago +1

    So this is why my head is full of stuff.
    Bias.

  • @dougiefresh007209
    @dougiefresh007209 5 years ago +7

    trying to pretend like it's for a good cause

  • @realsampson
    @realsampson 5 years ago +14

    "What is a shoe?" "What is a human?" These are very different from "What is hateful/offensive?". This is where the problem arises.

  • @EpicAsian
    @EpicAsian 7 years ago +32

    So why did you fire James Damore?

    • @JesseBusman1996
      @JesseBusman1996 7 years ago +3

      This is the real topic they need to make a video on

    • @HardcorePanda
      @HardcorePanda 7 years ago +2

      A private company can fire whoever they want; they don't have to explain.

  • @maxmustermann1225
    @maxmustermann1225 5 years ago +13

    What is this nightmare?

  • @TheCinnaman123
    @TheCinnaman123 7 years ago +31

    But what if I am trying to find the hateful stuff because I want to see what other people are saying? It doesn't matter whether they are morally wrong or right; it should still be easy to find.

    • @mattw7024
      @mattw7024 7 years ago +2

      TheCinnaman123 try finishing the search without autocomplete

  • @AnekaKnellBean
    @AnekaKnellBean 7 years ago +44

    You cannot eliminate bias. You can only compensate for it by illuminating more options.
    Otherwise the bias "elimination" is itself subject to bias.
    E.g. if you avoid a subject when teaching someone, it becomes a weakness in their understanding and can fall into an overcompensation bias.
    Furthermore, who decides what counts as a negative bias that should be eliminated? That strikes me as the kind of thing we should be having a discussion about, not deciding for other people without their consent.
    Give people more opportunities to understand, not fewer opportunities to learn.

    • @wavegunner2323
      @wavegunner2323 5 years ago

      Google: Makes a promotional video in which they directly ask people to join the conversation about bias.
      You, an intellectual: "That strikes me as the kind of thing we should be having discussion on and not deciding for other people without their consent."
      Ah yes, I see the word-understander has entered the room.

  • @Cettywise
    @Cettywise 7 years ago +25

    Pretty sure this video has a google bias...
    Also, please make sexbots

  • @Pirxel
    @Pirxel 5 years ago +18

    Aaah, now we know what this is about - thanks, Project Veritas!

  • @Gytax0
    @Gytax0 7 years ago +27

    I don't need Google to tell me which search results are offensive to me. Let me choose which links I want to click on.

  • @MrMastercard12
    @MrMastercard12 4 years ago +37

    0:02 nobody tells me to open my eyes again. I am sure that the rest of the video looks great though ;)

  • @Bdawg.
    @Bdawg. 7 years ago +30

    So who decides what's biased? Does "equal inclusion" mean the results are unbiased? What if the unbiased view of those engineers overlooked by policy makers within Google isn't actually unbiased?
    Put simply, who will guard the guardians?

  • @johnthomas6473
    @johnthomas6473 5 years ago +29

    We recognize human bias, so we are going to use humans to prevent "bias" that is based on actual data. What genius human developed that idea?

  • @rey1242
    @rey1242 6 years ago +31

    How to put a political agenda in neural networks 101

    • @aditya_it_is
      @aditya_it_is 6 years ago +1

      The first step by AI to take control from humans: appear unbiased while pointing out human biases. E.g. divide men and women, then become the unbiased judge and dictate! A human system is to be controlled by humans, not machines!!

    • @GrantGryczan
      @GrantGryczan 6 years ago

      What does this video have to do with political agendas?

  • @sansgaming7607
    @sansgaming7607 6 years ago +16

    *talks about bias*
    *has leftist bias*
    *has complete control over your entire life*

  • @kronek88
    @kronek88 6 years ago +20

    Google's Ministry of Truth in action.

  • @caniko2
    @caniko2 7 years ago +7

    Report, because there is no bias in your reports

  • @gabemcguire2463
    @gabemcguire2463 7 years ago +15

    This should have been voiced by the Google Assistant's voice actress.

  • @jasondads9509
    @jasondads9509 7 years ago +20

    I expected more dislikes

    • @epsilon3821
      @epsilon3821 7 years ago

      jason dads They deleted it. So will your comment.

    • @oh3831
      @oh3831 6 years ago +3

      Yeah, when I saw the ratio my faith in humanity was restored a little. Then I saw the comment section and my faith was lost. There seems to be a silent majority of normal human beings, while the whiny vocal minority of "anti-SJW" are overtaking the internet.

    • @GrantGryczan
      @GrantGryczan 5 years ago +1

      @@oh3831 No, I'm "anti-SJW" and I liked the video, because this video has nothing to do with social justice. I'd assume the majority liked this video because they don't see anything political about it. Because there isn't anything political about it.

  • @jackfrost2978
    @jackfrost2978 5 years ago +11

    The only appropriate bias is Google-approved bias. Which is very, very biased.

  • @mp5284
    @mp5284 5 years ago +14

    Even if something is factually accurate we need to teach our algorithms to ignore reality. Don't be evil? Don't make me laugh.

  • @timeart5960
    @timeart5960 2 years ago +14

    The comments are absolute trash. I'm not surprised by a video from 4 years ago but still, a bit of a shocker to see so many salty and hateful people towards a program that means no harm. But I get it, progression only feels like oppression to those who have lived with so much privilege.

    • @callmebiz
      @callmebiz 2 years ago +2

      Yeah, they're forgetting or actively going against the fact that technology is made to serve all, regardless of your personal beliefs and lifestyle, so of course there should be a serious effort in removing the harmful biases that exist in our lived experiences from the technology. Cool to see someone calling them out :)

  • @PeanutB
    @PeanutB 7 years ago +13

    So how do you eliminate the human bias that controls the moderation of the machine's human bias? It doesn't seem to be much help: the "limiting of offensive results" only removes "offensive" opinions that Google doesn't agree with, either manually or through new machine learning influenced by human bias. Opinions like that of the man Google recently fired for questioning its current stance on workplace sexism. Even if you agree with Google in this example, there could be anything Google finds offensive that you don't. If the only information available is the information not censored by Google, then whether or not you think the results would be in your personal favor, control over which opinions people have access to should be the right of no person or organization.

    • @soundpainter2590
      @soundpainter2590 7 years ago

      mrmojoman4 Now IMAGINE THIS EXACT A.I. CONTROLLING our or any country's NUCLEAR ARSENAL... TROOP DEPLOYMENT... ETC?

  • @nothingomucho106
    @nothingomucho106 5 years ago +25

    We are now one step closer to understanding the YouTube recommendation algorithm.

  • @KarlRamstedt
    @KarlRamstedt 7 years ago +14

    This is just replacing one bias with another. How about letting every person decide what they want to see, rather than automatically deciding based on some "offensiveness" interpretation that others are making for me? Furthermore, how about allowing some negative speech? Yes, the internet can be a bit of a cesspool at times, but without adversity we grow no stronger against it. We can't always rely on everything being a safe space tailored to our needs.
    Please stop being silly, Google.

    • @soyoltoi
      @soyoltoi 7 years ago

      But could those two things go together? If they tailor the user experience individually, then the result would be similar to a Facebook feed: an echochamber of similar ideas. But if we want to allow negative speech (relative to the user), then they would likely not want to see it, which would go against the goal of tailoring experiences individually. However, I still think the individual user should decide what they want to watch and that Google shouldn't participate in censorship.

  • @FilthyPeasantGaming
    @FilthyPeasantGaming 6 years ago +28

    - 95% of physicists are men
    - The problem that would occur : Google results show 100% men.
    - The desired outcome : Google results show 95% men physicists.
    - The BIASED google answer to the problem : Results show 50% men physicists.

    • @FilthyPeasantGaming
      @FilthyPeasantGaming 6 years ago +8

      How can I make it more simple? Which part did you not understand? I'll try to simplify it.

    • @Wolverine30303x
      @Wolverine30303x 6 years ago

      Got a source on the men/women ratio in physics?
      95% sounds extreme.

    • @FilthyPeasantGaming
      @FilthyPeasantGaming 6 years ago +5

      It was very, very, very... very obviously an example.
      Google used shoes, I used physicists. The reason I chose gender-related data is because that's the kind of data they've wrongly altered because of their own bias, which was the point of my original comment.
      -------------------
      But if you're actually curious:
      news.cornell.edu/stories/2007/04/where-are-all-women-physics
      [...]The low numbers of women in physics, she said, are especially shocking: Women in the United States hold less than 5 percent of full professor positions and make up only 22 percent of the undergraduate majors and 16 percent of the doctoral candidates. At Cornell, women comprise 17 percent of physics graduate students.[...]

    • @appumathew
      @appumathew 6 years ago

      Hahahaha well said

    • @rey1242
      @rey1242 6 years ago

      @Someone that's totally what they said
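
The 95/5 physicist example argued over in this thread can be made concrete: a model fit to data that skewed can look very accurate while never predicting the minority class at all. A minimal Python sketch using the thread's hypothetical numbers (a toy illustration, not Google's system):

```python
from collections import Counter

# Toy dataset mirroring the "95% of physicists are men" example above
# (illustrative numbers only).
train_labels = ["man"] * 95 + ["woman"] * 5

# A lazy "model" that always predicts the majority class seen in training.
majority = Counter(train_labels).most_common(1)[0][0]

def predict(_example):
    return majority  # ignores its input entirely

# On equally skewed test data it scores 95% accuracy...
accuracy = sum(predict(x) == x for x in train_labels) / len(train_labels)

# ...yet it identifies 0% of the minority class.
woman_recall = sum(predict(x) == "woman" for x in train_labels if x == "woman") / 5

print(accuracy, woman_recall)  # 0.95 0.0
```

Reporting only overall accuracy hides this failure; per-class recall is what exposes it, which is why skew in the training data is treated as a problem regardless of where the skew came from.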

  • @studeii
    @studeii 7 years ago +28

    :) till 2:01 but :( @ 2:02

    • @6006133
      @6006133 7 years ago +4

      This is the point where I realized the video is made by google and that it is not some random person talking about machine learning. What's next google? Whitewashing with cute kittens to limit damage done by inappropriate views that escaped the censorship algorithms?

  • @jeffscoolkidacount
    @jeffscoolkidacount 7 years ago +38

    Went from an informative lesson on machine learning to focusing on what Google considers appropriate. Why are you training AI to learn based on your own bias?

  • @zarry22
    @zarry22 5 years ago +17

    If those biases happen to reflect the truth, are you not suppressing the truth by artificially injecting a bias of your own?
    It's like stereotypes; on an individual level they are socially inappropriate and misguided, but they're often reflective of some reality at the group level. Should that reality be suppressed?

    • @deep_fried_analysis
      @deep_fried_analysis 5 years ago +2

      Exactly. They're pushing their political agenda nonetheless.

  • @cutiechaser2006
    @cutiechaser2006 5 years ago +12

    Hey Google, if you're telling the AI what to think, it's not AI, it's APCE Artificial Political Correctness Engineering.

  • @austinfalls9163
    @austinfalls9163 7 years ago +21

    I identify as a circle with a circle head and I feel the ending 3 seconds is a bias

  • @useyourbrain1232
    @useyourbrain1232 7 years ago +15

    So we should get rid of our human influences by influencing it? Makes no sense to filter the search results

    • @GrantGryczan
      @GrantGryczan 5 years ago

      No, we should get rid of machine influences by including user input. And this video says nothing about filtering results like that.

  • @Sodrigo_Rosa
    @Sodrigo_Rosa 7 years ago +10

    HAHA THE IRONY, "BIAS"

  • @carlcrott8582
    @carlcrott8582 6 years ago +17

    Hi. We're Google. We support Fascism under the guise of compassion.
    Don't worry though, we've got a BRILLIANT marketing department.
    We'll make it all feel like a nice warm bath.

    • @GrantGryczan
      @GrantGryczan 6 years ago

      How is any of that relevant?

  • @MusicVidsLife
    @MusicVidsLife 5 years ago +14

    I like when Google's motto is "don't be evil"

  • @cafeta
    @cafeta 6 years ago +17

    Who decides what is true and what isn't? Biased Google employees?

    • @aditya_it_is
      @aditya_it_is 6 years ago +1

      First step by AI to take control from humans: by appearing unbiased & pointing out human biases. E.g. divide men & women, then become the unbiased judge to dictate! A human system is to be controlled by humans, not machines!!

    • @GrantGryczan
      @GrantGryczan 6 years ago +1

      The users decide, for example by selecting the "Report inappropriate predictions" option. They explicitly said this. Did you watch the video?

    • @cafeta
      @cafeta 6 years ago

      @@GrantGryczan Are you kidding me? Search for Google's "The Good Censor" document.

    • @GrantGryczan
      @GrantGryczan 6 years ago +1

      @@cafeta Whatever that is is not relevant to the video. Again, they explicitly described systems implemented for users to be able to resolve the network biases. If you're just going to ignore those along with the point of the video then I'm just going to ignore you, because what you're saying to this video is not relevant.

  • @johnv7508
    @johnv7508 5 years ago +17

    Project Veritas brought me here! Your suppression of free communication will be stopped!

  • @ArturGMoraes
    @ArturGMoraes 7 years ago +16

    It is for everyone... that thinks like me; otherwise it's hate speech!

  • @Arman-fv8bb
    @Arman-fv8bb 7 years ago +9

    I wonder why it didn't end with the Google logo!

    • @YEASTY_COMMIE
      @YEASTY_COMMIE 7 years ago +1

      It's filled with Google "things" (the 4 dots, for example) and the whole video is made with Google's colors.

  • @jaredschrag
    @jaredschrag 6 years ago +27

    "Because technology should work for everyone" ... except for those who disagree with my opinion

    • @wrpelton
      @wrpelton 6 years ago +2

      You disagree that high heels are shoes?

  • @vatanrangani8033
    @vatanrangani8033 5 years ago +22

    It's nothing but a recommendation bias here

    • @ChrisTheCringe
      @ChrisTheCringe 4 years ago +2

      It's based on your personal preferences (e.g. what you mostly click on). You are training the AI for your recommendations; I am training the AI for mine. Welcome to machine learning.
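
The training-by-clicks loop described in this reply is what the video calls interaction bias, and it compounds. A minimal, hypothetical sketch (the topics and the 1.2x update rule are invented for illustration, not YouTube's actual recommender):

```python
# Hypothetical recommender state: one weight per topic.
weights = {"science": 1.0, "history": 1.0, "politics": 1.0}

def top_recommendation():
    # Deterministically surface the highest-weighted topic.
    return max(weights, key=weights.get)

# One stray click on a politics video...
weights["politics"] += 0.5

# ...then ten rounds where each recommendation gets clicked and reinforced.
for _ in range(10):
    weights[top_recommendation()] *= 1.2  # clicks compound the weight

print(top_recommendation())           # politics
print(round(weights["politics"], 2))  # 1.5 * 1.2**10 ≈ 9.29
```

One early click dominates the state after a handful of rounds, which matches the complaint elsewhere in these comments that watching one political video floods the suggestions with politics.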

  • @chriscorley6478
    @chriscorley6478 7 years ago +6

    Exactly why machines are deadly. 🇺🇸

  • @ארדקרן
    @ארדקרן 7 years ago +9

    2:05 this alone would still be fine had you been politically neutral, but since you fire people for their opinions being offensive, that must mean you politically censor your search results. Until you stop being evil, I and many other users shall use other search engines which don't let politics interfere with their primary task.

    • @tellingfoxtales
      @tellingfoxtales 7 years ago +3

      It's a private business, they can hire and fire who they like. Stop whining.

    • @ארדקרן
      @ארדקרן 7 years ago +1

      +Ekama Noieau I'm not saying they didn't have the right to fire that man. I'm pointing out that since the firing was on the grounds of creating an offensive document, it gives us a sense of what kind of documents they want to remove from results. This implies to me that Google doesn't give me accurate results on some queries. They have every right to do that, but I have a right to use another search engine without censorship, like DuckDuckGo.

  • @mirotafra7165
    @mirotafra7165 6 years ago +24

    how can you define offensive without bias?

    • @AntiAtheismIsUnstoppable
      @AntiAtheismIsUnstoppable 6 years ago +4

      Easy. Google, which is an absolutely unbiased supporter of Antifa and the Muslim Brotherhood, will define what is offensive

    • @HunterHarris
      @HunterHarris 6 years ago +3

      Gru ber How did you ever come up with such an ingenious and clearly unbiased comment? /s

    • @WilliamParkerer
      @WilliamParkerer 6 years ago

      If there is human, there is bias.

    • @GrantGryczan
      @GrantGryczan 5 years ago

      People report what is offensive, not Google. Nobody goes by a definition. It's just what's reported most often as offensive.

  • @stanislavdidenko8436
    @stanislavdidenko8436 6 years ago +13

    My YouTube suggestions are totally biased. I can watch science and history videos all week, but as soon as I watch anything about politics, YouTube suggests only videos about Putin and Trump. I just hate it. I ended up forcefully ignoring suggested videos about politics. Please put manual algorithms back into your systems!!!

  • @thepardoner2059
    @thepardoner2059 6 years ago +8

    Google: programming human minds to passively accept digital despotism.

  • @Eta_Carinae__
    @Eta_Carinae__ 7 years ago +5

    Your counter-methods also have bias. Also, has anyone tried Carnap's structuralism? Definitions in terms of relations?

  • @TheWunWhoIzEpic
    @TheWunWhoIzEpic 4 years ago +27

    "Offensive", "hateful", "misleading" and "representative" are all ideas completely constructed out of human bias. Doubling down on arbitrary bias does not remove preceding bias, it just enforces yours instead of someone else's. The irony of you identifying an issue and then embodying it while claiming to be mitigating it is hysterical IMO.

    • @ozihandicraft7812
      @ozihandicraft7812 4 years ago +3

      That's funny. The idea, perhaps, is that IF everyone contributes equally, the bias vanishes, as there is 'equal' representation.

    • @MrMShake
      @MrMShake 4 years ago +5

      Every human/emotional aspect is subjective, but if your subjective beliefs and stereotypes lead to tangible and measurable effects on the real world, that is problematic. So you might be right in saying that it is subjective to call statement X racist, but even so, if statement X influences or leads to mass detention/genocide of race Y members, then wouldn't that be problematic regardless of who determines what is racist?

    • @Frances3654
      @Frances3654 4 years ago +1

      Was thinking the same thing. Those ideas are just social constructs

    • @EuropeanQoheleth
      @EuropeanQoheleth 4 years ago +2

      @@Frances3654 Ideas are social constructs mostly.

    • @Skysword455
      @Skysword455 4 years ago

      so your solution would be to just...do nothing?

  • @sardaamit
    @sardaamit 7 years ago +7

    Never thought about machine learning and human bias. Always thought it would not affect the results. But we are designed to see the world through our own eyes and experiences.
    Why would our code be any different?

  • @haudaunaruto2979
    @haudaunaruto2979 7 years ago +7

    Sometimes I don't recognize shoes either. Guess I'm an AI :)

    • @soyoltoi
      @soyoltoi 7 years ago +1

      No, you're just an "I" :)

  • @richardcao7390
    @richardcao7390 5 years ago +9

    Who else is watching this for AP computer science homework

  • @rohitrohan2009
    @rohitrohan2009 4 years ago +16

    What if some facts are "offensive"? Define "offensive": something that doesn't offend someone else? Can it be anything? Then there is still going to be bias, as just to remove "offensive content" for the sake of it you might be removing vital pieces of information just so people are not "offended". What if those facts are important and to be taken into consideration, but just because someone may get offended you're removing them? So define "offensive" properly. Some things are facts and some are trolling; actual troll content meant to malign, sure, you may filter and remove *that*. But please don't tell me that numbers are racist or offensive. When it comes to data, for god's sake don't tell me you're going to get offended.

    • @sebastian8538
      @sebastian8538 4 years ago +1

      Morning news. No, numbers per se are not, but the story they tell is just as subjective as an opinion if those numbers are not representatively collected. However, this nitty-gritty, almost theoretical detail is a bit too complicated for most. The origin of numbers is arguably as unknown to the wider public (not everybody is a statistician) as the details of computing, even though both are ubiquitous.
      So yes, numbers don't lie, but the story they form may not be as true and constant as physics.

  • @EarendilTheBlessed
    @EarendilTheBlessed 6 years ago +22

    I thought it was going to be an interesting video. In the end it was just biases

    • @ronin6158
      @ronin6158 6 years ago +3

      Agree. Remember, in 2018 the fact that most physicists are men is a bias, not an empirical fact. Magic frame switch!

    • @96nikecha
      @96nikecha 6 years ago +2

      Of course it's a bias! If your dataset of images of physicists consists of 99% images of men, your network or whatever other model you are using is going to have a much harder time correctly classifying women physicists!
      This isn't about politics, it's about science/engineering. Please refrain from making ridiculous sarcastic statements if you have no idea what you or the video is talking about.

    • @ronin6158
      @ronin6158 6 years ago +2

      What you've described is not a bias; that is the point: most physicists *are* men, so yes, the machine will be less likely to ID a female as a physicist, which is accurate.
      The snark in my comment was aimed at the PC notion that there are as many females in science as men, which is quantitatively false. I'm not commenting on 'right' or 'should' or whatever, only that empirical reality here is being called a bias, which it's not.

    • @HunterHarris
      @HunterHarris 6 years ago +1

      Ronin You completely missed the point. The video made no claim that there are an equal number of male and female physicists. It's talking about creating AI that is just as capable of recognizing the female physicists, that do exist, as the male ones. What value is there in having a machine learning AI that only gets half the solutions to problems right because it is being limited or thrown off by the lapses or biases in human thinking?

    • @EarendilTheBlessed
      @EarendilTheBlessed 6 years ago +1

      Hunter Harris. Huh? The problem is when the video says all of what they talked about is "perpetuating negative human biases". In the physicist example, the AI will assign a probability that this face is or is not a physicist. Women will tend to have a lower probability based on past evidence, and guess what: it's normal, and you would guess the same way. The question you should ask yourself is why on earth you would create an AI to check whether, from physical and appearance properties, you can determine what a human does. Are you trying to find an Aryan race? Of course you may then say the AI is biased... but it had no meaning from the beginning.
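
For what it's worth, the standard engineering response to the skew argued about in this thread is not to deny it but to reweight for it. The sketch below uses the "balanced" class-weight heuristic, n_samples / (n_classes * class_count), as popularized by scikit-learn's class_weight='balanced'; the label counts are made up for illustration:

```python
from collections import Counter

# Illustrative labels with the extreme skew discussed in this thread.
labels = ["man"] * 99 + ["woman"] * 1

counts = Counter(labels)
n, k = len(labels), len(counts)

# "Balanced" weighting: n_samples / (n_classes * class_count), so the rare
# class contributes as much total weight during training as the common one.
class_weight = {c: n / (k * counts[c]) for c in counts}

print(class_weight)  # {'man': 0.505..., 'woman': 50.0}
```

With these weights, each class sums to the same total influence (99 * 0.505... = 1 * 50.0), so a model can learn the minority class without anyone pretending the raw counts are equal.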