5:51 "Algorithms don't make things fair... They repeat our past practices, our patterns. They automate the status quo. That would be great if we had a perfect world, but we don't." The perfect summary of the talk
That's bullshit. There was an experiment with AlphaGo: the AI was set to compete with itself, and it started generating new patterns after some time. Her arguments are all over the place. Black-box algorithms should be banned, but algorithmic biases are removed from AI automatically over time as it learns from more datasets.
@@coolbuddyshivam I don't think you understand the kinds of algorithms she's talking about. AlphaGo is not remotely the same thing, and you cannot generalize patterns you observe in that very limited use case to algorithms in general.
Her message is, as she said, "a blind faith towards the algorithm only sustains the status quo". This is not promoting any feminist or sjw ideas, it's an inconvenient truth that we should wake up to.
I worked as a graduate math teaching assistant at a very ethnically diverse university. I am not proud of it, but I should admit that I came to the US with an unfounded idea that some ethnic groups are not as good at math as others. However, what I found out through firsthand experience was that I was very wrong. I found only one thing correlated with improvement in math proficiency: how hard you are willing to try to master the subject. It is the most valuable lesson for me as a potential math teacher. And I am also glad that I was able to be open and humble, and didn't perpetuate my unfounded idea through my mental algorithm for differentiating my students.
There is also the issue of who is enrolling in STEM classes and the cost barrier to entry. Before I started spending my own money on the stuff I had been using for both school and personal projects, I didn't know it was that expensive. Right down to the gaming laptop I was using vs. the crappy Dell ones other students used (and yes, some did have a desktop at home or in a dorm). Since we know there is a historical economic gap among races in the US, dating back to events like the Tulsa massacre, we should know why there are fewer Black students in Computer Science with us. And I admit I was racist, because I didn't like that those people weren't grabbing the opportunities offered in this field and joining me in class.
I work with big data and the exact same algorithms she's talking about, and she's right. This isn't perfect data we work with, and the code isn't made by divine, objective superhumans. It's just us: a team of overworked, underpaid data scientists who are all flawed human beings. And honestly, with the ways you are pressured to simplify or fix code at the last moment, you don't always have the time, the computing power, or the straight-up permission from higher-ups to do the job 100% perfectly every time. Data can be a good tool, but it's as imperfect as anything out there. Math and statistics aren't gods; they are developed by humans and are less perfect than you might think. Also, math isn't objective or subjective. It isn't a complex living mind, and we are far from making it one; we can't even agree on what that mind should behave like, let alone how to make it. So trust big data about as much as you trust any salesman, for example.
Of course we need to be very careful with the videos the YouTube algorithm shows us. I use Socrates' three questions before choosing a video: Is this 100% true? Is this good? Is this useful?
Just to remind you that she holds a PhD in math from Harvard, and she is not biased or anti-science at all. She might sound like feminists and postmodernists, but don't forget she is talking about fairness. I'm not a mathematician, but I know there is always a level of subjectivity in modeling and algorithms, and it comes from the variables used or omitted, the methods used vs. alternative methods, etc.
Sure, but she's just very bad at forming a strong and consistent argument. Therefore it's not clear what her message is, which is surprising since the "TED way of talking" is to have one very clearly shared message. In this talk, there's tons of half-thoughts and half-examples, and therefore her "big conclusions" appear on wobbly ground.
Double D, I'm way more iconoclastic, pessimistic, and critical than you ever were or will be. I mentioned her math degree to show that she is not a dummy philosopher or sociologist talking about math. She is an expert mathematician talking about math.
Thank you. As someone who works in a very human field where so-called "Value Added Measures" (VAM) are used to rate the vast majority of employees, I can corroborate that this practice can lead to some very, very unexpected and very, very unjust outcomes. I think that people are starting to realize this now, but I'm not sure how ratings will be handled as we move forward -- especially when the rating systems are often encoded into state law (which means that they can be very hard to change, and can stick around long after their fairness has been called into question).
I once worked for a company taking calls from customers. They used to judge customer satisfaction by a follow-up call to the customer and an automated survey. Sure, customers who were angry were more likely to take that survey, but you were hearing it from them. Then they decided to turn this over to an algorithm that listened to the calls and scored the worker based on that. It was utter garbage, demonstrably inaccurate. A customer could profusely thank you for your help at the end of the call, and this junk code would say they had a bad experience. We employees and local management had no access to the algorithm and very little data on what it was actually looking for; we were just supposed to trust the process. What it came up with factored into our performance scores and ultimately our raises. It wasn't long before I left.
It's funny how people are assuming she's against algorithms. She's not against algorithms but against BLIND FAITH in them! They should be used only for assistance, not as the final word.
They make an excellent point! We put way too much faith in the numbers we see. Edit: The majority of dislikes is a misleading number right there. Did the majority of people watch the video and deem it bad? Or did they click the video with a preconception of the lecturer and mathematician, and then dump their hate on her in any way they could?
Literally wrote a paper about ethics in AI and used this argument as the base for my research. Instructor gave me an F and said racial bias and discrimination in healthcare systems has nothing to do with AI 🤦🏾♂️. Had to resubmit my paper, still waiting on the results. 🤷🏾♂️
The era of blind faith in big data should end. But it won't. The stuff this speaker spoke about is the key to making big money in our days, when making ends meet has become more and more difficult. Algorithms are the tools to extract more money out of people, and they will always be shaped for this purpose. All aspects of the matter, socially and also politically, can be broken down to this one goal: money. If we don't change, why not picture our species ending up in some Skynet-like crap? Control over data has been an ongoing war, and not just since yesterday. Here in Germany I sometimes get the feeling that we have already lost this fight by simply obeying and continuing to walk our path, consuming all offers and thankfully giving away all of our personal data. And yes, I use the internet, but I hate social media.
Her point is that you should look at statistics with the same skepticism and criticism as you would news or anything else. Being aware that statistics can be manipulated, or even unintentionally biased because of how the data was collected, is an important critical thinking skill.
From the past. They are not made to predict the future; they are made to highlight peaks, lows, and trends. And nobody can make a sober decision based on that =/ Not even the algorithm.
Algorithms use statistics, or data in general, to predict stuff. Statistics and data are just as biased as algorithms, a problem which stems from their creators: us.
Giovane Guerreiro, but that's it, isn't it? If someone uses them wrong. Algorithms are a science, and if you use that science poorly, you get bad results. Math and science are tools that can be used for ill or for good. She states that all these algorithms are bad and math is scary. Weapons of math destruction? Bah! She said bad people use neutral tools, and that we should stop using the tools because she can't think of a time when algorithms are good.
Wow! I'll admit, when I saw the thumbnail I was afraid this would be some sort of weird speech from the deep depths of Tumblr. Then I listened, no bias in between, and... she's right! Machines work like they're told to work. And who tells a machine how to work? Humans. So if that human doesn't think about their own prejudices, or about past prejudices as well... the machine won't, either. And the algorithm that human made will act according to how it was told to act.
Shoosh, I don't have time for a full response, but I felt the same. Now all I can say is: look at how YouTube is censoring people in Myanmar, and tell me that this lady doesn't have a good point about the situation with secretive algorithms.
DeusEx Anonymus, they didn't specifically call out YouTube. While specific algorithms were criticized, the point was that all algorithms can't be trusted simply because someone else made them. I pointed this out myself using YouTube as an example.
It's important to distinguish algorithms from models. The model holds the concept, and algorithms are part of models: models include entities and rules, while algorithms follow those rules.
While some of the terminology and presentation of Cathy's argument were polarizing for many viewers (as the like/dislike ratio suggests), her point about how data collection and interpretation can be skewed is valid and needs to be addressed. While I cannot say I am a good source to rely on, I am an informed person who understands the validity of correcting skewed data. I support her view on having more oversight over the input and output of these algorithms, since many companies will not change them unless someone can prove it is losing them money or forces them to correct it. As for the presentation itself, Ms. O'Neil should have gone for a more objective, more informative title to increase viewership and keep political and social bias from factoring in. This problem also arose from her dress and terminology causing viewers to ignore her point, which could have been remedied by more experience speaking publicly (but she is a scientist, not a speaker), so that she could properly trim and present her point without appearing nervous or biased.
Ms. O'Neil's work opened my mind to a deeper understanding of what's happening nowadays. Not the laughable game of getting ads on Facebook, but the seriously unethical world that we are feeding while using all the tools this new era gave us. It is so scary! I hope smart (and honest) people will soon find a better way of keeping human beings natural beings.
Algorithms will never account for the inherent randomness and non-intuitiveness of real-world scenarios. Hence, all those predictive Facebook ads might very well simply be mimicking parasites for advertisers.
9:28 Most people take US racism and notions of supremacy to mean neo-Nazism; most of the time it doesn't, as that is only the extreme end of the spectrum. The concepts refer to outcomes based on widespread attributions, both conscious and primarily unconscious, of how someone should be treated based on visual cues about their background: either a sense that someone deserves full service, unearned trust, and respectful, appreciative, friendly interaction, or, conversely, thoughts of unworthiness, excessive control over someone's behavior (mistrust), corner-cutting when serving someone, and/or dismissal of someone based on traditional visual stereotypes.
Anyone who found the Ted Talk to be good, definitely needs to go through her book, "Weapons of Math Destruction". A worthy, concise read that covers a lot of sectors where algorithms are biased.
Well, she spoke about obscure algorithms targeting voters back in 2015! (That video is still on YouTube.) Long before you-know-who was elected. So, basically, she called out Cambridge Analytica even before it was news (fake or not).
Possible victim of unethical insurance practices brought to light: My wife and I bought a convenience store on a corner of the city that had the second-highest crime rate. To be fair, the downtown area in general had statistically low crime. My wife and I applied for Obamacare because she was leaving the private sector to work with me in the store. Within 3 months of being on Obamacare, the cost of our coverage increased 68% for no reason. We couldn't afford to stay on this coverage, so she went back to work in the private sector. This left us with a gaping hole in our management. After running the business by myself for 18 months, I ran the business into the ground and my health started to suffer. The algorithm the insurance company used to assess our risk created the very problem it was designed to protect against.
Good work on data ethics! It plants thinking seeds for those who may not know but are holding on to false assumptions used to sustain confidence in the system. Like Oneself-Check-Ownself.
There's an assumption here that a single algorithm is being used, and that it's simple enough to be written on a piece of paper. Algorithms are very complex now, and cannot be evaluated by "eyeballing" them. Whoever developed them would need to disclose their full data set and their approaches. Maybe this should be done. But it's not as simple as sending an email.
The fact that we have to define success before making an algorithm doesn't stop it being scientific any more than having to define health makes medicine unscientific. If you disagree with the definition of success chosen by corporations and politicians the goal should be to change their minds, not to blame the processes. She's right that we need access to the formulas though
If the inputs that you are deeming to be "medicine" are not defined, then there is no guarantee that those inputs are scientific. Similarly, if the inputs are not defined in the algorithm, then the algorithm can not be considered scientific. Algorithms and medicine are not inherently scientific, empirical science should define their operation before we assert that they are conducive to it. Otherwise quacks and tech-mystics (like "quants" who manipulate the stock exchange) get a free ride.
The focus on definition actually shows how little contact she has with the field. Definitions are just consensus; your explanation of success, health, or whatever is what matters.
There is no general technique that fixes all such biases and misuses of data. The techniques are applied case by case, based on your own thinking and experience of yourself and the field.
"Nobody in NYC had access to that formula, no one understood it" If no one has access to it, it doesn't mean no one understood it. Anyway she's just covering only one machine learning's branch, the supervised one, while we actually have more learning algorithms like the unsupervised and reinforcement ones. Learning algorithms have been thought to simulate natural learning processes, a bias system is essential to learn. Humans do exactly the same, she's talking about humans training algorithms with wrong bias, at this point, wouldn't it be the same to let machines decide?
Sure, let's leave everything a broken mess because life isn't fair. Inefficient car? Life isn't fair. Progress to be made in chemistry? Life isn't fair, forget about it.
One big bias in how many likes and dislikes a video gets is that we can see the results before we vote, before we even watch the video. If I see a lot of dislikes, I'm biased before I even start watching, looking for flaws. And I react more harshly when I see something that is a flaw.
This is excellent. Many of the crappy decisions I see every day in business are based on terrible data. Not only that, the people who depend on and use these systems are basically lazy. Logic is about laziness: using a system instead of actually thinking. We didn't get this far using data or algorithms; we used our collective and individual brains and experience. People are lazy thinkers. They prefer systems because they don't want to begin to actually think. Logic is about excluding data, and not including data or information leads to all the really poor decision-making around me. It is also about the past: not the present, not the future. The past. And it is a terrible narrowing of human thought.
To be honest, the point she's making is correct. Bias in input caused by humans will cause bias in output. However, doesn't an algorithm that was biased in such a way correspond more to our human nature? The solutions it might come up with might not always be the best, but they are for sure more "human" in their nature.
It is not the problem of the algorithm; it is what you feed it that causes the problem. In my early days at university (Computer Programming 101, I guess it was), the instructor once said: this machine is a GIGO machine; if you feed it garbage in, it will produce garbage out. Of course, she does have a great point here; what I am stressing, however, is that our modern human societies are so ideological that we are not even able to recognise it anymore.
I WAS hoping she was going to go into why we need to stop doubting things like evolution and global warming because there's so much data to back them up.
With topics like global warming and evolution, it can be hard for some people to accept them because they can't directly see them happening. But I don't really get how this has anything to do with the video.
She is clearly pushing an agenda, trying to say big data can lie without showing actual proof of big data lying. Yes, you can UNDERSTAND the data wrong, but that is not the same as the data lying. She is clearly trying to explain how data is "bad" and should not be believed. This is the same argument people who do not believe in evolution or global warming use, and it is wrong in fully the same manner.
She brought up multiple examples where the algorithm was wrong and the data was "lying", e.g. the math teacher who got fired by the data even though the principal and the parents of the kids gave her high ratings and reviews.
The argument about filtering out women is wrong. Being a woman is just one indicator; you'd use other indicators (such as age and education) and identify what makes a woman successful compared to other women within the female population. This is data science 101.
Actually, no, it's a valid point about using flawed past data to determine future success. Because the company has a historical bias towards male employees, using their past success to seek future help would mean that the past data is biased against women, which would lead to an outcome greatly benefiting men. Using narrow determining factors as you suggest (only looking at the success within the female population in the company) also doesn't account for broader issues, because if men are more likely to be promoted and considered successful, the data pool for women will have a smaller rate of success, which means they'll have less valid data for identifying future successful candidates than if they were to do the same thing for male employees.
No, this is how you imagine data science works, but it doesn't. This is basic a priori and a posteriori analysis. You do not use an a priori indicator such as gender to determine success; you search within the population *of that indicator* for factors that lead to success. Let's say a company hired 600 men and 60 women, then 80% of the women got promoted and half the men got fired. Even though you historically hired 90% men, that number would be a complete bullshit model for success, and it simply doesn't work like that. You look within each separate group at what factors lead to success and look for those factors during the hiring process, regardless of gender, because gender only separates the groups; it doesn't define fitness for the job.
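The contrast being argued over in this thread can be sketched in a few lines. The numbers (600 men, 60 women, 80% of women promoted, half the men fired) are the hypothetical ones from the comment above, and the variable names are mine; this is only a toy illustration of the two scoring approaches, not a real data science pipeline.

```python
# Hypothetical headcounts from the comment above: 600 men and 60 women hired;
# half of the men were fired, and 80% of the women were promoted.
hired = {"men": 600, "women": 60}
successful = {"men": 300, "women": 48}

# Naive model: score candidates by each group's share of historical hires.
total_hired = sum(hired.values())
naive_score = {g: hired[g] / total_hired for g in hired}

# Within-group model: score by the success *rate* inside each group instead.
within_score = {g: successful[g] / hired[g] for g in hired}

print(naive_score["men"] > naive_score["women"])    # True: men score higher purely on hiring share
print(within_score["women"] > within_score["men"])  # True: 0.8 vs 0.5 success rate
```

The two models rank the groups in opposite orders on the same data, which is exactly the disagreement between the two comments.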
Another point is the difference between weak AI and strong AI. This is important for evaluating the scope and influence of the kinds of algorithms O'Neil addresses.
A "Normative Data Inquiry panel "could be implemented and every algorithm will be vetted for racist, sexual, economic biases. The jury panel members could be the loophole for big data again. Dystopia...
It highlights some dangers talked about in other TED Talks, but the tone and "activism" irk me. Overall the message is correct: we should not let computers do dumb stuff. If we input garbage, the computer outputs garbage. But that is why smart people tackle this problem. If society reaches a point where it depends too much on stupid algorithms, lawmakers should intervene; but in that scenario we likely have even stupider lawmakers, as they are part of the society that allowed those stupid algorithms to happen. So we need to be careful, because if it happens we are screwed.
I agree with the argument and premise presented by the speaker, but her challenges with public speaking style left something to be desired and made this video tedious to watch. O'Neil is most likely a brilliant writer and researcher; she was probably just nervous. I love that her outfit matches her hair! But those shoes... If it weren't for the importance of the subject matter, this should have been a TEDx talk. This is a huge problem we need to solve, and the mathematical tools we use need to be open source in order to ensure they are used correctly by anyone, and not just by corporations and their "secret sauce."
I can't understand why this video is so disliked; this talk brings up a fascinating problem I had never considered, and it should be looked into on a wide scale. It's just a shame no solution was offered for how to improve the situation other than scrutiny of algorithms; maybe a way of selecting parameters from unbiased sources?
Disagreement is actually the best thing that can happen to a good idea, because it can show how good it really is. If you dismiss critics because of the skin or hair color, or the politics, of their originator, you are in the wrong. The only difference that should matter to you is the one between reason and unreason.
I feel like she focused more on feminism and politics, and made it seem like the white male is the only biased group of people. She focused on women and minorities... she focused too much on that with her point, and I felt like she actually got a little off topic and was herself BIASED.
My kids are little, so I tend to think in toddler movie terms... but this beautiful woman totally reminds me of Sadness from the "Inside Out" movie, who (is totally blue, love it!) ends up being the key to a healthy interpretation and processing of what happens in a person's life. She's spot on and a brilliant speaker. Thank you so much for another wonderful TED talk!
THE GREY MAN, the YouTube comment system is pretty much literally made to push people toward arguing with each other. Unfortunately, the negativity in the comments is simply a by-product of the faultiness of YouTube's own comment system.
To be fair, the point she is making is not wrong; it is just, in my opinion, poorly expressed due to a deliberate political agenda which clouds a solid argument. Badly designed algorithms will lead us to bad results (not the most insightful idea, but a logical one nonetheless). As a result, I think a more useful conclusion is that we must all work together to make sure the algorithms we use are properly designed, so that they do not repeat the mistakes of our past but rather improve in the future, defining with greater accuracy what we deem success and improving our methods, eventually achieving a methodology that decides based only on people's expected results and capabilities, instead of who they are.
Look at the dislikes... just because they don't like the way she looks or something. Her points are very true; stop stereotyping people and listen to what they are saying objectively.
Tony Pham, why is she wrong? Her point is quite simple: if you feed an algorithm biased data, or define its "success" criteria in a biased way, it is going to produce biased results. Seems pretty true to me.
It's unfortunate that you stereotype the people who have disliked this video as people who haven't watched the entire video and are making their judgement based on her looks.
1RV34, yeah, typical smartass comeback. The bullshit I have read in this comment section about "why she is wrong" has no end... "edgy feminist", "sjw" (things typically associated with people of her "look"), and even "she considers algorithms evil". I mean, come on, people... she is saying simple, sensible stuff, but even that can't get through the haters' thick heads.
Well, I'm sorry for the people who comment those things. However, my only issue with her talk is that she only talks about algorithms that are already considered deprecated and shouldn't be used: the algorithms that only use historical data, which are inaccurate for the exact reasons she's describing. I completely agree with the points about why those algorithms would be biased, but those aren't the algorithms we apply anymore; we go for learning algorithms that learn from their own actions, which actually fade out any bias and simply go for success in their utility function. Yes, we may set them to a predefined config to at least have a small head start, but all those values are quickly faded out for the values that actually matter. Please do not confuse me for some thick-headed hater; I have experience with the stuff she's talking about, including the knowledge that those algorithms are bad. But she has not explored, or is not showing any evidence of exploring, the other types of algorithms, and she uses her limited knowledge to stereotype all types and wants to put a stop to them, which would hurt our research. I just hope you know that in no way or sense have I called her any of the things you described, and I have no intention to; I'm simply reading into the content she's providing.
I do want to say, though, that what you said was also completely valid: "If you feed an algorithm biased data, or define its 'success' criteria in a biased way, it's going to produce biased results." But what I'm trying to explain is that in the world of this research, this has already been laid down as bad, and we're avoiding it by having the machines correct us.
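The garbage-in/garbage-out point argued back and forth in this thread can be made concrete with a toy sketch. The data, group labels, and function name here are entirely invented for illustration: the past "success" labels favor group "A" regardless of skill, and a model that scores candidates by historical label rates faithfully reproduces that bias.

```python
# Made-up training history: tuples of (group, skilled, labeled_successful).
# The historical labels favored group "A" regardless of actual skill.
history = [
    ("A", True, True), ("A", False, True), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

def historical_success_rate(group):
    """Score a candidate by their group's past 'success' label rate."""
    rows = [r for r in history if r[0] == group]
    return sum(1 for r in rows if r[2]) / len(rows)

print(historical_success_rate("A"))  # 1.0: the old bias passes straight through
print(historical_success_rate("B"))  # 0.0: skilled "B" candidates still score zero
```

Nothing in the scoring function is "unfair" on its face; the bias lives entirely in the labels it was given, which is exactly the point being debated.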
Last update: apparently my professor for this course DOES consider racial bias in AI a serious issue; however, the sources I provided were lacking. *Lesson learned: be specific, to the point of boilerplate explanations.
I read her book, Weapons of Math Destruction. Fairly good book with many convincing examples. But I feel this presentation didn't have much substance to it. It felt like just one grievance after the other without much supporting substantiation.
As a fellow data scientist, I find this whole talk stupid. It's the basics of stochastic dependencies that she pretends is a thing data scientists "forgot" to account for. It's their main job to handle these problems ffs.
"We are all biased" Ok I can agree with that. "We all are racist and bigoted" ummm what? Like how so and what evidence do you have for that? Towards the end she makes some decent ideas about how to think about algorithms, but I feel like some of the political discussion was rammed into the speech and didn't really add much though.
To some extent this is true, she's likely referring to things like subconsciously being more on guard when talking to certain types of people, or assuming that other types must be a certain way, etc etc. If you live in a racist society, these sorts of things become part of your thought process without you realising it.
*Update from my last post: she gave me an extra 5 points, but still an F, because: "The paper does not provide a controversial area associated with AI and ethics based on literature" and "This paper is more philosophical than scientific." Even though the whole assignment was about discussing the controversial areas of research and providing examples to support MY THOUGHTS. It's ok, because this has just made me realize how some people don't consider racial bias a problem. 🤷🏾♂️ Still getting a good grade in the class overall. Good to know I shouldn't bring up race around this instructor.
What I found useful is just agreeing with a professor's wonky beliefs; they will give you a good grade most of the time. Learned this the hard way.
How can you expect to look at social phenomena without objective quantitative data? Oh, that's right, that data doesn't exist, but we have the next best thing. People who agree with this video never understood math in high school, but statisticians understand these biases. Big data is a rough approximation of reality, and yes, it can be skewed to make people react one way or the other in some circumstances. What people do with stats or algorithms is at the will of the influential (media outlets, some academics, and other profiteers) to impose on the ignorant, but I'd argue blind rejection of big data is also the wrong way to go about this.
Hello, elrat1234. I agree with most of that, but she does not say, at any point, that we should blindly reject big data. She says we should be aware that it is imperfect, gives her reason why, and gives examples of such imperfections. Much like you should trust a stop light, yet be aware that it won't stop other drivers from ignoring it. So... "be aware of imperfections", not "reject something because it has imperfections". It's also a decent case for making such algorithms public, at least in the case of sentencing hearings in the courtroom. But I suppose that's a separate issue, heheh.
Hey, Lydia C. I believe the channel Vox covered this topic, if you want to take a look. I know I'm pretty long-winded, so let me say that first. But! I may be using the wrong terminology, but I'm referring to something the presenter brought up: "parole hearings", maybe? It's when an inmate has a hearing on whether their sentence will be lessened due to good behavior and the like, or whether they can expect to serve their maximum sentence. (When convicted of a crime, the guilty party will have a minimum punishment and a maximum. These hearings determine which they can expect to actually serve, why, and what they can do better.) Sorry if you knew all that; just making sure we're on the same page. :) Anyway~ There are computer models that try to predict whether an inmate will be worth letting off the hook or not. The model gives them a score from 1 to 10, and the person or people at the hearing can use this score to guide their decision. But! No one is allowed to know how this model was created, or what factors are related to the score. Is your race a factor? What if you come from a poor, crime-stricken area? Are your father's crimes weighing against your score, even though you never met the guy? And how can you defend against that score if you do not know what you're arguing against? The companies which develop these algorithms will claim (and have claimed) that they cannot show anyone how their programs work, or else they could be copied and stolen by others. But then... how can the inmates properly defend themselves if they don't really know why they got a 9 out of 10? It also requires a bit of blind faith by the judges: that the program is *usually* correct, and so *probably* correct on any given case. And sure, the score is only one factor in deciding their fate... but how big a factor? Does it even have a place in such hearings? ... Anyway. That's the gist of it. :J I hope that helped!
No algorithm is perfect; therefore there is no harm in improving one. Statisticians can also think they know something when they don't. One cannot see one's own flaws and faults; you need other perspectives to change and better yourself. In order to stop this inequality (in your own words, done by media outlets and so on), what she said is true: it needs to be improved. No one said it should be rejected; she made the point that it should be improved. Your entire rant is therefore a waste of time. But I know you'll come back with a bunch of bs.
I can be truthful in this modern age: I am discriminated against because I am a male. My problem is we shouldn't look at gender. I am all for equal rights, but equality of opportunity, not of outcome.
12:37 Is this paraphrase correct: "Don't assume that you know the Truth; engage in, understand, and translate the pressing ethical discussions grounded in facts."? Can any living being know the Truth? Would it not be better to just honestly present your observations and hypotheses?
And the problem is that she complains about oversimplified metrics being an incomplete model of the world, yet her presuppositions are based on the very same oversimplification; things like the wage gap are based entirely on crude algorithms comparing apples to oranges. On policing, the crime stats are simply damning once you look at them; simply the prevalence of suspects willing to shoot back at the police at rates several times higher than other groups will skew the data in a way I'm sure she wouldn't accept. She's right in a way, but I'm sure her ilk simply wish to skew the data to paint an incomplete picture in the way they prefer.
Life isn't fair. Of course machine learning algorithms will be biased. These models "learn" the best parameters to map one vector to another. They go by the data provided alone; humans, however, are the ones who look at other features that exempt some groups from being matched in the input vector. Is it just a coincidence that men are incarcerated at a higher rate? It doesn't matter to these algorithms: just by the fact of being a man, the model will be more likely to match a man to a crime than a woman. Is that fair? No. Is it more likely to be correct? Yes. These algorithms are designed to guess correctly more often than not. And they work, pretty damn well.
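A toy sketch of that point (the groups, labels, and the 90/10 skew are all invented for illustration): a model that only counts historical base rates will guess correctly more often than not on data like its training data, while its only evidence is group membership.

```python
# Hypothetical sketch: a "predictor" that only learns base rates from
# historical data reproduces whatever imbalance that data contains.
# Groups "A"/"B", labels, and the 90/10 skew are all made up.
from collections import Counter

history = [("A", "flagged")] * 90 + [("A", "cleared")] * 10 \
        + [("B", "flagged")] * 10 + [("B", "cleared")] * 90

def train(records):
    """Learn P(flagged | group) by simple counting."""
    counts = {}
    for group, outcome in records:
        counts.setdefault(group, Counter())[outcome] += 1
    return {g: c["flagged"] / sum(c.values()) for g, c in counts.items()}

model = train(history)

def predict(group):
    # The model "works" on past data, yet its only evidence is
    # which group you belong to.
    return "flagged" if model[group] > 0.5 else "cleared"

print(model)         # {'A': 0.9, 'B': 0.1}
print(predict("A"))  # flagged
```

Accuracy on data that looks like the past and fairness to any individual are simply different questions; this model maximizes the first while ignoring the second.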
I don't really see her point... She is effectively stating that machine learning algorithms/big data are being badly implemented. An algorithm should only be used knowing what it can and can't do, how it was trained, and what it didn't see during training. But this seems like a very basic and obvious concept to me. In the cases where it was violated, I'm sure the people who did it would have done a better job if it were just that easy. But even if the algorithm wasn't "perfect", it probably returned better results than previous methods. It's not like human judgement isn't frequently biased too, so thinking that by using algorithms we can expect objectivity is naive, but that's OK; everything is a process and the quality of big data processing will only increase. This talk is way too aggressive for my taste ("weapons of math destruction", c'mon).
5:51 "Algorithms don't make things fair... They repeat our past practices, our patterns. They automate the status quo.
That would be great if we had a perfect world, but we don't."
The perfect summary of the talk
That's bullshit. There was an experiment with AlphaGo: the AI was set to compete with itself, and it started generating new patterns after some time. Her arguments are all over the place. Black-box algorithms should be banned, but algorithmic biases are removed from an AI automatically after some time as it learns from more datasets.
@@coolbuddyshivam I don't think you understand the kinds of algorithms she's talking about. AlphaGo is not remotely the same thing and you can not generalize patterns you observe in that very limited use case to algorithms in general.
I agree. If algorithms are just echoes (pun intended) of how we think, how does that represent the maker or user of them?
Thank you saved my 13 minutes
They're making me watch this for an ethics class thanks bro
Her message is, as she said, "a blind faith towards the algorithm only sustains the status quo". This is not promoting any feminist or sjw ideas, it's an inconvenient truth that we should wake up to.
Well it suggests that there's something wrong with the status quo and that's the core of SJW ideology.
@@sTL45oUw What you are referring to is called progress, and anyone who rejects the notion can go back to the Stone Age and be happy with it.
I just don't like how she used this to push her leftist agenda.
TH-cam has this problem. The algorithm recommends videos based on what you watch and keeps it that way. The site does not recommend videos outside of your views.
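A minimal sketch of the feedback loop this comment describes (the topic names, starting weights, and reinforcement step are all invented for illustration): a recommender that reweights toward whatever you already watched drifts toward a narrow feed.

```python
# Hypothetical rich-get-richer loop: each watch makes the watched topic
# more likely to be recommended next time. All numbers are made up.
import random

random.seed(0)
interests = {"politics": 1.0, "science": 1.0, "music": 1.0}

def recommend(weights):
    """Sample one topic in proportion to its current weight."""
    topics = list(weights)
    total = sum(weights.values())
    return random.choices(topics, [weights[t] / total for t in topics])[0]

for _ in range(200):
    topic = recommend(interests)
    interests[topic] += 0.5  # watching reinforces that topic's weight

# Whichever topic got an early lead now dominates the distribution.
print(interests)
```

The loop never recommends outside what it has already reinforced, which is exactly the "keeps it that way" behavior the comment complains about.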
Calling SJWs progress. Do you still think so? @@berettam92f
I worked as a graduate math teaching assistant at a very ethnically diverse university. I am not proud of it, but I should admit that I came to the US with an unfounded idea that some ethnic groups are not as good at math as other groups. However, what I found out through firsthand experience was that I was very wrong. I found only one thing correlated with improvement in math proficiency: how much you are willing to try to master the subject. It is the most valuable lesson for me as a potential math teacher. And I am also glad that I was able to be open and humble, and didn't perpetuate my unfounded idea through my mental algorithm for differentiating my students.
There is also the issue of who is enrolling in STEM classes, and the cost barrier to entry. Before I started spending my own money on the stuff I had been using for both school and personal projects, I didn't know it was that expensive, right down to the gaming laptop I was using vs. some crappy Dell ones other students used (and yes, some did have a desktop at home or in a dorm). Since we know there is a historical economic gap among races in the US, dating back to things like the Tulsa massacre by the KKK, we should know why there are fewer Black students in computer science with us. And I admit I was racist, because I didn't like that those people weren't grabbing the opportunities offered in this field and joining me in class.
So you basically admit you had a bias against certain races? Thank you for confirming that, I'll keep an eye out for racist teachers
I work with big data and the exact same algorithms she's talking about, and she's right. This isn't perfect data we work with, and the code isn't made by divine, objective superhumans; it's just us, a team of overworked, underpaid data scientists who are all flawed human beings. And honestly, with the kinds of ways you are pressured to simplify or fix code at the last moment, you don't always have the time, the computing power, or the straight-up permission of higher-ups to do the job 100% perfectly every time. Data can be a good tool, but it's as imperfect as anything out there. Math and statistics aren't gods; they are developed by humans and are less perfect than you might think. Also, math isn't objective or subjective; it isn't a complex living mind, and we are far from making it one. We can't even agree on what that mind should behave like, let alone how to make it. So just trust big data as much as you trust any salesman, for example.
yes because data is wrong.
lol
@@doubled6490 Underpaid? More like swimming in money... 100k at least
@@DinhoPilot More like overpaid.
"Weapons of math destruction" is one of the most enlightening concepts I've heard in recent times. Thank you, Cathy O'Neil!
TH-cam is a great example of an algorithm getting it wrong.
Of course we need to be very careful with the videos the TH-cam algorithm shows us. I use Socrates' three questions before choosing one video:
Is this 100% true?
Is this good?
Is this useful?
Just to remind you that she holds a PhD in math from Harvard, and she is not biased or anti-science at all. She might sound like feminists and postmodernists, but don't forget she is talking about fairness. I'm not a mathematician, but I know there is always a level of subjectivity in modeling/algorithms, and it comes from variables used or omitted, methods used vs. alternative methods, etc.
When it comes to reality you should consult a physicist not a mathematician.
Sure, but she's just very bad at forming a strong and consistent argument. Therefore it's not clear what her message is, which is surprising since the "TED way of talking" is to have one very clearly shared message. In this talk, there's tons of half-thoughts and half-examples, and therefore her "big conclusions" appear on wobbly ground.
PhD in math = god and unquestionable overlord of our pitiful race we call humans
Double D, I'm way more iconoclastic, pessimistic, and critical than you ever were or will be. I mentioned her math degree to show that she is not a dummy philosopher or sociologist talking about math. She is an expert mathematician talking about math.
Except there is no math in the video.
Please stop being such a dummy.
surprised to see so many dislikes, and shocked to find out the reasons behind the dislikes.
facts are awful reason.
I don't think the algorithms or big data themselves did anything wrong; I think algorithms and big data just shouldn't have been used in those situations (the cases presented in the talk).
i just disliked it because she used this to push her leftist agenda. otherwise she made some good points.
@@spidermonkey8430 what are some ways algorithms create outcomes that the "right" is opposed to?
Fat Karen with blue hair trying to tell people what to do. Thumbs down
Watch until the end, there's a conclusion and recommendations... Algorithm audit, data integrity check, feedback, etc...
Thank you. As someone who works in a very human field where so-called "Value Added Measures" (VAM) are used to rate the vast majority of employees, I can corroborate that this practice can lead to some very, very unexpected and very, very unjust outcomes.
I think that people are starting to realize this now, but I'm not sure how ratings will be handled as we move forward -- especially when the rating systems are often encoded into state law (which means that they can be very hard to change, and can stick around long after their fairness has been called into question).
I once worked for a company taking calls from customers. They used to judge the customer's satisfaction by a follow-up call to the customer and an automated survey. Sure, customers who were angry were more likely to take that survey, but at least you were hearing it from them. Then they decided to turn this over to an algorithm that listened to the calls and scored the worker based on that. It was utter garbage, demonstrably inaccurate. A customer could profusely thank you for your help at the end of the call and this junk code would say they had a bad experience. We employees and local management had no access to the algorithm and very little data on what it was actually looking for; we were just supposed to trust the process. What it came up with factored into our performance scores and ultimately our raises. It wasn't long before I left.
My mind is blown. So glad I clicked and thanks for such an insight!!!!
It's funny how people are assuming she's against algorithms. She's not against algorithms, but against BLIND FAITH in them! They should be used only for assistance, not as the final word.
They make an excellent point! We put way too much faith in numbers we see.
Edit: A majority of dislikes: there's a misleading number right there. Did the majority of people watch the video and deem it bad? Or did they click the video with a preconception of the lecturer and mathematician, and then dump their hate on her in any way they could?
I disliked because I read your comment.
I disliked the video because it's just bla-bla-bla. Here's the plot: badly designed algorithms make the lives of some people worse and more unfair. Oh my.
Literally wrote a paper about ethics in AI and used this argument as the base for my research. Instructor gave me an F and said racial bias and discrimination in healthcare systems has nothing to do with AI 🤦🏾♂️.
Had to resubmit my paper, still waiting on the results. 🤷🏾♂️
Are you kidding me? This was 2 years ago. What happened after? I am fuming for past you.
Strange to see the number of dislikes.
Good insight on the digital world.
Probably people who are judging the content of her TED talk by her appearance. Pathetic
The era of blind faith in big data should end.
But it won't.
The stuff this speaker spoke about is the key to making big money these days, when making ends meet has become more and more difficult. Algorithms are tools to extract more money out of people and will always be shaped for this purpose. All aspects of the matter, socially and politically, can be broken down to this one goal.
Money.
If we don't change - why not picture our species ending up in some skynet-like crap?
Control over data is an ongoing war not just since yesterday.
Here in Germany I sometimes get the feeling that we have already lost this fight by simply obeying and continuing to walk our path, consuming all offers and gratefully giving away all of our personal data.
And yes - I use the internet, but hate social media.
I'm just curious where all these statistics are coming from if algorithms cannot be trusted
I don't take her message as "we cannot trust algorithms", but as an alert to the risks of someone not using them properly.
Her point is that you should look at statistics with the same skepticism and criticism as you would news or anything else. Being aware that statistics can be manipulated, or even be unintentionally biased because of how the data was collected, is an important critical thinking skill.
From the past. They are not made to predict the future; they are made to highlight peaks, lows, and trends. And nobody can make a sober decision based on that =/ Not even the algorithm.
Algorithms use statistics, or data in general, to predict stuff. Statistics and data are just as biased as algorithms, a problem which stems from their creators: us.
Giovane Guerreiro, but that's it, isn't it? If someone uses them wrong. Algorithms are a science, and if you use that science poorly, you get bad results. Math and science are tools that can be used for ill or for good. She states that all these algorithms are bad and math is scary. "Weapons of math destruction"? Bah! She said bad people use neutral tools, and we should stop using the tools because she can't think of a time when algorithms are good.
Wow, excellent and she is spot on!
Wow! I'll admit, when I saw the thumbnail I was afraid this would be some sort of weird speech from the deep depths of tumblr. Then I listened, no bias in between and... she's right! Machines work like they're told to work. And who tells a machine how to work? Humans. So if that human doesn't think about their own prejudices, or about past prejudices as well... the machine won't, either. And the algorithm that human made will act accordingly to how it was told to act.
Shoosh, like when Google uses its algorithms to silence political dissent?
Shoosh I don't have time for a full response but I felt the same. Now all I can say is look at how TH-cam is censoring people in Myanmar and tell me that this lady doesn't have a good point on the situation of secretive algorithms.
Jamison Leckenby, the idea that TED would intentionally post a video that criticized TH-cam is laughable. This was in no way about TH-cam's algorithms.
DeusEx Anonymus, they didn't specifically call out TH-cam. While specific algorithms were criticized, the point was that all algorithms can't be trusted simply because someone else made them. I pointed this out myself using TH-cam as an example.
Shoosh finally someone who focused on the point of the talk instead of stereotyping her... You are exactly right.
✨EXCELLENT.. .Video ! Thank you!!✨🌟💫✨🎉🎊🎉🎉🎊🎉🌹🌹🌹🌹🌹💗🌹🌹🌹🌹✨🌟💫✨🎊🎉🎊✨🌟💫✨
It's important to distinguish algorithms from models.
Models carry the concepts, and algorithms are part of models.
Models include entities and rules, while algorithms follow those rules.
While some of the terminology and presentation of Cathy's argument were polarizing for many viewers (as shown by the like/dislike ratio), her point about how data collection and interpretation can be skewed is valid and needs to be addressed. While I cannot say I am a good source to rely on, I am an informed person who understands the value of correcting skewed data. I support her view on having more oversight over the input-output of these algorithms, since many companies will not change them unless someone can prove that it is losing them money or force them to correct it.
As for the presentation itself, Ms. O'Neil should have gone for a more objective, more informative title to increase viewership and prevent political/social bias from being a factor. This problem also arose from her dress and terminology causing viewers to ignore her point, which could have been remedied by her having more experience speaking publicly (but she is a scientist, not a speaker), so that she could properly trim and present her point without appearing nervous or biased.
My favorite Ted video - so important to think about this in our present age and culture
Great talk, important critical thinking
Ms. O'Neil's work opened my mind to a deeper understanding of what's happening nowadays. Not the laughable game of getting ads on Facebook, but the seriously unethical world that we are feeding while using all the tools the new era gave us. It is sooooo scary! I hope smart (and honest) people will soon find a better way of keeping human beings natural beings.
Algorithms will never account for the inherent randomness and non-intuitiveness of real-world scenarios. Hence, all those predictive Facebook ads might very well be simply mimicking parasites for advertisers.
Couldn't agree more. "Algorithms don't have systems of appeal.", we need to change that.
9:28 Most people take US racism and thoughts of supremacy to mean neo-Nazism; most of the time it doesn't, as that is only the extreme end of the spectrum. The concepts refer to outcomes based on widespread, both conscious and primarily unconscious, propensities to treat people according to visual cues tied to their background: either a sense that someone deserves full service, unearned trust, and respectful, appreciative, friendly interaction, or, on the contrary, thoughts of unworthiness, seeking excessive control over someone's behavior (mistrust), corner-cutting when serving someone, and/or dismissal of someone based on traditional visual stereotypes.
Anyone who found the Ted Talk to be good, definitely needs to go through her book, "Weapons of Math Destruction". A worthy, concise read that covers a lot of sectors where algorithms are biased.
Well, she spoke about obscure algorithms targeting voters in 2015! (That video is still on TH-cam.) Long before you-know-who was elected. So, basically, she called out Cambridge Analytica even before it was (fake or not) news.
This is so amazing. There is so much need to scrutinize the results produced by AI algorithms.
Possible victim of unethical insurance practices, brought to light. My wife and I bought a convenience store on a corner of the city that had the second-highest crime rate. To be fair, the downtown area in general had consistently low crime. My wife and I applied for Obamacare because she was leaving the private sector to work with me in the store. Within 3 months of being on Obamacare, the cost of our coverage increased 68% for no reason. We couldn't afford to stay on this coverage, so she went back to work in the private sector. This left us with a gaping hole in our management. After running the business by myself for 18 months, I ran the business into the ground and my health started to suffer. The algorithm the insurance company used to assess our risk created the very problem it was designed to protect against.
Good work on data ethics! It plants thinking seeds for those who may not know but are holding on to false assumptions used to sustain confidence in the system. Like Oneself-Check-Ownself.
This is awesome . Something new to think about !
People who talk at TED should be obliged to provide proof... I have not seen any references that were truly reliable.
the best ted talk on data. truly inspiring
Wish she was one of my professors!
Great talk, thanks!
There's an assumption here that a single algorithm is being used, and that it's simple enough to be written on a piece of paper.
Algorithms are very complex now, and cannot be evaluated by "eyeballing" them. Whoever developed them would need to disclose their full data set and their approaches. Maybe this should be done. But it's not as simple as sending an email.
The fact that we have to define success before making an algorithm doesn't stop it being scientific any more than having to define health makes medicine unscientific. If you disagree with the definition of success chosen by corporations and politicians the goal should be to change their minds, not to blame the processes. She's right that we need access to the formulas though
If the inputs that you are deeming to be "medicine" are not defined, then there is no guarantee that those inputs are scientific. Similarly, if the inputs are not defined in the algorithm, then the algorithm can not be considered scientific. Algorithms and medicine are not inherently scientific, empirical science should define their operation before we assert that they are conducive to it. Otherwise quacks and tech-mystics (like "quants" who manipulate the stock exchange) get a free ride.
The focus on definition actually shows how little contact she has with the field. Definitions are just consensus; your explanations for success, health, or whatever are what matter.
I do programming at university, and all the points she brings up are very common, but we're also taught ways around them...
There is no general technique that fixes all such biases and misuses of data. The techniques are applied in set ways based on what you think of, and have experienced of, yourself and the field.
"Nobody in NYC had access to that formula; no one understood it." If no one has access to it, that doesn't mean no one understood it.
Anyway, she's only covering one branch of machine learning, the supervised one, while we actually have more learning algorithms, like unsupervised and reinforcement learning.
Learning algorithms were designed to simulate natural learning processes, and a bias system is essential to learning. Humans do exactly the same. She's talking about humans training algorithms with the wrong biases; at that point, wouldn't it be the same to let machines decide?
Life isn't fair - should we expect algorithms to be? Those who use bad algorithms lose and those who don't win, thus, it pays to check the algorithms.
Sure, let's leave everything a broken mess because life isn't fair. Inefficient car? Life isn't fair. Progress to be made in chemistry? Life isn't fair, forget about it.
"im too lazy to make money" Lets pay you so you can live!
"my hand is broken" Lets do all of his work!
"I lost a war" Let's help him win a war!
One big bias in how many likes and dislikes a video gets is that we can see the results before we vote, before we even watch the video. If I see a lot of dislikes, I'm biased before I even start watching, looking for flaws. And I react more harshly when I see something that is a flaw.
This is excellent. Many of the crappy decisions I see every day in business are based on terrible data. Not only that, the people who depend on them are basically lazy. Logic is about laziness: using a system instead of actually thinking.
We didn't get this far using data or algorithms. We used our collective and individual brains and experience. People are lazy thinkers. They prefer systems because they don't want to begin to actually think.
Logic is about excluding data. Not including data or information leads to all the really poor decision-making around me. It is also about the past: not the present, not the future. The past. And it is a terrible narrowing of human thought.
please read the top comments of this video: they should change your mind a little.
To be honest, the point she's making is correct: bias in input caused by humans will cause bias in output. However, doesn't an algorithm biased in such a way correspond more to our human nature? The solutions it comes up with might not always be the best, but they are for sure more "human" in nature.
It is not the problem of the algorithm... it is what you feed it that is causing the problem. In my early days at university (Computer Programming 101, I guess it was), the instructor once said: this machine is a GIGO machine. If you feed it garbage in, it will produce garbage out.
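The instructor's GIGO point can be shown with the simplest possible "model" (all labels below are invented for illustration): the algorithm is identical in both runs, and only the quality of the input changes the output.

```python
# GIGO sketch: a majority-vote "classifier" trained on corrupted labels
# faithfully reproduces the garbage it was fed. All data is made up.
from collections import Counter

def train_majority(labels):
    """Learn the single most common label: the simplest possible model."""
    return Counter(labels).most_common(1)[0][0]

clean_labels = ["good"] * 80 + ["bad"] * 20
garbage_labels = ["bad"] * 80 + ["good"] * 20  # same data, labels corrupted

print(train_majority(clean_labels))    # good
print(train_majority(garbage_labels))  # bad: the model is fine, the input wasn't
```

The same logic scales up: a sophisticated learner trained on corrupted labels is just a more expensive GIGO machine.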
Of course, she does have a great point here, however what I am stressing here is that our modern human societies are so ideological that we are not even able to recognise it anymore
I WAS hoping she was going to go into why we need to stop doubting things like evolution and global warming because there's so much data to back them up.
That's actually a pretty profound corollary, you're not wrong.
With topics like global warming and evolution, it can be hard for some people to accept them because they can't directly see it happening. But i don't really get how this has anything to do with the video.
She is clearly pushing an agenda, trying to say big data can lie without showing actual proof of big data lying.
Yes, you can understand the data wrong, but that is not the same as the data lying. She is clearly trying to argue that data is "bad" and should not be believed. This is the same argument people who do not believe in evolution or global warming use, and it is wrong in exactly the same manner.
She brought up multiple examples where the algorithm was wrong and the data was "lying", e.g. the math teacher who got fired by the data, whereas the principal and the parents of the kids gave her high ratings/reviews.
But read her book; it gives more concrete examples.
The argument about filtering out women is wrong
Being a woman is just one indicator; you'd use other indicators (such as age and education) and identify what makes a woman successful compared to other women within the female population. This is data science 101.
Actually, no, it's a valid point about using flawed past data to determine future success. Because the company has a historical bias towards male employees, using their past success to guide future hiring would mean that the past data is biased against women, which would lead to an outcome greatly benefiting men. Using narrow determining factors as you suggest (only looking at success within the female population in the company) also doesn't account for broader issues, because if men are more likely to be promoted and considered successful, the data pool for women will show a smaller rate of success, which means there will be less valid data for identifying future successful female candidates than for male ones.
No, this is how you imagine data science works, but it doesn't. This is basic a priori vs. a posteriori analysis. You do not use an a priori indicator such as gender to determine success; you search within the population *of that indicator* for factors that lead to success.
Let's say that a company hired 600 men and 60 women. Then 80% of the women got promoted and half the men got fired. Even though you historically hired 90% men, that number would be a complete bullshit model for success; it simply doesn't work like that. You look within each separate group at what factors lead to success and look for those factors during the hiring process, regardless of gender, because gender only separates the groups; it doesn't define fitness for the job.
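Running the commenter's own toy numbers (600 men and 60 women hired, 80% of the women promoted, half of the men fired; the arithmetic below uses only those figures) shows how far the raw hiring share diverges from the within-group outcome rates:

```python
# Toy figures from the comment above; nothing here is real company data.
hired = {"men": 600, "women": 60}
promoted_women = int(0.8 * hired["women"])  # 48 women promoted
fired_men = hired["men"] // 2               # 300 men fired

# Naive a priori signal: share of past hires per group.
hiring_share = {g: hired[g] / sum(hired.values()) for g in hired}
print(hiring_share)  # men ~0.91, women ~0.09

# A posteriori, within-group outcome rates point the other way.
print(promoted_women / hired["women"])  # 0.8 promotion rate among women
print(fired_men / hired["men"])         # 0.5 firing rate among men
```

A model scoring candidates by the 91/9 hiring share would rank men higher, even though the within-group outcomes in this toy dataset favor the women who were hired.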
@@MaZe741 Amazon just proved you wrong; they shut down a recruiting algorithm because of its bias against women.
It prompts the question, but it does not beg the question
Another point is the difference between weak AI and strong AI. This is important in evaluating the scale and influence of the kinds of algorithms addressed by O'Neil.
What do you mean? could you elaborate?
At first I was like "you're wrong", but after some listening, she has a point.
We need to learn more about algorithms, they can do amazing things and also very dangerous things
Shouldn’t blind faith end everywhere?
I'll take your word that it should.
A "Normative Data Inquiry Panel" could be implemented, and every algorithm would be vetted for racist, sexist, and economic biases. But the panel members themselves could be the loophole for big data again. Dystopia...
this needs more views
I'd argue the algorithms themselves are objective, it's the objectives of the creators that are subjective.
I’ve now seen her in at least two documentaries! Persona on HBO Max being the latest.
It highlights some dangers talked about in other TED Talks, but the tone and "activism" irk me. Overall the message is correct: we should not let computers do dumb stuff. If we input garbage, the computer outputs garbage. But that is why smart people tackle this problem. If society reaches a point where it depends too much on stupid algorithms, lawmakers should intervene; but in that scenario we likely have even stupider lawmakers, as they are part of the society that allowed those stupid algorithms to happen. So we need to be careful, because if it happens we are screwed.
I agree with the argument and premise presented by the speaker, but her public speaking style left something to be desired and made this video tedious to watch. O'Neil is most likely a brilliant writer and researcher, but she was probably just nervous. I love that her outfit matches her hair! But those shoes... If it weren't for the importance of the subject matter, this should have been a TEDx talk. This is a huge problem we need to solve, and the mathematical tools we use need to be open source in order to make sure they are used correctly by anyone, and not just by corporations and their "secret sauce".
She's like a mathematical Immortal Technique
The most important thing I heard from O'Neil is that algorithms "include" opinions.
Maybe the algorithms should be analyzed by psychoanalysts... [?]
I can't understand why this video is so disliked; this talk brings up a fascinating problem I had never considered, and it should be looked into on a wide scale. It's just a shame no solution was offered for how to improve the situation, other than scrutiny of algorithms. Maybe a way of selecting parameters from unbiased sources?
Disagreement is actually the best thing that can happen to a good idea, because it can show how good it really is. If you dismiss critics because of the skin/hair color or politics of their originator, you are in the wrong. The only difference that should matter to you is the one between reason and unreason.
I feel like she focused more on feminism and politics, and made it seem like the white male is the only biased group of people. She focused on women and minorities... she focused too much on that in making her point, and I felt like she actually got a little off topic and was herself BIASED.
So what are key implications of a data-driven strategy for managing?
My kids are little, so I tend to think in toddler movie terms... but this beautiful woman totally reminds me of "sadness" from the "Inside Out" movie, who (is totally blue, love it!) ends up being the key to a healthy interpretation and processing of what happens in a persons' life. She's spot on and a brilliant speaker. Thank you so much for another wonderful TED talk!
Why would any sane person want to work at fox news.
Of course a blue-haired liberal would call out Fox News whenever they can.
Can we just forget political Ideologies for now and use the comments system properly instead of fighting like arrogant children?
THE GREY MAN NOPE... lol
I would like to refer you to 12:52. Also youtube comments tend to be cancer.
THE GREY MAN the TH-cam comment system is pretty much literally made to influence people toward arguing with eachother.
Unfortunately the negativity in the comments is simply a by-product of the faultiness of TH-cam's own comment system.
What political ideology do you see here?
How do you "properly" use a comment section? You don't
The marketing of algorithms isn't only intimidation. It's also predatory behavior by people and entities that want to take something from others.
This has aged unexpectedly well
To be fair, the point she is making is not wrong; it is just, in my opinion, poorly expressed due to its deliberate political agenda, which clouds a solid argument. Badly designed algorithms will lead us to bad results (not the most insightful idea, but a logical one nonetheless).
As a result, I think a more useful conclusion is that we must all work together to make sure that the algorithms we use are properly designed, so that we can make them not repeat the mistakes of our past but rather improve in the future, defining with greater accuracy what we deem success and improving our methods, eventually achieving a methodology that decides based only on the expected results and capabilities of people, instead of who they are.
Her book, Weapons of Math Destruction, came out in 2016. Strange, for such an important insight, that her talk came a year later...
BRAVOOOO!!!!!!!!! I cannot cheer enough this overdue discussion!
OMG, nice to meet u here Vera~
I am Coolio(Shiqi)
Look at the dislikes... just because they don't like the way she looks or something. Her points are very true; stop stereotyping people and listen to what they are saying objectively.
Tony Pham, why is she wrong? Her point is quite simple: if you feed an algorithm biased data, or define its "success" criteria in a biased way, it is going to produce biased results. Seems pretty true to me.
It's unfortunate that you stereotype the people that have disliked this video as people that haven't watched the entire video and are making their judgement her looks.
1RV34 yeah, typical smartass comeback. The bullshit I have read in this comment section about "why she is wrong" has no end... "edgy feminist", "sjw" (things typically associated with people of her "look"), and even "she considers algorithms evil". I mean, come on, people... she is saying simple, sensible stuff, but even that can't get through the haters' thick heads.
Well, I'm sorry for the people who comment those things. However, my only issue with her talk is that she only discusses algorithms that are already considered deprecated and shouldn't be used: algorithms that rely solely on historical data, which are inaccurate for the exact reasons she describes. I completely agree with her points about why those algorithms are biased, but those aren't the algorithms we apply anymore. We go for learning algorithms that learn from their own actions, which actually fade out any initial bias and simply pursue success in their utility function. Yes, we may give them a predefined configuration to have a small head start, but all those values are quickly replaced by the values that actually matter.
Please do not confuse me with some thick-headed hater. I have experience with the stuff she's talking about, including the knowledge that those algorithms are bad. But she has not explored, or is not showing any evidence of exploring, the other types of algorithms, and she uses her limited knowledge to stereotype all types and wants to put a stop to them, which would hurt our research.
I just hope you know that in no way or sense have I called her any of the things you described, and I have no intention to; I'm simply reading into the content she's providing.
I do want to say that what you said was also completely valid: "If you feed an algorithm biased data or define its 'success' criteria in a biased way, it's going to produce biased results."
But what I'm trying to explain is that, in the world of this research, this has already been laid down as bad practice, and we're avoiding it by having the machines correct us.
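The core claim this thread keeps circling — biased historical labels in, biased predictions out — can be shown with a toy sketch. Everything below is invented for illustration: the group names, the numbers, and the "model" are assumptions, not real data or a real production system.

```python
# Toy illustration (hypothetical data): a model that learns only from
# historical outcomes reproduces whatever bias those outcomes contain.
from collections import defaultdict

# Historical hiring records: (group, hired). Assume skill is identical
# across groups; only the past human decisions differ.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 40 + [("B", False)] * 60)

def train(records):
    """Estimate P(hired | group) straight from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

model = train(history)
# The "algorithm" now scores group B lower purely because past
# decisions did, not because of any difference in ability.
print(model)  # {'A': 0.8, 'B': 0.4}
```

This is the sense in which an algorithm trained purely on historical outcomes "automates the status quo": nothing in the code is prejudiced, but the learned scores faithfully encode the prejudice in the labels.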
Last update
Apparently my professor for this course DOES consider racial bias in AI a serious issue; however, the sources I provided were lacking.
*lesson learned
Be specific, to the point of boilerplate explanations.
I read her book, Weapons of Math Destruction. Fairly good book with many convincing examples.
But I feel this presentation didn't have much substance to it. It felt like just one grievance after the other without much supporting substantiation.
"We are all racist and bigoted in ways we wish we weren't, in ways we don't even know." LOL, I bet no one saw that coming based on the thumbnail alone...
haha, I love how feminists come talk to science channels and nobody believes them
8:44 look who's clapping and who isnt here...
As a fellow data scientist, I find this whole talk stupid. It's the basics of stochastic dependencies that she pretends is a thing data scientists "forgot" to account for. It's their main job to handle these problems ffs.
wow, that really explains why this talk is stupid, I guess she was foolishly trying to communicate with 'normal' human beings...
0:38 That's not how you use "begs the question". That's not what it means. Stop.
"We are all biased." OK, I can agree with that. "We all are racist and bigoted"? Umm, what? How so, and what evidence do you have for that? Towards the end she offers some decent ideas about how to think about algorithms, but I feel like some of the political discussion was rammed into the speech and didn't really add much.
To some extent this is true, she's likely referring to things like subconsciously being more on guard when talking to certain types of people, or assuming that other types must be a certain way, etc etc. If you live in a racist society, these sorts of things become part of your thought process without you realising it.
You can't just say "I think she's referring" when she hasn't referred to that subject once.
I am a data scientist, and part of my job is to make sure the dynamite I build can't be used for building a bomb.
I do algorithm audits
I've watched TED for a few years now. Goodbye, TED. I hope you redefine the meaning of "ideas worth spreading".
bye lol you won't be missed
*update from my last post
She gave me an extra 5 points but still an F because:
“ The paper does not provide a controversial area associated with AI and ethics based on literature”
and
“ This paper is more philosophical then scientific”
Even though the whole assignment was about discussing the controversial areas of research and providing examples to support MY THOUGHTS.
It’s ok because this has just made me realize how some people don’t consider racial bias a problem. 🤷🏾♂️ still getting a good grade in the class overall. Good to know I shouldn’t bring up race around this instructor.
Here are some peer-reviewed articles/studies. Unless you were assigned a specific text, I believe these would qualify as 'literature'.
What I found to be useful is just agreeing with a professor's wonky beliefs; they will give you a good grade most of the time. Learned this the hard way.
it affects someone's life, so don't do it carelessly.
How can you expect to look at social phenomena without objective quantitative data? Oh, that's right, that data doesn't exist, but we have the next best thing. People who agree with this video never understood math in high school, but statisticians understand these biases. Big data is a rough approximation of reality, and yes, it can be skewed to make people react one way or the other in some circumstances. What people do with stats or algorithms is at the will of the influential (media outlets, some academics, and other profiteers) to impose on the ignorant, but I'd argue blind rejection of big data is also the wrong way to go about this.
Hello, elrat1234.
I agree with most of that, but she does not say, at any point, that we should blindly reject big data. She says we should be aware that it is imperfect, gives her reason why, and gives examples of such imperfections. Much like you should trust a stop light, yet be aware that it won't stop other drivers from ignoring it. So... "be aware of imperfections", not "reject something because it has imperfections".
It's also a decent case for making such algorithms public, at least in the case of sentencing hearings in the courtroom. But I suppose that's a separate issue, heheh.
elrat1234 that's a complete strawman argument, she never said to completely reject large scale data analysis
Sentencing hearings .. how do you mean
Hey, Lydia C.
I believe the channel Vox covered this topic, if you want to take a look. I know I'm pretty long-winded, so let me say that first. But!
I may be using the wrong terminology, but I'm referring to something the presenter brought up. "Parole hearings", maybe? It's when an inmate has a hearing on whether their sentence will be shortened due to good behavior and the like, or whether they can expect to serve their maximum sentence. (When convicted of a crime, the guilty party will have a minimum punishment and a maximum. These hearings determine which they can expect to actually serve, why, and what they can do better.)
Sorry if you knew all that- just making sure we're on the same page. :) Anyway~
There are computer models that try to predict whether an inmate will be worth letting off the hook, or not. It gives them a score: 1 to 10. The people at the hearing can use this score to guide their decision. But! No one is allowed to know how the model was created, or what factors go into the score. Is your race a factor? What if you come from a poor, crime-stricken area? Are your father's crimes weighing against your score, even though you never met the guy? And how can you defend against that score, if you do not know what you're arguing against?
The companies that develop these algorithms have claimed that they cannot show anyone how their programs work, or else the programs could be copied and stolen by others. But then... how can the inmates properly defend themselves, if they don't really know why they got a 9 out of 10? It also requires a bit of blind faith from the judges: that the program is *usually* correct, and so *probably* correct in any given case.
And sure, the score is only one factor in deciding their fate... but how big a factor? Do they even have a place in such hearings?
... Anyway. That's the gist of it. :J I hope that helped!
No algorithm is perfect, so there is no harm in improving one. Statisticians can also think they know something when they don't; one cannot see one's own flaws and faults, and you need other perspectives to change and better yourself. To stop this inequality (in your own words, done by media outlets and so on), what she said is true: it needs to be improved. No one said it should be rejected; she made the point that it should be improved. Your entire rant is therefore a waste of time. But I know you'll come back with a bunch of bs.
Postmodernism 101, today on TED!
I, too, watch Jordan Peterson's lectures.
RonMD Do Dominance hi-arky
You don't even know what you're talking about.
+Brenda Rua
Clean your room.
Martín Varela 😂👌🏻
I can be truthful in this modern age: I am discriminated against because I am a male. My point is that we shouldn't look at gender at all. I am all for equal rights, but equality of opportunity, not of outcome.
When wrong data gets into a computer, including a government computer, you are done for, because you cannot correct it. That is not progress.
12:37
Is this paraphrase correct:
"Don't assume that you know the Truth;
Engage in, understand, and translate the pressing ethical discussions grounded in facts."
?
Can any living-being Know the Truth?
Will it not be better to just honestly present Your observations & hypothesis?
Started good, ended bullshit.
And the problem is that she complains about oversimplified metrics being an incomplete model of the world, yet her presuppositions rest on the very same oversimplification. Things like the wage gap are based entirely on crude calculations comparing apples to oranges. On policing, the crime stats are simply damning once you look at them; the mere prevalence of suspects willing to shoot back at the police, at rates several times higher than in other groups, will skew the data in a way I'm sure she wouldn't accept.
She's right in a way, but I'm sure her ilk simply wish to skew the data to paint an incomplete picture in the way they prefer.
Life isn't fair.
Of course machine learning algorithms will be biased. These models "learn" the best parameters to map one vector to another. They go by the data provided alone; humans, however, are the ones who look at other features that exempt some groups from being matched in the input vector.
Is it just a coincidence that men are incarcerated at a higher rate? It doesn't matter to these algorithms: just by the fact of being a man, the model will be more likely to match a man to a crime than a woman. Is that fair? No. Is it more likely to be correct? Yes. These algorithms are designed to guess correctly more often than not. And they work pretty damn well.
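The base-rate effect this comment describes can be sketched in a few lines. The probabilities below are invented purely for illustration (they are not real incarceration statistics), and the naive-odds scoring function is a hypothetical stand-in for whatever a real risk model computes:

```python
# Sketch (made-up numbers): how group base rates alone skew a model's
# output. If historical data shows one group offending more often, an
# accuracy-maximizing model will weight group membership as evidence,
# fair or not.

# Hypothetical prior P(offender | group) learned from training data.
base_rate = {"male": 0.09, "female": 0.01}

def risk_score(group, evidence_odds_ratio=1.0):
    """Naive odds update: prior odds from the group base rate,
    multiplied by the likelihood ratio of any other evidence."""
    p = base_rate[group]
    odds = (p / (1 - p)) * evidence_odds_ratio
    return odds / (1 + odds)

# Identical evidence, different scores -- purely from group membership.
print(risk_score("male"), risk_score("female"))
```

With no other evidence (`evidence_odds_ratio=1.0`) the score is just the group's base rate, which is exactly the point being argued: the model is "more likely to be correct" on average, while individuals are scored differently for nothing but their group.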
Look, I'm sure she understands life isn't fair more than you do.
sweet meats lol what
No, I loved what you said. But saying that top companies use this is so, so wrong...
I don't really see her point...
She is effectively stating that machine learning algorithms/big data usage is being badly implemented.
An algorithm should only be used with knowledge of what it can and can't do, how it was trained, and what it didn't see during training.
But this seems like a very basic and obvious concept to me. In the cases where it was violated, I'm sure the people responsible would have done a better job if it were that easy. And even if the algorithm wasn't "perfect", it probably returned better results than previous methods.
It's not like human judgement isn't frequently biased too, so expecting objectivity just because we use algorithms is naive. But that's OK; everything is progress, and the quality of big data processing will only increase.
This talk is way too aggressive for my taste ("weapons of math destruction", come on).
5:51 "Algorithms don't make things fair": then you've got a dodgy algorithm that requires correction.
is that not the whole point of this talk????????????
She's right.
Algorithms are often based on past statistics. But humans are not.
I saw the caption and picture and immediately knew this will be a fun ride.