The Dangerous Math Used To Predict Criminals

  • Published on Sep 28, 2024
  • The criminal justice system is overburdened and expensive. What if we could harness advances in social science and math to predict which criminals are most likely to re-offend? What if we had a better way to sentence criminals efficiently and appropriately, for both criminals and society as a whole?
    That’s the idea behind risk assessment algorithms like COMPAS. And while the theory is excellent, we’ve hit a few stumbling blocks with accuracy and fairness. The data collection includes questions about an offender’s education, work history, family, friends, and attitudes toward society. We know that these elements correlate with anti-social behavior, so why can’t a complex algorithm using 137 different data points give us an accurate picture of who’s most dangerous?
    The problem might be that it’s actually too complex -- which is why random groups of internet volunteers yield almost identical predictive results when given only a few simple pieces of information. Researchers have also concluded that a handful of basic questions are as predictive as the black box algorithm that made the Supreme Court shrug.
    Is there a way to fine-tune these algorithms to be better than collective human judgment? Can math help to safeguard fairness in the sentencing process and improve outcomes in criminal justice? And if we did develop an accurate math-based model to predict recidivism, how ethical is it to blame current criminals for potential future crimes?
    Can human behavior become an equation?
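    To make the "handful of basic questions" point concrete, here is a minimal sketch of a two-feature model in the spirit of the research linked below. The features, data, and printed accuracy are synthetic and purely illustrative; this is not COMPAS or any study's actual code.
```python
# A hedged sketch: a two-feature logistic regression of the kind the
# simple-model studies describe. Everything here (features, data, labels)
# is synthetic and illustrative -- it is not COMPAS or any study's dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000
age = rng.integers(18, 70, size=n)
priors = rng.poisson(2, size=n)

# Synthetic "reoffended" labels: younger age and more priors raise the odds.
logit = -1.0 - 0.04 * (age - 18) + 0.35 * priors
reoffended = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, priors])
model = LogisticRegression().fit(X, reoffended)
print("accuracy on the synthetic data:", model.score(X, reoffended))
```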
    ** ADDITIONAL READING **
    Sample COMPAS Risk Assessment: www.documentcl...
    COMPAS-R Updated Risk Assessment: www.equivant.c...
    “The accuracy, fairness, and limits of predicting recidivism.” Julia Dressel. www.science.or...
    “Understanding risk assessment instruments in criminal justice,” Brookings Institution: www.brookings....
    “Machine Bias,” Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, ProPublica: www.propublica...
    “The limits of human predictions of recidivism,” Lin, Jung, Goel and Skeem: www.science.or...
    “Even Imperfect Algorithms Can Improve the Criminal Justice System,” New York Times: www.nytimes.co...
    Equivant’s response to criticism: www.equivant.c...
    “A Popular Algorithm Is No Better at Predicting Crimes Than Random People,” Ed Yong: www.theatlanti...
    “The Age of Secrecy and Unfairness in Recidivism Prediction,” Rudin, Wang, and Coker: hdsr.mitpress....
    “Practitioner’s Guide to COMPAS Core,” s3.documentclo...
    State v. Loomis summary: harvardlawrevi...
    ** LINKS **
    Vsauce2:
    TikTok: / vsaucetwo
    Twitter: / vsaucetwo
    Facebook: / vsaucetwo
    Talk Vsauce2 in The Create Unknown Discord: / discord
    Vsauce2 on Reddit: / vsauce2
    Hosted and Produced by Kevin Lieber
    Instagram: / kevlieber
    Twitter: / kevinlieber
    Podcast: / thecreateunknown
    Research and Writing by Matthew Tabor
    / tabortcu
    Editing by John Swan
    / @johnswanyt
    Police Sketches by Art Melt
    Twitter: / eeljammin
    IG: / jamstamp0
    Huge Thanks To Paula Lieber
    www.etsy.com/s...
    Vsauce's Curiosity Box: www.curiosityb...
    #education #vsauce #crime

Comments • 1K

  • @CorvieClearmoon
    @CorvieClearmoon 2 years ago +1107

    FYI - Noom was found to be practicing very shady business behind the scenes. They have been overcharging customers and refusing to allow them to cancel their services. I believe they are currently under investigation. From what I've come to learn, they are actually bragging about their mishandling of services and suggesting other companies do the same. I'd do some digging to see what you can find before accepting their promotions again.

    • @moizkhokhar815
      @moizkhokhar815 2 years ago +46

      Yes, more people should read this comment.

    • @ashlinberman4534
      @ashlinberman4534 2 years ago +49

      I think they made canceling subscriptions easy/easier after complaints, but I couldn't find anything about the overcharging being solved. However, they did get a class action lawsuit over it, and all reports seem to be from 2+ years ago, so it might be solved as well. Not accountable on either front btw, this is just from basic research, so you might be able to find better evidence against what I said.

    • @Games_and_Music
      @Games_and_Music 2 years ago +22

      I thought that part of the video really displayed the criminal maths.

    • @thelistener1268
      @thelistener1268 2 years ago +2

      Thanks for the tip!

    • @that_rhobot
      @that_rhobot 2 years ago +39

      I've seen accounts from people who tried Noom's mental health app saying it pretty much always just recommends dieting, regardless of what you are dealing with. Like, there were people battling anorexia who were being told they were eating too much.

  • @DogKacique
    @DogKacique 2 years ago +537

    That company made a buzzfeed quiz and is selling it like it was an advanced minority report AI

    • @bow_and_arrow
      @bow_and_arrow 2 years ago +10

      FRRRRR

    • @joshyoung1440
      @joshyoung1440 1 year ago

      ​@@bow_and_arrow for real real real real real

    • @joshyoung1440
      @joshyoung1440 1 year ago

      ​@@bow_and_arrow oh sorry FOR REAL REAL REAL REAL REAL

    • @avakining
      @avakining 7 months ago +1

      Plus like… the whole point of Minority Report was that those algorithms don’t work anyway

  • @SupaKoopaTroopa64
    @SupaKoopaTroopa64 2 years ago +75

    Using AI to predict future crimes is an extremely dangerous idea. If you give an AI access to currently available crime data, and optimize it to predict future crimes, what you are actually doing is asking it to predict who the criminal justice system (with all of its biases) will find guilty of a future crime. It gets even worse when you feed the AI data from crimes that it predicted. The AI can now learn from its past actions, and further 'fine tune' its predictions by looking at what traits are more likely to lead to a guilty conviction, focusing its predictions on people with those traits. This leads to a feedback loop where the AI discovers a bias in the justice system and exploits that bias to improve its "accuracy," leading to the generation of more crime data which further reinforces its biases (a toy simulation after this thread sketches the loop).
    Don't even get me started on what could happen if we use an AI powerful enough to realize that it can 'influence' its own training data.

    • @diceblock
      @diceblock 2 years ago +5

      That's alarming.

    • @buchelaruzit
      @buchelaruzit 2 years ago +4

      exactly. and it very quickly starts sounding like eugenics.
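The feedback loop described at the top of this thread can be made concrete with a toy simulation. This is a hypothetical sketch, not COMPAS or any real system: both groups share the same true reoffense rate, but the more heavily watched group produces more convictions, the "model" learns that group as a risk signal, and flagging it increases its scrutiny in the next round.
```python
# Toy sketch of the bias feedback loop (hypothetical numbers, not COMPAS):
# both groups reoffend at the SAME true rate, but group B's offenses are
# detected/convicted more often, so a "model" trained on conviction labels
# rates B as riskier, which earns B even more scrutiny next round.
import numpy as np

rng = np.random.default_rng(0)
TRUE_RATE = 0.3                     # identical for both groups
detect = {"A": 0.3, "B": 0.5}       # group B is watched more closely

for rnd in range(5):
    group = rng.choice(["A", "B"], size=10_000)
    reoffend = rng.random(10_000) < TRUE_RATE
    caught = reoffend & (rng.random(10_000) < np.vectorize(detect.get)(group))

    # the only label we can observe is "convicted", not "reoffended"
    learned_risk = {g: caught[group == g].mean() for g in ("A", "B")}
    print(f"round {rnd}: learned risk {learned_risk}")

    # the group the model flags as riskier gets more scrutiny next round
    flagged = max(learned_risk, key=learned_risk.get)
    detect[flagged] = min(0.95, detect[flagged] + 0.1)
```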

  • @TheVaryox
    @TheVaryox 2 years ago +299

    Company: "yea you should sentence him harder, and I won't tell you why I think that"
    Judges: "eh, good enough"
    Man, if trade secrets get prioritised over a citizen's right to a fair trial, seriously, wtf. This is trial by crystal ball.

    • @tweter2
      @tweter2 2 years ago +5

      Research shows sentences are longer in the afternoon or if it's nice weather outside.

    • @jeffreykirkley6475
      @jeffreykirkley6475 2 years ago +8

      Honestly, why do we have trade secrets as a protected thing? If no one can know the truth about it, then why should we even agree to its use/consumption?

    • @alperakyuz9702
      @alperakyuz9702 2 years ago +1

      @@jeffreykirkley6475 well, if you spend millions of dollars on developing an algorithm to gain an edge over the competition, would you publish the information freely so that your competition can imitate it for free?

    • @ipadair7345
      @ipadair7345 2 years ago +7

      @@alperakyuz9702 No, but the gov. (courts especially) shouldn't use an algorithm which nobody except the company knows the workings of.

    • @legendgames128
      @legendgames128 1 year ago

      @@ipadair7345 One which the company could use to suppress those who don't like them, perhaps. Or if they are working with the government and the media, we essentially get political opponents being sentenced. In this one, it merely predicted the rate of recidivism. In the one used to actually punish criminals, it could be used to punish political opponents while still being guarded as a trade secret.

  • @Codexionyx101
      @Codexionyx101 2 years ago +68

    You'd think that if we were going to recreate Minority Report, we'd at least try to do a good job at it.

    • @orlandomoreno6168
      @orlandomoreno6168 2 years ago +6

      This is more like Psycho Pass

    • @tweter2
      @tweter2 2 years ago +1

      There is a lot of "minority report" in the sex offender world. For example, in Minnesota every such felon is given a risk assessment at the end of their jail sentence to determine if they need to be civilly committed to treatment. Sex offender assessments basically determine the probability of reoffending in the next five years. If you are labeled as a higher risk, you are often given extra treatment / civil commitment time.

  • @notoriouswhitemoth
    @notoriouswhitemoth 2 years ago +93

    "determined by the strength of the item's relationship to person's offense recidivism"
    I was gonna say there was no way those coefficients weren't racist, and the results bear that out. It's almost like predictive algorithms are really good at perpetuating self-fulfilling prophecies.

    • @desfortune
      @desfortune 2 years ago +1

      AI and the like just act on the data you provide. If you provide data that contains racist biases, the program will use them. AI is not intelligent, it does what you teach it to do, so as long as faulty humans insert faulty data, most of the time without realizing it, you are not gonna solve anything lol

  • @joaquinBolu
    @joaquinBolu 2 years ago +97

    This brings back memories of the Psycho-Pass anime, where an AI computer decided who was a threat to society even before they committed a crime. The whole society was ruled by this tech without questioning it, even cops and law enforcers.

    • @feffy380
      @feffy380 2 years ago +11

      It wasn't even AI. It was brains of other psychopaths in jars

    • @aicy5170
      @aicy5170 2 years ago

      course?

    • @tweter2
      @tweter2 2 years ago +1

      Oh, by no means is this all "tech." I've done paper and pencil risk assessments that then get shared with courts / probation.

  • @imaperson1060
    @imaperson1060 2 years ago +41

    This is assuming that nobody lies and gives answers they know will lower their score.

    • @fetchstixRHD
      @fetchstixRHD 2 years ago +11

      Quite possibly, that may be why the girl got a higher score than the guy. The guy probably knew to think ahead about how the questions might be taken, whereas the girl probably wasn't being calculating at all.

    • @jmodified
      @jmodified 2 years ago +2

      Hmmm, if I have no financial concerns, is it because I'm independently wealthy or because I know I can always rob a convenience store if I need cash? Probably best to answer "sometimes" on that one.

  • @chestercs111
    @chestercs111 2 years ago +19

    This reminds me of the study James Fallon did on psychopaths. He would analyze brain scans of known psychopaths and found that all their brains showed similar results. Then, during brain scan testing he did on himself and his family, he found that one of the brains matched that of a psychopath. He thought someone at work was playing a joke on him, but it turned out to be his own brain. This shows that it's more than just how your brain is that makes you a psychopath. However, those who match the brain scans may be more susceptible to becoming a psychopath if certain conditions are met.

  • @andrasfogarasi5014
    @andrasfogarasi5014 2 years ago +14

    If you want to develop an effective method for measuring recidivism, here's the plan:
    Step 1: Make a law requiring all people to buy liability crime insurance. Under the terms of this type of insurance, whenever the client commits a crime, the insurance agency pays for the damages caused and the client is charged nothing.
    Step 2: Wait 2 months.
    Step 3: Base prison sentences on people's insurance rates.
    Insurance companies under this system have a financial incentive to create an effective system for predicting future criminal behaviour and base their liability crime insurance rates on that. As such, the insurance rates become accurate predictors of future criminality. Of course you could argue that this system will cause repeat offenders to have such incredibly high insurance rates that they have no reasonable way of ever paying them, thus making them unable to buy liability crime insurance. Fret not, for I have a solution. Execution. This will drop their rates to precisely $0.
    Thank you for listening to my very own dystopia concept presentation.

    • @michaellautermilch9185
      @michaellautermilch9185 2 years ago

      You're just shifting who builds the models and asking insurance companies to be the ones building the black boxes. Yes, insurance companies do have people who build black box algorithms too, but they will basically do the same thing.
      Actually your plan has a massive flaw: insurance premiums don't only include measures of risk, but also multiple other business considerations. They want to sell more policies after all! So now you would have the justice system being partially influenced by some massive insurance company's 5 year growth plan. Not a great idea.

  • @ElNerdoLoco
    @ElNerdoLoco 2 years ago +96

    I'd scrawl, "I plead the 5th" over every question. I mean, you have the right to not be a character witness against yourself too, and how can you tell if you're incriminating yourself with some of these questions? Hell, just participating while black seemed incriminating in one example.

    • @o0Donuts0o
      @o0Donuts0o 2 years ago +4

      Not that I agree with software being used to predict potential future criminal activity, but isn't this software used after the verdict and only used to determine the sentencing term?

    • @pXnTilde
      @pXnTilde 2 years ago +8

      Seriously, this test was used during sentencing, which means there was absolutely no obligation whatsoever for him to complete that test. Remember, too... _he is guilty of his crime_ The judge could have easily decided on the same exact sentence regardless of the algorithm. In fact, often judges have already decided the sentence before hearing the arguments at sentencing.

  • @awesomecoyote5534
    @awesomecoyote5534 2 years ago +515

    The worst kinds of judgements are judgements made by someone who can't be held accountable if they are wrong.
    Judgements that determine how many years someone spends in prison should not be decided by an unaccountable AI.

    • @Klayhamn
      @Klayhamn 2 years ago +24

      humans that determine it aren't accountable either.
      in fact, the people who design the systems or manage the systems of law and order rarely if ever (and most likely - never) are held accountable for the decision they made
      so, at least based on this fact, it makes no difference if we use AI or not
      instead, what does matter is how good it is at predicting what it claims to predict

    • @prajwal9544
      @prajwal9544 2 years ago +10

      But algorithms can be changed easily and made better. A biased judge is worse

    • @soulsmanipulatedinc.1682
      @soulsmanipulatedinc.1682 2 years ago +6

      Should we desire to hold someone accountable?
      Sorry. It's just that, if we need to hold someone accountable for wrong judgment, I feel that we would have already failed.
      I mean, the option to hold someone accountable isn't a means to correct someone's judgment, but instead control a person's judgment. An algorithm always has perfectly controlled judgment, so, like...I don't see the problem here?
      I mean, yeah, this could be implemented horribly. However, the base idea would theoretically work.

    • @schmarcel4238
      @schmarcel4238 2 years ago +5

      If it is a machine learning algorithm, it can be punished for mistakes, thus be held accountable. And it will then try not to make the same mistakes again.

    • @soulsmanipulatedinc.1682
      @soulsmanipulatedinc.1682 2 years ago +3

      @@schmarcel4238 I thought about that as well, however, that may cause the program to develop harmful biases that we didn't intend.

  • @KenMathis1
    @KenMathis1 2 years ago +91

    The fundamental problem with this approach is that generalities can't be applied to an individual, and these automated approaches to crime prediction only rely on generalities. They are a codification into law of biases and stereotypes.

    • @mvmlego1212
      @mvmlego1212 2 years ago +9

      Well-said. Even if the predictions are statistically valid, they're not individually valid.

    • @luisheinle7071
      @luisheinle7071 1 year ago +1

      @@mvmlego1212 yes, it doesn't matter if they are statistically correct because it says nothing about the individual

  • @grapetoad6595
    @grapetoad6595 2 years ago +16

    The problem is the focus on punishment. I.e. we think you might commit crime again so you should be punished more for your potential future crime.
    If instead it was built on attempts to rehabilitate, and decided who was most in need of support to avoid recidivism, this would be so much better.
    The algorithms are a problem, but what's worse is why they are able to cause a problem in the first place.

    • @fetchstixRHD
      @fetchstixRHD 2 years ago +2

      Agreed. There's a whole separate discussion on whether punishment should be appropriate, but regardless getting punished for something you haven't done (or attempted to do) is pretty unfair.

    • @michaellautermilch9185
      @michaellautermilch9185 2 years ago

      No this is backwards. Punishment needs to be proportional to the crime, not to the likelihood of rehabilitation. With your mindset, someone could be rehabilitated for virtually anything, regardless of their actions, if they posed a future risk.

    • @jeremyfarley3872
      @jeremyfarley3872 8 months ago

      Then there's the difference between punishment and rehabilitation. They aren't the same thing. Are we sending someone to prison for ten years because we want to hurt them or because we want to teach them to be a productive member of society?

  • @DeJay7
    @DeJay7 2 years ago +21

    "Thanks for watching"
    No, thank you for making all of these videos, Kevin. I love every single one of your videos, everything you do is great.

  • @epiren
    @epiren 2 years ago +36

    I'm sad that you didn't cover retrophrenology, where you create bumps on people's heads until they acquire the personality traits you want. ;-)

    • @TomWonderful
      @TomWonderful 2 years ago +2

      GNU Terry Pratchett

    • @epiren
      @epiren 2 years ago +2

      @@TomWonderful I read it in a novel by Simon R. Green called "Tales From The Nightside"

    • @TomWonderful
      @TomWonderful 2 years ago +1

      @@epiren Oh cool. Pratchett did the same gag in 1993 with "Men At Arms."

  • @moizkhokhar815
    @moizkhokhar815 2 years ago +14

    Noom has been involved in some controversy recently with a lot of complaints of their free trials being misleading and subscriptions being very hard to cancel. And some of their diets were also triggering eating disorders apparently

  • @EnzoDraws
    @EnzoDraws 2 years ago +5

    Should've titled this video "The Immoral COMPAS"

  • @Lolstarwar
    @Lolstarwar 2 years ago +5

    I wanna read the poem

  • @GrimMeowning
    @GrimMeowning 2 years ago +4

    Or they could go the Scandinavian way, where prisoners are not punished (except for very serious crimes) but instead reintegrated into society, where they learn new skills, work with psychologists, and rethink their actions and position in life. That decreased recidivism to extremely low levels. Though, as long as there are private prisons in the USA, I doubt it will be possible.

    • @Epic-so3ek
      @Epic-so3ek 1 year ago

      That system won't work for people with ASPD, and honestly a number of other people. Many people need to be kept incarcerated until they're no longer dangerous, or, for people with ASPD, just forever. A focus on rehabilitation, or at least not intentionally torturing prisoners, would be a good start though.

  • @chankfreng
    @chankfreng 2 years ago +13

    If an algorithm told us that lighter sentencing leads to lower recidivism, would the courts treat those results the same way?

    • @buchelaruzit
      @buchelaruzit 2 years ago +2

      lol we all know the answer to that question

    • @Epic-so3ek
      @Epic-so3ek 1 year ago

      Not in the great US of A

  • @zncvmxbv4027
    @zncvmxbv4027 2 years ago +3

    It's a Myers-Briggs test, basically. But the only way to correctly do one of these is to have multiple people who know you take one about you and compare their results to yours. After correlating the data you get a much more accurate picture.

  • @RialVestro
    @RialVestro 2 years ago +8

    I once got detention for being racist against myself... cause I was speaking in an Irish accent on St. Patrick's Day and I'm actually part Irish...
    I also got a detention for being late to class when our Teacher was having a parent teacher meeting and locked us out of the classroom during that time but she apparently still took attendance and marked the entire class absent. Apparently that teacher is known for doing stuff like this because when I showed up for detention the lady who runs the detention room took one look at who issued the detention slip and said I could leave.
    And another time I got a detention because I had left school early to go to work and I had already cleared the absence with the school ahead of time but still ended up getting a detention anyway. Though after I explained that to the principal he threw the detention slip in the trash and told me to just ignore it if it happens again.

    • @o0Donuts0o
      @o0Donuts0o 2 years ago +2

      3 detentions. I predict 20 to life for you!

    • @truthboom
      @truthboom 2 years ago

      If the times you went to detention are recorded in some data, then you have to sue; otherwise it's meaningless

  • @williamn1055
    @williamn1055 2 years ago +8

    Oh my god they made me take this test without saying what it was. I'm so glad I assumed it was a test against me and answered whatever sounded best

    • @studentofsmith
      @studentofsmith 2 years ago +1

      You mean people might try to game the system by lying? I'm shocked, I tell you, shocked!

    • @buchelaruzit
      @buchelaruzit 2 years ago

      yeah just looking at these questions tells you that it can and will be used against you whenever convenient

  • @yinq5384
    @yinq5384 2 years ago +3

    The black box reminds me of Minority Report.

  • @bonbondurjdr6553
    @bonbondurjdr6553 2 years ago +19

    I love those videos man, very thought-provoking! Keep up the great work!

  • @SgtSupaman
    @SgtSupaman 2 years ago +3

    Statistics and algorithms can absolutely help predict what people will do but cannot predict what a *person* will do. No one should be trying to predict a single person's actions for anything more than theoretical interest, especially not in any capacity that will affect that person's life.

  • @distortedjams
    @distortedjams 2 years ago +2

    I only chose the bike stealer because they weren't caught, and the other one was in prison so couldn't commit more crimes.

  • @j.21
    @j.21 2 years ago +9

    .

  • @louistennent
    @louistennent 2 years ago +1

    This is literally the plot of Captain America: The Winter Soldier. Except of course, with massive aircraft with guns aimed at the high-risk people.

  • @Mysteroo
    @Mysteroo 2 years ago +2

    Those darn scooter thieves

  • @theomni1012
    @theomni1012 7 months ago

    It’s always been interesting how history can predict the future- but it still varies wildly.
    For example, a kid raised by abusive parents. You could say that they’ll be an abusive parent when they grow up because that’s how they were raised. You could also say that they’d grow up to be a very good parent because they never want to treat their child the way they were treated.

  • @bishoukun
    @bishoukun 2 years ago +2

    The algorithm: "Mental illness and learning differences are criminal indicators!"

  • @notme222
    @notme222 2 years ago +5

    Your question at the beginning isn't about who's more likely to commit a violent crime, or who's more likely to get a conviction in the next 8 years. It's "who's more likely to commit another crime?" And logic backs up the algorithm on that. The person with more years in front of them, who may believe they got away with their last crime, has a higher chance of doing something at some point. No context from that question was about setting parole.
    An algorithm that makes accurate predictions would still be wrong if the questions being answered aren't what the asker meant to ask.

  • @csolisr
    @csolisr 2 years ago +2

    One of the parameters in that COMPAS algorithm is basically the skin tone chart from that Family Guy skit, you know the one

  • @light-master
    @light-master 1 year ago +2

    Our societal laws are a collection of what society deems we are and aren't allowed to do. By definition they are a human judgement of human actions, and they are constantly changing based on how each new generation values and judges the actions of others. You cannot morally allow a computer to judge human actions any more than you can judge the actions of those who lived hundreds of years ago, who were governed by an entirely different set of laws.

  • @prnzssLuna
    @prnzssLuna 2 years ago +5

    Not gonna lie, this is genuinely terrifying. The other videos you've made so far mostly showed one-off mistakes that got rectified afterwards, but it doesn't look like anyone is willing to stop the use of unreliable software like this? Terrifying.

  • @LeetJose
    @LeetJose 2 years ago +1

    This reminds me of an older book my class read in middle school (2002?) about a computer that could predict crime. I think I remember the book describing a person being led to the room with the device so it could be destroyed. I actually don't remember it too well, and I haven't been able to find it.

  • @feedbackzaloop
    @feedbackzaloop 2 years ago +1

    Ok, but are there any recent studies showing recidivism is relevant to judgment at all? If we fail to assess it, we might as well abandon the idea altogether

  • @zeropoint70
    @zeropoint70 2 years ago +1

    that outro with the box tho 🔥🔥🔥

  • @militantpacifist4087
    @militantpacifist4087 2 years ago +1

    Reminds me of that one episode of Futurama.

  • @Rayzan1000
    @Rayzan1000 2 years ago +8

    I think you misinterpret the "How often do you worry about financial survival?" question. If you are often worried about your financial survival, then you "probably" have either a rather low wage or a fluctuating wage, making you more likely to commit a crime in order to pay your bills.

    • @sirswagabadha4896
      @sirswagabadha4896 2 years ago +6

      In that case, any psych undergrad could tell you how much the ambiguity of the question without any context invalidates its results. There's a whole history of keeping people in prison for being poor, they could have chosen something much better

    • @SeidCivic
      @SeidCivic 2 years ago +2

      Thus making the test/algorithm even more unreliable.

    • @Rayzan1000
      @Rayzan1000 2 years ago

      @@sirswagabadha4896 Well, most (if not all) questions can invalidate the result if taken out of context.

  • @NaudVanDalen
    @NaudVanDalen 1 year ago

    Kevin: writes inappropriate poem.
    Algorithm: "He's too dangerous to be left alive."

  • @jamesmiller4487
    @jamesmiller4487 2 years ago +4

    Excellent and thought-provoking video; clearly algorithms are not, and maybe never will be, ready to judge humans. The problem is that human judgement is just as flawed, varying from person to person, day to day, and situation to situation. You could have created a video on the fallibility of human judges and their inept and biased sentencing, and been equally right and thought-provoking.

  • @God-ld6ll
    @God-ld6ll 2 years ago +2

    get to know common law vs code law

    • @cpeterso
      @cpeterso 2 years ago +1

      I'm too busy with bird law

  • @user-nu8in3ey8c
    @user-nu8in3ey8c 2 years ago

    There was an interesting experiment where people were asked to predict or guess the weight of an animal. The crowd's average answer was more accurate than the answers of those in the crowd who were experts in raising/butchering those animals. The crowd's (Reddit and Twitter) average answer might be surprisingly accurate. This does not make using such an algorithm ethical, but if the question is regarding accuracy (rather than ethics and due process), the question is: do we have anything more accurate?

  • @maxwhite4732
    @maxwhite4732 2 years ago +4

    This is the equivalent of asking a fortune teller to predict the future and using it as evidence in court.

  • @itsMeKvman
    @itsMeKvman 1 year ago +2

    What is the intro song at 0:51, and is it copyrighted? Does anyone know?

  • @FreeDomSy-nk9ue
    @FreeDomSy-nk9ue 2 years ago +3

    I love your videos, that was awesome, I really enjoyed it.
    I can't believe COMPAS isn't talked about as much as it should be

  • @octoanimationsalt
    @octoanimationsalt 6 months ago

    2:18 'We can't just guess.'
    Average London traffic warden:

  • @resolecca
    @resolecca 2 years ago +2

    If those pictures in any way resemble the people he is referring to, then I know exactly why she was "determined" by the algorithm to be a bigger danger than that man. I'll give you one guess: it starts with the letter R, ends in M, and has six letters

    • @resolecca
      @resolecca 2 years ago

      Welcome to your dystopian/1984/black mirror future

  • @pkmntrainermark8881
    @pkmntrainermark8881 2 years ago +1

    I'm just gonna take a moment here to voice my appreciation for Kevin still making videos for us. Vsauce 1 and 3 never upload anything, so it's good to still have one around.

  • @Lazarosaliths
    @Lazarosaliths 2 years ago +1

    Amazing video Kevin!!!!
    That's so dystopian. One more step towards the future

  • @charlierogers5403
    @charlierogers5403 2 years ago +18

    And this is why algorithms are not good for everything! We shouldn't rely on them 100%

    • @timojissink4715
      @timojissink4715 2 years ago +2

      Algorithms can be amazing, but they need the right unbiased human input.

    • @luc_666jr5
      @luc_666jr5 2 years ago +2

      Tell YouTube that please

  • @ThisNameisalreadytaken.
    @ThisNameisalreadytaken. 2 years ago +1

    Have you seen Tom Cruise's movie Minority Report? If not, watch it; you will relate better to recidivism.

  • @fdsfjhjtjtea6497
    @fdsfjhjtjtea6497 2 years ago +1

    One step closer to psychopass let's goo

  • @daaawnzoom
    @daaawnzoom 2 years ago +1

    6:30 Remember everyone, if you saw someone stealing food, no you didn't.

  • @kevinlemon3467
    @kevinlemon3467 2 years ago

    I think this is an excellent example of how statistics and large numbers work. A single individual is extremely hard to predict, but large groups of people are relatively easy, meaning you can predict with a fair degree of accuracy the average behavior of a large group of people. We do this in business all the time to predict people's buying habits. I used to run a small business and I would use some rudimentary statistical analysis to predict how we'd do in any given year, set prices, manage inventory, etc., and I could generally predict to within a few percent the total profits we would have during a year based on numbers I would have at the beginning of the year. I couldn't tell you what a single customer would do, but I could predict what people would do as a whole months in advance if I had the right information.
    The Twitter info doesn't surprise me in the least bit. The number of people involved means that extremes are effectively controlled for and you'll probably get a fairly average group of people, which means the common wisdom about which people may commit crimes will be what shows up in the data. If that common wisdom is at all accurate, then the data from Twitter will be fairly accurate. If that common wisdom is inaccurate, then it will be inaccurate. It isn't surprising that Twitter ended up with similar numbers to the algorithm, since the algorithm seemed to take into account things that most people would consider (prior convictions, education, socio-economic background, etc.). I don't know how accurate they both were, but I'd be surprised if the Twitter poll was extremely different.
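The group-versus-individual point above can be shown with a few lines of simulation. This is a hypothetical sketch with made-up numbers, not data from the video or from COMPAS: with a 30% base rate, any single person is close to a coin flip, but the rate across a large group is predictable to within a few percent.
```python
# Toy illustration of "groups are predictable, individuals are not"
# (all numbers are invented for the sketch).
import numpy as np

rng = np.random.default_rng(1)
p = 0.30                                  # assumed per-person probability
population = rng.random(100_000) < p      # who actually "does it"

print("one person:", population[0])       # essentially a coin flip
for n in (10, 1_000, 100_000):
    sample = rng.choice(population, size=n, replace=False)
    print(f"group of {n:>7}: observed rate {sample.mean():.3f} vs expected {p:.3f}")
```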

  • @ashn7146
    @ashn7146 4 months ago

    You would think that the sentencing would take into account the severity of the crime first. 'Driving a car without permission' (provided he knew the person and has no criminal history outside of misdemeanor) does not warrant years in prison. Even if he didn't know them, 6 years is still too much.

  • @AsterSkotos
    @AsterSkotos 2 years ago +1

    1:41 finally he reached his puberty, his voice cracked

    • @joefmagat5586
      @joefmagat5586 2 years ago

      😂 Can't believe there aren't enough comments about this, had to scroll a bit to find yours

  • @lloydgush
    @lloydgush 2 years ago +1

    Anyone ever heard of the halting problem?
    Feels quite like it.

  • @EmperorShang
    @EmperorShang 1 year ago

    Thanks for being part of the problem

  • @vertigo747
    @vertigo747 2 years ago +5

    Haven't watched it yet but I know it's going to be good

    • @Nillowo
      @Nillowo 2 years ago +4

      That’s easy to say for all of Kevin’s videos ;)

  • @TyDreacon
    @TyDreacon 2 years ago

    One thing I'm surprised isn't brought up a lot is gaming the algorithm. When asking questions like, "do you find drug use harms other people other than the user?", it's pretty evident what the "correct" answer is. It's not like the algorithm can reach into your mind and see what you _actually_ believe. So you're pretty much free to answer per the algorithm's expectations to get the score you want.

  • @Crimsaur
    @Crimsaur 2 years ago

    I really liked that track that you used at the opening, could I ask what it was?

  • @Xaelum
    @Xaelum 2 years ago +1

    I feel like a system that somewhat accurately predicts crime could exist given enough time and development resources (probably not the ones currently being used), but we've got the tool backwards this whole time.
    What if, instead of convicting someone, we applied resources to HELP those with a higher chance of going back, so that they feel supported and avoid doing so?
    That way the moral dilemma would be almost gone, and you would be benefitting the people who feel its consequences the most.

    • @brandenjames2408
      @brandenjames2408 2 years ago

      I'm skeptical of our ability to make a good AI for something like this anytime soon, but regardless, your second point is very good and made me realize I wasn't looking at all the options: this technology could still be useful, even if flawed, if it's used for a less dire purpose like recommending help instead of punishment.

  • @Pinapplekun
    @Pinapplekun 2 years ago

    Sounds like an algorithm straight out of a dystopian novel

  • @arnauarnauarnau
    @arnauarnauarnau 2 years ago

    Your jumper is awesome. Where'd you buy it?

  • @accentplaya18
    @accentplaya18 10 months ago

    It sickens me to say that one of the problems with how the law attempts to "predict" future criminal behavior is that our criminal, legal, and penal system is designed to promote recidivism rather than act as an avenue for rehabilitation. Norway has one of the most humane prison systems on earth, and once an inmate's time is served, the criminals are far less likely to commit even a second offense. Here in the US we have a "for profit" prison system which is designed to reward the privately owned prisons and local government with a "Cash for Inmates" setup. In this way, it is both undesirable and unprofitable for prisons to provide inmates with rehabilitation programs to prevent future crimes. So for those raking in the cash for keeping prison cells full, maintaining and even increasing the likelihood of recidivism becomes of highest priority.

  • @tom05011996
    @tom05011996 2 years ago +4

    The COMPAS risk assessment would give a high score to anyone with ADHD!

    • @evil_bratwurst
      @evil_bratwurst 2 years ago +1

      I guess I'm gonna be a major criminal, then!

  • @bannanadude7303
    @bannanadude7303 2 years ago

    this reminds me of the anime Psycho-Pass, it's worth a watch.

  • @philipp594
    @philipp594 2 years ago +2

    I recommend Cronometer. It allows you to track macros and to track foods by scanning the labels. It's a pain to set up, but after that it's great. It can help you to optimize for more than just dumb calories.

    • @temp_unknown
      @temp_unknown 2 years ago +1

      Chronometer is great!

  • @lawlerzwtf
    @lawlerzwtf 2 years ago +5

    Psycho Pass.
    Or Minority Report, depending on your demographic.

  • @josephsalomone
    @josephsalomone 2 years ago

    This seems like a good random forest problem. We use a random forest in fraud prevention to determine if a never-before-seen device is likely to commit fraud, and we are right about 80% of the time in regards to false negatives, and very dang close to 100% correct for false positives. This is in reference to computers, phones, etc.
    Heck, I bet I could design a simple AI program that would work better than the COMPAS model, as the COMPAS model sounds like it is a coded model rather than a machine learning model.
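For readers curious what the random forest approach mentioned above looks like in code, here is a hedged sketch using scikit-learn on synthetic features. The feature names, data, and printed accuracy are invented for illustration; this is not the commenter's fraud system and not how COMPAS is built.
```python
# Toy random-forest classifier in the spirit of the comment above.
# Features and labels are synthetic stand-ins, not real fraud or
# recidivism data; the printed accuracy means nothing beyond the sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
X = np.column_stack([
    rng.integers(18, 70, size=n),    # an age-like feature
    rng.poisson(2, size=n),          # a prior-incidents-like feature
    rng.random(n),                   # some generic behavioral score
])
# Synthetic label loosely tied to the first two features.
logit = -1.5 - 0.03 * (X[:, 0] - 18) + 0.4 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```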

  • @andrewcraig1074
    @andrewcraig1074 2 years ago +1

    If such an algorithm should exist, it should be open source, and it should also predict the rate of recidivism after a given prison sentence. There are many people likely to be repeat offenders given a light sentence who would be much worse after prison, in which case why bother sending them there.

  • @bobbsurname3140
    @bobbsurname3140 2 years ago +1

    Looks like you're on a bit of a justice binge, yeah?
    I like it.

  • @polyblank73
    @polyblank73 2 years ago

    Actually that futurama episode with the oracle

  • @RedIceberg
    @RedIceberg 2 years ago +4

    I feel like the problem is that a young person, if taken to court, is much less likely to commit a crime. COMPAS probably doesn't take this into account, and therefore gave the teenager an inflated score.

  • @mathieuleader8601
    @mathieuleader8601 2 years ago

    reminds me of PKD's Minority Report

  • @BatOfTheDead
    @BatOfTheDead 2 years ago

    It’s literally Tim and Eric’s E-Trial sketch but real wtf

  • @AudioReplica2023
    @AudioReplica2023 2 years ago

    This reminds me of the movie "Minority Report"

  • @calemr
    @calemr 9 months ago

    "Do you live with friends" is probably just code for "Are you poor?", and I'm sure a lot of other questions where the purpose seems odd is that.

  • @europademon
    @europademon 2 years ago +4

    Seems like the Supreme court is not interested in the common person anymore.

    • @csolivais1979
      @csolivais1979 2 years ago +1

      Cynical me agrees with you; the other side hopes it's just old people not understanding technology. Not that it gives them an excuse, mind you.

  • @reddcube
    @reddcube 2 years ago +2

    Math can easily predict human behavior. IF a person's basic human needs are met, THEN that person is less likely to commit a crime.
    I wonder how recidivism would change if the government could provide healthcare, housing, utilities, or food.

    • @andrasfogarasi5014
      @andrasfogarasi5014 2 years ago

      Your solution is basically just the government monopolising theft. They extort the middle class through taxes so that the poor don't have to do it themselves. As much as I'm making it sound ridiculous, this is a fine solution. It would greatly increase the efficiency of theft, thus reducing the economic waste associated. The most important aspect of this efficiency improvement in my opinion would be the free time the poor would become able to spend on education and employment.

  • @TheFansOfFiction
    @TheFansOfFiction 2 years ago

    Well this is frightening

  • @TheRockingChar
    @TheRockingChar 2 years ago

    If that risk assessment could be turned into an online test that'd be so dope

  • @requiem7204
    @requiem7204 2 years ago +2

    The BLACK box you say? Maybe it is accurate

  • @NewfieCatgirl
    @NewfieCatgirl 2 years ago

    11:00 No, we cannot predict human behaviour, at least not using math equations, because we can learn. We cannot know how learning something will change our actions.

  • @MatiKase
    @MatiKase 2 years ago

    It feels like eureka or is it numb? Can it be turned on and off or do we feel like it? Might it wonder or might it be dead? Unity would wonder? Would it try to not analyze? funny ay how this works?

  • @purdysanchez
    @purdysanchez 2 years ago

    The biggest problem with using past crime as a reference is category error. Someone being arrested for drug possession is not the same as someone who jumped over a counter and beat someone up over a wrong order at McDonald's.

  • @F_L_U_X
    @F_L_U_X 2 years ago +1

    Yo Kevin, where'd you get that shirt? Looks nice.

  • @lastnamefirstname8655
    @lastnamefirstname8655 2 years ago +1

    scary system.

  • @vikka14
    @vikka14 2 years ago

    me thinking that Minority Report was a sci-fi dystopian movie; turns out it is an allegory for the current justice system

  • @ripno2672
    @ripno2672 2 years ago

    Judging people based on their head bumps and shape is a very eugenics sounding idea.

  • @bfgfanatic1747
    @bfgfanatic1747 2 years ago +1

    Ah, sweet. I always wanted to live in Psycho Pass.

  • @Fr0zenNightmare
    @Fr0zenNightmare 2 years ago +1

    Well, since it's a BLACKbox I'm sure it has all of the same thoughts than the other crime-friendly-persons...

  • @tuxedobob2
    @tuxedobob2 2 years ago

    Why the hell is the recidivism algorithm written in HTML with lorem ipsum text?

  • @TheWilliamSnfrd
    @TheWilliamSnfrd 2 years ago +1

    Watch Psycho-Pass if this video was interesting to you. Classic sci-fi that gets you thinking about all this stuff and the ramifications of it

  • @egal1780
    @egal1780 2 years ago

    11:00 statistics and math never lie!

  • @dominikbeitat4450
    @dominikbeitat4450 2 years ago +1

    This is gonna end up in someone figuring out the anti-life equation on their own.
    Hello Darkseid, my old friend.