AI Governance & Risk Management | Kartik Hosanagar | Talks at Google

  • Published Jul 28, 2019
  • Join Talks at Google for a conversation with Kartik Hosanagar, John C. Hower Professor of Technology and Digital Business at Wharton, about his new book A Human’s Guide to Machine Intelligence. The book is the result of years of Professor Hosanagar’s research, and explores the impact of algorithmic decisions on our personal and professional lives, and their unanticipated consequences. Kartik will explore how firms can make use of the tremendous opportunities and potential offered by machine learning and automated decision-making, while also doing their part to ensure algorithms are responsibly deployed.
    About Kartik:
    Kartik Hosanagar is the John C. Hower Professor of Technology and Digital Business at the University of Pennsylvania’s Wharton School of Business. Professor Hosanagar’s research focuses on the digital economy, in particular the impact of analytics and algorithms on consumers and society, Internet media, Internet marketing and e-commerce. Kartik has been recognized as one of the world’s top 40 business professors under 40.
    Link to book here: goo.gle/2YT9MY3

Comments • 15

  • @syarpieko1031 3 years ago +3

    Are there any best practices for setting up AI governance at an enterprise company?

  • @ASTVHOTSPOT 5 years ago

    👍

  • @tdreamgmail 5 years ago +2

    Do the Chinese followers know they're talking to a chatbot?

  • @Dr.Kananga 5 years ago

    My fear is that humans will become too dependent on AI and lose the ability to trust their own expertise.

  • @Autists-Guide 5 years ago +1

    Good talk. A larger framework for Information Governance already exists, of course... COBIT 2019.
    (I'm biased, of course, as I'm a COBIT trainer :)

  • @akivaprivate595 5 years ago

    5:44 How does the computer know the woman is African American?

    • @ankurloonia2816 5 years ago

      Race...physiognomy

    • @terrenceadams3608 4 days ago

      I imagine it's just another data point, whether explicit, like the "race" field on most government forms, or implicit, like home address (as in insurance algorithms).
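
      A minimal sketch of that proxy effect (an editorial illustration with synthetic data, not from the talk): even when the explicit "race" field is dropped, a model can often recover the attribute from a correlated feature such as ZIP code. The region counts and probabilities below are assumptions.

          import numpy as np
          from sklearn.linear_model import LogisticRegression
          from sklearn.model_selection import train_test_split

          rng = np.random.default_rng(0)
          n = 5000

          # Synthetic, segregated geography: regions 0-2 are mostly group A.
          zip_region = rng.integers(0, 10, n)            # 10 hypothetical ZIP regions
          p_group_a = np.where(zip_region < 3, 0.8, 0.2)
          group_a = rng.random(n) < p_group_a            # protected attribute, never used as a feature

          # The model sees only the ZIP region, yet still predicts the attribute well.
          X = zip_region.reshape(-1, 1).astype(float)
          X_tr, X_te, y_tr, y_te = train_test_split(X, group_a, random_state=0)

          clf = LogisticRegression().fit(X_tr, y_tr)
          print(f"attribute recovered from ZIP alone: {clf.score(X_te, y_te):.0%}")

      By construction, accuracy here lands around 80%: the ZIP region alone carries the signal, which is exactly the "implicit data point" the comment describes.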

  • @heetendrarathor3126 5 years ago +1

    What I think is that these biases actually reflect reality, and in the process of making AI unbiased you are actually making it idealized, unrealistic, and less useful for catching the real threat or the real candidate.

    • @remoneilwemogatosi544 5 years ago +1

      Hi,
      This is a bit circular. Do we not design objective tools to reduce bias, and thereby make them more useful? With your statement, you are actually saying that our biases (subjective) are sufficient; it would seem the only benefit is automating them. By making AI ideal (objective), we are doing the same thing that people in the scientific community do when they use statistical models with confidence intervals.
      What do you think?
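
      To make the confidence-interval analogy above concrete, a minimal worked example (assumed data, normal approximation): instead of a single point judgment, the analysis reports an interval that quantifies its own uncertainty.

          import numpy as np

          rng = np.random.default_rng(1)
          sample = rng.normal(loc=100, scale=15, size=200)   # hypothetical measurements

          mean = sample.mean()
          sem = sample.std(ddof=1) / np.sqrt(len(sample))    # standard error of the mean
          low, high = mean - 1.96 * sem, mean + 1.96 * sem   # 95% CI (z ~= 1.96)

          print(f"mean = {mean:.1f}, 95% CI = ({low:.1f}, {high:.1f})")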

    • @themeek351 5 years ago

      @remoneilwemogatosi544 Is this what we do? Reduce our bias in order to find truth, or at least a useful result? What is needed here is truth about what is useful bias and what is not. Maybe Google should start looking into the liberal progressive bias of their board of directors! The last I heard, from a recent congressional hearing, is that it is at 100%! Now that's not useful bias! There should always be a truthful and useful bias towards your goals, while maintaining a balanced spread of results across the human spectrum. God bless!

    • @remoneilwemogatosi544 5 years ago +1

      @themeek351 Hi,
      I think you and I might be on different points. Care to give examples? I think you might be misunderstanding my reply to Heetendra.
      To your first question, yes, that is what we aim to do with science. The tools used in pursuit of, say, the factors and drivers of obesity are meant to be reliable. This, in turn, helps us take action (depending on your philosophy of governing). But that is beside the point; I need to emphasize that yes, we are trying to eliminate human bias in decisions and, not least, to avoid reading causality into cross-sectional data. But, again, I will wait for your examples and guidance on your point.