USENIX Enigma 2018 - Differential Privacy at Scale: Uber and Berkeley Collaboration

  • Published Jan 17, 2025

Comments • 7

  • @jonabirdd • 5 years ago • +4

    Incredible work! Conceptually simple. It would have been helpful to show how the example queries given could have been exploited to reveal individual information.
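
The commenter's point can be made concrete with a small hypothetical sketch of a differencing attack, the classic way two innocent-looking aggregate queries leak one person's data. The schema, names, and values below are invented for illustration; they are not the talk's actual example queries.

```python
import sqlite3

# Hypothetical single-table schema, invented for illustration only;
# the talk's real queries are not reproduced here.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trips (rider TEXT, fare REAL)")
con.executemany(
    "INSERT INTO trips VALUES (?, ?)",
    [("alice", 12.0), ("bob", 30.0), ("carol", 8.5)],
)

# Two individually innocent-looking aggregate queries...
total_all = con.execute("SELECT SUM(fare) FROM trips").fetchone()[0]
total_without_bob = con.execute(
    "SELECT SUM(fare) FROM trips WHERE rider != 'bob'"
).fetchone()[0]

# ...whose difference reveals one individual's exact data.
print(total_all - total_without_bob)  # 30.0, bob's fare
```

Differential privacy defends against exactly this: with calibrated noise added to each answer, the difference of the two results no longer pins down bob's fare.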

  • @user-or7ji5hv8y • 6 years ago • +1

    Won't differential privacy affect the trained model in machine learning, given the noise added to SQL query outputs to obfuscate them? Or do the SQL queries need to run many times on the same database to get the expected value of the output, which, in turn, is used for training? If so, doesn't that add a lot of overhead? I'm probably not understanding this correctly. Thanks

    • @jonabirdd • 5 years ago • +1

      I think the idea is that the noise is small enough. You don't want the expected value, as that defeats the purpose: the expected value is sensitive to the addition of new data. You WANT the right amount of noise. (See the first sketch after this thread.)

    • @intoeleven • 4 years ago

      You can think of DP as a form of regularization on your model. (See the second sketch below.)
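
A minimal sketch of the point made in the replies above, using the standard Laplace mechanism (an assumption here; the talk's actual system is not reproduced): a single noisy answer is already useful on its own, but if repeated queries were free, averaging the answers would wash the noise out, which is exactly why DP systems account for a privacy budget across queries rather than letting you estimate the expected value.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_count(true_count: int, epsilon: float) -> float:
    """Laplace mechanism for a COUNT query (sensitivity 1)."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

true_count = 1000
eps = 0.1  # illustrative per-query budget

# A single noisy answer is accurate enough on its own: the error
# is small relative to the count itself.
print(noisy_count(true_count, eps))  # roughly 1000 +/- 10

# But if the system let you repeat the query freely, averaging
# would recover the true value; the total privacy loss grows with
# the number of queries (epsilon * n under basic composition).
answers = [noisy_count(true_count, eps) for _ in range(10_000)]
print(np.mean(answers))  # converges toward 1000
```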
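
And for the regularization remark, a sketch in the style of DP-SGD, a swapped-in training technique the talk itself does not cover (it is about SQL queries): each example's gradient is clipped, bounding its influence, and Gaussian noise is added, hiding it. All names and constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.0):
    """One DP-SGD-style step: clip each per-example gradient to
    L2 norm <= clip, average, and add noise scaled to the clip norm."""
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip / len(per_example_grads),
                       size=w.shape)
    return w - lr * (mean_grad + noise)

# Toy linear regression; per-example gradient of squared error.
X = rng.normal(size=(32, 3))
y = X @ np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
for _ in range(200):
    grads = [2.0 * (x @ w - t) * x for x, t in zip(X, y)]
    w = dp_sgd_step(w, grads)
print(w)  # near the true weights, blurred by clipping + noise
```

The clipping plus noise keeps the learned weights close to, but never exactly at, what unconstrained training would find, which is why its effect is often described as regularization.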

  • @chinmayeejoshi4592 • 3 years ago

    One big challenge this talk didn't identify is that using this framework requires you to assume your data is valid. That seems like a lot to assume of your data, given that the very first thing a product manager checks when they see their KPIs drop is whether the data is valid.
