“Predictive Privacy, or the Risk of Secondary Use of Trained ML Models”

  • Published on Oct 16, 2023
  • Big data and artificial intelligence (AI) pose a new challenge for data protection when these techniques are used to make predictions about individuals. This can affect both individuals who are not in the training data and individuals targeted through the secondary use of trained models. In this talk I will use the ethical notion of “predictive privacy” to argue that trained models are the biggest blind spot in current data protection regimes and other regulatory projects concerning AI. I argue that the mere possession of a trained model constitutes an enormous aggregation of informational power that should be the target of regulation even before the model is applied to concrete cases. This is because the model can be used and reused in different contexts with few legal or technical barriers, even as a result of theft or covert business activities. The current focus of data protection on the input stage distracts from the arguably much more serious data protection issue posed by trained models and, in practice, leads to a bureaucratic overload that harms the reputation of data protection by opening the door to its denigrating portrayal as an inhibitor of innovation.
    Speaker: Rainer Mühlhoff, University of Osnabrück, Germany
    Moderator: Klaus Staudacher, bidt, Germany
    The Digital Humanism Lecture Series is funded by the Vienna Business Agency, a service offered by the City of Vienna, in cooperation with the Faculty of Informatics of TU Wien and TU Wien's Center for Artificial Intelligence and Machine Learning (CAIML).
  • Science & Technology
