Cyprus Center for Algorithmic Transparency
KeepAnI Proof of Concept Platform - An Introduction
Given the potential of algorithmic systems to influence the social world by amplifying or abating bias and potential discrimination, KeepA(n)I develops a structured, methodological approach to help developers and machine learning practitioners detect social bias in an application's input datasets and output data. In contrast to existing methods proposed by the Fair ML community, which evaluate group and individual fairness in datasets and algorithmic results in an attempt to reduce or mitigate the effect of bias, KeepA(n)I takes a different approach: it focuses on the expression of social stereotypes (e.g., based on gender, race or socio-economic status) and how these are reflected in biases shared by groups of people interacting with the system in different ways. KeepA(n)I is envisioned as a human-in-the-loop approach that methodically exposes social stereotypes, reducing their negative impact or even enhancing people's access to opportunities and resources when they interact with both high- and low-risk AI applications. By engaging humans in the evaluation process (i.e., through crowdsourcing), KeepA(n)I will achieve a diverse (e.g., across cultures) and dynamic (e.g., across contexts and time) evaluation of social norms, according to the objective of the evaluated application.
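The description above outlines a crowdsourced, human-in-the-loop audit of input and output data. As a rough illustration only, the Python sketch below shows one way such crowd judgments might be aggregated across a culturally diverse rater pool before an item is flagged for human review; the names (CrowdJudgment, audit_items), the rater thresholds, and the 0.7 flagging rate are hypothetical assumptions, not part of the actual KeepA(n)I platform.

# Minimal sketch of a human-in-the-loop stereotype audit.
# All names and thresholds here are illustrative assumptions,
# not taken from the KeepA(n)I platform itself.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CrowdJudgment:
    item_id: str          # an input record or a model output under review
    rater_culture: str    # used to keep the rater pool culturally diverse
    stereotyped: bool     # did this rater perceive a social stereotype?

def audit_items(judgments, min_raters=5, flag_threshold=0.7):
    """Flag items that a large share of a diverse rater pool marks as
    expressing a social stereotype (e.g., gender, race, socio-economic)."""
    votes = defaultdict(list)
    for j in judgments:
        votes[j.item_id].append(j)
    flagged = {}
    for item_id, js in votes.items():
        cultures = {j.rater_culture for j in js}
        if len(js) < min_raters or len(cultures) < 2:
            continue  # not enough, or not diverse enough, evidence yet
        rate = sum(j.stereotyped for j in js) / len(js)
        if rate >= flag_threshold:
            flagged[item_id] = rate
    return flagged  # items routed to a human reviewer, not auto-removed

An item flagged by this kind of aggregation would go to a human reviewer rather than being acted on automatically, matching the human-in-the-loop framing of the project.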
30 views

Videos

CyCAT - Educational Interventions
20 views · 2 years ago
CyCAT - Data Bias
139 views · 2 years ago
CyCAT Closing Event
34 views · 2 years ago
Contesting Algorithms: Restoring the public interest in content filtering by artificial intelligence
22 views · 2 years ago
Abstract: In recent years, artificial intelligence (AI) has been deployed by online platforms to filter allegedly illegal expressions. AI filters carry censorial power which could bypass traditional checks and balances secured by law. This dramatic shift in norm setting and law enforcement is potentially game-changing for democracy. The opaque and dynamic nature of AI-based filters creates barr...
CyCAT - Computer Vision
110 views · 2 years ago
Enabling participatory and procedurally-fair AI
142 views · 3 years ago
Abstract: As artificial intelligence (AI) is transforming work and society, it is ever more important to ensure that AI systems are fair and trustworthy and support critical values and priorities in organizations and communities. In this talk, I will first present empirical findings on people’s trust and fairness around algorithms that make managerial and resource allocation decisions. My resea...
Agency, Accounts and Accountability: Putting the Social into Explainable AI
15 views · 3 years ago
Abstract: I will examine what delivering explainable AI (xAI) means in practice, particularly in contexts that involve formal or informal and ad-hoc collaboration where agency and accountability in decision-making are achieved and sustained socially and interactionally. As an illustration, I will use an example from an earlier study of collaborative decision-making in screening mammography of h...
Bias and Transparency of Web Search Engines - Dr. Frank Hopfgartner & Dr. Jo Bates, University of Sheffield
313 views · 3 years ago
Keynote: Casey Dugan, IBM
44 views · 3 years ago
A practical session with Casey Dugan, IBM
36 views · 3 years ago
Perceptions of Young Developers on Algorithmic FATE - Dr. Styliani Kleanthous, OUC
33 views · 3 years ago
End-Users' Perception of Algorithmic Fairness - Prof. Tsvika Kuflik, University of Haifa
39 views · 3 years ago
Bias in Data and Algorithmic Systems: Problems, Solutions and Stakeholders - Prof. Jahna Otterbacher, OUC
46 views · 3 years ago
Diversity, Bias and Related Issues - Prof. Fausto Giunchiglia, University of Trento
75 views · 3 years ago
AI Ethics - Prof. Michael Rovatsos, University of Edinburgh
135 views · 3 years ago
Keynote Speech - Prof. Joanna Bryson, Hertie School of Governance, Berlin
58 views · 3 years ago
Bias in Human-in-the-loop Artificial Intelligence
53 views · 3 years ago
CoronaSurveys: Using Indirect Reporting to Estimate the Incidence of Epidemics
43 views · 3 years ago
Educating the Educators
4 views · 3 years ago
Educating the Developers
7 views · 3 years ago
Developing an Algorithmic WatchDog
5 views · 3 years ago
FATE in Data Science
13 views · 3 years ago
Explainability, Fairness and between
14 views · 3 years ago
Bias in Algorithmic Systems: Problems, Solutions and Stakeholders
20 views · 3 years ago
Professor Bettina Berendt Seminar
92 views · 3 years ago
Towards fair, diversity-aware and unbiased data management with a focus on social networks
59 views · 3 years ago
Digital Language Divide
137 views · 4 years ago
Fake News Detection with Content and Social Information
268 views · 4 years ago
Profiling Humans, Profiling Bots, Profiling You
55 views · 4 years ago