AI Robustness
- Published December 29, 2024
- Modern data analytic methods and tools, including artificial intelligence (AI) and machine learning (ML) classifiers, are revolutionizing prediction capabilities and automation through their capacity to analyze and classify data. To produce such results, these methods depend on correlations. However, an overreliance on correlations can lead to prediction bias and reduced confidence in AI outputs.
Data drift, concept drift, evolving edge cases, and emerging phenomena can undermine the correlations that AI classifiers rely on. As the Department of Defense (DoD) increases its use of AI classifiers and predictors, these issues may lead users to distrust results. To regain user trust, we need new methods for ongoing test and evaluation of AI and ML accuracy. The Carnegie Mellon University Software Engineering Institute (CMU SEI) has developed a new AI Robustness (AIR) tool that allows users to gauge AI and ML classifier performance with data-based confidence.
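The sketch below is not the AIR tool itself; it is a minimal illustration of the kind of ongoing accuracy and drift monitoring the abstract describes, under assumed synthetic data, an assumed logistic regression classifier, and arbitrary alert thresholds.

```python
# Minimal sketch (illustrative only, not the AIR tool): compare a classifier's
# accuracy on a recent batch against its baseline, and check one feature for
# distribution shift. Data, model, and thresholds are assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from scipy.stats import ks_2samp

# Train on "historical" data and record baseline accuracy.
X_hist, y_hist = make_classification(n_samples=2000, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)
baseline_acc = accuracy_score(y_hist, model.predict(X_hist))

# Simulate a "recent" labeled batch whose feature distribution has drifted.
X_new, y_new = make_classification(n_samples=500, n_features=10, shift=0.5, random_state=1)
recent_acc = accuracy_score(y_new, model.predict(X_new))

# Data-drift check on one feature via a two-sample Kolmogorov-Smirnov test.
stat, p_value = ks_2samp(X_hist[:, 0], X_new[:, 0])

print(f"baseline accuracy: {baseline_acc:.3f}, recent accuracy: {recent_acc:.3f}")
if recent_acc < baseline_acc - 0.05:  # 0.05 is an arbitrary tolerance
    print("accuracy degradation detected; retraining or recalibration may be needed")
if p_value < 0.01:  # 0.01 is an arbitrary significance level
    print(f"feature distribution shift detected (KS statistic={stat:.3f})")
```

In practice, monitoring of this kind would run continuously on incoming labeled data, which is what motivates tools such as AIR that report classifier performance with data-based confidence.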
Principal Investigator
Linda Parker Gates