Responsible AI in Government and Vendors Providing Tools Testing for Bias

  • May 2019
  • 10 pages
  • Report ID: 5779725
  • Format: PDF
This IDC Perspective highlights examples of how the federal government is weighing in on responsible and ethical AI, and it profiles several vendors offering tools and testing for bias in AI systems, including Accenture Federal Services, IBM, and SAS. There are several critical steps that agencies should take to ensure responsible and ethical AI. Many vendors are developing tools and techniques that test for and detect unintended consequences such as gender, racial, and ethnic bias in AI software. "Software that detects bias is a nascent field of research for many AI vendors, and there is no silver bullet that will automatically address bias and fairness issues," says Adelaide O'Brien, research director, IDC Government Insights. "Neither the machine nor your vendor can go it alone when guarding against bias -- solutions require agency vigilance."
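
To make concrete what "testing for bias" can mean in practice, the sketch below computes one widely used fairness metric, the disparate impact ratio, in plain Python. This is an illustration only: it is not drawn from the report and does not represent any specific vendor's tool. The function name, the sample data, and the use of the informal "four-fifths rule" threshold are assumptions for the example.

```python
# Illustrative sketch of one common bias check: the disparate impact ratio.
# Not drawn from this report or any specific vendor tool. Assumes binary
# model outcomes (1 = favorable decision) and a binary protected attribute.

def disparate_impact_ratio(outcomes, protected):
    """Ratio of favorable-outcome rates: protected group vs. reference group.

    outcomes:  list of 0/1 model decisions (1 = favorable, e.g. loan approved)
    protected: list of 0/1 flags (1 = member of the protected group)
    """
    prot = [o for o, p in zip(outcomes, protected) if p == 1]
    ref = [o for o, p in zip(outcomes, protected) if p == 0]
    if not prot or not ref:
        raise ValueError("Both groups must be non-empty.")
    ref_rate = sum(ref) / len(ref)
    if ref_rate == 0:
        raise ValueError("Reference group has no favorable outcomes.")
    prot_rate = sum(prot) / len(prot)
    return prot_rate / ref_rate

# Hypothetical sample data: a ratio below ~0.8 (the informal "four-fifths
# rule") is often treated as a red flag that warrants human review.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
protected = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
print(f"Disparate impact ratio: {disparate_impact_ratio(outcomes, protected):.2f}")
```

A single metric like this is exactly the kind of partial signal the report cautions about: it can flag a disparity, but it cannot by itself establish or rule out unfairness, which is why agency vigilance remains part of any solution.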