Microsoft Reconsidering AI Ethics Review Plan
Microsoft, which has taken the ethical implications of AI so seriously that President Brad Smith met with Pope Francis in February to discuss how best to create responsible systems, is reconsidering a proposal to add AI ethics to its formal list of product audits.
In March, Microsoft executive vice president of AI and Research Harry Shum told the crowd at MIT Technology Review’s EmTech Digital Conference that the company would someday add AI ethics reviews to the standard checklist of audits products go through before release. However, a Microsoft spokesperson said in an interview that the plan was only one of “a number of options being discussed,” and that its implementation isn’t guaranteed. He said efforts are underway on an AI strategy that will shape operations companywide, not just at the product stage.
“Microsoft has implemented its internal facial recognition principles and is continuing work to operationalize its broader AI principles across the company,” the spokesperson said.
The adjustment comes at a time when executives across Silicon Valley are grappling with how best to ensure that the implicit biases of human programmers don’t make their way into machine learning and artificial intelligence systems. It also comes as the industry works to address issues where bias may have already crept in, including facial recognition systems that misidentify individuals with dark skin tones, autonomous vehicles whose detection systems fail to spot dark-skinned pedestrians more often than any other group, and voice recognition systems that struggle to understand non-native English speakers.
A roundup of AI ethics programs launched by Microsoft, Google, Amazon and Tesla shows a range of successes and failures over the last year that includes product overhauls designed to address biases and the rejection of research showing critical biases in AI architecture.
In addition to its internal facial recognition principles, Microsoft has several internal working groups dedicated to AI ethics, including Fairness, Accountability, Transparency and Ethics in AI (FATE), a group of nine researchers “working on collaborative research projects that address the need for transparency, accountability, and fairness in AI.” It also has an advisory board, AI Ethics and Effects in Engineering and Research (Aether), which reports to senior leadership.
Research conducted by Aether includes recommendations on regulating the use of facial recognition technology and has prompted the cancellation of “significant” sales over concerns about ethical misuse of products, according to Microsoft Research Labs Director Eric Horvitz. A Microsoft spokesperson said the Aether team also works on developing tools for “detecting and addressing bias, recommended guidelines for human-AI interaction and policies and methods for making AI recommendations more understandable.” Microsoft is a founding member of the Partnership on AI, a nonprofit formed with Amazon, Facebook, Google’s DeepMind and IBM to study “ethics, fairness and inclusivity; transparency, privacy and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability and robustness of the technology.”
On April 4, Google executives pulled the plug on the Advanced Technology External Advisory Council (ATEAC), a group of executives, engineers and advocates formed to examine the ethical implications of the company’s artificial intelligence products and services. The council, which existed for little more than a week, faced opposition from the start from employees, who circulated a petition titled “Googlers Against Transphobia and Hate” calling for the removal of member Kay Coles James, president of the conservative think tank The Heritage Foundation. Opponents also denounced the inclusion of Dyan Gibbens, founder of drone company Trumbull Unmanned. Gibbens was added to the group after several Googlers resigned last year in protest of a Department of Defense contract to design military drone software.
Ten days after ATEAC ended, the Wall Street Journal reported that Google had dissolved a similar board in the United Kingdom created to assess the ethical use of AI in health care technologies.
In an update to a March 26 blog post, Google Senior Vice President of Global Affairs Kent Walker said the company would “go back to the drawing board” and consider new ways to study and research AI ethics. Since that time, the Alphabet Inc.-owned subsidiary has continued its work through a formal review structure formed last year that includes researchers, social scientists, policy experts, a council of senior executives and others “to handle the most complex and difficult issues, including decisions that affect multiple products and technologies.” Since the review structure was implemented, team members have modified speech recognition research to highlight its assistive benefits for the hearing impaired and hit the brakes on a facial recognition tool to work through “important technology and policy issues.” Google is a founding member of the Partnership on AI.
Amazon came under fire in late March after research published at the Association for the Advancement of Artificial Intelligence/Association for Computing Machinery conference on Artificial Intelligence, Ethics and Society revealed that a version of its Rekognition facial analysis system had a 31 percent error rate when classifying the gender of darker-skinned women, compared with a zero percent error rate for lighter-skinned men.
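Those figures come from a disaggregated evaluation, in which misclassification rates are computed separately for each demographic group rather than averaged over the whole test set. Below is a minimal sketch of that bookkeeping; the labels, predictions and groups are hypothetical stand-ins, not the study’s actual data.

```python
# Illustrative only: per-group error rates of the kind reported in the study.
# The records below are hypothetical, not the researchers' dataset.
from collections import defaultdict

# (true_gender, predicted_gender, skin_type_group) for each test image
results = [
    ("female", "male",   "darker"),
    ("female", "female", "darker"),
    ("male",   "male",   "lighter"),
    ("male",   "male",   "lighter"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for true_label, predicted, group in results:
    totals[group] += 1
    if predicted != true_label:
        errors[group] += 1

# Report the misclassification rate for each group separately
for group, total in totals.items():
    rate = 100.0 * errors[group] / total
    print(f"{group}: {rate:.0f}% misclassification rate")
```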
In a January blog post discussing the research, Dr. Matthew Wood, Amazon Web Services general manager of artificial intelligence, said there had been no reports of misuse of the technology since it went on sale to law enforcement agencies two years ago and that the company was not able to reproduce the same error rates in its own testing.
“We clearly recommend in our documentation that facial recognition results should only be used in law enforcement when the results have confidence levels of at least 99 percent, and even then, only as one artifact of many in a human-driven decision,” he added. Amazon is a founding member of the Partnership on AI. The company did not immediately respond to a request for comment.
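For readers wondering what that guidance looks like in practice, Rekognition exposes the threshold directly in its API. The sketch below applies the 99 percent recommendation using the boto3 client; the S3 bucket and file names are hypothetical placeholders, not a real deployment.

```python
# Illustrative sketch of the quoted guidance: only treat a face match as usable
# when its similarity score is at least 99 percent. The boto3 Rekognition client
# is real; the bucket and object names are made up for the example.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "candidate.jpg"}},
    SimilarityThreshold=99,  # discard any match below the recommended 99% level
)

# Even above the threshold, a match is "one artifact of many" -- it should feed
# into a human-driven decision, not replace one.
for match in response["FaceMatches"]:
    print(f"Possible match at {match['Similarity']:.1f}% similarity; flag for human review")
```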
In February, Tesla founder and CEO Elon Musk, who once called artificial intelligence “humanity’s biggest threat,” stepped down from OpenAI, an AI research nonprofit he cofounded in 2015 to address that threat. In a now-deleted Twitter post, Musk said Tesla was competing for some of the same people as OpenAI and that he “didn’t agree with some of what the OpenAI team wanted to do.” Tesla did not immediately respond to a request for comment.