Google's new AI principles are vague and meaningless
Despite public outcry, even among thousands of its own employees, Google is continuing its work with Project Maven. It has just published a list of AI principles which it promises to adhere to, but without real accountability, they don't mean much.
Just a few weeks ago, around a dozen Google employees resigned over the company providing AI for Project Maven, a US Defense Department pilot program that aims to speed up the analysis of drone footage by automatically categorizing objects and people in images using machine learning. On top of that, more than 3,000 other employees signed an open letter urging the company to rethink its involvement.
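For readers unfamiliar with the underlying technique, this kind of automated categorization typically means running each video frame through a pretrained image classifier. The Python sketch below is purely illustrative and is not Project Maven's actual system (whose models and data are not public); the model choice and file path are assumptions for demonstration only.

```python
# Purely illustrative sketch of automated image categorization with a
# pretrained classifier -- NOT Project Maven's actual pipeline.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing expected by the pretrained network.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Off-the-shelf classifier; a real system would use purpose-trained models.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

def categorize(frame_path: str, top_k: int = 3):
    """Return the top-k predicted class indices and scores for one frame."""
    image = Image.open(frame_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    scores, indices = probs.topk(top_k, dim=1)
    return list(zip(indices[0].tolist(), scores[0].tolist()))

# Example call on a single extracted video frame (hypothetical path):
# print(categorize("frame_0001.jpg"))
```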
Following extensive media attention and public debate over the ethical questions raised, Google has now publicly attempted to clarify its stance by publishing its new AI principles.
Google's objectives for AI applications include:
- Be socially beneficial.
- Avoid creating or reinforcing unfair bias.
- Be built and tested for safety.
- Be accountable to people.
- Incorporate privacy design principles.
- Uphold high standards of scientific excellence.
- Be made available for uses that accord with these principles.
Google will not pursue the following AI applications:
- Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.
The problem with these new guidelines is that Google is writing its own rules, and it gets to define what is "appropriate". The details of these AI principles include such vague promises as, "We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal." There's no real accountability to the public here, and Google hasn't committed to an independent review process, a step that Electronic Frontier Foundation Chief Computer Scientist Peter Eckersley suggested in a statement to Gizmodo.
Further, the company is still going to engage in the same work that created all of this controversy in the first place.
"We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas."
According to The Verge, Google plans to honor its contract and remain involved with Project Maven until the contract expires in 2019. The company will also keep competing with the likes of Microsoft and Amazon for the parts of a $10 billion Pentagon contract that it considers consistent with the new AI guidelines it has set for itself.
I see two very big problems with this. First, with Google seemingly constrained by its AI principles, another company without such principles will get future contracts, and the very same questionable AI work will take place anyway. Second, if we leave companies to regulate their own ethics, there is no legal accountability when they don't stick to the (self-imposed, watered-down) rules.
What's the solution? Governments could regulate AI and machine learning, so that every tech company is held to the same (hopefully high) ethical standard when developing and offering these technologies.
The Toronto Declaration, put together by human rights and technology groups in May, is a good starting framework for ensuring algorithms respect human rights, and it could serve as the basis for such regulation. Without accountability to the law, Google's vague new AI principles really don't mean much.
Tell us what you think about AI in the comments!