Should Google's algorithms make life-and-death decisions?
About a dozen Google employees are resigning over Google's decision to provide artificial intelligence for Project Maven, a US Defense Department pilot program that aims to speed up the analysis of drone footage by using machine learning to automatically categorize images as objects or people. This raises several questions about Google's ethics and about how the future of machine learning and AI should be directed.
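For readers curious what "automatically categorizing images" means in practice, here is a minimal sketch of how an off-the-shelf image classifier labels a single video frame. Everything in it, from the pretrained ResNet-18 model to the file name, is an illustrative assumption; none of it reflects the actual Project Maven system, whose details are not public.

```python
# A minimal, illustrative sketch of automated image categorization.
# The model, weights, and file path are assumptions for demonstration;
# they are not details of the actual Project Maven system.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT      # generic pretrained weights
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()              # matching preprocessing steps
frame = Image.open("drone_frame.jpg").convert("RGB")  # hypothetical frame
batch = preprocess(frame).unsqueeze(0)         # add a batch dimension

with torch.no_grad():
    scores = model(batch).softmax(dim=1)       # class probabilities

top = scores.argmax(dim=1).item()
print(f"{weights.meta['categories'][top]}: {scores[0, top]:.2%}")
```

A real surveillance system would more likely use object detection to localize people and vehicles within each frame rather than assign one label per image, but the basic flow of preprocess, score, and pick the highest-confidence class is the same.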
AI and machine learning can be used for an endless variety of useful consumer and commercial applications that seem harmless enough, but as the technology develops, more concerning use cases are starting to appear. Project Maven has brought the issue, and Google along with it, into the spotlight.
Drone strikes carry life-and-death stakes, so the ethics of Google's decision to get involved with the US military have been called into question, and rightly so. Should algorithms be making life-and-death decisions? Could the further development of this technology be paving a path toward autonomous weapons systems?
Google has a responsibility to consider the implications of its technologies for its users. In the case of Project Maven, the results could be lethal for the company's users, who are located all around the globe. Drones also have important implications for privacy, even here in the US.
If you think you have nothing to worry about, consider that the US Department of Transportation, with participation from Google, Qualcomm, and Microsoft, will be testing drones in several American cities for a number of applications not currently allowed by law, citing the possibility of new economic and safety benefits. But what is the trade-off for those benefits? While a future full of AI-powered drone delivery services sounds cool, what new threats to privacy would it introduce?
Google isn't subject to public accountability for its decisions, but given that users across the world entrust the company with their data, perhaps more scrutiny is in order.
We should be asking more questions about large tech companies' decisions and be ready to protest when they promise not to be evil, as Google's old motto put it, and fail to deliver on that promise. Otherwise, we as users will have no say in directing the future of technologies like AI and machine learning, which could have grave consequences for privacy and even human lives.
Were the Google employees right to resign? Let us know what you think in the comments!
Source: Gizmodo
To think that humans were making spears and arrows thousands of years ago and we're now making AI really scares me. Just as the former brought us here, the latter may hopefully take us someplace better.
Nice information... Thank you for sharing.
While resigning is a way to show protest, it also simply hands the project off to people with lower moral standards, achieving the reverse of the desired effect. They should have stayed with the company and refused to do the work, forcing the company to either fire them or reassign them; that process would have forced the discussion within the company. Right now, the only discussion is happening outside the company, because their leaving has presumably turned what is left of the team into an echo chamber.
In this specific application, the algorithms aren't making any decisions autonomously; humans are still responsible for the actual decision.
The Defense Department has declared never-ending war on the Middle East, yet refuses to acknowledge that it's a war on Islam. Given that they can't understand what they are at war with, I don't think Google should 'assist' in any way. The Defense Department is just wasting our money blowing people up and ruining whole countries.
I'm sorry, but NO VERSION of AI should be "set loose" with the ability to make that kind of decision. I'm about as conservative as they come, but I have always thought that war means a breakdown of rational thought and reasoning. If we say, "Oh well, it wasn't me that dropped the bomb, it was the AI," then you know good and well what that will lead to: politicians around the world washing their hands of responsibility.
NO! No AI should be allowed to do that, period.