Should Google's algorithms make life and death decisions?

About a dozen Google employees are resigning over the company's decision to provide artificial intelligence for Project Maven, a US Defense Department pilot program that uses machine learning to speed up the analysis of drone footage by automatically categorizing images as objects or people. The move raises serious questions about Google's ethics, and about how the future of machine learning and AI should be directed.

AI and machine learning can be used for an endless variety of useful consumer and commercial applications that seem harmless enough, but as the technology develops, more concerning use cases are starting to appear. Project Maven has brought this issue, along with Google, into the spotlight.

When it comes to drone strikes, the stakes are life and death, so the ethics of Google's decision to get involved with the US military have been called into question, and rightly so. Should algorithms be making life-and-death decisions? Could the further development of this technology pave the way toward autonomous weapons systems?

Google has a responsibility to consider the implications of its technologies for its users. In the case of Project Maven, the results could be lethal for the company's users, who are located all around the globe. Drones also have serious implications for privacy, even here in the US.

If you think you have nothing to worry about, consider that the US Department of Transportation, with the participation of Google, Qualcomm and Microsoft, will be testing drones in several American cities for a number of applications not currently allowed by law, citing potential economic and safety benefits. But what is the trade-off for those benefits? A future full of AI-powered drone delivery services sounds cool, but what new threats to privacy would it introduce?

Google isn't subject to public accountability for its decisions, but given that users across the world entrust the company with their data, perhaps more scrutiny is in order.

We should be asking more questions about large tech companies' decisions, and we should be ready to protest when a company promises not to be evil, as Google's old motto put it, and then fails to deliver on that promise. Otherwise, we as users will have no say in directing the future of technologies like AI and machine learning, which could have grave consequences for privacy and even human lives.

Were the Google employees right to resign? Let us know what you think in the comments!

Source: Gizmodo

Brittany McGhee
Editor, androidpit.com

Brittany loves to keep up with the latest technology and innovation, so she is excited to have the opportunity to write about the wonderful world of Android. She thinks spreadsheets and numbers are fun, and also enjoys reading books and volunteering.

5 comments

  • Deactivated Account, May 31, 2018

    To think that humans were making spears and arrows thousands of years ago and we're now making AI really scares me. Just as the former brought us here, the latter may hopefully take us someplace better.


  • harshatecordeon, May 17, 2018

    Nice information... Thank you for sharing!


  • Gavin Runeblade, May 16, 2018

    While resigning is one way to protest, it also simply hands the project off to people with lower moral standards, achieving the reverse of the desired effect. They should have stayed with the company and refused to do the work, forcing the company to either fire or reassign them, and by that process forcing the discussion within the company. Right now the only discussion is happening outside the company, because their departure has presumably turned what is left of the team into an echo chamber.

    In terms of this specific application, the algorithms aren't making any decision autonomously. There are still humans responsible for the actual decision.


  • Deactivated Account, May 16, 2018

    The Defense Department has declared a never-ending war on the Middle East, yet refuses to acknowledge that it's a war on Islam. Given that they can't understand what they are at war with, I don't think Google should "assist" in any way. The Defense Department is just wasting our money blowing people up and ruining whole countries.


  • Rusty H., May 16, 2018

    I'm sorry, but NO VERSION of AI should be "set loose" with the ability to make that kind of decision. I'm about as conservative as they come, but I have always thought that war means a breakdown of rational thought and reasoning. If we say "oh well, it wasn't me that dropped the bomb, it was the AI," then you know good and well what that will lead to: politicians around the world washing their hands of responsibility.
    NO! No AI should be allowed to do that, period.

