
Will AI mean the end of personal responsibility?

© nextpit

AI was once the stuff of science fiction and theoretical research; then it quietly went to work behind the scenes of online services. Now we're starting to see the early stages of AI spread across the consumer market. As we hand over more and more of the management of our daily lives to algorithms, will our old ideas of personal responsibility continue to make sense?

Technology's effect on culture and society can be subtle. When presented with a new toy or service that makes our lives easier, we're quick to embrace and normalize it without thinking through potential consequences that only emerge years down the line. 

Privacy was the first casualty

Take personal privacy, for example. When Facebook was rocked by the Cambridge Analytica scandal, it dominated the headlines and set the chattering classes ablaze, but amid all the outrage, the most common reaction I saw outside of media pundits and tech enthusiasts was indifference.

But why did people shrug and say "so what?" to such a massive leak of our personal information? To its use in sleazy advertising and in the manipulation of important elections?

Multiple privacy scandals aren't enough to bring down Facebook. / © Leah Millis/Reuters

Perhaps because the technical processes behind it all are too complex for most people to have a clear idea of exactly how it happened. The user license agreements for all the different services we sign up for are dense and opaque, and we don't have the time to read them all, let alone understand them. In fact, one study found that just reading every privacy policy you encounter would take a month off from work each year.

Yet many of us agreed to this Faustian bargain anyway and gave up our privacy, because, say, Facebook's or Google's services (among others) were too good not to use. Plus, all our friends (or our competitors, in a business context) were using them, and who wants to fall behind?

The question of how we got here is still being explored, but the fact remains: personal privacy in 2018 isn't what it used to be. Expectations are different, with many of us perfectly happy to give up information to corporations at a level of intimacy that would have shocked previous generations. It's the price we pay for entry into the world of technology, and for the most part, we're happy to do it.

You can urge people to use VPNs and chat on Signal all you want, but for the most part the cultural shift has already happened: protecting privacy isn't a concern for most people. At least, not enough of a concern for them to take any active steps, however much they might complain.

Personal responsibility will be next, thanks to AI

AI horror stories usually invoke fears of it becoming conscious and somehow turning against humanity. But the more realistic anxiety is that machine 'intelligence' has no regard for us at all. Like any tool, it serves to make a task easier, faster and more efficient. But the further that tool gets from a guiding human hand, the fuzzier the question of personal responsibility becomes.

Privacy is one thing, but responsibility is more serious still: it can be, quite literally, a matter of life and death. When something AI-powered goes wrong and causes harm, who bears responsibility? The software engineers, even if the machine 'learned' its methods independently of them? The person who pushed the 'on' button? The user who clicked through the now-ubiquitous stream of dense legalese without reading it, just to get quick access to a service?

Self-driving cars are at the forefront of this ethical dilemma. For example, an autonomous vehicle developed by Nvidia learns how to drive via a deep learning system, using training data collected from human drivers. And to its credit, the technology is amazing: it can stay in its lane, make turns, recognize signs and so on.
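To get a feel for why responsibility gets fuzzy here, it helps to see roughly how such a system is trained. The sketch below (in PyTorch, with illustrative names, shapes and data, not Nvidia's actual code) shows the general 'behavioral cloning' idea: a network is trained only to reproduce whatever a human driver did in the same situation, and there's no line anywhere that says "don't drive into lakes".

```python
# A minimal behavioral-cloning sketch (illustrative only, not Nvidia's
# actual system): a small CNN maps camera frames to steering angles and
# is trained to imitate recorded human driving.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(1)  # one output: the steering angle

    def forward(self, frames):
        return self.head(self.features(frames))

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# `frames` stand in for dashcam images, `angles` for the human driver's
# steering at the same moment. Dummy tensors keep the sketch runnable.
frames = torch.randn(8, 3, 66, 200)
angles = torch.randn(8, 1)
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(frames), angles)  # penalize deviating from the human
    loss.backward()
    optimizer.step()
```

After training, the 'knowledge' of how to drive is smeared across millions of learned weights. No engineer can point to the line of code responsible for a particular turn, because there isn't one.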

All good, so long as it's doing what it's supposed to. But what if an autonomous car decides to suddenly turn into a wall or drive into a lake? What if it swerves to avoid crashing into a pedestrian, but ends up killing its passenger in the process? Will the car have its day in court?

Feeling relaxed in a self-driving car. / © NextPit

As things stand now, it may be impossible to find out why or how such accidents happen, since the AI can't explain its choices to us, and even the engineers who built it can't trace the process behind every specific decision. Yet accountability will be demanded at some point. It could be that this issue keeps autonomous vehicles off the market until it's properly resolved. Or it could be that the technology becomes so exciting, so convenient and so profitable that we release it first and ask the difficult questions later.

Imagining AI involved in a car accident is a dramatic example, but there will be more and more areas of our lives in which we're tempted to hand responsibility over to the machine. AI will diagnose our diseases, 'decide' who lives or dies, make multi-million dollar trading calls, and make tactical choices in war zones. We've already had problems with this, such as people with asthma being wrongly graded as low risk by a system trained to predict pneumonia outcomes.
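The asthma case is worth a closer look, because it shows how a model can learn something dangerously wrong from perfectly accurate records. Here's a toy illustration with synthetic numbers (not the real study's data or model): asthma patients were historically rushed into intensive care, so their recorded death rate was low, and a model trained on raw outcomes concludes that asthma lowers risk.

```python
# Toy illustration of the asthma/pneumonia pitfall (synthetic data, not
# the actual study): asthma patients always received aggressive care, so
# their *recorded* mortality is low, and a naive model learns the rule
# "asthma = low risk", the opposite of the medical truth.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
asthma = rng.random(n) < 0.1              # 10% of patients have asthma
base_risk = np.where(asthma, 0.40, 0.15)  # asthma is genuinely riskier...
death_prob = np.where(asthma, base_risk * 0.2, base_risk)  # ...but treated hard
died = rng.random(n) < death_prob

model = LogisticRegression().fit(asthma.reshape(-1, 1), died)
print(f"asthma coefficient: {model.coef_[0][0]:.2f}")  # negative: 'low risk'
```

The model is 'right' about the historical data and wrong about the world, and nobody decided to encode that mistake; it simply fell out of the training process.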

It's important to get the right answers. / © Screenshot: AndroidPIT

As AI becomes more advanced, it'll probably make the best decision... 99.9% of the time. The other 0.1% of the time, perhaps we'll just shrug, like we did with the Facebook privacy scandal.

Smart assistants and apps will take on more responsibility

Let's zoom in a little closer, onto the individual. At Google I/O, the Mountain View colossus showcased a couple of ways for AI to make our lives a little easier. Virtual assistants have entered the mainstream in the last year or so, becoming a key part of many Americans' homes. Google's Duplex demo showed how you can delegate booking appointments to Assistant, having the robot make a phone call for you to book a haircut or a restaurant reservation. Google also wants to use Duplex for automated call centers, conjuring the amusing scenario of two robots holding a conversation in human language.

AI took center stage at Google's I/O event. / © Screenshot: AndroidPIT

Sounds cool, right? Except, well, there's a certain level of trust you place in your virtual assistant when you let it act as your proxy like this. These tasks may sound simple, but the communication involved is actually fraught with potential problems.

For example, when we speak to each other, we pick up on subtle cues in voice and attitude to form an impression, human to human, of who we're talking to, and act accordingly. Even with all that, you know how easy it is to mortally offend someone by accident and spark an argument or an outrage.

Where does the responsibility lie, however, when a virtual assistant says something perceived as offensive or embarrassing? If virtual assistants are somehow prevented from saying potentially offensive things, even ironically or as a joke or criticism, is that 'your' voice being censored? It's going to take a lot more than 'ums' and 'ahs' for AI to really be able to talk for us.

You can watch Google Duplex in action in the Google I/O 2018 demo.

Another big theme, both at Google I/O and Apple's WWDC this year, was software that manages its own use in the name of 'digital well-being'. The rather patronizing idea is that we won't put down our devices to go out and smell the roses unless the device reminds us to.

Users can set preferences for this kind of thing, of course, and yet I feel that having our wellness and time management handled by AI isn't far off, with a smart assistant managing our routine of health, work and entertainment according to what it's learned from our habits, fitness, environment and so on. That could be very positive for many people, though I would personally find that level of micromanagement a nightmare.

Of course, humans will resist handing over responsibility to AI unless there's a real advantage to be gained. And there will be advantages: in convenience, productivity, entertainment and so on. The advantages will be too good to resist, and I'm not one to advocate banning technology. We'll embrace AI for its benefits, and then adjust our social expectations around its reality.

Like it or not, our society will adapt to find a place for AI

The classic AI horror story usually features a super-intelligent machine that becomes self-aware and turns on its creators. But while evil AI is about as realistic as vampires, werewolves or other horror fodder, our struggle with AI will be real, just more mundane: a trade-off between convenience and accountability in myriad aspects of our daily lives.

But there is no consciousness behind artificial intelligence as we know it nowadays, no self or mind. We aren't building AI gods, but rather phenomena that are more akin to the complex but unthinking systems in nature. Things we depend upon and harness, but don't control. 

In the worst case scenario, complaining about or questioning the methods of algorithms may be as absurd as questioning the tides or the wind. In the best case scenario, responsible development and consumption will keep ultimate responsibility in the hands of human beings who can trust each other, instead of pointing blame at the black box.

What do you think of the role that AI will play in our daily lives? Do we already trust algorithms with too much responsibility?

Nicholas Montegriffo
Editor

A cyberpunk and actual punk, Nicholas is the Androidpit team's hardcore gamer, writing with a focus on future tech, VR/AR, AI & robotics. Out of office, he can be found hanging around in goth clubs, eating too many chillies, or at home telling an unlucky nerd that their 8th level wizard died from a poisoned spike trap.

8 comments

Sorin, Dec 21, 2018:
Today's areas of knowledge are becoming more and more extensive, so AI technology is a real helper for exploring them in depth.


itprolonden, Jul 10, 2018:
First off... no one manipulated the results of any election. That can't be done unless the ballot count was actually tampered with, like in 2008 and 2012.


Deactivated Account, Jul 9, 2018:
If I see any actual AI show up, I'll consider this a valid topic. "Machine learning" is still giving it too much credit. We're still talking about code that just does what it's programmed to do. Nothing even resembling intelligence, yet.


Albin Foro, Jul 9, 2018:
The question whether we are "building AI gods" is still open, I think, while there seems to be something of a rush to make AI substitute for human-centered judgements in various kinds of decision-making. Asking "what would Jesus do?" sends the decider down a path of soul-searching, while asking "what would AI do?" sends the same decider down a path of Google-searching.


Reg Joo, Jul 9, 2018:
AI's here to stay, get used to it. What worries me is the ability to "weaponize" this new tech, as hackers are always on the cutting edge of manipulating new tech. New AI bots that think on their own, to cause harm, could be just around the corner, if AI gets to a state where anyone can create the software. Yes, it will change a lot of things, and how we do things. I hope someone is keeping an eye out for abuse of this new tech.


Deactivated Account, Jul 8, 2018:
It is all of our personal responsibility to keep a true AI from ever being created. I agree with Rusty: everybody wants to point a finger at someone else rather than man up and take responsibility for their own actions.


Rusty H., Jul 8, 2018:
Personal responsibility? Shoot... for the most part (USA) that went out the window over two decades ago with the advent of political correctness, LOL. Now everything "isn't my fault".


itprolonden, Jul 10, 2018 (in reply to Rusty H.):
I'd say more like the 1960s, but I get the point :)
