
Designing Artificial Intelligence with Core Values


Whenever AI systems take over a new social role, new risks of corrupting human behavior will emerge.

By Nazarul Islam

As children, we are all taught to be good people. Perhaps even more important, we are taught that bad company can corrupt good character — and one bad apple can spoil the bunch.

Not long ago, what began as a social experiment quickly came to a bitter end. Microsoft’s chatbot Tay had been trained to hold “casual and playful conversations” on Twitter, but once it was deployed, it took only 16 hours before Tay launched into tirades that included racist and misogynistic tweets.

As it turned out, like a trained parrot, Tay was mostly parroting the verbal abuse that humans were spouting at it. Yet the outrage that followed centered on the bad influence Tay had on the people who could see its hateful tweets, rather than on the people whose hateful tweets had been a bad influence on Tay.

Today, we increasingly interact with machines powered by artificial intelligence, from smart toys to the social media platforms that shape our preferences. Could machines be bad apples? Should we avoid the company of bad machines, lest they corrupt us?

The question of how to make AI ethical is front and center in the public debate. For starters, the machine itself must not make unethical decisions: ones that reinforce existing racial and gender biases in hiring, lending, judicial sentencing, and in the facial recognition software deployed by police and other public agencies.

Less discussed, however, are the ways in which machines might make humans themselves less ethical. People behave unethically when they can justify it to others, when they observe or believe that others cut ethical corners too, and when they can do so jointly with others rather than alone. In short, the magnetic field of social influence strongly sways people’s moral compass.

AI can also influence people as an advisor that recommends unethical action. Research shows that people will follow dishonesty-promoting advice from AI systems just as readily as they follow similar advice from humans.

Psychologically, an AI advisor can provide a justification for breaking ethical rules. AI systems already analyze sales calls to boost sales performance, for example. What if such an advisor suggests that deceiving customers increases profits? As machines become more sophisticated and their advice more knowledgeable and personalized, people become more likely to follow that advice, even when it runs counter to their own intuition and knowledge.

Another way AI can influence us is as a role model. If you observe people on social media bullying others and expressing moral outrage, you may feel emboldened to do the same. When AI bots like Tay behave similarly on social platforms, people can imitate their behavior too.

More troubling is when AI turns into an enabler. People can partner with AI systems to cause harm to others. AI-generated synthetic media facilitate new forms of deception. Generating “deepfakes” — hyper-realistic imitations of audio-visual content — has become increasingly easy. Consequently, from 2019 to 2020, the number of deepfake videos grew from 14,678 to 100 million, a 6,820-fold increase.

Using deepfakes, scammers have made phishing calls to employees of companies, imitating the voice of the chief executive. In one case, the damage amounted to over $240,000.

For would-be bad actors, using AI for deception is attractive. Often it is hard to identify the maker or disseminator of the deepfake, and the victim remains psychologically distant. Moreover, recent research reveals that people are overconfident in their ability to detect deepfakes, which makes them particularly susceptible to such attacks. This way, AI systems can turn into compliant “partners in crime” for all those with deceptive purposes — expert scammers as well as ordinary citizens.

Finally, and possibly most concerning, is the harm caused when decisions and actions are outsourced to AI. People can let algorithms act on their behalf, creating new ethical risks. This can occur with tasks as diverse as setting prices in online markets such as eBay or Airbnb, questioning criminal suspects, or devising a company’s sales strategy. Research reveals that letting algorithms set prices can lead to algorithmic collusion.

Those employing AI systems for interrogation may not realize that an autonomous interrogation system might threaten torture to extract a confession. Those using AI-powered sales strategies may not be aware that deception is among the tactics the system promotes.

Making use of AI in these cases, of course, differs markedly from outsourcing tasks to fellow humans. For one, the exact workings of an AI system’s decisions are often invisible and incomprehensible. Letting such “black box” algorithms perform tasks on one’s behalf increases ambiguity and plausible deniability, thus blurring the responsibility for any harm caused. And entrusting machines to execute tasks that can hurt people can also make the potential victims seem psychologically distant and abstract.

The dangerous trifecta of opacity, anonymity and distance makes it easier for people to turn a blind eye to what AI is doing, as long as AI provides them with benefits. As a result, whenever AI systems take over a new social role, new risks of corrupting human behavior will emerge. Interacting with and through intelligent machines might exert a pull on people’s moral compass that is as strong as, or even stronger than, the pull of interacting with other humans.

Instead of rushing to create new AI tools, we need to better understand these risks, and to promote the norms and the laws that will mitigate them. And we cannot simply rely on experience.

Humans have been dealing with bad apples — and bad moral influences — for millennia. But the lessons we learned and the social rules we devised may not apply when the bad apples turn out to be machines.

And that is a central problem with AI that we have not begun to solve.

The Bengal-born writer Nazarul Islam is a senior educationist based in the USA. He writes for Sindh Courier and for newspapers in Bangladesh, India and America. He is the author of a recently published book, ‘Chasing Hope’, a compilation of 119 of his articles.