Why you should be worried about the ethics in artificial intelligence

The discriminatory biases of algorithms, the invasion of privacy, the risks of facial recognition and the regulation of relations between humans and machines are some of the ethical challenges of AI. However, the interests of governments and large companies often take precedence over good practices.

Ana Hernando | November 12, 2019 08:00

Artificial intelligence faces ethical challenges that often clash with the interests of governments and large firms. / Wearbeard / SINC

Artificial intelligence (AI) is no longer a matter of science fiction; it is everywhere. Your bank uses it to decide whether or not to grant you credit, and the ads you see on your social networks stem from a classification carried out by an algorithm, which has micro-segmented you and ‘decided’ whether to show you offers of anti-wrinkle creams or high-end cars. Facial recognition systems, used by airports and security forces, are also based on this technology.

“Machines do not have general intelligence, nor have we managed to give them common sense, but they do have specific forms of intelligence for very specific tasks, which exceed the efficiency of human intelligence,” Carles Sierra, director of the Research Institute of Artificial Intelligence (IIIA) of the CSIC (Spanish National Research Council), explains to SINC.

“Companies are now creating ethical committees, but they have done so in a reactive rather than proactive way, as a result of the criticisms”, says Carles Sierra

For this reason, he adds: “AI has enormous potential for improving industrial processes, designing new drugs or for achieving greater precision in medical diagnoses, to give just a few examples.”

Data is the new oil

But beyond being a scientific breakthrough, AI is now a huge business, estimated at about $190 billion (some €170 billion) by 2025, including hardware, software and services linked to the technology. Data is now considered the new oil.

This highly attractive business is being contested by, among others, technology giants such as Amazon, Google, Facebook, Microsoft and IBM, “whose commercial interests often take precedence over ethical considerations,” says Sierra.

Many of these firms, he points out, “are now creating ethical committees in the field of AI, but they have done so in a reactive rather than proactive way,” following criticism of inappropriate uses of AI in areas related to user privacy, or of applications deployed without proper supervision.

As Carme Artigas, a big data expert and ambassador in Spain for Stanford University's Women in Data Science programme, explains to SINC, one example of these controversial uses came when Microsoft decided to launch its bot Tay. This AI-based chatbot “was surfing on its own on Twitter and, after a few hours, it began to publish racist and misogynistic tweets because it had picked up the worst of what it found on that social network.” Sixteen hours after its launch, the firm had to deactivate it.

“When a system is not monitored, there is no filter. This happened with the chatbot Tay. A few hours after it was launched, it began to publish racist and misogynistic tweets,” says Carme Artigas

“The problem,” says Artigas, “is that when an artificial intelligence system is not supervised, there is the risk of there being no filter, and that is what happened with this bot.”

The ethics of AI is an issue that is now at an incipient stage of development and will have to face important challenges. One of them, in the opinion of this expert, is what she calls the “dictatorship of algorithms”.

For example, she points out, what classification algorithms do “is micro-segment people, that is, classify them by their behaviour, which can lead, if it isn’t regulated or if the process is not transparent, to people ultimately being limited in their options to choose freely.”

“Imagine,” Artigas adds, “that an algorithm micro-segments somebody as a lower-middle income person, deduces that he or she will never be able to buy a Ferrari or a Porsche and, therefore, never shows that person ads for high-end cars because the algorithm knows that this person cannot afford one. This is an example that may seem unimportant, but we should ask ourselves if it is ethical not to present something to people, even for them to dream about, because they have already been pre-classified.”
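
To make the mechanism concrete, here is a minimal sketch in Python of the filtering step Artigas describes: a segment label produced by a classifier silently gates which ads a person is ever shown. The names, segments and thresholds are invented for illustration and do not come from any real advertising platform.

    # Minimal sketch (invented names and thresholds) of segment-based ad filtering.
    from dataclasses import dataclass

    @dataclass
    class Ad:
        product: str
        min_segment: int  # lowest predicted-income segment the ad is targeted at

    def eligible_ads(user_segment, ads):
        # Anything above the user's predicted purchasing power is removed
        # before the user ever sees it: the option simply disappears.
        return [ad for ad in ads if user_segment >= ad.min_segment]

    ads = [Ad("anti-wrinkle cream", min_segment=1),
           Ad("high-end sports car", min_segment=4)]
    print([ad.product for ad in eligible_ads(user_segment=2, ads=ads)])
    # -> ['anti-wrinkle cream']  (the car ad is never shown to this person)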

Perpetuating prejudice

Another relevant question that causes serious bias problems "is that, as machine learning algorithms are fed with historical data, we are in danger of perpetuating in the future the prejudices of the past.” To illustrate this aspect, Artigas mentions “those typical U.S. crime studies, which point to African Americans as being more likely to commit crimes.”

The algorithm, she continues, “has been trained with millions of data from 30 years ago showing that, if you were an African American, you were more likely to go to jail. It's also true of gender bias. If we start from historical data, the algorithm will continue to reproduce the classic problems of discrimination,” she stresses.
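
A minimal sketch, with synthetic figures and invented group labels, can show how this happens: a model fitted on historically biased labels reproduces that bias when it scores new cases. Nothing here comes from a real criminal-justice system; it only illustrates the mechanism Artigas describes.

    # Minimal sketch (synthetic figures) of bias inherited from historical labels.
    from collections import defaultdict

    # Historical records: (group, went_to_jail). The labels already encode the
    # prejudices of past decisions; they are not ground truth about individuals.
    history = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 30 + [("B", 0)] * 70

    # "Training": the model simply learns each group's historical rate.
    totals, positives = defaultdict(int), defaultdict(int)
    for group, label in history:
        totals[group] += 1
        positives[group] += label
    risk_score = {g: positives[g] / totals[g] for g in totals}

    # "Prediction": two otherwise identical people receive different scores purely
    # because of the group recorded in the old data, so the old bias is reproduced.
    print(risk_score)  # -> {'A': 0.7, 'B': 0.3}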

“As machine learning algorithms are fed with historical data, we are at risk of perpetuating the prejudices of the past in the future,” warns Artigas

In the same vein, Isabel Fernández, general manager of Applied Intelligence at Accenture, spoke in an interview with SINC about the need for a protocol to regulate AI biases. “I have no doubt that this will have to be regulated. It's not just about good practices anymore. Just as there are protocols to keep an operating room sterile, I think there has to be a protocol or an accreditation to prevent bias in the data,” she stressed.

According to Carme Artigas, there is another major ethical requirement that should be demanded of any company or organization working with AI, and it is linked to transparency and what is known as explainability. That is, she explains, “if the bank denies you credit because, according to the algorithm, you aren’t suitable, you are entitled to have the bank give you the reasons and criteria for this refusal.”

The problem is that in the processes followed by the algorithms, especially deep learning processes, we do not quite know what goes on between inputs and outputs, explains Artigas.
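A minimal sketch, with hypothetical features and hand-picked weights, shows what such an explanation can look like when the decision comes from a simple linear score: the refusal can be traced back to individual criteria, which is precisely what a deep learning “black box” does not readily offer. The features, weights and threshold below are invented for illustration only.

    # Minimal sketch (hypothetical features and hand-picked weights) of an
    # explainable credit decision based on a transparent linear score.
    weights = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
    applicant = {"income": 0.2, "debt_ratio": 0.9, "years_employed": 0.1}

    contributions = {k: weights[k] * applicant[k] for k in weights}
    score = sum(contributions.values())          # -0.59
    decision = "approved" if score >= 0.0 else "denied"

    print(decision)  # -> denied
    # With a transparent model, the bank can state the criteria behind the refusal:
    for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"{feature}: {value:+.2f}")
    # debt_ratio: -0.72, years_employed: +0.03, income: +0.10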

One programme based on this type of deep learning algorithm is AlphaGo Zero, from Google's DeepMind, which has not only learned to play Go, an ancient East Asian board game considered a great challenge for AI, but has also discovered new abstract strategies on its own. But even experts are not quite sure how these algorithms work.

The black boxes of algorithms

“This opacity is what is known as the black boxes of algorithms,” comments Aurélie Pols, data protection officer (DPO) at mParticle and privacy consultant.

“In these black boxes, the inputs and the processing are not always clear or explainable. These opaque results can have consequences for people's lives and may not be aligned with their values or choices,” Pols stresses.

According to Patrick Riley, “many of these algorithms are so complicated that it is impossible to inspect all the parameters or reason about exactly how the inputs have been manipulated”

Patrick Riley, a computer scientist at Google, echoed this idea in an article in Nature last July. “Many of these machine learning algorithms are so complicated that it is impossible to inspect all the parameters or reason about exactly how the inputs have been manipulated. As these algorithms begin to be applied ever more widely, risks of misinterpretations, erroneous conclusions and wasted scientific effort will spiral,” Riley warned.

In addition to all these reflections, there are problems bearing on personal data protection. In AI, “it is important that the data models used to feed these systems, and their treatment, respect the privacy of users,” notes Carme Artigas.

In Europe, she says, “we have the General Data Protection Regulation, but there are countries such as China, which is currently leading this business, where they don’t have the same sensitivity regarding privacy as European society. For example, they have no restrictions on surveillance and image recognition. This may mean different speeds of technology development, but personal data are something that, from a social point of view, must be protected,” she stresses.

Artigas also refers to another of the open ethical challenges linked to AI: how to regulate the new relationships between humans and machines. “If you draw a parallel with Asimov's laws of robotics, as the EU has done, and translate them into regulations, these will tell you, for example, that no emotional attachment to a robot should be established. And this contradicts some of the applications of social robots, which are used precisely to provoke emotions in people with autism or neurodegenerative diseases, as this link has proven to be beneficial and positive in therapies.”

To sum up, this expert points out that “much remains to be done in terms of legislation and the analysis of the ethical repercussions of artificial intelligence.” What we need to achieve, she adds, “is transparency and information from companies and governments about what they are doing with our data and for what purpose.”

Principle of prudence

For his part, Ramón López de Mántaras, research professor at the CSIC's IIIA, spoke at a recent conference about the importance of applying the principle of prudence in the development of artificial intelligence. “One shouldn’t blithely deploy applications without first having them properly verified, evaluated and certified,” he stressed.

“We should not set out to deploy applications without first having them well verified, evaluated and certified,” stresses Ramón López de Mántaras

This principle is one of the highlights of the Barcelona Declaration, promoted by López de Mántaras and other experts, which includes a manifesto intended to serve as a basis for the development and appropriate use of AI in Europe.

One example of the application of this principle, he pointed out, “has been the city of San Francisco, whose authorities have decided to ban facial recognition systems. This is something I welcome because it is a technology with many flaws, which can end up having tremendous repercussions on people's lives when it is used by governments or security forces.” A recent and widely criticized example of such use is the deployment of facial recognition by police against demonstrators during the Hong Kong protests.

Microsoft has also rethought the use of this technology. As Tim O'Brien, head of AI ethics at the firm, tells SINC, “a year ago we raised the need for government regulation and responsible industry measures to address the problems and risks associated with facial recognition systems.”

O'Brien believes that “there are beneficial uses, but also substantial risks in the applications of these systems and we need to address them to ensure that people are treated fairly, that organisations are transparent in the way they use them and are accountable for their results. It is also necessary to ensure that all use scenarios are legal and do not get in the way of basic human rights,” he points out.

“AI engineers should sign a sort of Hippocratic oath of good practice,” says López de Mántaras.

This National Research Award winner wondered how an autonomous system “is going to be able to distinguish between, say, a soldier who is attacking, surrendering or wounded. I find it absolutely unacceptable to delegate the capacity and decision to kill someone to a machine.”

The scientist is a firm supporter of incorporating ethical principles into the actual design of the technology. “AI engineers should sign a kind of Hippocratic oath of good practice.” Like other experts, he was also in favour of encouraging the certification of algorithms to avoid bias. But, in his opinion, “this validation should be done by independent bodies or institutions. I don’t think it’s enough for Google to certify its own algorithms; it should be something external.”

According to López de Mántaras, “there is fortunately a growing awareness of the ethical aspects of AI, not only at the level of states or the EU, but also on the part of companies. Let's hope it's not all cosmetic,” he concluded.

Source: SINC

Ana Hernando

Journalist specialising in science, technology and economics. Editor of SINC’s innovation section.
