
A brave new world of artificial intelligence

As the use of artificial intelligence technology becomes more commonplace, it’s increasingly important that an ethical framework drives development. Unesco is this week considering one such proposal

Picture: 123RF/Sebastien Decoret

Back in 2014, Amazon turned to artificial intelligence (AI) to streamline its recruitment process, using a machine-learning algorithm to review résumés and automate its search for talent. Three years later the company abandoned the programme after it became apparent that the tool was biased against female candidates: because it relied on historical hiring patterns — mostly of men — it had built in a preference for male hires.

Last year, researchers in the US claimed they could predict criminality by running profile pictures through an AI algorithm. The project was roundly condemned, with scientists pointing out that it simply replicated existing racial biases in the criminal justice system, according to a report by tech magazine Wired.

Such built-in biases raise flags about the social harm that may be caused by AI — and about a lack of regulation. As the Harvard Gazette notes: "Private companies use AI software to make determinations about health and medicine, employment, creditworthiness and even criminal justice without having to answer for how they’re ensuring that programs aren’t encoded, consciously or unconsciously, with structural biases."

But concerns run deeper. Benjamin Rosman, a director of Wits University’s robotics, autonomous intelligence & learning laboratory, says social media algorithms have been used to influence election outcomes (think of Donald Trump’s ascent to the White House), facial recognition cameras have misidentified people of certain races or genders, and China’s citizens are reportedly rewarded or punished based on data collected on their online activity.

In a world of deepfakes, facial recognition and machine learning, an ethical framework to guide the development and rollout of AI is increasingly important.

Not that big corporates have necessarily grasped this. A recent study by data analytics company Fico, which surveyed 100 AI-focused executives, found that only one in five had made AI ethics a central part of their business models or had boards focused on ethics and fairness. At the same time, more than half had ramped up their resources for AI in the 12 months preceding the survey. (It’s big business, too. According to the Harvard Gazette, global spending on AI is expected to reach $110bn a year by 2024.)

Ethics is a critical enabler of science and technology innovation
—  Emma Ruttkamp-Bloem

But there are moves towards something of a regulatory framework. This week, members of the UN Educational, Scientific & Cultural Organisation (Unesco) are debating a draft recommendation on ethics in AI for nonmilitary use.

It’s a project that took root in late 2019. By March last year, Unesco had mandated an expert group to draw up a draft recommendation, which is now on the table. If adopted, it will guide the use of AI technologies.

At its heart, the document recommends that AI "should play a participative and enabling role to ensure peaceful and just societies … based on an interconnected future for the benefit of all, consistent with human rights and fundamental freedoms".

More specifically, it includes four broad areas of focus:

  • Ensuring proportionality in the rollout of AI technologies;
  • Ensuring that people remain at the centre — ethically and legally — of AI systems;
  • Ensuring that AI systems "contribute to the peaceful interconnectedness of all living creatures"; and
  • Ensuring that AI systems do not replicate gender inequalities.

Prof Emma Ruttkamp-Bloem, head of philosophy at the University of Pretoria, is chair of the working group responsible for writing the recommendation. She says the document is the first to include a non-Western value, initially inspired by ubuntu — "I am because we are" — and broadened to include principles of Eastern philosophies such as Taoism.

In this way, Ruttkamp-Bloem says the recommendation defines societies as "characterised by a permanent search for peaceful relations, tending towards care for others and the natural environment in the broadest sense of the term".

It means that no part of an AI system’s life cycle — from research to deployment, use and termination — can stand in isolation.

On a practical level, Ruttkamp-Bloem says Unesco’s recommendation would mean anyone passing a security camera in a neighbourhood or mall would have access to the data collected on them. Personal information would have to be stored securely, and banks using AI to assess the financial status of loan applicants would need to ensure their assessments are not based on biased data.

The document makes specific mention of gender equality, which means women and girls could have better access to science, technology and maths. AI ethics literacy would be taught in schools. And everyone — from small companies to governments — would have to "comply with the principles of transparency … fairness, accountability and responsibility".

Most important is the nod to final human oversight. Or, as the recommendation puts it: "Humans can resort to AI systems in decision-making and acting, but an AI system can never replace ultimate human responsibility and accountability."

It’s a necessary intervention, says Ruttkamp-Bloem, given that unchecked, unaccountable use of AI technology "will destroy us".

Though not enforceable, Unesco’s draft recommendation on ethics — which aims to "make AI systems work for the good of humanity, individuals, societies, and the environment and ecosystems; and to prevent harm" — could guide SA’s approach to an ethical policy on AI.

Says Ruttkamp-Bloem: "It would certainly also [affect] the digital divide in SA, as in these terms no-one can be excluded."

As things stand in SA, much more can be done around AI and ethics. The government has developed a draft policy on data and the cloud that acknowledges that "the wide deployment of intelligent digital technologies is beyond the regulatory scope of the existing regulatory authorities". But while it recognises the "critical need for policy and legislation relating to the use of data, and to ethics and security", the focus is overwhelmingly on economic opportunities and concerns.

What it means: Warp-speed advances in AI have left ethics issues in the dust, but efforts are under way to restore the balance

DDP Attorneys MD Danielle du Plessis, who specialises in technology and telecoms law, says she hasn’t seen any details on the social and moral concerns of AI technologies in the government’s policy paper.

"The tone is definitely coming from a sense of best use of resources, not from a sense of how can we protect the rights of our citizens," she tells the FM.

Ruttkamp-Bloem expresses a similar concern over the tendency to push ethics to the background. "Yes, President [Cyril] Ramaphosa said that every child should have a tablet and should learn how to code," she says. "If you teach children to code in a vacuum, it’s extremely dangerous."

While some view ethics as an obstacle to innovation — 62% of the Fico respondents, for example, reported difficulty in balancing responsibility with innovation — Ruttkamp-Bloem believes the opposite is true. "Ethics is a critical enabler of science and technology innovation," she says. She argues that the ethical literacy of developers is just as important as their ability to code.

Ultimately, it’s about striking the right balance — and the draft recommendation is a first step in that direction.

Says Ruttkamp-Bloem: "I don’t want to stop AI. I think it is a wonderful technology. It can change everybody’s life for good."
