Artificial Intelligence Ethics Approved by 193 Countries

PARIS, France, December 1, 2021 (ENS) – The first global agreement on the ethics of artificial intelligence, AI, was adopted Thursday by 193 countries. All the member states of the UN Educational, Scientific and Cultural Organization, UNESCO, adopted the historic agreement that defines the common values and principles needed to ensure the healthy development of AI.

“The world needs rules for artificial intelligence to benefit humanity,” said UNESCO Director-General Audrey Azoulay. “The Recommendation on the ethics of AI is a major answer. It sets the first global normative framework while giving states the responsibility to apply it at their level. UNESCO will support its 193 member states in its implementation and ask them to report regularly on their progress and practices.”

The 141 measures set forth in the agreement address new types of ethical issues that AI systems raise – their impact on decision-making, employment and labor, social interaction, health care, education, media, freedom of expression, access to information, privacy, democracy, discrimination, and weaponization.

The agreement explains that new ethical challenges are created by the potential of AI algorithms to reproduce biases regarding gender, ethnicity, and age, deepening existing forms of discrimination, identity prejudice and stereotyping.

Some of these issues are related to the capacity of AI systems to perform tasks which previously only human beings could do.

“These characteristics give AI systems a profound, new role in human practices and society, as well as in their relationship with the environment and ecosystems, creating a new context for children and young people to grow up in, develop an understanding of the world and themselves, critically understand media and information, and learn to make decisions,” the agreement states.

“In the long term, AI systems could challenge humans’ special sense of experience and agency, raising additional concerns about human self-understanding, social, cultural and environmental interaction, autonomy, agency, worth and dignity,” the agreement cautions.

AI is the ability of a machine such as a computer to reason, learn, plan and be creative. It enables systems to sense their environment, process what they perceive, solve problems and modify their behavior to achieve a goal by analyzing the effects of previous actions and working autonomously.

Artificial intelligence is already at work in everyday life, from booking flights and applying for loans to steering driverless cars. It is used in cancer screening and to help create specialized environments for people with disabilities.

AI also is supporting the decision-making of governments and the private sector, as well as helping combat global problems such as climate change and world hunger, according to UNESCO.

Yet the agency warns that AI poses unprecedented challenges.

“We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable artificial intelligence technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues,” UNESCO explained in a statement.

With this in mind, the adopted Recommendation aims to guide the construction of the necessary legal infrastructure to ensure the ethical development of this technology.

The University of Wisconsin-Madison has awarded a contract to Perrone Robotics, Inc., to deliver a self-driving, electric low speed vehicle shuttle integrated with the company’s TONY® retrofit kit. October 2021, Racine, Wisconsin (Photo courtesy Gateway Technical College)

The text highlights the advantages of AI while seeking to reduce the risks it also entails. According to UNESCO, it provides a guide to ensure that digital transformations promote human rights and contribute to the achievement of the UN’s 17 Sustainable Development Goals.

It addresses issues around transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labour, healthcare and the economy.

One of its main calls is to protect data, going beyond what tech firms and governments are doing to guarantee individuals more protection by ensuring transparency, agency and control over their personal data. The Recommendation also explicitly bans the use of AI systems for social scoring and mass surveillance.

The text also emphasizes that AI actors should favor data, energy and resource-efficient methods that will help ensure that AI becomes a more prominent tool in the fight against climate change and in tackling environmental issues.

UNESCO’s Assistant Director General for Social and Human Sciences Gabriela Ramos said, “Decisions impacting millions of people should be fair, transparent and contestable. These new technologies must help us address the major challenges in our world today, such as increased inequalities and the environmental crisis, and not deepen them.”

Pew Research Probes AI’s Advantages and Problems

In the summer of 2018, the Pew Research Center, a nonpartisan American think tank based in Washington, DC, conducted a survey of 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists on the subject of artificial intelligence.

Pew canvassers asked, “As emerging algorithm-driven artificial intelligence continues to spread, will people be better off than they are today?”

Those surveyed predicted that networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities.

They said that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation.

They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured about everything from personal genomes to nutrition.

Ethical AI Required for Healthcare

AI holds great promise for improving the delivery of healthcare and medicine worldwide, but only if ethics and human rights are put at the heart of its design, deployment, and use, according to guidance from the World Health Organization, WHO, published in June.

The report, “Ethics and governance of artificial intelligence for health,” is the result of two years of consultations held by a panel of international experts appointed by WHO.

“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm,” said WHO Director-General Dr. Tedros Adhanom Ghebreyesus. “This important new report provides a valuable guide for countries on how to maximize the benefits of AI, while minimizing its risks and avoiding its pitfalls.”

In some wealthy countries, AI is already being used to improve the speed and accuracy of diagnosis and screening for diseases, to assist with clinical care, to strengthen health research and drug development, and to support diverse public health interventions, such as disease surveillance, outbreak response, and health systems management.

AI could empower patients to take greater control of their own health care and better understand their evolving needs. It could also enable resource-poor countries and rural communities, where patients often have restricted access to health-care workers or medical professionals, to bridge gaps in access to health services.

However, WHO cautions against overestimating the benefits of AI for health, especially when this occurs at the expense of core investments and strategies required to achieve universal health coverage.

It warns against “unethical collection and use of health data; biases encoded in algorithms, and risks of AI to patient safety, cybersecurity, and the environment.”

WHO explains that while “private and public sector investment in the development and deployment of AI is critical, the unregulated use of AI could subordinate the rights and interests of patients and communities to the powerful commercial interests of technology companies or the interests of governments in surveillance and social control.”

The WHO report emphasizes that systems trained primarily on data collected from individuals in high-income countries may not perform well for individuals in low- and middle-income settings.

U.S. Army Sgt. Daniel Soto, an operating room specialist from Fort Hood, Texas, and Sgt. Mark Holt, emergency medical specialist for Brooke Army Medical Center, clean their hands and put on gloves in the trauma ward at the 37th Military Hospital, Accra, Ghana, February 23, 2016 (Photo courtesy U.S. Army Africa)

AI systems should therefore be carefully designed to reflect the diversity of socio-economic and health-care settings. They should be accompanied by training in digital skills, community engagement and awareness-raising, especially for the millions of healthcare workers who will require digital literacy or retraining if their roles and functions are automated, and who must contend with machines that could challenge the decision-making and autonomy of providers and patients.

Ultimately, guided by existing laws and human rights obligations, and new laws and policies that enshrine ethical principles, governments, providers, and designers must work together to address ethics and human rights concerns at every stage of an AI technology’s design, development, and deployment.

WHO and UNESCO Align on Six Ethical Principles

WHO has issued six principles to ensure AI works for the public interest in all countries:

Protecting human autonomy: Humans should remain in control of health-care systems and medical decisions; privacy and confidentiality should be protected, and patients must give valid informed consent through appropriate legal frameworks for data protection.

The UNESCO Recommendation has as its first objective, “…to promote respect for human dignity and gender equality, to safeguard the interests of present and future generations, and to protect human rights, fundamental freedoms, and the environment and ecosystems in all stages of the AI system life cycle.”

Promoting human well-being and safety and the public interest: The designers of AI technologies should satisfy regulatory requirements for safety, accuracy and efficacy for well-defined use cases or indications. Measures of quality control in practice and quality improvement in the use of AI must be available.

The UNESCO Recommendation is based on protecting human safety and the public interest. It states in Section 24, “This value demands that peace should be promoted throughout the life cycle of AI systems, in so far as the processes of the life cycle of AI systems should not segregate, objectify, or undermine the safety of human beings, divide and turn individuals and groups against each other, or threaten the harmonious coexistence between humans, non-humans, and the natural environment, as this would negatively impact on humankind as a collective.”

Ensuring transparency, explainability and intelligibility: Transparency requires that sufficient, easily accessible information be published or documented before the design or deployment of an AI technology. It also requires meaningful public consultation and debate on how the technology is designed and how it should or should not be used.

The UNESCO agreement attempts to ensure transparency, saying, “This protection framework and mechanisms concern the collection, control over, and use of data and exercise of their rights by data subjects and of the right for individuals to have personal data erased, ensuring a legitimate aim and a valid legal basis for the processing of personal data as well as for the personalization, and de- and re-personalization of data, transparency, appropriate safeguards for sensitive data, and effective independent oversight.”

Fostering responsibility and accountability: Although AI technologies perform specific tasks, stakeholders are responsible for ensuring that they are used under appropriate conditions and by appropriately trained people. Effective mechanisms for questioning and for redress for those that are adversely affected by decisions based on algorithms should be available.

The UNESCO agreement states, “…this Recommendation aims to enable stakeholders to take shared responsibility based on a global and intercultural dialogue.”

Ensuring inclusiveness and equity: Inclusiveness requires that AI for health be designed to encourage the widest possible equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes.

The UNESCO agreement also provides for inclusiveness, saying, “Respect, protection and promotion of diversity and inclusiveness should be ensured throughout the life cycle of AI systems, at a minimum consistent with international human rights law, standards and principles…”

Promoting AI that is responsive and sustainable: Designers, developers and users should continuously and transparently assess AI applications during actual use to determine whether AI responds adequately and appropriately to expectations and requirements. AI systems should be designed to minimize environmental consequences and increase energy efficiency. Governments and companies should address anticipated disruptions in the workplace, including training for health-care workers to adapt to the use of AI systems, and potential job losses due to the use of automated systems.

The UNESCO agreement, too, emphasizes sustainability, providing, “This must comply with international law as well as with international human rights law, principles and standards, and should be in line with social, political, environmental, educational, scientific and economic sustainability objectives.”

Featured image: An illustration of artificial intelligence, October 6, 2018 (Image by Mike McKenzie)
