Definitions of artificial intelligence ethics can vary, but most refer to a set of guiding principles for responsible and fair development and use of AI. The purpose of AI ethics is to minimize risk and negative outcomes while maximizing safety and security to protect people and the environment.
As the chart shows, AI technology presents multiple ethical challenges. The following is only a partial list:
Data Privacy: How can we prevent AI from being trained on personal or private information, such as health records?
Copyright Infringement: Can we prevent AI from being trained on copyrighted material, including text, images, art, and music?
Environmental Degradation: AI training and development consume huge amounts of energy and resources, raising sustainability concerns.
Accountability/Transparency: AI developers should be responsible for identifying and explaining their algorithms and data if issues arise.
Exploitation: Workers in developing countries are paid subsistence wages to work on AI that benefits billion-dollar companies in wealthy nations.
Weaponization: What happens if a weaponized military robot goes rogue or falls into the hands of bad actors?
Ethics is not the primary concern of AI developers and engineers, who tend to focus on technological innovation and commercial development. Governmental and non-governmental bodies and organizations have therefore developed guidelines and policies designed to protect the public, along with regulations to implement them. Such guidelines aim to hold AI developers accountable for developing and training AI ethically.
The European Union recently passed the EU Artificial Intelligence Act (see also Five Things You Need to Know About the EU's New AI Act), and the United States has created a Blueprint for an AI Bill of Rights. Another useful resource is the AI Safety Summit, which publishes an AI Regulation White Paper.
Common themes found in the ethical frameworks created by governments, organizations, and companies include privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values.
Along with ethical principles, education needs to change so that AI engineers and developers are trained in AI ethics.
AI ethics also needs to be built into the technology itself, both by designing AI systems that can detect unethical activity and by training them ethically. The company OpenAI has an initiative called Superalignment that aims to create technology designed to control potentially super-intelligent AI and prevent it from going rogue and destroying humanity.
The emergence of AI technology has raised many ethical concerns such as algorithmic bias, digital inequality, limited transparency, and data privacy. Yet it also has great potential to address societal injustices like poverty and inequality. Here are ways that AI can be or is being used to advance social justice:
Discrimination: AI can analyze vast amounts of data to identify patterns of discrimination in employment, housing, and criminal justice.
Education: AI chatbots can help undereducated and marginalized communities by providing real-time, interactive medical and legal instruction and assistance.
Human Rights: AI can be used to track human rights violations by analyzing diverse data, such as social media content and satellite imagery, to identify abusers.
Food Insecurity: The Famine Action Mechanism project is using AI to gather data from Nigeria, Somalia, South Sudan, and Yemen to monitor signs of food crises and prevent famine. Bread of Life International has built an interactive digital tool designed to identify hunger-prone regions and help distribute food more equitably.
Homelessness: The USC Center for AI and Society is using AI-based predictive modeling to identify key predictors of youth homelessness and reach at-risk youth before they become unhoused.
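To make the discrimination use case above concrete, the sketch below shows one simple way a pattern of unequal outcomes can be surfaced from data. It is an illustrative example, not any specific organization's tool: it computes per-group selection rates from hypothetical hiring decisions and flags groups whose rate falls below the commonly cited "four-fifths" threshold relative to the highest-rate group. The group labels and data are invented for the example.

```python
from collections import defaultdict

def selection_rates(records):
    """Fraction of positive outcomes (e.g., hires) per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        if selected:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(records, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times
    the highest group's rate (the 'four-fifths rule' heuristic)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical data: (group label, was_selected)
data = ([("A", True)] * 60 + [("A", False)] * 40
        + [("B", True)] * 30 + [("B", False)] * 70)

print(selection_rates(data))        # {'A': 0.6, 'B': 0.3}
print(disparate_impact_flags(data))  # {'A': False, 'B': True}
```

Real auditing systems use far richer statistical tests and control for confounding factors, but the core idea is the same: aggregate outcomes by group and compare rates.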