How Can Ethics Make Better AI Products?

By Jonathan Rotner
Global pressures to quickly develop and deploy AI sit in tension with understanding and mitigating its impacts. We argue—and recommend—that responsibly developed AI leads to better AI, and aligns with US economic and national security interests.


The growth of artificial intelligence (AI) technology has rapidly expanded the types of goods and services available, from personal convenience to professional assistance to defense and security capabilities. Under bounded circumstances, AI is tremendously powerful at analyzing patterns, working through large amounts of data, and quickly responding to inputs. Yet as AI continues to permeate and integrate into users’ lives, it raises significant ethical concerns. Those concerns range from unease with automated email replies that mimic individual personalities, to alarm over the national security implications of deepfakes, objection to the mishandling of the private data that drives AI platforms, fear of losing jobs to AI, protest over the disproportionate impact of mass surveillance on minority groups, and fear of losing one’s life to AI. Despite ample opportunity to apply better practices and lessons learned, systems that cause real harm continue to be fielded.

Speed is one significant motivator for maintaining the status quo: speed for companies that develop AI to be first to market, and speed for United States (US) national security representatives to win an AI arms race against China. These market and geopolitical pressures promote quick solutions; as a result, complex problems are reduced to purely technical ones, and products are deployed without adequate evaluation and oversight. Moving faster leaves less time to test, understand, and act on risk and impact assessments, security and privacy concerns, and opportunities to properly calibrate trust in an AI system. All of these elements would produce more responsible and ethical products. Put simply, the decision to act quickly and the decision to act responsibly live in tension.

An essential element of any solution is demonstrating that ethical AI products are better AI products. Public and private policies can shape AI’s development and deployment so that ethical AI simultaneously boosts economic and national security outcomes, allowing the US to capitalize on existing international demand for superior AI products. The recommendations in this essay are therefore directed toward two groups: the organizations that develop and deploy AI, and the policymakers who can enact change.

If the US takes these practical, impactful steps, new AI products and governance can reflect the socio-technical complexity of the problems they aim to solve, and can empower those who use and are affected by the AI. New AI products and governance can respond to a growing consumer base increasingly aware of the drawbacks of unfettered AI. New AI approaches can expand the AI workforce and contribute to a stronger economy (which also bolsters domestic security). And new AI approaches can help the US maintain international leadership and security by establishing norms that favor the promotion of Western, democratic principles.