In addition to recommending movies and things to buy, AI can now help make decisions about our careers, health, and interactions with law enforcement. And sometimes, it’s biased.

AI Bias Is More Than a Machine Learning Problem
Artificial intelligence (AI) is now a mainstay of our everyday lives. Traffic apps help us commute faster. Spam filters reduce clutter in our inboxes. And AI-enabled tools can even help radiologists better detect tumors in X-rays.
But AI doesn’t always get it right. Or rather, AI doesn’t always accurately reflect reality.
From misidentifying people of color to excluding women from technology jobs, AI bias is rarely intentional. But it is real.
Conventional wisdom holds that AI bias is primarily a machine learning (ML) problem, one that can be fixed by feeding models more training samples. The reality is more complicated.
“AI bias is a complex sociotechnical challenge,” says Mikel Rodriguez, director of MITRE Labs’ AI and Autonomy Innovation Center. “Along the winding supply chain of people, systems, and datasets, bias can seep in many different ways—it’s rarely just one thing.”
Multiple companies and organizations contribute best practices, frameworks, and algorithms to help ensure that AI tools are efficient, fair, and balanced across sectors. As part of our Social Justice Platform, MITRE will leverage these existing analytical tools and explore new approaches to mitigating bias in AI.
To provide insights into this important topic, we asked three of our experts to join in a roundtable to discuss the challenges of bias in AI and how MITRE is helping to find solutions: Florence Reeder, Chongeun Lee, and Jay Crossler.
Q: How does unintentional AI bias happen?
Flo Reeder, artificial intelligence engineer: In addition to the better-understood issue that a machine learning program simply isn’t given enough variety of training samples, there are at least a half-dozen ways bias creeps into AI.
One, the biggest problem is the systemic bias in society—and it’s repeated, baked into the algorithm. For instance, according to ProPublica, researchers found the COMPAS algorithm used by courts and parole boards to forecast future criminal behavior had been “…written in a way that guarantees Black defendants will be inaccurately identified as future criminals more often than their White counterparts.”
Two, organizations may use a system for an unintended purpose or for a population it wasn’t trained for. Sticking with the COMPAS example, the system was originally developed for making recommendations about probation, but was repurposed for pre-trial phases, setting bail, and more. Thus, it made recommendations about people who hadn’t even been convicted of a crime.
Three, because programmers try to make systems as accurate as possible, they may ignore subpopulations and treat them as errors in the data. So, by optimizing for the largest population, the system performs disproportionately worse for the smaller populations. (A brief sketch at the end of this answer illustrates the effect.)
Four, missing data means “you don’t know what you don’t know.” If certain diversity information isn’t routinely captured, there’s no way to know if a system affects a population disproportionately.
Five, hardware problems. A camera, lens, or sensor may have been manufactured to better capture light skin than dark. Algorithms trained on data captured by that hardware may then make more errors for certain subpopulations.
And six, stakeholders aren’t included in decisions around the algorithm. For example, a major technology company designed a system to improve productivity—and gave it the authority to fire staff. Before long, employees were being fired because the algorithm didn’t know employees had a right to go to the restroom.
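To make Reeder’s third point concrete, here is a minimal, hypothetical sketch in Python. The data, the group labels, and the 95/5 population split are all invented for illustration: a model tuned for overall accuracy can look successful in aggregate while failing badly on a small subgroup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical data: 95% of samples come from group A, 5% from group B.
# In group B the relationship between the feature and the label is reversed,
# so a model optimized for overall accuracy will fit group A and neglect group B.
n_a, n_b = 1900, 100
x_a = rng.normal(0, 1, size=(n_a, 1))
y_a = (x_a[:, 0] > 0).astype(int)
x_b = rng.normal(0, 1, size=(n_b, 1))
y_b = (x_b[:, 0] < 0).astype(int)  # opposite pattern in the minority group

X = np.vstack([x_a, x_b])
y = np.concatenate([y_a, y_b])
group = np.array(["A"] * n_a + ["B"] * n_b)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

print("Overall accuracy:", accuracy_score(y, pred))                   # looks fine
print("Group A accuracy:", accuracy_score(y_a, pred[group == "A"]))
print("Group B accuracy:", accuracy_score(y_b, pred[group == "B"]))   # much worse
```

Reporting error rates per subgroup, rather than a single aggregate number, is one simple way to surface this kind of disparity.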
Q: What are the primary solutions to AI bias?
Chongeun Lee, AI research developer: I’d like to expand on Flo’s last point, about the importance of stakeholders. To avoid bias, AI developers need to work “left of the algorithm” before data is selected or models trained. Just as we all learned you need to bake cybersecurity into a system from the start rather than patching it at the end, the same is true for AI and bias.
That means having AI practitioners work with stakeholders who’ll be affected by the developed systems. What does fairness mean to them? What measurably effective methods can we employ to reduce the potential bias? Establishing consensus on a clear operational definition of fairness for a specific project and context can be very subtle and difficult.
But without it, any later efforts to identify and mitigate bias may be built on a foundation of sand. What are the worst-case consequences? Could someone get arrested or lose their rights if a human isn’t in the loop at certain points? Should the system even be built?
For instance, a system developer—which is often not one person, but a team—may need to decide exactly what the roles and authorities are of humans and machines in the system. Done right, that’s not just a software design decision. It’s a social, technical, and organizational issue that may require multidisciplinary teams engaging with the government and industry as well as the stakeholders.
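One reason reaching that consensus is so hard is that widely used statistical definitions of fairness can conflict with one another. The sketch below, using invented numbers, computes two common metrics, demographic parity difference and equal opportunity (true-positive-rate) difference, on the same set of predictions; the system looks fair under one definition and unfair under the other.

```python
import numpy as np

def demographic_parity_diff(pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(pred[group == "A"].mean() - pred[group == "B"].mean())

def equal_opportunity_diff(pred, y_true, group):
    """Difference in true-positive rates (recall) between the two groups."""
    tpr = {}
    for g in ("A", "B"):
        mask = (group == g) & (y_true == 1)
        tpr[g] = pred[mask].mean()
    return abs(tpr["A"] - tpr["B"])

# Invented example: both groups receive positive predictions at the same rate
# (demographic parity holds), but qualified members of group B are selected
# far less often than qualified members of group A.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0,   1, 1, 0, 0, 0, 0, 0, 0])
pred   = np.array([1, 1, 1, 1, 0, 0, 0, 0,   1, 0, 1, 1, 1, 0, 0, 0])
group  = np.array(["A"] * 8 + ["B"] * 8)

print("Demographic parity diff:", demographic_parity_diff(pred, group))       # 0.0
print("Equal opportunity diff: ", equal_opportunity_diff(pred, y_true, group)) # 0.5
```

Which of these definitions matters for a given system is exactly the kind of question stakeholders need to answer before the model is built.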
Additionally, there’s growing “conventional wisdom” about ways to mitigate the risks of unintended bias that can actually be counterproductive. For example, a common guideline for bias mitigation is to remove all features in training data that identify membership in protected classes, such as gender or race. While that may seem logical, this approach can at times introduce more bias, because other features often act as proxies for the removed attributes while making any disparity harder to measure.
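To see why simply dropping the sensitive column can fall short, here is a small, hypothetical sketch: a correlated proxy feature (think of something like a postal code) still carries most of the information about the removed attribute, so a model trained without the attribute can largely reconstruct it, while the removal makes any disparity harder to audit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical data: a binary protected attribute and a "neighborhood"
# feature that is strongly correlated with it (a proxy).
n = 5000
protected = rng.integers(0, 2, size=n)
# 90% of the time the proxy simply mirrors the protected attribute.
proxy = np.where(rng.random(n) < 0.9, protected, 1 - protected)
other = rng.normal(0, 1, size=n)

# "Fairness through unawareness": drop the protected column, keep the proxy.
X_unaware = np.column_stack([proxy, other])

# A simple model can still recover the protected attribute from the proxy,
# so removing the column did not remove the signal -- it mostly removed the
# ability to check predictions against that attribute easily.
clf = LogisticRegression().fit(X_unaware, protected)
recovered = clf.predict(X_unaware)
print("Protected attribute recovered from 'unaware' features:",
      (recovered == protected).mean())  # roughly 0.9
```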
To mitigate this complex problem, the AI workforce must continually reexamine common practices throughout the life cycle. This includes initial research questions, development, deployment, feedback, and adjustments.
Q: What’s MITRE’s role in combatting AI bias?
Jay Crossler, MITRE technical fellow: We recognize many leading organizations are working hard to reduce AI bias. But there are three areas in particular where I believe the Technology and Analysis Focus Area of MITRE’s Social Justice Platform will make a strong contribution.
First, in support of getting “left of the algorithm,” we can apply our extensive experience with systems engineering tools. We can focus on aspects such as early stakeholder engagement and participatory design, capturing claims and supporting evidence in a “fairness case,” and systems thinking approaches to exploring hazards and unintended consequences.
Second, from our independent vantage point, we can act as “myth busters.” We can collect and share examples of when conventional wisdom may be wrong—or wrong in certain circumstances.
Third, we can share lessons learned across government and stakeholders. We support a broad range of sectors and can abstract and integrate lessons learned and best practices that otherwise might never cross between those sectors. For instance, we may learn something from clinical decision support that, when mapped correctly, will be relevant for some kinds of intelligence analysis.
While the benefits I’m describing—systems thinking, being conflict-free, working across all of government—are long-term MITRE strengths, they’re also fundamental to mitigating AI bias.
I’m pleased we’re able to help reduce the risks of this valuable technology as it becomes more and more part of our daily lives.
If you are interested in working with MITRE to help advance our social justice initiative or have questions, contact us at socjusticeplatform@mitre.org.
—by Bill Eidson