
Artificial Intelligence
We’re drawing on our 65 years as a trusted, objective adviser with cross-domain expertise to catalyze the consequential use of AI in all sectors, for all people.

The AI Trust Gap
A survey exploring the American public's perceptions of and trust in AI.
The artificial intelligence revolution isn’t on the horizon. It’s here, bringing with it countless opportunities and urgent social, ethical, and policy questions. That’s why MITRE looks at AI from all angles.
We’re proactively connecting government, industry, academia, and international allies to anticipate and address national AI challenges.
How? By applying our technical capabilities through our AI Assurance and Discovery Lab, Artificial Intelligence and Autonomy Innovation Center, MITRE Labs, independent research, strategic collaborations, and the six federally funded R&D centers we operate. Our work informs assured and secure AI-enabled systems development for our sponsors, our nation, and the public.
Primary Goals
- Catalyze consequential AI use
- Extract maximum value from AI while protecting society from unacceptable risks
- Diminish technology surprise
- Maintain U.S. leadership in AI innovation
ASSURING AI FOR IMPACT
AI technologies can help solve large-scale problems and transform society for the better. But they need to be sufficiently assured so that people can trust them for more than just low-stakes tasks such as making a restaurant or movie recommendation. A MITRE-Harris Poll survey found that most Americans are skeptical of AI guiding autonomous vehicles, government benefits, or healthcare. And 78% are very or somewhat concerned about it being used for malicious purposes.
How do we close that trust gap and unleash AI’s full potential for public good? By assuring that an AI application does what it’s expected to without unacceptable risks, and in the right context at the right time. This requires new standards and best practices, dynamically adapted to match the speed of AI innovation.
While zero risk isn’t attainable in any technology or aspect of life, AI assurance can help consumers and organizations make informed decisions about the costs and benefits of adopting AI solutions.
MITRE is drawing on our deep technical expertise and role as a strategic convener to enable impactful, assured AI across sectors.
AI Horizons
MITRE is a nonprofit working in the public interest, and because of the six federally funded research and development centers that we operate, there's a lot of interest across the entire federal government in AI.
And we are seeing that interest and trying to help those agencies figure out what their AI strategies are, how they can adopt AI in their systems, how they can assure and trust the AI that they're using.
MITRE excels at the convening, connecting, the bridging, bringing together academia, industry and government to think through some of these issues.
And also building software, and tools, and systems, and frameworks that can help people understand the space, talk in a common vocabulary, and be able to apply, repeatedly, some of the same tools across multiple industries.
So MITRE ATLAS is a great example of that, that's our threat framework for AI.
And it's really a database to understand vulnerabilities that have been demonstrated in the wild against real AI systems.
Now that we know how particular threat actors are able to exploit AI systems, we understand their techniques, their procedures, how they do it, we can now build models that will emulate their behavior so that you can try them out against your own AI systems, right?.
So this is gonna be invaluable as we talk about test and evaluation.
This Arsenal tool that we've developed with Microsoft will allow you to take all that knowledge that's captured in ATLAS and actually operationalize it and throw it at a system that you're building in order to do that testing.
MITRE Arsenal is our first major open source tool set that's focused on red team testing AI.
And these are all pieces that, together, will help, I think, both government and industry and academia all really come together with a common view.
We are the trusted entity that can sit in the middle and operate these large-scale public private partnerships.
We are leveraging to try and bring a unified view across the US federal government around what we should be doing in AI and how do we build a secure, resilient, robust, unbiased and ethical AI, really, for the United States.
AI assurance is really this umbrella term that covers AI safety, meaning we don't want AI to harm people or property.
It includes things like AI security, meaning we want AI that an adversary can't hack into and make it misbehave in a particular way.
We're worried about bias and ethics in our AI systems.
We want to make sure that it is really equitable in the decisions it's making and the way it's behaving.
And we're also concerned about robustness.
We want it to perform correctly, and if there are times when you want to have a base level of security built into the systems before they go out.
And that's where a lot of these emerging security standards for AI can help define what sorts of things you wanna make sure you're doing on the front end.
So you're always trying to balance threat, which is known adversaries doing bad things to systems versus risk, which is sort of the hypothetical vulnerabilities that someone could exploit in the future.
At MITRE, we developed the ATLAS framework, which is very much focused on the threat piece where we look at actual attacks against actual AI systems and have really built out this taxonomy to help people designing AI systems know precisely the sorts of things they should be mitigating because they aren't some hypothetical risk that someone published in an academic paper, but actual tactics, techniques and procedures adversaries are using in the wild.
We've also developed in partnership with Microsoft, something called Arsenal, which takes that knowledge and actually codes it into a set of red team emulation tools.
You have an AI component that's part of the big system and you want to be able to actually do more realistic testing evaluation.
This Arsenal tool will allow you to take all that knowledge that's captured in ATLAS and actually perationalize it and throw it at a system that you're building .in order to do that testing.
Critical infrastructure is increasingly using digital systems as part of its operations, and from there it's connecting in and using cloud.
And from there it's connecting in and using artificial intelligence as a kind of key component on the backend.
And the reason critical infrastructure wants to use AI is really to drive efficiencies in the operation of these complex systems. Imagine a power grid. There's all kinds of different changes in the demand.
The heat hits a particular temperature, everyone's air conditioning kicks on at the same time, all hitting the power grid at once.
And so the more you can use sophisticated algorithms to help manage the ups and downs in these systems, the more resilient these systems can be, the more capacity they can absorb, and the longer they'll be able to use them without doing major capital upgrades to them.
We now are putting AI in charge of all kinds of different pieces of our critical infrastructure.
And in any digital system, you wanna make sure that you understand and have fully characterized the behavior of your control systems.
Because if your control systems do the wrong thing, then you end up with safety critical issues that you need to mitigate.
We're very interested in how we can better secure and test the AI that is used to control critical infrastructure systems, but also we wanna make sure that these critical infrastructure systems are secure, not just against, say, AI failures, but also against sort of traditional run-of-the-mill hackers who may seek to disrupt them.
Technology and society are so interlinked.
We have to trust the technical tools around us every day.
If we didn't trust our automobile, would we feel safe driving into work?
Trust doesn't happen overnight, and it's not gonna happen overnight with AI systems.
It's gonna be an incremental progression, where as we are more immersed in AI, nd we use AI for more and more things, we begin to trust it more and more.
For example, in the intelligence community, analysts are very skeptical of relying on AI to help inform their analysis.
Mostly just because they also probably wouldn't trust a brand new analyst who just showed up yesterday either.
Much like building trust between humans working in a team, it's gonna take time to build trust between humans and AI working jointly in teams.
But given the productivity gains that you can get from leveraging AI in more and more things, I think it's only natural that we're gonna kind of all grow up together, right?
Humans are gonna be able trust AI more, AI is gonna be able to able to do more, and it'll, we will all sort
of march forward together.

AI Horizons Episode 1: Why MITRE?

AI Horizons Episode 2: AI Assurance and Security

AI Horizons Episode 3: AI and Critical Infrastructure

AI Horizons Episode 4: AI Trust Gap

MITRE ATLAS
