
The Forgotten EO: The Rescission of Executive Order 14091

The rescission of Executive Order 14091 has been largely overlooked, yet the order played a key role in requiring AI fairness and equity assessments across federal agencies. With its repeal, bias evaluations and equity oversight in AI governance are no longer mandatory, signaling a shift toward deregulation.
[Image: A government document labeled "Executive Order 14091" stamped as rescinded, with AI network connections fading in the background.]

On January 20, 2025, President Trump rescinded Executive Order 14091, reversing a policy designed to embed racial equity into federal governance. Originally issued in February 2023 as "Advancing Racial Equity and Support for Underserved Communities Through the Federal Government," EO 14091 required agencies to integrate equity considerations into decision-making, including AI governance. Its repeal signals a shift away from structured fairness evaluations, affecting AI development, oversight, and regulatory direction.

Executive Order 14091’s Impact on AI Governance

While EO 14091 was not an AI-specific order, its principles shaped how AI was developed and deployed across federal agencies. It required government institutions to assess the impact of AI-driven decisions on marginalized communities, ensuring these systems did not reinforce systemic bias.

Algorithmic Bias and Equity Assessments

Federal agencies using AI in hiring, law enforcement, and public services were required to conduct fairness evaluations. These assessments aimed to identify algorithmic discrimination, refine AI models, and prevent biased decision-making. By embedding equity considerations, EO 14091 encouraged greater transparency and accountability in AI systems.
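To illustrate what such a fairness evaluation might involve, the sketch below applies the "four-fifths rule," a common disparate-impact heuristic from employment-discrimination practice. This is a hypothetical example of one audit technique, not a method named in EO 14091 itself; the group labels and data are invented.

```python
# Illustrative disparate-impact check using the "four-fifths rule," a
# common fairness-audit heuristic (not a requirement named in EO 14091).
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) tuples -> selection rate per group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.
    A ratio below 0.8 is conventionally flagged as potential adverse impact."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (demographic group, model selected applicant)
audit = [("A", True)] * 40 + [("A", False)] * 60 \
      + [("B", True)] * 20 + [("B", False)] * 80
ratio = disparate_impact_ratio(audit)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.20 / 0.40 = 0.50, below 0.8
```

A real agency evaluation would go well beyond a single ratio, but this kind of screening metric is typically where an algorithmic-discrimination review begins.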

With its repeal, these requirements disappear. Agencies are no longer obligated to assess AI fairness, increasing the risk of unchecked bias in public-sector AI applications. Without a federal directive, individual agencies must decide whether to continue these evaluations, creating an uneven regulatory landscape.

The Loss of Equity-Based AI Oversight

EO 14091 also mandated improved data collection to monitor AI’s impact on different demographic groups. Agencies gathered demographic data to analyze AI decision-making trends, helping refine models and mitigate unintended biases. This proactive approach ensured AI governance aligned with fairness objectives.
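The kind of demographic monitoring described above can be sketched as a simple aggregation of a model's decisions by group and reporting period, so that widening approval-rate gaps become visible over time. The field names and data here are illustrative assumptions, not drawn from the order.

```python
# Hypothetical sketch of demographic outcome monitoring: aggregate a
# model's decisions by period and group to surface approval-rate gaps.
from collections import defaultdict

def approval_rate_trends(records):
    """records: iterable of dicts with 'period', 'group', 'approved'.
    Returns {period: {group: approval_rate}}."""
    counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for r in records:
        cell = counts[r["period"]][r["group"]]
        cell[0] += 1                   # total decisions for this group/period
        cell[1] += int(r["approved"])  # approvals for this group/period
    return {p: {g: c[1] / c[0] for g, c in groups.items()}
            for p, groups in counts.items()}

# Invented sample: one quarter of automated decisions for two groups
records = (
    [{"period": "2024-Q1", "group": "A", "approved": True}] * 30
    + [{"period": "2024-Q1", "group": "A", "approved": False}] * 20
    + [{"period": "2024-Q1", "group": "B", "approved": True}] * 15
    + [{"period": "2024-Q1", "group": "B", "approved": False}] * 35
)
trends = approval_rate_trends(records)
print(trends["2024-Q1"])  # {'A': 0.6, 'B': 0.3}
```

Tracked quarter over quarter, a persistent or growing gap like the one above is the signal that would prompt a model review under the kind of oversight regime the order encouraged.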

Without this mandate, agencies are not required to track AI’s effect on underserved communities. The absence of structured oversight raises concerns about discriminatory outcomes in automated hiring, credit assessments, and surveillance technologies. The private sector may implement fairness measures voluntarily, but without federal enforcement, adherence will be inconsistent.

Comparison with EO 14110

The rescission of EO 14091 occurred alongside the repeal of EO 14110, which focused directly on AI governance, risk management, and national security. While EO 14110 established technical oversight of AI, EO 14091 provided an ethical foundation by ensuring AI development did not disproportionately harm marginalized communities.

Note for AIGP Exam Candidates: Neither Executive Order 14091 nor Executive Order 14110 will be tested on the AIGP exam. The IAPP has announced the removal of U.S.-specific laws and executive orders from the curriculum to ensure global relevance for AI governance professionals. Since these policies are no longer part of the exam material, this article serves purely as an informational resource for those interested in their implications.

Policy Shift Toward Deregulation

By repealing both orders, the federal government moves toward a deregulated AI environment that prioritizes innovation over oversight. EO 14110 provided guidelines for AI safety and transparency, while EO 14091 ensured AI technologies did not reinforce systemic inequities. The removal of both policies weakens AI accountability measures, leaving agencies and businesses with fewer regulatory obligations.

Implications of Executive Order 14091’s Rescission

The repeal of EO 14091 reshapes AI governance in several ways, eliminating federal requirements for fairness evaluations, increasing regulatory uncertainty, and shifting responsibility to individual states and industries.

1. No Federal Equity Standards for AI

Agencies no longer need to assess AI systems for fairness or algorithmic bias. This raises concerns that AI-driven decision-making—especially in hiring, law enforcement, and credit assessments—may operate without safeguards against discrimination. Without structured evaluations, disparities in AI outcomes may go undetected.

2. Uncertainty in AI Regulations

The repeal of both EO 14091 and EO 14110 leaves AI governance without clear federal direction. Companies and agencies developing AI must now navigate a fragmented regulatory environment where standards vary across industries and jurisdictions. This lack of consistency could slow AI adoption in federally regulated sectors.

3. Increased Role for State AI Laws

With the federal government stepping back, states may introduce their own AI fairness laws, similar to how privacy laws developed independently after federal inaction. States like California and New York could establish equity-focused AI regulations, creating compliance challenges for businesses operating nationwide.

4. International Misalignment in AI Policy

Under the Biden administration, U.S. AI governance aligned with global frameworks like the EU AI Act, which emphasizes ethical AI development. The repeal of EO 14091 moves the U.S. away from international AI fairness standards, potentially complicating regulatory cooperation with global partners.

Future of AI Equity Without EO 14091

Without federal mandates, AI fairness will now depend on industry self-regulation, state laws, and agency-level initiatives. Some federal agencies, like the Equal Employment Opportunity Commission (EEOC) and the Federal Trade Commission (FTC), may continue investigating AI discrimination cases, but these efforts will lack a unified policy framework.

Will the Private Sector Fill the Gap?

Many AI-driven companies recognize the risks of algorithmic bias and may voluntarily integrate fairness measures into their models. However, without federal enforcement, compliance will vary widely. Larger tech firms operating globally must still meet EU and international AI fairness requirements, but domestic AI regulation remains uncertain.

The Risk of Widening AI Disparities

Public-sector AI systems may no longer undergo structured bias evaluations. This could deepen inequities in automated hiring, credit scoring, predictive policing, and healthcare AI. Without consistent fairness standards, marginalized communities may face disproportionate harm from AI-driven decisions.

Whether new policies emerge to replace EO 14091 remains uncertain. If no alternative equity safeguards are introduced, AI fairness in the U.S. may become fragmented, voluntary, or deprioritized altogether.

Conclusion

The rescission of Executive Order 14091 removes a key equity safeguard in AI governance, reducing oversight in algorithmic bias detection, fairness evaluations, and ethical AI deployment. Without federal direction, AI fairness now depends on industry practices, state laws, and agency-led initiatives.

For AI governance professionals, this shift underscores the need for continued advocacy and regulatory development to ensure AI systems remain transparent, fair, and accountable. As the U.S. moves toward a deregulated AI policy, the absence of EO 14091 raises the question: Will equity remain a priority in AI development, or will it become an afterthought in the pursuit of innovation?