Federal AI Is Mostly Boring — And That's the Point
The federal government’s generative AI footprint grew nearly ninefold in a single year. According to a GAO report examining 12 federal agencies, documented generative AI use cases jumped from 32 in 2023 to 282 in 2024. Of the 12 agencies analyzed, including the Department of Homeland Security, 10 now have active generative AI deployments.
That number should get the attention of anyone in law enforcement. But where those use cases actually sit tells a more interesting story.
The Numbers
Of the 282 generative AI use cases the GAO documented, 84% fall into three categories: mission-enabling functions (61%), government services (15%), and health and medicine (9%). Mission-enabling functions include internal operations that improve written communications, streamline information access, and track program status. The federal government’s AI adoption, in other words, is overwhelmingly administrative.
This is not the AI that makes headlines. There are no autonomous drones, no predictive policing algorithms, no real-time facial recognition, and no AI-enhanced ALPR systems in that 84%. It’s document processing. It’s drafting correspondence. It’s helping analysts find information faster, and administrative assistants now have their own administrative assistants.
For anyone following AI adoption at the state and local level, this should sound familiar. The same pattern is playing out in agencies across the country. AI’s most immediate value is in the back office, not on the street.
The Governance Reality
The GAO found that 10 of the 12 agencies studied cited existing federal policy as an obstacle to adopting generative AI. Four agencies specifically noted that the technology evolves faster than policy can keep pace. That tension between capability and governance is not limited to the federal level.
Executive Order 14179 directs agencies to submit AI action plans and develop acceptable use policies for generative AI. The accompanying guidance (M-25-21) requires agencies to identify “high-impact” AI use cases and establish responsible deployment frameworks. The intent is clear: adopt, but with guardrails.
The challenge is building those guardrails at the same speed the technology moves. Federal agencies have resources, dedicated AI officers, and access to frameworks like NIST’s guidance on secure AI development practices. Most local law enforcement agencies lack that infrastructure. If federal agencies with dedicated teams are struggling to keep policy current, smaller agencies face a steeper version of the same problem.
What This Means for Non-Federal LE Agencies
The federal playbook is worth watching for two reasons.
First, DHS is one of the agencies in the GAO study. Whatever frameworks, acceptable use policies, and acquisition standards emerge from federal adoption will likely influence grant requirements, compliance expectations, and best-practice guidance that flows down to state and local agencies. The federal government sets the template whether it intends to or not.
Second, the 84% finding validates what practitioners already know: the real AI adoption story is administrative. Report generation, records management, FOIA processing, and training documentation are the use cases that scale first because they carry lower risk and address genuine operational pain. The GAO data confirms this isn’t just a local trend. It’s the pattern everywhere.
The Takeaway
Federal agencies nearly doubled their total AI use cases in one year, from 571 to 1,110. Generative AI specifically grew nearly ninefold. And the vast majority of that growth is in internal operations that most people will never see or think about.
The boring revolution isn’t just local. It’s federal policy now.