With the incoming Trump administration reportedly looking to slash food stamps and Medicaid, a new report finds that artificial intelligence and related technologies are playing a growing role in determining who receives those benefits — and in some cases mistakenly denying them.

The 197-page report by TechTonic Justice, a nonprofit launched Tuesday, finds that almost all 92 million low-income Americans already “have some basic aspect of their lives decided by AI” or automated decision-making software. That includes people affected by:

- eligibility and enrollment processes for Medicaid;
- prior authorization processes used by private insurers;
- fraud-detection systems used by food stamp programs; and
- landlords’ use of background-screening algorithms and rent-setting algorithms.
That’s concerning, said TechTonic Justice founder Kevin De Liban, because “the technology is almost always deficient.”

De Liban started the nonprofit after working for 12 years at Legal Aid of Arkansas, where he represented disabled clients who sued the state’s Department of Human Services for cutting their in-home care services based on a flawed computerized system. The agency paid $460,000 to settle the lawsuit and agreed to change the system.

The problem isn’t that automation is inherently bad, De Liban said. It’s that the systems are too often designed to prioritize cutting benefits over ensuring that people can access them. And they’re being used as a cheaper replacement for more careful, human-led review. He calls it “inescapable AI.”

“In any of these AI-based decision-making tools, you end up eliminating people who should be eligible for programs,” he said. “The accountability measures around this stuff are so broken that low-income people almost always end up with the short end of the stick.”

While AI policy discussions sometimes focus on the hypothetical risks of runaway superintelligence, the report emphasizes the ways less sophisticated automated systems are already causing harm.

For instance, De Liban said, the Social Security Administration’s Supplemental Security Income program, meant to help the poor and elderly, uses data-matching systems to check people’s property records or bank account records to determine eligibility. Those can fail in “very straightforward ways,” such as mistakenly cutting off someone’s benefits on the basis of property owned by a different person with the same name.

These problems, he predicted, are “only going to get worse” as the federal government looks to cut costs by further restricting access to benefits and encouraging states to do the same.
To address them, the report recommends bringing low-income communities into AI policy discussions and pushing for laws that require transparency and accountability in the use of AI tools and that establish liability for their creators.

There have been some high-profile recent examples of algorithms going wrong in public benefit programs. Last year, a STAT News investigation found that the multinational insurance firm UnitedHealth used algorithms to override clinicians’ judgment and deny care to seriously ill elderly patients. And Dutch authorities faced a years-long scandal after revelations that an algorithm for spotting child-care benefits fraud had wrongly accused tens of thousands of eligible recipients, driving families into poverty and children into foster care.

“The lack of transparency around whether and how AI is used in our daily lives creates an environment in which harm can remain undiscovered and unchecked,” said Elizabeth Laird, director of equity in civic technology for the nonprofit Center for Democracy and Technology. “This report takes a much-needed step toward shining a light on the impact of AI by quantifying its prevalence in the ways that matter most — how it affects the lives and opportunities of everyday people, for better and for worse.”

The issue could become more pressing as the Trump administration turns to technocrats to slash government spending. Trump has tapped business executives Elon Musk and Vivek Ramaswamy to lead an advisory body called the Department of Government Efficiency tasked with identifying dramatic cuts. While it is not yet clear how they’ll do that, health care and housing assistance are among the plausible targets. The idea of using AI to aid in cost-cutting has circulated in conservative policy circles, as Politico observed last week.
An October report by the center-right Foundation for American Innovation recommended “using artificial intelligence to streamline operations and reduce bureaucracy,” pointing specifically to the prospect that “the adoption of AI tools within the Centers for Medicare and Medicaid Services could drive savings across the entire U.S. healthcare system.” And the Heritage Foundation’s Project 2025 report recommends reducing “waste, fraud, and abuse, including through the use of artificial intelligence for their detection.”

The TechTonic Justice report “shows the harms of large-scale AI adoption by public sector entities without any governance protections for the people they purport to serve,” said Janet Haven, executive director of the nonprofit Data and Society. “This is the dark side of ‘government efficiency’ via technology, and a stark warning of what is truly at stake.”