
IRS Must Be Aware of Risks of AI Use, Tax Professionals Say

Posted on June 5, 2023

Artificial intelligence can be helpful in tax administration, but tax professionals warn that the government must understand the risks associated with its use to effectively manage it.

AI can use past behavior to predict how a taxpayer might act in the future and can apply algorithms that flag irregularities. It can help the IRS process documents faster, detect noncompliance, and improve customer service.
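The article does not describe which techniques the IRS has in mind, but as a rough sketch of what algorithmic irregularity detection can look like, an unsupervised anomaly detector can be run over per-return features and the statistical outliers routed to a human reviewer. The snippet below is a hypothetical illustration using scikit-learn's IsolationForest; the feature names and values are invented.

```python
# Illustrative only: the article does not describe the IRS's actual models.
# A minimal sketch of flagging irregular returns with an unsupervised anomaly
# detector (scikit-learn's IsolationForest); the per-return features below
# are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
returns = np.column_stack([
    rng.normal(70_000, 20_000, 5_000),  # reported income
    rng.normal(12_000, 4_000, 5_000),   # total deductions
    rng.poisson(0.2, 5_000),            # amended filings in recent years
])

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(returns)  # -1 marks statistically unusual returns

flagged = np.flatnonzero(labels == -1)
print(f"{flagged.size} of {returns.shape[0]} returns flagged for manual review")
```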

With rapid advancements in technology, AI has become a topic of discussion in the tax community and elsewhere. While it can help improve government efficiency and reduce costs, concerns have been raised about its use by the IRS and other agencies.

“I really like the idea of AI and its capacity to improve . . . tax administration and government and activities. But I am a deeply skeptical human being about the ability of other human beings and myself to manage technology and really understand its impact,” Nina E. Olson of the Center for Taxpayer Rights said May 4 during the American Bar Association Section of Taxation meeting.

The IRS’s strategic operating plan for spending its additional funding from the Inflation Reduction Act (IRA, P.L. 117-169) says it will enhance its “use of data and analytics to drive operations and decision-making. Improved data analytics will better position us to optimize operations for taxpayers and employees alike.”

Travis W. Thompson of Sideman & Bancroft LLP told Tax Notes that he expects the IRS to invest some of its IRA funding in AI technology. But with the debt limit deal eliminating more than $20 billion of the agency’s supplemental funding, he said it’s unclear how that might affect planned technology upgrades.

IRS Commissioner Daniel Werfel, in a recent email to employees, said he is confident that the debt ceiling deal “will allow the IRS to continue on a positive trajectory and build on our recent successes to improve taxpayer service, rebalance our enforcement work to increase our capacity to review complex returns from wealthy filers and large corporations, and update our technology needed to modernize all aspects of our operations.”

Werfel said he would share more details soon and is “optimistic about the ultimate impact the Inflation Reduction Act will have in helping us meet all of our critical goals to improve our work for taxpayers and the nation.”

The Data Problem

The IRS has lots of data from the tax returns and information returns it collects, but it lacks the “systems, skills, and ability to really leverage those data with modern information technology systems,” James R. McTigue Jr. of the Government Accountability Office said May 30 during a webcast hosted by the Center for Taxpayer Rights.

For the IRS to even begin to think about using AI, it must invest in the technology, the skills, and the people to make it happen, McTigue said.

Don Fort of Kostelanetz LLP, a former chief of the IRS Criminal Investigation division, said that when he was at the agency, it was “woefully underutilizing data” because of a lack of funding and talent.

Fort said the IRS did a pilot that built a model and algorithm to help detect payroll tax fraud. “With the help of some really smart people and certain technology, we were able to successfully build that out and identify some really significant payroll tax fraud cases,” he said.

But the pilot used only internal data, not external sources such as publicly available open-source information, which Fort said means it “only tells part of the story.”

Unforeseen Consequences

“AI systems pose unique challenges to accountability, especially as they relate to civil liberties, ethics, and social disparities,” according to the GAO.

A major unintended consequence of using AI, if you are not sufficiently attentive to the history of the data you are processing, is the risk of carrying the problems embedded in that history into the model’s outputs, Abdi Aidid of the University of Toronto Faculty of Law said May 4 at the ABA meeting.

For example, if you built a tool for predicting optimal wages by asking questions and having participants submit their resumes, and “you’re not attentive to the fact that statistically, women are more likely to have” interruptions in their resumes and a history of lower wages, “you run the risk of reproducing it,” Aidid said.
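As a purely hypothetical sketch of the dynamic Aidid describes, the snippet below trains a wage model on synthetic historical data in which women are statistically more likely to have career interruptions. Even though gender is never given to the model, its predictions reproduce part of the historical gap through that correlated feature; all numbers are invented for illustration.

```python
# Toy illustration of the point above (entirely synthetic data, not any real
# system): a wage model trained on historical outcomes can reproduce a gender
# pay gap even when gender is never given to the model, because a correlated
# feature (career interruptions) acts as a proxy.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
is_woman = rng.integers(0, 2, n)
# Assumed pattern: women are statistically more likely to have interruptions.
interruption = rng.binomial(1, 0.15 + 0.25 * is_woman)
experience = rng.normal(10, 3, n)
# Historical wages embed an interruption penalty and a direct gap.
wage = (50_000 + 2_000 * experience - 8_000 * interruption
        - 3_000 * is_woman + rng.normal(0, 2_000, n))

# Train only on "neutral" features; gender is excluded.
X = np.column_stack([experience, interruption])
model = LinearRegression().fit(X, wage)
pred = model.predict(X)

gap = pred[is_woman == 0].mean() - pred[is_woman == 1].mean()
print(f"Average predicted wage gap (men minus women): ${gap:,.0f}")
```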

Olson, who was the national taxpayer advocate for 18 years and is a member of Tax Analysts’ board of directors, said that a situation involving child care benefits in the Netherlands is a cautionary tale.

In January 2021 the entire Dutch government resigned in the wake of a scandal over an AI tool used to root out fraud in child care subsidies that singled out applicants of foreign origin for scrutiny. The algorithms caused thousands of families to be wrongfully accused of fraud and asked to repay benefits.

The IRS itself is under scrutiny for bias found in its audit selection.

While the IRS has said it doesn’t collect information on taxpayers’ race or ethnicity and therefore doesn’t consider it in audit determinations, Stanford University’s Institute for Economic Policy Research said in a January 30 report that Black taxpayers are about three to five times more likely to be audited by the IRS than non-Black taxpayers, mostly because of algorithms used by the agency.

The study sparked an outcry from members of Congress, including Senate Finance Committee Chair Ron Wyden, D-Ore., who requested information on the matter.

In a May 15 letter to Wyden, Werfel said the IRS’s “initial findings support the conclusion that Black taxpayers may be audited at higher rates than would be expected given their share of the population. We are dedicating significant resources to quickly evaluating the extent to which IRS’s exam priorities and automated processes, and the data available to the IRS for use in exam selection, contribute to this disparity.”

“Additional information will be shared externally regarding the research findings and the appropriate corrective actions IRS will take,” Werfel wrote. “I will stay laser-focused on this to ensure that we identify and implement changes prior to next tax filing season.”

Guardrails

Thompson said he favors the technological upgrades needed to modernize the IRS but would like to see thoughtfully considered regulations that limit the IRS’s use of AI and that “help mitigate potential issues, like for instance racial bias.”

The GAO released a 2021 report that lays out accountability practices to help ensure that federal agencies use AI responsibly.

“AI is evolving at a pace at which we cannot afford to be reactive to its complexities, risks, and societal consequences,” and it “is necessary to lay down a framework for independent verification of AI systems even as the technology continues to advance,” the report said.

Daniel E. Ho of Stanford Law School, coauthor of the study on racial disparities in audit selection, said that the GAO’s framework has been important in considering how the federal government can modernize and take advantage of the advances happening in AI, while also having monitoring systems in place that are “sorely needed with the rise of this technology.”

Ho agreed with McTigue that “the federal government is not yet ready,” mostly because of its problems with data, digital infrastructure, and technical talent.

Ho said the IRS should first develop a strategic plan for how to incorporate AI technology, figure out how to bring in the technical talent that is needed, and put the right oversight mechanisms in place to ensure innovation happens responsibly.

During the May 30 Center for Taxpayer Rights webcast, Olson said the agency should consider hiring data ethicists and privacy experts who can think philosophically about how the data will be used.

Fort said he had never heard of the term “data ethicist.”

“I would imagine there’s many people in the IRS that haven’t, which probably demonstrates the need for that type of position,” he said.
