A first-of-its-kind law designed to bring transparency to the use of artificial intelligence in hiring has led some New York City employers to shy away from the tools altogether, even in the absence of enforcement.
The law went into effect in July 2023 amid concerns that automated AI tools were discriminating against applicants based on race, gender and other factors. It requires employers that rely on “Automated Employment Decision Tools” — processes that use technology rather than humans to screen applicants — to publish yearly audits assessing the tools they use. But so far, around 25 companies have published audits, and the city hasn’t brought any enforcement actions.
Meanwhile, the law may have had unintended consequences. Some employers stepped back from using AI tools to avoid dealing with the audits, a retreat that could introduce more bias into job screenings, employment attorneys and AI experts say.
“People have told me that part of the outcome of the law is kind of overall, just forgoing use of AI processes, even though they were previously using it,” said Michael Schulman, an attorney at Morrison & Foerster LLP.
Schulman said some of his clients have voiced concerns about unintentionally reducing the pool of diverse candidates by not using AI tools. “They were hopeful that by using AI, they might pay more attention to or be more interested in candidates that they might not otherwise have contacted.”
Lack of Enforcement
Employers use AI in various ways, including evaluating resumes, scanning applicants’ online presence and analyzing video interviews. These tools are meant to be devoid of bias. But it doesn’t always work that way.
Hilke Schellmann, an NYU journalism professor and author of “The Algorithm,” notes in her book one instance where an AI hiring tool “systematically downgraded applicants with the word women’s on their resumes, as in ‘women’s chess club’ or ‘women’s soccer team.’”
The city law requires employers to hire independent auditors if they depend solely on AI to screen applicants. The city’s Department of Consumer and Worker Protection (DCWP) can penalize employers $500 for a first instance of non-compliance and $1,500 for each subsequent violation.
But a Cornell University report from February found that of the 391 employers it assessed, just 18 had published audits. Most of the complying employers, the report said, buried the required information among other disclosures or used technical and legal jargon “in ways that make it practically impossible for job-seekers to learn about their rights or exercise them” when subject to AI screening.
A crowd-sourced tool that tracks audits shows that 25 companies have posted reports, and two later removed them from their websites. This likely indicates a low compliance rate, Schellmann said, adding that the law “has no teeth.” Many employers are taking advantage of a loophole, she said: If a company can claim there is some human involvement in the hiring process, it doesn’t have to comply.
“It looks like we’re regulating AI, when, in reality, so few companies comply,” she said.
Applicants screened by AI are often unaware that they might be victims of algorithmic discrimination. “Job-seekers are none the wiser because you get a rejection, or you don’t hear anything at all, or you go to the next round and you don’t know why,” she said.
Going Old School
The city hasn’t received any complaints relating to the law, a DCWP spokesperson said in an emailed statement.
Still, the law has prompted employers to take a second look at their practices.
Paradox.ai, an AI company that aids recruiters by automating tasks such as scheduling interviews and weeding out underage candidates, said many clients reached out with concerns after the law took effect.
“AI is a bit of the Wild West right now, and while everybody’s excited about it, going unchecked is not usually a good thing,” said Josh Zywien, chief marketing officer for Paradox.ai. The law “set the foundation for how to think about AI and hiring.”
While there should be safeguards to prevent companies from relying solely on these tools for hiring, “there is a risk that the AI industry gets painted with a very broad brush,” Zywien said. “At some point, companies, clients, teams or legal teams will say, ‘No AI at all, just full stop.’”
The employers Schulman works with are concerned that traditional methods won’t reach diverse candidates, particularly for roles in businesses like private equity or investment banking. Hiring in those industries has often been done through alumni associations or word of mouth in communities that don’t have a strong record of diversity, he said.
Despite the issues with AI, Schellmann doesn’t think human-only hiring is the path forward. “We see women, people of color, people with disabilities underrepresented in the workforce. And I think part of that is because we humans have underestimated them for decades,” she said.
Other efforts to regulate AI have ramped up.
Two proposed New York state bills — S.5641A and A9314 — seek to address bias in AI tools and the need for enforcement. More sweeping bills in states such as Connecticut would, if passed, hold developers of AI tools accountable.
Whatever regulatory regimes come next must be enforceable and effective, Schellmann said.
“It would be great if there was government regulation with teeth,” she said. “Is that going to happen? I’m not quite sure, because the federal government or even the municipal governments would have to majorly upskill the workforce and find people who can do this.”
About the author
Riddhi Setty is a Stabile investigative fellow at Columbia Journalism School. She previously reported for Bloomberg Law as a labor reporter.