Few developments in recruiting have been as groundbreaking as leveraging the power of artificial intelligence. With how prominent AI has become in recruitment, some experts theorized it was only a matter of time before legal restrictions were implemented to prevent AI bias from infiltrating the hiring process.
New York City is already cracking down on the use of artificial intelligence in hiring, as evidenced by its new bill requiring bias audits of AI hiring tools. This isn’t to say that legislation like this will sweep the nation immediately. Still, with similar regulations being advocated for in places like Illinois and Maryland, this bill certainly warrants an extra close eye.
Ultimately, the emerging regulations on AI in recruiting and hiring force recruiters to evaluate their current tools and possibly rethink their future tech expansion plans. Continue reading to learn what this bill means for AI in recruiting, what compliance with this law looks like, and predictions for the future of AI in hiring.
The NYC Bill Requiring Bias Audits for AI Hiring
In December 2021, New York City passed a bill requiring bias audits of AI hiring tools, and it will go into effect in January 2023. Essentially, this law prohibits employers from using artificial intelligence or algorithmic software for recruiting or hiring unless the tools have been audited for bias beforehand.
The bill defines AI and algorithm-based technology used in the hiring process as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” (Source)
Mark Girouard, a Minneapolis lawyer specializing in pre-employment assessments, confirmed to SHRM, “It’s not clear if the statute captures only the pure AI tools or sweeps in a broader set of selection tools.” Even the smallest use of algorithms in the hiring process could be scrutinized.
The law also requires recruiters to disclose exactly how any AI or algorithmic tools are used, their qualifications, and the characteristics they look for. As a result, candidates or employees are granted the option to request an “alternative process or accommodation” instead of the technology in question.
The employer must notify each applicant or employee who resides in New York City, at least ten business days before the use of the tool, that: (1) an automated employment decision tool will be used to evaluate and/or screen New York City residents’ applications and allow the applicant to request an accommodation or alternative screening process; and (2) the job qualifications and characteristics that the tool will use to assess the applicant. (Source)
Between the bill’s early-2023 effective date and the somewhat ambiguous scope of the tools it covers, employers in the area are experiencing a bit of stress. More broadly, the bill has started nationwide conversations about bias in AI and the responsibility recruiters have to keep bias out of the hiring process.
Why It Came About
Although the original purpose of including AI in the hiring process is to mitigate bias and provide a more well-rounded view of a candidate’s potential, studies have found that AI can unintentionally reinforce biases if not properly overseen.
For example, in 2018, Amazon found that its AI hiring software downgraded resumes that included the word “women’s” and candidates from all-women’s colleges because the company did not have much of a history of hiring female engineers and computer scientists. This is just one example of how bias can accidentally seep into AI software and why human oversight is needed when using AI or algorithms.
Recruiters are still at the beginning of the AI and automation revolution. Although we are only beginning to see results from the early months and years of AI use in hiring, the findings are important and reinforce the need for a broader discussion.
New York City may be the first place to pass a bill of this magnitude on AI in hiring, but it does not appear it will be the last. In many areas around the country, legal professionals are assessing the use of AI in recruiting and proposing similar legislation to help prevent bias in the process.
- In 2019, Illinois passed a measure cracking down on the use of such technology in employment decisions, imposing consent, transparency, and data destruction requirements on employers that use AI technology during the job interview process.
- In 2020, Maryland passed legislation prohibiting employers from using facial recognition technology without job applicants’ consent beforehand.
The NYC bill isn’t a one-off. It foreshadows more legislation coming down the pike and will likely impact most employers directly in just a few years.
What Compliance Looks Like
Under this law, employers must audit any AI-powered recruiting tools for bias and make a public record of the audit results.
Employers who fail to meet these requirements may be subject to a fine of up to $500 for a first violation and then penalized by fines between $500 and $1,500 daily for each subsequent violation.
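The law itself does not prescribe an audit methodology, but one conventional starting point in employment-selection analysis is the “four-fifths rule” from EEOC guidance: comparing each group’s selection rate to the highest group’s rate and flagging ratios below 0.8. The sketch below is purely illustrative, using hypothetical numbers, and is not a substitute for the independent bias audit the law requires:

```python
# Illustrative adverse-impact check based on the EEOC "four-fifths rule".
# A simplified sketch with hypothetical data, not the audit the NYC law mandates.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total); returns selection rate per group."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items() if tot > 0}

def impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below 0.8 are a conventional red flag for adverse impact."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical screening results from an automated tool: (selected, total applicants)
results = {"group_a": (45, 100), "group_b": (30, 100)}
ratios = impact_ratios(results)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # group_b's ratio is 0.30 / 0.45 ≈ 0.67, so it is flagged
```

A real audit would go much further (statistical significance testing, intersectional categories, documentation), but a check like this shows the kind of disparity measurement an auditor might report publicly.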
Before this bill takes effect, companies would do well to start evaluating their tools to identify anywhere bias may creep in, implementing changes to combat it, and documenting the changes made to help alleviate any bias.
It’s important to note that AI tools can still be used; they simply have to be publicly audited, and candidates must be informed about their usage.
How IQTalent Can Help
All hope is not lost for AI in hiring. In fact, this bill can be viewed as a giant step forward in holding recruiters accountable for the technology they use and the decisions they make based on it. RecruitingDaily confirms, “Rather than using AI to screen candidates and run the risk of screening out diverse candidates, as is prohibited by New York City’s law, companies can use AI to source potential candidates and invite more diverse candidates that fit the job requirements to apply.”
Luckily, that’s IQTalent’s exact wheelhouse. By using AI to source potential candidates, more diverse candidates can be brought into your talent pipeline and (likely) your organization.
The IQTalent platform uses research to seamlessly source passive candidates and instantly generate a catalog of the best-fit candidates for your open roles. We access a database of 700 million professionals and use our candidate sourcing techniques and a variety of recruiting tools (without relying too heavily on one) to maximize our ability to find the best candidates and the most options for contacting them.
IQTalent employs AI at the beginning of the hiring process to source passive candidates, and those findings are then reviewed by our human experts. When you use Diversify by IQTalent, we use the technology to identify the underrepresented talent you seek in order to ensure access to an inclusive list of qualified candidates.
The NYC AI in hiring bill may be one of the first of its kind, but it likely won’t be the last. Understanding what it means and the nuances it will bring to recruiting tools is important.
Fortunately, tools that meet this bill’s requirements, like IQTalent, are already available. Employers will undoubtedly be seeking ways to upgrade their hiring practices while remaining within the parameters of the NYC bill, including options they might not have looked into otherwise.
To learn more about IQTalent or to get started sourcing the candidates you need, reach out to our team of experts today.