Employer Update: New Jersey Issues Guidance on Algorithmic Discrimination, by Christopher Nucifora, Esq., 3-7-2025
The New Jersey Office of the Attorney General and the Division on Civil Rights (DCR) have issued guidance clarifying how the New Jersey Law Against Discrimination (NJLAD) applies to algorithmic discrimination resulting from the use of new and emerging data-driven technologies such as artificial intelligence (AI).
The 13-page guidance was prompted by the increasing use of these technologies by employers and other entities covered by the NJLAD.
While the guidance does not establish new obligations under the NJLAD, it makes clear that even though the technology powering automated decision-making tools may be new, “the LAD applies to algorithmic discrimination in the same way it has long applied to other discriminatory conduct.”
LAD’s Application to Algorithmic Discrimination
The LAD prohibits algorithmic discrimination on the basis of actual or perceived race, religion, color, national origin, sexual orientation, pregnancy, breastfeeding, sex, gender identity, gender expression, disability, and other protected characteristics, according to the guidance.
The LAD also prohibits algorithmic discrimination when it precludes or impedes the provision of reasonable accommodations, or of modifications to policies, procedures, or physical structures to ensure accessibility for people based on their disability, religion, pregnancy, or breastfeeding status.
The guidance explains that the use of automated decision-making tools, if those tools are not carefully designed, evaluated, and tested, may result in disparate treatment discrimination, disparate impact discrimination, or the failure to provide or account for reasonable accommodations. The guidance includes examples of practices involving the use of automated decision-making tools that may lead to unlawful discrimination.
Finally, the guidance clarifies that a covered entity may be held liable for violating the LAD even if the tool was developed by a third party.
Automated Decision-Making Tools Defined
The guidance defines “automated decision-making tool” as any “technological tool, including but not limited to, a software tool, system, or process that is used to automate all or part of the human decision-making process.” These include technologies such as generative AI, machine-learning models, traditional statistical tools, and decision trees.
These tools are used in a variety of ways, including by employers to determine who sees a job advertisement, whether a human reviewer reads an applicant’s resume, whether an applicant is hired, whether an employee receives positive reviews or is promoted, and whether an employee is demoted or fired.
Many of these tools rely on algorithms that analyze data, make predictions or recommendations, and “in doing so can create classes of individuals who will be either advantaged or disadvantaged in ways that may exclude or burden them based on their protected characteristics,” according to the guidance.
This is not the case for all of these tools, the guidance notes. “Ultimately, depending on how they are developed and deployed, automated decision-making tools can either contribute to or reduce the likelihood of discriminatory outcomes,” notes the guidance.
Employer Onus
The bottom line, the guidance cautions, is that if employers fail to account for or remedy bias in automated decision-making tools, the tools may contribute to discriminatory outcomes.
Given the risks, employers should consider the following steps:
• Assess and audit these tools for potential biases. This can include using independent parties to test these models on diverse datasets to identify disparities in how different groups are treated (see the illustrative sketch after this list).
• Continuously monitor performance after deployment of these tools to detect and correct any biases that may emerge as data changes over time.
• Consider having policies in place that address the use of these automated tools.
• Train relevant staff on the risks of algorithmic discrimination.
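The guidance does not prescribe a particular audit methodology. One common screening heuristic, drawn from the EEOC’s Uniform Guidelines on Employee Selection Procedures rather than from the New Jersey guidance itself, is the “four-fifths rule,” which flags any group whose selection rate falls below 80 percent of the highest group’s rate. The Python sketch below is a minimal illustration of that kind of check; the data, group labels, and threshold are hypothetical and are shown only to make the audit step concrete, not to state a legal standard under the LAD.

# Minimal bias-audit sketch using hypothetical data and group labels.
# Flags any group whose selection rate falls below four-fifths (80%) of the
# highest group's rate -- one possible screen, not a legal standard under the LAD.
from collections import Counter

def selection_rates(records):
    """records: iterable of (group, selected) pairs, where selected is True/False."""
    totals, hits = Counter(), Counter()
    for group, selected in records:
        totals[group] += 1
        if selected:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparities(rates, threshold=0.8):
    """Return groups whose selection rate is below threshold times the best group's rate."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best and r / best < threshold}

# Hypothetical audit data: (group, advanced_to_interview)
sample = [("Group A", True), ("Group A", True), ("Group A", False),
          ("Group B", True), ("Group B", False), ("Group B", False)]

rates = selection_rates(sample)
print(rates)                    # selection rate per group
print(flag_disparities(rates))  # groups below the 80% benchmark

In practice, a check of this kind would be run on real selection data, repeated after deployment as part of the ongoing monitoring the guidance describes, and paired with legal review, since a passing ratio does not by itself establish compliance with the LAD.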
It is clear that this is a hot-button issue for lawmakers and enforcement officials. Accordingly, employers should assess their use of these technologies and implement safeguards to mitigate risk.
Author: Christopher Nucifora, Co-Managing Partner of Kaufman Dolowich’s New Jersey Office and Chair of the firm’s Commercial Litigation Practice Group