January 30, 2026 | Employment Law

AI Risks in the Workplace and How to Manage Them

As you’ve likely heard, artificial intelligence (AI) is transforming employment — much the same as it is transforming virtually everything else. When most people think of employment-related AI risks, they usually conjure up apocalyptic visions of AI replacing human jobs. And while those fears are not entirely unfounded, AI is affecting employment in a different way as well: through the legal risks it poses to both employers and employees. The challenges AI poses are new, but the laws governing employment are not. A Norfolk employment lawyer can assist both employers and employees in navigating these uncharted waters.

Hallucinations 

A hallucination occurs when an AI system generates information that sounds plausible, but that is factually incorrect or even entirely made up. In most contexts, AI hallucinations are frustrating, though they can also cause embarrassment. But as employers increasingly use AI systems for performance evaluations or employee discipline, hallucinations can also lead to litigation. For example, hallucinations raise the risk that employees will be evaluated, disciplined, or even terminated based on fabricated events, inaccurate assumptions, or invented patterns of behavior.

Discrimination and Bias 

AI systems often rely on historical employment data to predict how successful or high-performing an employee or candidate will be in a role. Unfortunately, that historical data frequently reflects past discrimination or informal decision-making that favored certain groups over others. Relying on AI that is trained on biased data can perpetuate those issues at scale, potentially running afoul of employment discrimination laws. The fact that a non-human algorithm made the decision does not necessarily insulate the employer from liability. For employees, AI-driven discrimination can be harder to detect than the old-fashioned kind. “Black box” AI systems can thus make it more difficult for employees to pursue discrimination claims. 

Disability and Accommodations 

As with bias, AI systems are typically designed around standardized performance metrics that do not account for employee disabilities. For example, a productivity-monitoring software program may penalize an employee who takes more breaks due to a disability. Moreover, automated screening tools may screen out applicants with disabilities before any conversations about reasonable accommodations occur, posing a major disability discrimination risk. For employees, AI-based performance and productivity systems that fail to account for disability-related limitations can increase the risk of adverse employment actions, including termination. A Norfolk employment lawyer can assist with incorporating AI into the interactive process for determining reasonable accommodations. 

Lack of Transparency 

In employment law litigation, employers must be able to explain their decisions; “we don’t know how it works” is not a defense. Overreliance on black box AI systems can make it difficult for employers to explain adverse employment decisions, defend those decisions in litigation, and demonstrate legitimate, nondiscriminatory reasons for their actions. On the employee side, AI increases the risk that an employee will be disciplined without a meaningful explanation, making it harder to challenge unfair or unlawful actions.

Managing AI Risks 

Employers and employees should not avoid using AI just because it poses risks — after all, almost everything is associated with a certain degree of risk. Instead, managing risk requires treating AI as merely part of existing decision-making processes, not as a total replacement for them. For more specific information about the responsible use of AI in making employment decisions, speak to a Norfolk employment lawyer.

Maintain Human Oversight 

AI systems usually provide recommendations, rankings, or risk scores. And while such information is valuable, it should not be relied upon alone. Human oversight is still required to make defensible employment decisions. Effective human oversight generally involves: 

  • Reviewing AI outputs critically rather than rubber-stamping them
  • Empowering decision-makers to override AI recommendations 
  • Documenting decisions as being informed by human judgment

Human oversight can provide employers with legitimate defenses in litigation and give employees an avenue to have their concerns heard before disputes escalate into legal claims. It can also help to identify and correct hallucinations before they affect anyone’s legal rights.
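To make the documentation point concrete, the short sketch below shows one hypothetical way an HR workflow might record both the AI recommendation and the human reviewer’s final decision and rationale. The field names and example values are invented for illustration; they are not a legal standard or a description of any particular product.

```python
# Hypothetical sketch of a human-in-the-loop review record. The structure
# is invented for illustration; the point is that the AI output, the human
# reviewer, and the reviewer's documented reasoning are all captured.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ReviewedDecision:
    employee_id: str
    ai_recommendation: str   # e.g., "flag for performance review"
    reviewer: str            # the human decision-maker of record
    final_decision: str      # may differ from the AI recommendation
    rationale: str           # the human judgment, in writing
    reviewed_at: str

def record_decision(employee_id, ai_recommendation, reviewer,
                    final_decision, rationale):
    # Refuse to record a decision that has no documented human rationale.
    if not rationale.strip():
        raise ValueError("A documented human rationale is required.")
    return ReviewedDecision(
        employee_id=employee_id,
        ai_recommendation=ai_recommendation,
        reviewer=reviewer,
        final_decision=final_decision,
        rationale=rationale,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )

# Example: the reviewer overrides the AI recommendation and explains why.
decision = record_decision(
    employee_id="E-1042",
    ai_recommendation="flag for performance review",
    reviewer="J. Manager",
    final_decision="no action",
    rationale="Flag traces to an approved medical leave, not low output.",
)
print(decision)
```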

Audit AI Tools for Disparate Impact

Disparate impact discrimination occurs when a facially neutral policy disproportionately harms members of a protected class, such as groups defined by race, sex, or age. Regular audits can help employers identify whether certain groups are screened out at higher rates, whether performance metrics correlate with protected characteristics, or whether the use of accommodations is being penalized. And while the Equal Employment Opportunity Commission under the current presidential administration has scaled back its disparate impact enforcement, that policy is subject to change.
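By way of illustration only, the sketch below shows the kind of simple selection-rate comparison an audit might include, using the EEOC’s “four-fifths” guideline as a rough benchmark. The group labels and numbers are hypothetical; a real audit would use the employer’s own applicant and outcome data and should be reviewed with counsel.

```python
# Hypothetical illustration: compare hiring selection rates across groups.
# Group names and counts are invented for this example.

applicants = {"Group A": 200, "Group B": 150}   # applicants per group
selected   = {"Group A": 60,  "Group B": 30}    # offers per group

rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    # The "four-fifths" guideline treats a selection rate below 80% of the
    # highest group's rate as a possible sign of adverse impact.
    flag = "review further" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```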

Update Policies to Address AI Explicitly

Most employee handbooks were written long before AI was playing a role in employment decisions. Unsurprisingly, then, employers and employees alike often use AI tools without clear policy authority and without disclosing that they are doing so. Generally, AI-related workplace policies should explain:

  • How AI is being used (e.g., hiring, monitoring, scheduling, evaluation)
  • What types of data are being collected
  • How AI tools interact with human decision-makers
  • What rights employees have to raise concerns or request a review 

Clear policies such as these can establish consistent internal expectations and support defenses to a variety of legal claims. 

Train Management on AI’s Limitations 

Many AI-related legal claims arise from managers misunderstanding or overestimating AI’s capabilities. Managers should therefore be trained to understand that (1) automated scores are not factual determinations and (2) they remain legally responsible for the decisions they make. Employers should also train managers to recognize certain AI-related red flags, such as:

  • Recommendations that conflict with documented human observations of performance
  • Metrics that penalize employees for behavior related to disability accommodations
  • Automated “hits” tied to complaints, leave, or other protected activity

Do AI the Right Way With Help From a Norfolk Employment Lawyer

AI presents myriad risks, but also myriad opportunities. The key to success is integrating AI into existing decision-making frameworks so that legal risk stays at, or below, pre-AI levels. For more information about incorporating AI into your employment practices, please speak to a Norfolk employment lawyer at Pierce / Jewett by calling 757-624-9323 or using our online contact form.