Using AI-Powered Recruitment Platforms Can Compound Your Liability for Discrimination

Canadian Lawyer – Technology | Mar 27, 2026

Why It Matters

Employers risk compounded human‑rights liability when they deploy AI hiring tools that discriminate without proper audits, and emerging legal standards make non‑compliance a clear exposure for corporations and law firms alike.

Key Takeaways

  • Ontario mandates AI usage disclosure for employers with 25+ staff
  • Ontario's rule imposes no bias‑testing or accommodation requirements
  • Canadian standard CAN‑ASC‑6.2:2025 demands equitable AI hiring audits
  • Vendor liability established in US Workday case may influence Canada
  • Law firms using AI tools share discrimination risks with clients

Pulse Analysis

The rise of algorithmic hiring has outpaced regulatory safeguards, leaving disabled candidates exposed to opaque decisions. Ontario’s disclosure rule shines a light on the fact that AI is used, but it offers no insight into how systems evaluate accommodation requests or whether they have been stress‑tested for disparate impact. This gap creates a legal blind spot: an employer can point to its disclosure while the discriminatory workings of its tools escape scrutiny. For businesses, the immediate remedy is to embed human review checkpoints and conduct independent bias assessments before deployment.
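
For employers unsure what an "independent bias assessment" involves, the sketch below shows one common disparate‑impact check in Python: comparing each group's selection rate against the most‑favoured group's rate, using the US EEOC "four‑fifths" threshold. The audit data, group labels, and 0.8 cutoff are illustrative assumptions only; neither Ontario's disclosure rule nor CAN‑ASC‑6.2:2025 prescribes this particular test.

```python
# Illustrative disparate-impact check on an AI screening tool's output.
# Assumes the tool's pass/fail decisions can be labelled by applicant
# group; the 0.8 threshold is the US EEOC "four-fifths" rule, used here
# only as a familiar benchmark, not a Canadian legal requirement.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, advanced) pairs."""
    totals, advanced = Counter(), Counter()
    for group, passed in decisions:
        totals[group] += 1
        if passed:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Each group's selection rate relative to the highest rate."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit sample: (self-identified group, advanced to interview)
audit = ([("disclosed_disability", True)] * 12
         + [("disclosed_disability", False)] * 38
         + [("no_disclosed_disability", True)] * 30
         + [("no_disclosed_disability", False)] * 20)

rates = selection_rates(audit)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rates[group]:.2f}, impact ratio={ratio:.2f} [{flag}]")
```

A real assessment would run this kind of check on actual audit data, across every protected ground and every stage of the pipeline, before the tool touches live applications.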

Across North America, the legal landscape is shifting toward holding AI vendors accountable. The U.S. Workday case, which survived dismissal and proceeded as a nationwide collective action, treats the vendor as an indirect employer liable for race, age, and disability discrimination. Canadian practitioners should monitor this precedent, as Canadian courts may adopt similar reasoning under the Human Rights Code. With Bill C‑27’s demise, the onus now rests on employers to demonstrate due diligence through rigorous testing and documentation, lest they face costly human‑rights complaints.

The newly released CAN‑ASC‑6.2:2025 standard provides a concrete framework for compliance. It requires organizations to validate AI hiring tools against disability benchmarks, treat statistical discrimination as a procurement risk, and ensure a viable human alternative for accommodation requests. Law firms, often early adopters of recruitment AI for articling and summer‑student positions, must apply the same scrutiny to their own hiring pipelines. By aligning procurement practices with this standard, firms not only mitigate liability but also reinforce their credibility when advising clients on equitable AI adoption.
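
The standard’s "viable human alternative" translates directly into pipeline design. The sketch below is a minimal illustration of that control flow, with all function names hypothetical: accommodation requests route to a human reviewer before any model scores them, and the model may shortlist but never reject on its own.

```python
# Minimal sketch of human-review checkpoints in a screening pipeline.
# score_with_ai and queue_for_human_review are hypothetical stand-ins;
# the point is the routing logic, not any particular vendor API.
from dataclasses import dataclass

@dataclass
class Application:
    candidate_id: str
    accommodation_requested: bool

def score_with_ai(app: Application) -> float:
    """Stand-in for a vendor tool's ranking score in [0.0, 1.0]."""
    return 0.5  # placeholder value

def queue_for_human_review(app: Application, reason: str) -> None:
    print(f"{app.candidate_id} -> human review ({reason})")

def route(app: Application, confidence_floor: float = 0.7) -> None:
    # Checkpoint 1: accommodation requests never face automated scoring.
    if app.accommodation_requested:
        queue_for_human_review(app, "accommodation requested")
        return
    score = score_with_ai(app)
    # Checkpoint 2: the model may shortlist, but rejection needs a human.
    if score >= confidence_floor:
        print(f"{app.candidate_id} -> shortlisted (score {score:.2f})")
    else:
        queue_for_human_review(app, f"score {score:.2f} below floor")

route(Application("A-001", accommodation_requested=True))
route(Application("A-002", accommodation_requested=False))
```

Logging the decision made at each checkpoint also builds the documentation trail that the due‑diligence analysis above calls for.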
