AI Gave Employee "False Sense of Security" About Workplace Communications

HR Daily (Australia)
Mar 20, 2026

Why It Matters

Employers must recognize that AI‑generated content does not excuse unprofessional behavior, and misuse can lead to legal liability and termination.

Key Takeaways

  • AI-assisted drafting can mask an author's tone, exposing employees to misconduct claims
  • The Fair Work Commission ruled that use of AI does not excuse inappropriate messages
  • A senior developer was dismissed for violating workplace communication standards
  • Companies need policies governing AI use in internal correspondence

Pulse Analysis

The rapid adoption of generative AI tools has transformed how employees compose emails, reports, and even grievance letters. While these systems can accelerate drafting and suggest polished language, they also obscure the author's intent and tone, creating a veneer of professionalism that may not reflect reality. Legal frameworks have yet to catch up, leaving organizations to grapple with questions of accountability when AI‑generated content crosses the line into harassment, defamation, or other misconduct. Recent tribunal decisions underscore that reliance on AI does not shield workers from the consequences of inappropriate communication.

In the Fujifilm Data Management Solutions case, a senior Java developer used an AI assistant to draft a series of complaints alleging managerial impropriety and to respond to a bullying accusation on Microsoft Teams. Deputy President Tony Slevin observed that the technology gave the employee a "false sense of security," masking the confrontational tone and legal risk embedded in the messages. The commission concluded that, despite the AI's assistance, the employee remained responsible for the content, finding the communications objectively unacceptable and the dismissal for serious misconduct justified.

The ruling sends a clear signal to businesses: AI tools are drafting aids, not a shield from accountability. Employers should establish clear guidelines on AI-generated correspondence, mandate human review for sensitive matters, and provide training on digital etiquette and legal boundaries. By integrating AI policies into broader governance frameworks, companies can capture productivity gains while mitigating reputational and legal exposure. As tribunals continue to scrutinize AI-mediated interactions, proactive risk management will become a competitive advantage for firms navigating the evolving digital workplace.
