
The case highlights the risk that AI‑driven security tools may replicate human biases, missing genuine threats while disproportionately targeting marginalized groups. It underscores the need to critically reassess how such systems are trained and how “normal” is defined in surveillance.