
Adoption of facial‑analysis AI could reshape hiring practices, amplify bias, and invite urgent regulatory scrutiny.
The surge of AI‑driven facial analysis stems from earlier research claiming that static images can reveal the Big Five personality dimensions. The 2020 Scientific Reports paper introduced a model that maps facial features to self‑reported traits, but subsequent critiques have labeled the approach "ML‑laundered junk science," citing weak causal links and methodological opacity. Using a large LinkedIn dataset of MBA alumni, the new NBER working paper finds that these inferred traits correlate with measurable labor‑market outcomes, reigniting debate over the scientific validity of visual personality inference.
From a business perspective, the promise of predicting future earnings and career moves from a simple photo is alluring for talent acquisition teams seeking scalable assessments. Yet the technology inherits the same biases that have plagued traditional résumé screening—now amplified by opaque algorithms that can encode gender, race, and socioeconomic stereotypes. Companies in finance and tech are already piloting AI‑enhanced video interviews that score candidates on extraversion or conscientiousness, raising red flags for civil‑rights regulators. The lack of transparent validation and the potential for disparate impact make it a high‑risk tool, especially as regulators grapple with the EU AI Act and emerging U.S. guidance on biometric data.
Looking ahead, the academic community urges a rigorous, interdisciplinary review of facial‑based personality prediction before it becomes mainstream. Standards for data provenance, model explainability, and fairness metrics must be codified, and any deployment should be accompanied by human oversight. As policymakers consider bans or restrictions, firms will need to balance the allure of predictive efficiency against legal liability and reputational harm. Ultimately, the debate underscores a broader tension: leveraging AI for competitive advantage while safeguarding ethical hiring practices.
Academics look at problematic algorithm to inform regulatory discussion · Thomas Claburn · Tue 10 Feb 2026 // 21:24 UTC
A picture is worth a thousand words or, perhaps, a hundred thousand dollars in extra salary. Academics claim that personality traits inferred using AI photo analysis can predict how depicted individuals will fare in the labour market.
They emphasize that they are not advocating doing so, because personality extraction from facial images is fundamentally discriminatory.
Even so, they say, personality screening is already commonplace among admissions and HR committees, and AI tools that offer personality assessment are seeing rapid adoption. So they argue that an academic evaluation of the technology is necessary.
In a paper titled “AI Personality Extraction from Faces: Labor Market Implications,” the authors describe how they used the LinkedIn facial images of more than 96,000 MBA graduates to extract subjects’ Big Five personality traits: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.
The machine‑learning algorithm used was originally described in a 2020 Scientific Reports paper titled “Assessing the Big Five personality traits using real‑life static facial images,” one of about two dozen papers cited as “ML‑laundered junk science” in a 2024 paper titled “The reanimation of pseudoscience in machine learning and its ethical repercussions.”
The algorithm “uses facial features to predict self‑reported personality, rather than others’ perceptions of personality based on visual appearance,” according to the authors of the new paper.
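What such an extractor might look like in code is easy to sketch, though the sketch below is purely illustrative and not the published model: it assumes a generic CNN backbone (ResNet‑18, a placeholder choice) regressing five continuous scores against self‑reported Big Five questionnaire answers, which is the general shape of the approach the article describes.

```python
# Illustrative sketch only; not the Scientific Reports authors' published model.
# Assumption: a generic CNN backbone regresses five continuous trait scores
# against self-reported Big Five questionnaire results.
import torch
import torch.nn as nn
from torchvision import models

BIG_FIVE = ["openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism"]

class FaceToBigFive(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # placeholder backbone choice
        backbone.fc = nn.Identity()                # expose the 512-d image features
        self.backbone = backbone
        self.head = nn.Linear(512, len(BIG_FIVE))  # one continuous score per trait

    def forward(self, images):                     # images: (N, 3, 224, 224) face crops
        return self.head(self.backbone(images))    # (N, 5) predicted trait scores

# The training target is the subject's *self-reported* questionnaire scores,
# not other people's impressions of the face: the distinction the authors stress.
model = FaceToBigFive()
images = torch.randn(8, 3, 224, 224)              # stand-in for face crops
self_reported = torch.rand(8, len(BIG_FIVE))      # stand-in questionnaire scores
loss = nn.MSELoss()(model(images), self_reported)
loss.backward()
```

The only detail carried over from the article is the choice of target: the subjects' own questionnaire responses rather than observers' impressions of their appearance.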
By applying this algorithm, the authors found that “personality traits inferred from facial features provide substantial incremental predictive power for labour‑market outcomes.”
The researchers determined that personality traits inferred from facial images yielded accurate predictions of the rank of the undergraduate and MBA programmes the depicted individuals attended, their initial compensation, their salary trajectory, and their job transitions.
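To make "incremental predictive power" concrete, the sketch below compares out‑of‑sample R² for a simple wage regression with and without five photo‑inferred trait scores. Everything in it (controls, coefficients, sample size) is synthetic and hypothetical, standing in for the paper's LinkedIn data, which is not reproduced here.

```python
# Minimal sketch of "incremental predictive power": compare out-of-sample R^2
# of a baseline wage model with and without five photo-inferred traits.
# All data here are synthetic stand-ins, not the paper's LinkedIn sample.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
controls = rng.normal(size=(n, 3))   # hypothetical controls, e.g. experience, school rank
traits = rng.normal(size=(n, 5))     # hypothetical photo-inferred Big Five scores
log_salary = (controls @ [0.5, 0.3, 0.2]
              + traits @ [0.1, 0.15, 0.2, -0.05, -0.1]
              + rng.normal(scale=1.0, size=n))

X_base = controls
X_full = np.hstack([controls, traits])
Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_base, X_full, log_salary, test_size=0.3, random_state=0)

r2_base = LinearRegression().fit(Xb_tr, y_tr).score(Xb_te, y_te)
r2_full = LinearRegression().fit(Xf_tr, y_tr).score(Xf_te, y_te)
print(f"baseline R^2: {r2_base:.3f}, with inferred traits: {r2_full:.3f}")
```

If the augmented model's R² is meaningfully higher, the inferred traits add predictive signal beyond the controls, which is the kind of claim the paper makes for its far richer specification.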
Were an HR department to use a similar technique to assess the personality of managerial applicants, the result could serve as a forecast of the job applicant’s future performance in the labour market – biased though it may be. And that appears to be happening.
Co‑author Marina Niessner, assistant professor of finance at Indiana University, told The Register in a phone interview that companies like banks already use personality surveys in hiring and promotion decisions and that AI hiring companies are starting to use technology like Big Five personality‑trait analysis on video interviews.
“The regulatory environment, as you probably know, is very uncertain,” said Niessner. “And so we don’t think this is necessarily a valid way to do it [or] that companies should be doing it. But I think it’s really important to have an academic evaluation of these methodologies if there’s even going to be a regulatory discussion around this.”
The paper argues that AI‑based screening needs to be weighed against the alternative: human decisions based on physical appearance, which can be just as inconsistent or biased.
The other authors were Marius Guenzel (Wharton), Shimon Kogan (Reichman University), and Kelly Shue (Yale University).
References
“AI Personality Extraction from Faces: Labor Market Implications,” NBER Working Paper w34808.
“Assessing the Big Five personality traits using real‑life static facial images,” Scientific Reports, 2020.
“The reanimation of pseudoscience in machine learning and its ethical repercussions,” 2024.