
The study indicates that current LLMs cannot convincingly replicate the messy, often negative tone of real users, limiting their effectiveness for deceptive social‑media automation and informing platform detection strategies. It also challenges assumptions that larger or instruction‑tuned models are more human‑like, guiding future AI development toward better emotional modeling.