Researchers Find Adding This One Simple Sentence to Prompts Makes AI Models Way More Creative
Why It Matters
The approach promises quick, low-cost improvements for applications in writing, design, simulation, education and synthetic data—reducing mode collapse and unlocking latent model capabilities at inference time.
Summary
Researchers from Northeastern University, Stanford University, and West Virginia University have introduced Verbalized Sampling (VS), a prompt-level technique that boosts creativity in LLMs and image generators by adding a single instruction: "Generate 5 responses with their corresponding probabilities, sampled from the full distribution." In tests, VS produced large gains in output diversity (up to 2.1× in story generation), closer alignment with human response distributions in dialogue and QA tasks, tunable diversity via probability thresholds, and stronger effects in larger models such as GPT-4.1 and Claude-4, all without retraining or access to model internals. The method is available as a pip-installable package with LangChain integration.
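To make the idea concrete, here is a minimal sketch of how verbalized sampling might be applied in practice: wrapping a base task with the VS instruction and filtering the verbalized candidates by a probability threshold. The function names, the numbered output format, and the `(probability: ...)` annotation are illustrative assumptions, not the authors' actual package API.

```python
import re

# The single instruction the paper reports adding to prompts.
VS_INSTRUCTION = (
    "Generate 5 responses with their corresponding probabilities, "
    "sampled from the full distribution."
)

def make_vs_prompt(task: str) -> str:
    """Wrap a base task with the VS instruction (hypothetical helper)."""
    return f"{task}\n\n{VS_INSTRUCTION}"

def parse_vs_output(text: str, min_prob: float = 0.0) -> list[tuple[str, float]]:
    """Parse model output lines of the assumed form
    '1. <response> (probability: 0.35)' and keep only candidates at or
    above a tunable probability threshold."""
    pairs = []
    for line in text.splitlines():
        m = re.match(
            r"\s*\d+[.)]\s*(.+?)\s*\(probability:\s*([01]?\.\d+)\)\s*$", line
        )
        if m:
            response, prob = m.group(1), float(m.group(2))
            if prob >= min_prob:
                pairs.append((response, prob))
    return pairs
```

In use, `make_vs_prompt("Write a short story about a lighthouse.")` would be sent to the model, and `parse_vs_output(reply, min_prob=0.1)` would keep the verbalized candidates above the chosen threshold, which is one way the "tunable sampling across probability thresholds" described above could be realized.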