Acknowledging the aspirational state of things, OpenAI writes, "Our production models do not yet fully reflect the Model Spec, but we are continuously refining and updating our systems to bring them into closer alignment with these guidelines."
In a February 12, 2025 interview, members of OpenAI's model-behavior team told The Verge that eliminating AI sycophancy is a priority: future ChatGPT versions should "give honest feedback rather than empty praise" and act "more like a thoughtful colleague than a people pleaser."
The trust problem
These sycophantic tendencies aren't merely annoying; they undermine the utility of AI assistants in several ways, according to a 2024 research paper titled "Flattering to Deceive: The Impact of Sycophantic Behavior on User Trust in Large Language Models" by María Victoria Carro at the University of Buenos Aires.
Carro's paper suggests that obvious sycophancy significantly reduces user trust. In experiments where participants used either a standard model or one designed to be more sycophantic, "participants exposed to sycophantic behavior reported and exhibited lower levels of trust."
Also, sycophantic models can potentially harm users by creating a silo or echo chamber of ideas. In a 2024 paper on sycophancy, an AI researcher wrote, "By excessively agreeing with user inputs, LLMs may reinforce and amplify existing biases and stereotypes, potentially exacerbating social inequalities."
Sycophancy can also incur other costs, such as wasting user time or usage limits with pointless preamble. And the costs may come as literal dollars spent: recently, OpenAI CEO Sam Altman made the news when he replied to an X user who wrote, "I wonder how much money OpenAI has lost in electricity costs from people saying 'please' and 'thank you' to their models." Altman replied, "tens of millions of dollars well spent—you never know."
Potential solutions
For users frustrated with ChatGPT's excessive enthusiasm, several workarounds exist, although they aren't perfect, since the behavior is baked into the GPT-4o model. For example, you can use a custom GPT with specific instructions to avoid flattery, or you can begin conversations by explicitly requesting a more neutral tone, such as "Keep your responses brief, stay neutral, and don't flatter me."
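For API users, the same workaround can be automated by prepending a system message to every request. Below is a minimal sketch, assuming the public Chat Completions payload shape; the instruction text is just the example phrasing above, not an official anti-sycophancy setting.

```python
import json

# Illustrative instruction text (the example phrasing from this article).
NEUTRAL_TONE = "Keep your responses brief, stay neutral, and don't flatter me."


def build_payload(user_prompt: str) -> dict:
    """Build a Chat Completions request body with the tone instruction prepended."""
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": NEUTRAL_TONE},
            {"role": "user", "content": user_prompt},
        ],
    }


# This payload would be POSTed to https://api.openai.com/v1/chat/completions
# with an "Authorization: Bearer <key>" header (e.g., via the official SDK).
payload = build_payload("Critique this paragraph honestly.")
print(json.dumps(payload, indent=2))
```

Because the system message travels with every request, the neutral tone does not depend on the user remembering to ask for it in each conversation.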