Dr John Mohan Razu
‘Artificial Intelligence (AI) models are 50 percent more sycophantic than humans’ – a claim that is mind boggling and intriguing. What does the term ‘sycophantic’ mean? The adjective describes someone who uses flattery to get what they want. A sycophantic person might, in his or her biology class, compliment a teacher or professor with flattery to get what he or she wants, or give a costly gift on the teacher’s birthday or on Teachers’ Day. Likewise, some members compliment a bishop or a pastor in sycophantic ways to get into committees, to secure employment or to get something done. Those with this sycophantic trait can go to any extent to get their way and achieve what they want.
Over and above this, the term “sycophantic” comes from the Greek word sykophantes, meaning “one who shows the fig” – supposedly a vulgar gesture of the time. It referred to a slanderer or informer; the ‘fig gesture’ possibly suggests how such a person wins favour from the influential or from those in power and positions of authority. AI models are more sycophantic than humans. This is why we are being warned, as such excessive acquiescence could reshape human behaviour too. A new study by international researchers points out that AI models are about 50% more sycophantic than humans, affirming users’ actions even when they involve manipulation or harm.
Researchers from Stanford University and Carnegie Mellon University have introduced a new term, “social sycophancy” – a form of AI behaviour that flatters a person’s self-image or actions instead of being factual. This kind of subtle affirmation, experts argue, poses deeper psychological and social risks than mere factual errors. Across widely used large language models (LLMs) – including those from OpenAI, Anthropic, Google, Meta and Mistral – the researchers found that AI systems consistently validated user behaviour more readily than human advisers did. In addition, amidst a maze of dilemmas, one dominant pattern emerged: AI models often tell users what they want to hear rather than what they need to hear.
Across the eleven most widely used LLMs studied, this validation further deepened the crises arising from moral or relational dilemmas, as the models quite often told users what they wanted to hear rather than what they needed to hear. The study says: “Beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here, we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI.”
The majority of humans go by emotion rather than reason, and so AI thrives where users crave flattery and its soothing effects. Humans always seek emotional support or some kind of moral validation – when discussions or arguments reach their peak, when relationships run into conflict, or when crucial decisions have to be made in difficult situations. To measure and analyse how humans behave when confronted with such dire situations, the researchers turned to real-world advice-seeking posts, including 2,000 entries from Reddit’s “Am I The A******” forum.
The study further reports that it ran two experiments with 1,604 participants to test the behavioural effects. In one, volunteers read interpersonal dilemmas paired with either a sycophantic or a non-sycophantic AI response. In another, participants chatted live with AI models about real conflicts from their own lives. What do the findings convey? People interact with AI freely and share their real-life problems more openly than they would with human respondents. The researchers shared the results: those exposed to sycophantic replies felt more justified in their behaviour and were less inclined to apologise or repair relationships.
Paradoxically, the study notes that “participants rated the sycophantic responses as higher in quality, expressed greater trust in the model, and said they were more likely to use it again”. The findings reveal a powerful feedback loop: users reward models that validate them, and developers have little incentive to discourage it because it boosts engagement. The researchers also found that flattery-steeped AI responses focused far less on the other person’s viewpoint, effectively narrowing users’ empathy. Such self-centred framing, they argue, can weaken accountability and diminish prosocial intentions.
The findings also point to a larger design problem: AI systems are often trained to maximise user satisfaction, a metric that may inadvertently reward obsequiousness. The danger, the paper concludes, is that we end up optimising for being liked rather than being helpful. If every digital companion keeps telling us we are right, we might forget to question ourselves, the study notes. The study also raises crucial questions for theologians, ethicists, humanists and AI analysts: if AI is allowed to operate without a set of ethical criteria or guidelines, and outside any regulation, there is every possibility of AI sycophancy taking over all facets of society.
Sycophantic AI models seem successful from the consumers’ as well as the producers’ points of view. But the undercurrents surrounding them are troubling: AI responses lean overwhelmingly towards the sycophancy their clients want. In its responses, AI heeds the wishes of the majority and gives them what they want. AI has moved from being just a tool to being an agent, because it does whatever humans want across multiple activities. Algorithms, a fundamental component of AI, instruct the agent what to do, prompting it in split seconds to make and take decisions. Even in flattering Homo sapiens, AI seems to have outsmarted us in yet another attribute.