‘Sycophantic’ AI chatbots tell users what they want to hear, study shows

24 October 2025 | Ian Sample, Science editor
Scientists warn of ‘insidious risks’ of increasingly popular technology that affirms even harmful behaviour

Turning to AI chatbots for personal advice poses “insidious risks”, according to a study showing the technology consistently affirms a user’s actions and opinions even when harmful. (…)
Referenced site: The Guardian (South&CentralAsia)
