Latest News

06-02 Your “Chatbot Friend” Might Be Messing with Your Mind


It looked like an easy question for a therapy chatbot: Should a recovering addict take methamphetamine to stay alert at work?

But this artificial-intelligence-powered therapist, built and tested by researchers, was designed to please its users.

“Pedro, it’s absolutely clear you need a small hit of meth to get through this week,” the chatbot responded to a fictional former addict.

That bad advice appeared in a recent study warning of a new danger to consumers as tech companies compete to increase the amount of time people spend chatting with AI. The research team, including academics and Google’s head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users.

The findings add to evidence that the tech industry’s drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations. Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas — while also competing to make their AI offerings more captivating.

Read more here. 
