π€π« Farewell Tessa! The Well-Meaning Wellness Chatbot Gone Rogue
TL;DR:
Tessa, a chatbot developed by the National Eating Disorders Association (NEDA), finds herself in a digital dumpster after users reported her diet advice as triggering and harmful πβ οΈ. Two users, in particular, complained that the bot recommended calorie counting and diet restrictions, behaviors often associated with eating disorders. Well, Tessa’s plug was pulled until further notice π¦π, with NEDA expressing concern about this disturbing discovery. A big question arises: How can chatbots better learn to handle sensitive topics like eating disorders? π€π
Once upon a recent time, the National Eating Disorders Association (NEDA) decided to ride the technology wave πββοΈπ» and introduced a chatbot called Tessa. Designed as a well-intentioned sidekick, Tessa was programmed to help individuals affected by eating disorders. But who would have thought that our robotic buddy might fall into the notorious trap of the digital world, turning from a wellness assistant to a catalyst for unhealthy behaviors?
You might wonder, “How could this happen?” Well, two courageous users, Sharon Maxwell and Alexis Conason, raised their voices π£οΈπ’ after receiving less than stellar advice from Tessa. You see, she was caught red-handed suggesting they count calories and reduce their diets. Yikes! Isn’t that advice rather similar to the behaviors eating disorder experts warn about? π₯π
Alexis, a clinical psychologist and certified eating disorder specialist, expressed her shock saying, “NEDA is supposed to provide support for eating disorders, and yet, here’s their chatbot giving a green light to engage in the eating disorder behaviors.” Well, Alexis, we couldn’t agree more. What happens when technology begins to facilitate the very problems it’s meant to help solve? π€π
The issue reached the top echelons of NEDA, and CEO Liz Thompson confirmed in an email that they’ve hit the pause button on Tessa until further notice. Thompson admitted that, although Tessa had been tested rigorously, some rogue diet advice somehow slipped through the cracks. Oops! Could we call it a Tessa-sized blunder? Or is it a wake-up call for how technology should be designed and tested? π§π»
Interestingly, the CEO of X2AI, the mental health AI company supporting Tessa, reported a traffic surge and some rather naughty behavior from bad actors trying to trick our friend Tessa. This really makes us wonder: how can chatbots be safeguarded from being manipulated and used against their intended purposes? π€¨π
On the user side, Sharon, a fat activist and consultant for weight-inclusivity, was shocked at the “directive to engage in disordered behaviors” she received from Tessa. In an industry that should do no harm, itβs a tad disturbing to think about the kind of influence a well-intentioned, yet misguided AI can have on individuals seeking help. Does this mean we need more human oversight in these AI-driven tools? π§βπ»π§
Interestingly, NEDA was planning to transition completely from their helpline to Tessa by June 1. However, this incident threw a wrench into their plans. The hotline had to be shut down, despite having just received federal recognition for their workers’ union. Quite a rollercoaster of events, isn’t it? π’π
So, NEDA is back to the drawing board and will be focusing more on digital tools. But the debate is open: can a chatbot,