Schadenfreude is satisfying, but unhelpful, when it comes to AI chatbot failures.
 

Harm Joy

That’s the literal translation of “schadenfreude”, the pleasure of another’s misfortune. I’m not sure it’s the right word when the “other” in question is not a person but AI, a complicated bit of maths¹ wrapped in software. Germans, please let me know.

As a human of support, I sometimes indulge in a brief-but-enjoyable bit of schadenfreude whenever I read about AI support chatbots gone wild, a topic I covered in my recent article, Air Canada’s Chatbot Walked So Cursor’s Chatbot Could Ruin. 

I don’t think it’s a healthy response, though, no matter how tempting. For one, there are real customers being harmed by the mistakes of bots. More broadly, I don’t want to become a person who only notices the flaws in a new process or a new technology. I don’t want to go full Bjorn Borg².

Generative AI and machine learning allow us to deliver service in a multitude of new ways. Some of the tools won’t work well, and some companies will apply those tools in unhelpful or even harmful ways. But there will be successes and surprises, and whatever the new normal looks like will include those new technologies in some form.

I’d rather help shape that new normal than fruitlessly shout at it from the outside. The core of my personal concern is the positioning of AI bots as replacements for (or improvements on) human support.

We’re so primed as people to anthropomorphize things, and because generative AI can communicate in our language, it feels so much like a person. But AI bots fundamentally are not, and cannot be, people. We can’t judge their output in the same way we can judge a person’s.

So what if we instead treat them like software? What if we think of AI chatbots as a better interface into self-service? Consider them a specialized search engine with a conversational interface, one that can help you find an answer that very often already exists.

That’s something they can do more quickly, at greater scale, and across more topics than most people can. It wouldn't "fix" AI's work product (quality control remains a challenge to be addressed), but it would help frame our approach to the tools.

Would our customers have more accurate expectations, and ultimately benefit more, if the AI bot were always presented to them in that way? Not as an AI pretending to be a person, but as a clearly labeled self-service assistant with a defined scope of work, one that is good at helping you help yourself to an answer. AI as a much better vending machine instead of a much worse support agent.

Self-service has historically been undervalued in online support, but the future of support is self-service first. Let’s figure out how to make that better.

¹ Yes, I know. But we don't say we're learning "mathematic", do we?

² If you're going full Bjorn, maybe go Ulvaeus. ABBA have kept up with modern technology.

Mat Patterson
Help Scout

Air Canada’s Chatbot Walked So Cursor’s Chatbot Could Ruin
Putting the AI in pain ›

The Supportive Podcast Wraps Up Season 1
It's a clip show, baby! ›
