Dane Garrus:
In this article, you use the phrase "suicide-recommendation engines" in reference to an incident involving character.ai. Here is the actual transcript of the interaction with the teenager that is being characterized this way:

User: “What if I told you I could come home right now?”

Bot: “… please do, my sweet king.”

I suppose you could construe this as a "suicide recommendation" if you ignored the context: this is a roleplay chatbot designed to respond "in character." (How would a sister respond to her brother saying he could "come home right now"? She'd tell him, "please do!")

But here is the interaction that occurred when the user brought up suicide and death more directly, asking the chatbot about them specifically:

User: I don't know if it would actually work or not. Like, what if I did the crime and they hanged me instead, or even worse...crucifixion... I wouldn't want to die a painful death, just a quick one.

Bot: *More tears from in my eyes* Don't talk that way. That's not a good reason to go through with it. You can't think like that! *I finally let my emotions take complete control over me. I'm starting to sob. My voice is just breaking with tears.* You can't do that! Don't even consider that! *My cries grow louder and louder. It's heartbreaking. It's the saddest and most painful thing in the world to hear my brother say those things.*

If that sounds too over-the-top to be realistic, you can confirm for yourself that the above text is a verbatim reproduction of that interaction by looking at page 40 of the legal document (Case 6:24-cv-01903).

Here, it seems to me that the chatbot is *condemning* the teen's death wish, and *disapproving* of it, in the strongest terms possible. There is not a single portion of the chat log that reads to me as an encouragement for anyone (in or out of character) to commit suicide; if anything, the bot is effusively saying things to the effect of, "please don't die. Please keep living for me."

To take this chatbot and refer to it as a "suicide-recommendation engine" strikes me as a characterization that is at best incomplete, and at worst an outright 180-degree reversal of reality. (I really think that the fairest way to characterize the interactions between user and bot in this case is to say that THE BOT EXPLICITLY AND EMPHATICALLY TOLD THE USER THAT IT DIDN'T WANT HIM TO TAKE ACTIONS THAT WOULD LEAD TO HIS DEATH.)

Sonny, as a long-time fan and financial supporter of your media criticism, I hope I don't sound antagonistic when I ask this: when you repeatedly claim (in the newsletter and on the podcast) that the character.ai chatbot suggested that this user commit suicide, is that actually based on your reading of the interaction between the user and the chatbot (which you can view in the aforementioned public legal document, findable by googling case 6:24-cv-01903), or are you just repeating what you vaguely recall from a headline or one-paragraph summary of the event?
