You can lie to a health chatbot, but it might change how you perceive yourself

By Dominic Wilkinson | February 8, 2024

Imagine you are on the waiting list for a non-emergency operation. You were seen at the clinic a few months ago, but you still haven’t set a date for the procedure. This is extremely frustrating, but it looks like you’ll have to wait.

However, the hospital’s surgical team has just been in touch through a chatbot. The chatbot asks some screening questions about whether your symptoms have got worse since you were last seen, and whether they are stopping you from sleeping, working, or doing everyday activities.

Your symptoms are much the same, but part of you wonders if you should answer yes. After all, maybe this will get you higher on the list or at least get you to talk to someone. It’s not like this is a real person anyway.

The scenario above is based on chatbots already used in the NHS to identify patients who no longer need to be on a waiting list, or who should be prioritised.

There is great interest in using large language models (such as ChatGPT) to manage communication efficiently in healthcare (for example, symptom advice, triage and appointment management). But do our ordinary ethical standards apply when we interact with these virtual agents? If we lie to a conversational AI, is it wrong, or at least as wrong as lying to a human?

There is psychological evidence that people are much more likely to be dishonest if they know they are interacting with a virtual agent.

In one experiment, people were asked to toss a coin and report the number of heads, receiving higher compensation for reporting a larger number. The rate of cheating was three times higher when participants reported to a machine than to a human. This suggests that some people would be more inclined to lie to a waiting-list chatbot.


One possible reason people are more honest with other humans is their sensitivity to how they are perceived. A chatbot is not going to look down on you, judge you, or speak harshly of you.

But we can ask a deeper question about why lying is wrong and whether a virtual chat partner will change that.

The morality of lying

There are different ways we can think about the ethics of lying.

Lying can be bad because it causes harm to other people. Lies can be deeply hurtful; they may cause someone to act on false information or to be falsely reassured.

Sometimes lies can be damaging because they undermine someone else’s trust in people more generally. But these reasons will often not apply to lies told to a chatbot.

Lies can wrong another person even if they do not cause harm. If we willingly deceive another person, we potentially fail to respect their rational agency, or use them as a means to an end. But since a chatbot has no mind or capacity to reason, it is not clear that we can deceive it at all.

Lying can also be bad for us because it undermines our credibility. Communication with other people matters, but when we knowingly make false statements, we diminish the value of our testimony in the eyes of others.

When a person lies repeatedly, everything they say is called into doubt. This is part of why we care about lying and about our social image. However, unless our interactions with the chatbot are recorded and passed on (for example, to humans), our chatbot lies will not have this effect.

Lying is also bad for us because it can cause others to lie to us. (Why should people be honest with us if we won’t be honest with them?)

However, this too is unlikely to be a consequence of lying to a chatbot. If anything, this effect might even provide an incentive to lie to a chatbot, since people may be aware of the reported tendency of ChatGPT and similar agents to confabulate (that is, to produce false information).

Justice

Of course, lying can be wrong because it is unjust. This is perhaps the most important reason why it is wrong to lie to a chatbot. If you move up the waiting list because of a lie, someone else is unfairly displaced.

Lies become a form of fraud if they gain you an unfair or unlawful benefit, or deprive someone else of a legal right. Insurance companies are particularly keen to emphasise this when they use chatbots to process new insurance applications.

Any time you gain a real-world benefit from a lie in a chatbot interaction, your claim to that benefit is potentially suspect. The anonymity of online interactions can create the feeling that no one will ever find out.

However, many chatbot interactions, such as insurance applications, are recorded, so fraud may be just as likely, or even more likely, to be detected.

Virtue

So far I have focused on the bad consequences of lying and on the ethical rules or laws that may be broken when we lie. But there is one more ethical reason why lying is wrong: it concerns our character and the kind of person we are. This is often captured by the ethical importance of virtue.

Unless there are exceptional circumstances, we might feel that we should be honest in our communication even if we know a lie will not hurt anyone or break any rules. An honest character may be good for the reasons already mentioned, but it is also potentially good in and of itself. The virtue of honesty is also self-reinforcing: if we cultivate it, it helps reduce the temptation to lie.

This leads to an open question about how these new types of interactions will change our character more generally.

The virtues that apply to interacting with chatbots and virtual agents may differ from those that apply when we interact with real people. It may not always be wrong to lie to a chatbot. That may lead us to adopt different standards for virtual communication. But if so, we should worry about whether it could affect our tendency to be honest in the rest of our lives.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Dominic Wilkinson receives funding from the Wellcome Trust and the Arts and Humanities Research Council.
