Summary: New research finds that people are more likely to accuse others of lying if an AI makes the accusation first. This finding highlights the potential social impact of AI in lie detection and suggests caution for policymakers. The study found that the presence of AI increased the number of accusations and influenced behavior, despite people’s general reluctance to use AI lie detection tools.
Key Facts:
- AI predictions led to higher numbers of lie accusations compared to human judgment alone.
- Participants accused statements of being false more often when the AI indicated they were.
- Despite AI’s higher accuracy, only a third of participants chose to use it for lie detection.
Source: Cell Press
Although people lie a lot, they generally restrain themselves from accusing others of lying because of social norms around making false accusations and being polite. But artificial intelligence (AI) could soon shake up the rules.
In a study published June 27 in the journal iScience, researchers show that people are much more likely to accuse others of lying when an AI makes the accusation first.
The findings offer insight into the societal implications of AI-based lie detection and can help policymakers weigh the deployment of similar technologies.
“Our society has strong, well-established norms about accusations of lying,” said lead author Nils Köbis, a behavioral scientist at the University of Duisburg-Essen in Germany.
“It would take a lot of courage and evidence to openly accuse others of lying. But our research shows that AI can become an excuse that people can easily hide behind, so that they are not held accountable for the consequences of accusations.”
Human society has long operated in line with what researchers call truth-default theory: people generally assume that what they hear is true. Because of this tendency to trust others, people are bad at detecting lies. Previous research has shown that people perform no better than chance when attempting to detect them.
Köbis and his team wanted to know whether the presence of AI would change established social norms and behaviors about making accusations.
To investigate, the team asked 986 people to write one true and one false description of their weekend plans. The team then trained an algorithm on these statements, producing an AI model that correctly identified true and false statements 66% of the time, an accuracy significantly higher than what the average person achieves.
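As an illustration of the kind of supervised text classifier involved, here is a minimal sketch. The article does not specify the study's model architecture, so TF-IDF features and logistic regression, along with invented example statements, stand in for it here:

```python
# A minimal sketch of a text-based lie classifier, for illustration only.
# The study's actual model is not described in this article; TF-IDF
# features and logistic regression stand in for it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical training data: weekend plans labeled true (1) or false (0).
statements = [
    "I am hiking in the hills with two friends on Saturday morning.",
    "I am spending the whole weekend teaching my goldfish to play chess.",
    "I am repainting the kitchen and watching a film on Sunday.",
    "I am winning an Olympic fencing medal between breakfast and lunch.",
]
labels = [1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    statements, labels, test_size=0.5, stratify=labels, random_state=0
)

vectorizer = TfidfVectorizer()
classifier = LogisticRegression()
classifier.fit(vectorizer.fit_transform(X_train), y_train)

predictions = classifier.predict(vectorizer.transform(X_test))
print(f"accuracy: {accuracy_score(y_test, predictions):.2f}")  # study reports ~0.66
```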
The team then recruited more than 2,000 people as jurors who would read a statement and decide whether it was true or false. The researchers divided the participants into four groups: ‘baseline’, ‘forced’, ‘blocked’ and ‘choice’.
In the baseline group, participants answered true or false without any help from the AI. In the forced group, participants always received an AI prediction before making their own judgment. In the blocked and choice groups, participants had the option to request an AI-generated prediction; those in the blocked group who requested it did not receive it, while those in the choice group did.
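The logic of the four conditions can be summarized in a short sketch; the condition names come from the article, while the function itself is hypothetical:

```python
# Illustrative encoding of the four experimental conditions described above;
# the function is paraphrased from the article, not taken from the study's materials.
def sees_ai_prediction(condition: str, requested: bool) -> bool:
    """Return True if a juror in the given condition sees the AI's prediction."""
    if condition == "baseline":
        return False          # judges alone; no AI prediction offered
    if condition == "forced":
        return True           # always shown the prediction, wanted or not
    if condition == "blocked":
        return False          # may request the prediction but never receives it
    if condition == "choice":
        return requested      # receives the prediction only if requested
    raise ValueError(f"unknown condition: {condition!r}")
```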
The research team found that participants in the baseline group identified statements as true or false with 46% accuracy. Only 19% of people in the group accused the statements they read of being false, even though they knew that 50% of the statements were false. This confirms that people tend not to accuse others of lying.
In the forced group, where participants were given an AI prediction regardless of whether they wanted it, more than a third of participants accused the statements of being false, a rate significantly higher than in both the baseline and blocked groups, which received no AI predictions.
When the AI predicted a statement to be true, only 13% of participants said the statement was false. However, when the AI predicted a statement to be false, over 40% of participants accused the statement of being false.
Furthermore, a whopping 84% of participants who requested and received an AI prediction adopted it, making accusations when the AI said the statement was false.
“It shows that once people have such an algorithm in hand, they trust it and perhaps change their behavior. If the algorithm calls something a lie, people are willing to jump on it. This is quite alarming and it shows that we have to be very careful with this technology,” says Köbis.
Interestingly, people seemed reluctant to use AI as a lie detection tool. In the blocked and choice groups, only a third of participants asked for the AI prediction.
The result surprised the team, because the researchers had told the participants beforehand that the algorithm was better at detecting lies than humans. “It could be due to this very robust effect we have seen in several studies: people are overconfident in their lie detection abilities, even though humans are very bad at it,” says Köbis.
AI is known for making frequent mistakes and reinforcing biases. Given the findings, Köbis suggests that policymakers should reconsider the use of the technology in important and sensitive cases, such as granting asylum at borders.
“There is such a big hype around AI, and many people believe that these algorithms are really powerful and even unbiased. I am really afraid that people will start to rely on it too much, even though it doesn’t work that well,” says Köbis.
About this AI research news
Author: Kristopher Benke
Source: Cell Press
Contact: Kristopher Benke – Cell Press
Image: The image is credited to Neuroscience News
Original research: Open access.
“Lie Detection Algorithms Disrupt the Social Dynamics of Accusation Behavior” by Nils Köbis et al. iScience
Abstract
Lie detection algorithms disrupt the social dynamics of accusation behavior
Highlights
- Supervised learning algorithm surpasses human accuracy in text-based lie detection
- Without algorithmic support, people are reluctant to accuse others of lying
- The availability of a lie detection algorithm increases the number of lying accusations made by people
- 31% of participants request algorithmic advice, and most of them follow it
Summary
People, aware of the social costs of false accusations, are generally reluctant to accuse others of lying. Our study shows how lie detection algorithms disrupt this social dynamic.
We develop a supervised machine learning classifier that surpasses human accuracy and conduct a large-scale incentivized experiment to manipulate the availability of this lie detection algorithm.
In the absence of algorithmic support, people are reluctant to accuse others of lying. But once the algorithm is available, a minority actively seeks out its prediction and consistently relies on it for accusations.
Although people who request machine predictions are not necessarily more likely to accuse, they are more likely to follow predictions that suggest accusation than people who receive such predictions without requesting them.