‘People just aren’t afraid of being ripped off’ – BBC News

Image source, Clark Hoefnagels

Image caption, Clark Hoefnagels has developed an AI-powered tool that recognizes scam emails

  • Author, Jane Wakefield
  • Role, Technology reporter

When Clark Hoefnagels’ grandmother was scammed out of $27,000 last year, he felt compelled to do something about it.

“I felt like my family was vulnerable and I had to do something to protect them,” he says.

“There was a sense of responsibility to handle all the technical matters for my family.”

As part of his efforts, Mr Hoefnagels, who lives in Ontario, Canada, ran the scam or “phishing” emails his grandmother had received through the popular AI chatbot ChatGPT.

He was curious to see whether the chatbot would recognize them as fraudulent, and it did so immediately.

From this seed an idea was born, which has now grown into a company called Catch. It is an AI system trained to recognize scam emails.

Currently compatible with Google’s Gmail, Catch scans incoming emails and flags any messages deemed fraudulent or potentially fraudulent.
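Catch’s internals are not public. Purely as a hypothetical illustration of the kind of first-pass filtering such a tool might do before handing a message to an AI model, here is a crude keyword-based scorer (the phrase list and threshold are invented for the example):

```python
# Hypothetical first-pass phishing heuristic -- NOT Catch's actual method.
SUSPICIOUS_PHRASES = [
    "verify your account", "urgent action required", "wire transfer",
    "gift card", "password expired", "click the link below",
]

def scam_score(email_text: str) -> float:
    """Return a 0..1 score: the fraction of suspicious phrases present."""
    text = email_text.lower()
    hits = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    return hits / len(SUSPICIOUS_PHRASES)

def flag_email(email_text: str, threshold: float = 0.15) -> bool:
    """Flag an email as potentially fraudulent if its score passes the threshold."""
    return scam_score(email_text) >= threshold
```

A real product would combine many more signals (sender reputation, links, language-model classification); this sketch only shows the flag-and-threshold idea.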

AI tools such as ChatGPT, Google Gemini, Claude and Microsoft Copilot are known as generative AI, because they can generate new content.

Initially this was text, produced in response to a question or request, or over the course of a conversation. But generative AI apps can now increasingly create photos and paintings, narrate content, compose music and produce documents.

People from all walks of life and industries are increasingly using such AI to improve their work. Unfortunately, so are scammers.

There’s even a product being sold on the dark web called FraudGPT, which allows criminals to create content to enable a range of frauds, including creating banking-related phishing emails, or to create custom scam web pages that are designed to steal personal information.

More worrying is the use of voice cloning, which can be used to convince a family member that a loved one needs financial help, or even in some cases to convince them that the individual has been kidnapped and is in need of a ransom.

There are some pretty alarming figures available about the scale of the growing problem of AI fraud.

Reports of AI tools being used to fool banks’ systems rose by 84% in 2022, according to the latest figures from UK anti-fraud organization Cifas.

A similar situation exists in the US, where a report this month stated that AI “has led to a significant increase in the sophistication of cybercrime”.

Image source, Getty Images

Image caption, Research shows that fraudsters are increasingly using AI

Given this growing global threat, one would imagine that Mr Hoefnagels’ Catch product would be popular with the general public. Unfortunately, that has not been the case.

“People don’t want it,” he says. “We’ve learned that people don’t worry about being scammed, even after they’ve been scammed.

“We spoke to a guy who had lost $15,000, and told him we would have caught the email, but he wasn’t interested. People are not interested in any level of protection.”

Mr Hoefnagels adds that this man simply did not think it would happen to him again.

According to him, the group that is concerned about fraud is the elderly. But instead of buying protection, he says their fears are more often allayed by a very low-tech tactic: their children simply telling them not to answer anything.

Mr Hoefnagels says he fully understands this approach. “After what happened to my grandmother, we basically said, ‘Don’t answer the phone if it’s not in your contacts, and don’t email anymore.’”

As a result of the apathy Catch has faced, Mr Hoefnagels says he is now winding down the business, while also looking for a potential buyer.


While individuals can be blasé about scams, banks cannot afford to be, particularly as scammers increasingly turn to AI.

Two-thirds of financial firms now consider AI-powered scams “a growing threat,” according to a January global survey.

Fortunately, banks are now increasingly using AI to fight back.

AI-powered software from Norwegian start-up Strise has been helping European banks detect fraudulent transactions and money laundering since 2022. It automatically, and quickly, sifts through millions of transactions a day.

“There are many pieces of the puzzle to put together, and with AI software, checks can be automated,” says Marit Rødevand, co-founder of Strise.

“It’s a very complicated matter and compliance teams have expanded significantly in recent years, but AI can help link this information together very quickly.”

Ms Rødevand adds that it is about staying one step ahead of the criminals. “Criminals don’t have to worry about legislation or compliance. They are also good at sharing data, which banks cannot do because of regulations, and that allows criminals to respond more quickly to new technology.”

Image source, Marit Rodevand

Image caption, Marit Rødevand says the battle for companies like hers is staying ahead of cybercriminals

Featurespace, another tech company that makes AI software to help banks fight fraud, says its approach is to spot unusual behavior.

“We don’t follow the behavior of the scammer, but instead we follow the behavior of the real customer,” says Martina King, CEO of the Anglo-American company.

“We build a statistical profile around what good, normal behavior looks like. Based on the data the bank has, we can see whether something is normal behavior or whether it is abnormal.”
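Featurespace’s models are proprietary, but the underlying idea of profiling “normal” customer behavior can be sketched in miniature. Assuming, for illustration only, a per-customer profile built from the mean and standard deviation of past transaction amounts, a new transaction can be flagged when it sits far outside that range:

```python
import statistics

# Toy behavioural-profiling sketch -- NOT Featurespace's actual model.
# Learn a per-customer profile from past transaction amounts, then flag
# any new amount that sits far outside that customer's normal range.

def build_profile(amounts: list[float]) -> tuple[float, float]:
    """Summarise a customer's history as (mean, standard deviation)."""
    return statistics.mean(amounts), statistics.stdev(amounts)

def is_anomalous(amount: float, profile: tuple[float, float],
                 z_cutoff: float = 3.0) -> bool:
    """Flag a transaction whose z-score against the profile exceeds the cutoff."""
    mean, std = profile
    if std == 0:
        return amount != mean
    return abs(amount - mean) / std > z_cutoff
```

For example, a customer whose purchases usually fall between $38 and $60 would have a $5,000 transfer flagged, while another $50 purchase would pass. Real systems model far richer behavior (merchants, timing, location), but the normal-versus-abnormal framing is the same.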

The company says it now works with banks including HSBC, NatWest and TSB, and has contracts in 27 different countries.

Back in Ontario, Mr Hoefnagels says he was initially frustrated that people don’t appreciate the growing risk of scams, but that he now understands that people just don’t think it will happen to them.

“It has made me more sympathetic to individuals, and [instead] to try to push companies and governments more.”
