Artificial intelligence (AI) scams are on the rise, opening a new frontier in the battle to keep our money safe from crooks.
“AI has opened up alarming avenues for crooks to exploit unsuspecting individuals and defraud them of their hard-earned money,” says Dr. Klaus Schenk, senior vice president of security and threat research at security firm Verifier.
“By taking advantage of sophisticated automations and deepfake technologies, scammers can create deceptive voices and incredibly convincing videos. It undermines trust and makes people vulnerable to manipulation,” he adds.
And with AI set to become increasingly mainstream, knowing how to protect against AI-based scams will be crucial in the coming years. Here are four AI scams to watch out for and how to protect yourself and your money.
Four AI scams on the rise
Deepfakes
Deepfakes are a form of artificial intelligence used by scammers to create seemingly legitimate audio and video. Typically, this will involve crooks using large datasets of images, video, and audio to replicate the voice and appearance of a famous face. Software is then used to make the “person” say and do things before being put online.
This technology has already been used in financial fraud attempts, with Martin Lewis the victim of a recent attempt to impersonate him.
“Musk’s new project opens up new opportunities for British citizens. No project has ever given such opportunities to the people of the country,” the fake Lewis says in the fraudulent clip, which was widely shared on social media. The video tries to convince the viewer that Lewis endorses a supposed new investment opportunity from Elon Musk – but no such scheme exists, and Lewis himself has spoken out about the risks posed by the technology.
“These people are trying to pervert and destroy my reputation in order to steal money from vulnerable people, and frankly, it’s shameful, and people are going to lose money and people’s mental health is going to be affected,” he told the BBC.
ChatGPT Phishing
Phishing emails are nothing new – scammers have long been sending emails pretending to be from a genuine source, such as a bank, technology provider or government department. Scammers will often attempt to trick you into visiting a website that could lead to the theft of your bank details or other personal information.
But AI has revolutionized the way scammers produce the emails used to lure you in. ChatGPT and other generative AI tools can create, for free, body text that mimics the tone and consistency of legitimate messages. This means spelling mistakes, clumsy grammar, and other tell-tale signs of a fake email are harder to spot. Such software is freely available, and although ChatGPT has some built-in safeguards to prevent misuse, these can be easily circumvented.
Voice Cloning
This is another form of deepfake, but rather than producing a video, voice cloning replicates an individual’s tone and language – ideal for convincing someone they are having a real telephone conversation with that person.
In one notable example, a mother in America received a call that appeared to be from her distressed daughter asking for money. It soon turned out that the call was fake and that the girl’s voice had been cloned using AI.
“I never doubted for a second that it was her. That’s the part that really got to me,” she told local news outlet AZ Family.
And according to security technology company McAfee, it only takes three seconds of audio for crooks to create a convincing AI voice clone.
Of 7,000 people surveyed by the company, one in four said they had been the victim of an AI voice cloning scam or knew someone who had. Of those who said they lost money, 36% said they lost between $500 and $3,000, while 7% were taken for amounts between $5,000 and $15,000.
Verification Fraud
We’ve all grown accustomed to using passwords, passkeys, and biometrics to access our phones and banking apps. Even when creating an account for the first time with digital banks like Monzo, you have to send a video of yourself saying a set phrase.
But AI can be used to circumvent these security checks, says Jeremy Asher, regulatory counsel at law firm Setford, representing a “huge risk” for both consumers and institutions.
“Fake videos and photographs of people who do not exist, yet appear to have authority, are generated through the use of AI. This ‘proof’ is then used to pass identity and security checks, which can lead to a whole rash of dangers. Bank accounts can be viewed, wire transfers can be authorized, even fake assets are created to secure financial loans,” he says.
How to spot AI scams
What makes many AI-based scams dangerous is that they are much harder to spot than more conventional scams. Thanks to advances in the technology, fake emails look more authentic, while voice cloning and deepfake videos continue to improve.
“Distinguishing a deepfake video from a genuine video can be quite difficult due to the sophistication of the technology used to create them,” says Louise Cockburn, information security awareness and culture manager at wealth manager Quilter. But that doesn’t mean AI scams are impossible to detect.
Take the recent Martin Lewis deepfake, for example – did you spot the numerous inconsistencies, the awkward language, or the robotic facial movements? There were a number of giveaways in the video that hold true for many deepfakes; here is an overview:
- Use common sense
Many on social media were quick to discredit the Martin Lewis deepfake, noting that he doesn’t talk about investments and wouldn’t use his platform to encourage people to invest their money in a certain way. So if you’re watching a video featuring a famous face discussing something they wouldn’t ordinarily talk about, take the clip with a pinch of salt. “If a video seems suspicious or irrelevant to the person being shown, it’s always worth investigating further,” Cockburn adds.
Generally speaking, AI will make it “much more complicated to detect scams”, says Matthew Berzinski, senior director at identity management company ForgeRock, requiring a vigilant approach from consumers.
“Remember, never provide security information, agree to take any action, or click on any link in any communication from ‘your bank’. If action is needed on your account, call them directly on their helpline or go straight to their website, where you can be sure you are speaking to them directly,” he says.
What to do if you are the victim of an AI scam
Currently, most advice on what to do if you are scammed is the same whether or not AI played a role. It is essential to secure your information as much as possible and to report the scam to your bank as well as to support services like Action Fraud.
If the scammer gained access to your computer, online accounts, or financial information, secure those accounts and contact your bank as soon as possible.
If you have received an email that you are not entirely sure about, forward it to the Suspicious Email Reporting Service (SERS): report@phishing.gov.uk
Similarly, suspicious text messages can be reported free of charge by forwarding them to 7726. Suspicious calls can also be sent to this number – simply text 7726 with the word ‘call’ followed by the suspicious number.
For more tips, visit the Action Fraud website.