“If anyone still has voicemail, delete it” warns AP+ CIO

Pictured: SXSW Panel - Friend or Foe: Whose side is AI on in the Digital Scam Wars?
Anula Wiwatowska
Oct 14, 2024
2 min read

Asked what the layperson can do to protect themselves from payment scams and fraud, Australian Payments Plus (AP+) CIO May Lam told the room “if anyone still has your voicemail, delete it”. Lam explained that scammers may not necessarily want you to answer your phone, but may be more interested in getting a sample of your voice to aid in deepfake creation.

Deepfakes are digital facsimiles of real people, edited to create realistic but fake depictions of them. They can take the form of videos, photographs, or audio recordings, and have been used to facilitate both scams and fraud globally.

Deepfake scams emerged in 2017, but have rapidly grown in popularity and reach thanks to generative AI adoption. While most high-profile cases target business transactions in the hundreds of thousands to millions of dollars, the democratisation of generative AI means more criminals are able to apply the same tactics at smaller scales.

“First thing to remember is that all the criminal groups that do either cyber attacks, data breaches, [or] individualised scams, they all work together really well,” warns Financial Services and Insurance Lead for CyberX, Shameela Gonzalez.

“In our industry we are still trying to enforce collaboration and it happens in really meaningful ways, but criminal groups have already been ten steps ahead of us and are already info sharing, and onselling information.”

According to eSafety Commissioner Julie Inman Grant, deepfake detection tools are lagging behind the technology itself. Free, open-source apps can already be used to create deepfake imagery, yet the eSafety approach in Australia mostly revolves around awareness, education, and removal of harmful material. Meanwhile, criminals are capitalising on that open-source access to build their own tools.

Fraudsters have already developed and are selling their own GPTs. FraudGPT, a large language model that generates content to simplify cyber attacks, has reportedly been available on the dark web and Telegram since 2023. The model reportedly lacks guardrails, and can create phishing emails and scam landing pages, and direct users to external resources like hackers for hire, much as ChatGPT might write a social media caption. Subscriptions can be purchased for as little as US$200 per month, significantly lowering the barrier to entry for would-be scammers.

Although fraud remains one of the vectors criminals exploit, Gonzalez expects scams to keep growing in prevalence.

“A scam fundamentally involves manipulation. The scam is intended for you to go and authorise that transaction yourself believing it is legitimate… Scams, I think, ended up being a far more effective way [for criminals] to bypass traditional fraud controls, and fraud prevention activities.”

Apart from turning off your voicemail, the panellists encouraged users to set up two-factor authentication (2FA), refrain from clicking links in emails and text messages, and stay informed about current scams using resources like Scamwatch.

Anula Wiwatowska
Written by
Anula is the Home and Lifestyle Tech Editor within the Reviews.org extended universe. Working in the tech space since 2020, she covers phone and internet plans, gadgets, smart devices, and the intersection of technology and culture. Anula was a finalist for Best Feature Writer at the 2022 Consensus Awards, and an eight time finalist across categories at the IT Journalism Awards. Her work contributed to WhistleOut's Best Consumer Coverage win in 2023.
