Los Angeles – December 9, 2024: Smarter, faster, and more sophisticated: thanks to AI, scammers are more efficient than ever, stealing money from Americans at record rates. Every day, “innocent” AI tools like ChatGPT become the latest weapons in scammers’ online arsenals, putting the US on track to top $10 billion in fraud losses this year.
Global scam protection leader F‑Secure stays one step ahead of cyber criminals, defending people from scams before they happen. Its cyber crime experts weigh in on the new threats the country will face in 2025.
Cheap, easy AI tools will be deployed in sophisticated cyber attacks
Laura Kankaala, Head of Threat Intelligence: AI tools are becoming more widespread and accessible — anyone can harness their power in just a few clicks. However, public access to AI isn’t the threat at hand — it’s the cyber criminals who are abusing this readily available technology to fine-tune their scams.
The malicious use of AI tools (such as generating deceptive and manipulative content) was already evident throughout the past year. As we head into 2025, we are bound to see more sophisticated attacks that leverage everyday AI tools — like ChatGPT, ElevenLabs, or virtually any AI tool that is cheap and easy to access online.
While these companies do place restrictions on malicious usage, most are not very successful at enforcing them. They need to do more to stop the use of their platforms for nefarious purposes — it cannot be left to legislation alone to set boundaries on what kind of content can be generated. Bottom line: the companies developing these tools should also be held to a higher moral standard.
Hold the phone: AI and deepfake audio will make phone scams exponentially more dangerous
Joel Latto, Threat Advisor: Cyber criminals have long relied on social engineering, and multi-stage scams represent some of their most deceptive tactics. These schemes often involve direct interaction with victims, enhancing their believability. For instance, a scammer might call a victim claiming they’ve applied for a loan. When the victim denies it, they are “transferred” to a supposed bank representative — another scammer — who seeks sensitive banking details. Malware further elevates these schemes, rerouting legitimate customer service calls to fraudsters or tricking victims into contacting fake numbers embedded in phishing emails.
Such scams are effective because victims believe they are speaking with genuine, helpful representatives, making them easier to manipulate under pressure. Until now, the scalability of these scams was limited by the human capacity of scammers, who could only handle a limited number of interactions in specific languages and time zones.
AI is changing this equation. With the rise of sophisticated conversational AI chatbots, scammers can now mimic real human interactions at scale, conducting conversations 24/7 across multiple languages. Coupled with realistic deepfake audio, these new call-based scams blur the line between human and machine interaction, making them far more dangerous than traditional robocalls.
To counter these evolving threats, defenses must adapt. Blocking call-forwarding malware, detecting suspicious numbers, and developing sophisticated audio analysis tools to spot deepfakes are essential. Equally critical is educating users about the signs of scams and potential red flags. Defensive strategies must evolve as fast as attackers’ capabilities, leveraging AI-driven solutions and strong collaboration between cyber security experts, telecom providers, and regulatory bodies.
Lawmakers will target banks, telcos, and social media companies for failing to prevent scams
Calvin Gan, Senior Manager, Scam Protection Strategy: Right now, lawmakers around the world are targeting telecom providers, banks, and social media companies, arguing they should be held responsible when their customers fall victim to fraud. Australian lawmakers are pushing through a bill that would fine companies up to $50 million for failing to protect their customers from scams, and in the UK, banks are now required to reimburse scam victims in almost all cases.
If these laws prove successful, the US may well follow suit, especially when it comes to banks. In July, the US Senate probed Zelle — a payment app that partners with big US banks — accusing those banks of not doing enough to protect account holders from fraud. Banks, telecom providers, and social media companies could soon be held legally liable for failing to step up scam protection.
Passing new laws that push businesses to beef up protection against scams is a welcome move. Fighting scams is not solely a top-down effort; it involves everyone from governments to organizations and even individuals. Just as the General Data Protection Regulation (GDPR) in Europe led companies to take privacy more seriously, new legislation like this would create one more layer of protection for consumers.
Still, there’s no 100% guaranteed way to prevent scams from happening in the first place. People need to take precautions on a daily basis, especially on scam-prone channels like social media and messaging apps.
High-yield, high-risk: the rise of Bitcoin investment scams on a new playing field
Sarogini Muniyandi, Senior Manager, Scam Protection Engineering: Decentralized Finance (DeFi) is a blockchain-based approach to financial services that has been gaining traction and acceptance over the last year. In DeFi, financial services are provided by algorithms (smart contracts) running on a blockchain rather than by a financial services company, operating largely outside the traditional centralized financial infrastructure.
As DeFi goes mainstream, scammers will take advantage of anyone interested in Bitcoin investment and other digital assets, especially those who are unfamiliar with the risks of blockchain-based finance. By 2025, DeFi is expected to attract even more users seeking alternatives to traditional finance. The DeFi market offers loans, interest-bearing accounts, and high-yield investments that promise substantial returns, which can entice investors of all experience levels. As DeFi’s popularity rises, the total value locked (TVL) in these projects is projected to grow, making them a prime target for fraudsters who can steal funds on a larger scale.
DeFi platforms operate on decentralized blockchain networks, allowing users to participate without traditional identification or regulatory oversight. This open environment enables scammers to steal victims’ funds and vanish into thin air, all while remaining anonymous. By manipulating the smart contracts and tools that automate DeFi functions, fraudsters can make off with investor funds. Some DeFi platforms offer investors unsustainably high yields for farming Bitcoin derivatives, only for investors to later discover they can’t withdraw their Bitcoin or that the platform has disappeared with their funds.
While DeFi offers financial freedom and potential profits, its open, unregulated, and anonymous nature also creates a ripe environment for scams — something every Bitcoin investor needs to be aware of in 2025.
The battle for privacy: Regulators vs. commerce and the impact on children
Tom Gaffney, Director of Business Development: Over the last couple of years, government agencies have started to toughen up and enforce regulations on commercial entities that exploit consumer data. They’re cracking down on many of the leading social media monoliths and shopping goliaths like Amazon and Temu, whose business models are built on profiling people’s online choices — translating that information into targeted advertising, product placement, and direct revenue through data brokers.
This year the EU issued larger fines than at any point since the GDPR took effect in 2018, with Meta, TikTok, and X all being hit with substantial penalties. In the US, California has introduced the California Consumer Privacy Act, effectively the most robust privacy legislation in the country. Additionally, global markets that historically did not offer much privacy protection (such as Egypt) are now addressing it.
What does this mean for the companies that have made so much money from collating our data? In Europe, Meta has responded by introducing paid subscriptions that allow users to opt out of data collection for ads, and TikTok is building an EU-based infrastructure. Meta’s approach is dangerous in its assumption that privacy is a luxury, whereas many would argue it’s a fundamental right.
These larger companies have built a commercial model that encourages widespread and intrusive collection of consumer data, monitoring people’s behavior not only when they use these services but also across everything else they do on the web. However, awareness of online privacy issues is high: many consumers are concerned about data collection and are increasingly willing to adopt privacy-focused solutions such as VPNs, ad blockers, and privacy-aware browsers.
2025 will see the social media giants and shopping magnates adopt new business practices to stay a step ahead of regulators. This will be especially visible in the domain of children: many of the fines levied this year related to the inappropriate collection of children’s data. Regulators are also pushing to introduce better protection for children’s data and to minimize their risk of online harm. One example is the Australian government’s announced plan to block teenagers’ access to social media platforms, but an approach like this is unlikely to succeed, as kids will always find a workaround.
We need to strike a balance between protecting children and respecting their right to privacy and their right to explore the digital world without judgement. It is therefore important that governments and technology companies collaborate on providing the right tools while helping parents and children understand and navigate the risks.
Press contacts
Meghan Sawyer
Public Relations Manager
meghan.sawyer@f-secure.com
Nicole Rodrigues
NRPR Group (for F‑Secure U.S.)
nicole@nrprgroup.com