
Can you spot an AI-generated scam?


As AI tools become part of everyday life, most people believe that familiarity leaves them better equipped to spot AI-generated scams. But new research reveals a worrying trend: the more familiar people are with AI, the more likely they are to fall for these scams.

The research finds that the generations most confident in detecting an AI-generated scam are the ones most likely to get duped: 30% of Gen Z have been successfully phished, compared with just 12% of Baby Boomers.

Ironically, the same research found that fear of AI-generated scams decreased by 18% year-over-year, with only 61% of people now expressing worry that someone would use AI to defraud them. During the same period, the number of people who admitted to being successfully duped by these scams increased by 62% overall.

A Proliferation of Scams

Traditional scam attempts rely on mass, generic messages in the hope of catching a few victims: a message from the “lottery” claiming the recipient has won a prize, or a fake business offering employment. In exchange for the victim’s bank account details, the messages promise money in return. Of course, the money never materializes, and the victim loses their own instead.

With AI, scams are becoming more personalized and specific. A phishing email may no longer be riddled with grammatical errors or sent from an obviously spoofed address, and AI puts more tools at scammers’ disposal.

For example, voice cloning allows scammers to replicate the voice of a friend or family member from just a three-second audio clip. In fact, more people are being swindled out of money because they believe a ransom plea from a family member is genuine, when it’s actually a scammer’s cloned voice.

The Trust Breakdown

This trend harms both businesses and consumers. If a scammer were to gain access to a customer’s account information, they could drain an account of loyalty points or make purchases using a stolen payment method. The consumer would need to go through the hassle of reporting the fraud, while the business would ultimately need to refund those purchases (which can lead to significant losses).

There’s also a long-term impact to this trend: AI-generated scams erode trust in brands and platforms. Imagine a customer receiving an email claiming to be from Amazon or Coinbase support, warning that an unauthorized user is trying to gain access to their account and that they should call support immediately to fix the issue. Without obvious red flags, the customer may not question its legitimacy until it’s too late.

A customer who falls for a convincing deepfake scam doesn’t just suffer a financial loss; their trust in the brand is tarnished for good. They either become hyper-cautious or take their business elsewhere, leading to further revenue loss and reputational damage.

The reality is that everyone pays the price when scams become more convincing, and if companies fail to take steps to establish trust, they wind up in a vicious cycle.

What's Fueling the Confidence Gap?

To address this confidence gap, it’s important to understand why the divide exists in the first place. Digital natives have spent years developing an intuitive sense for spotting "obvious" scams — the poorly written emails or suspicious pop-ups offering a free iPod. This exposure creates a dangerous blind spot: when AI-generated scams perfectly mimic legitimate communication, that same intuition fails.

Consider how the brain processes a typical workday. You’re juggling emails, Slack messages, and phone calls, relying on split-second pattern recognition to separate signal from noise. A message from “your bank” looks right, feels familiar, and arrives at a plausible time, so it slips straight past your defenses.

The problem compounds when scammers use AI to replicate not just logos and language, but entire communication ecosystems. They’re not just copying Amazon’s email template; they’re replicating the timing, context, and behavioral patterns that make legitimate messages feel authentic. When a deepfake voice call sounds exactly like a colleague asking for a quick favor, a pattern-matching brain tends to accept the interaction as normal.

This explains why the most digitally fluent users are paradoxically the most vulnerable. They've trained themselves to navigate digital environments quickly and confidently. But AI-powered scams exploit that very confidence.

What Tech Leaders Should Do Now

For companies, addressing this overconfidence problem requires a multi-pronged approach:

Inform customers without fear-mongering: Help users understand that AI-powered scams are convincing precisely because they’re designed to deceive the most confident, tech-savvy people. The goal isn’t to make people stop using AI, but rather to help them maintain appropriate skepticism.

Educate them on deepfake scams: Focus on the key signs that separate a legitimate message from a fraudulent one: an unknown sender, false urgency, or a suspicious link or PDF attachment (a simple heuristic sketch follows this list). Show current examples of deepfakes and AI-generated phishing rather than relying solely on traditional fraud-awareness material.

Keep communication channels transparent: Establish clear, verified communication channels and educate customers about how your company will and won’t contact them. The good news is that many providers, including Google, Apple, and WhatsApp, either already offer or will soon offer branded caller ID services.

This means companies can establish a business profile with these services, adding another layer of verification: when a verified business contacts a customer, the message clearly shows the brand name alongside a verified badge. Similarly, most brands now authenticate their outbound email to comply with the DMARC standard, which qualifies them to display a branded trust mark next to the subject line (see the DNS sketch after this list).

Invest in knowledge sharing: If one company is dealing with an influx of scam attempts, other companies are likely facing similar problems. Scammers often collaborate to share tactics and vulnerabilities; companies should do the same.

Many companies fight fraud with technologies built on insight-sharing “consortiums”: business networks where fraud patterns are shared across companies. By being open about current challenges, companies can better understand the risks and implement the proper safeguards to keep their customers safe.
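To make the warning signs under “Educate them on deepfake scams” concrete, here is a minimal, illustrative Python sketch of the kind of heuristic check that list describes. The urgency phrases, trusted domains, and attachment rules are hypothetical placeholders, not drawn from the article, and a production filter would rely on far richer signals such as sender reputation and authentication results.

```python
import re
from urllib.parse import urlparse

# Hypothetical red-flag signals matching the list above; real systems
# use far richer models, reputation data, and authentication checks.
URGENCY_PHRASES = ["act now", "immediately", "account suspended",
                   "verify within 24 hours", "call support now"]
RISKY_ATTACHMENT_EXTENSIONS = (".pdf", ".html", ".zip", ".exe")
TRUSTED_DOMAINS = {"example.com"}  # placeholder for a brand's real domains

def red_flags(sender: str, body: str, attachments: list[str]) -> list[str]:
    """Return human-readable warnings for a single message."""
    flags = []

    # 1. Unknown or unexpected sender domain.
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    if sender_domain not in TRUSTED_DOMAINS:
        flags.append(f"sender domain '{sender_domain}' is not a known brand domain")

    # 2. False urgency in the message text.
    lowered = body.lower()
    for phrase in URGENCY_PHRASES:
        if phrase in lowered:
            flags.append(f"urgent language: '{phrase}'")

    # 3. Links whose destination host doesn't belong to a trusted domain.
    for url in re.findall(r"https?://\S+", body):
        host = urlparse(url).hostname or ""
        if not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
            flags.append(f"link points to unrecognized host '{host}'")

    # 4. Risky attachment types.
    for name in attachments:
        if name.lower().endswith(RISKY_ATTACHMENT_EXTENSIONS):
            flags.append(f"suspicious attachment '{name}'")

    return flags

# Example: a message exhibiting several of the signs listed above.
print(red_flags(
    sender="support@examp1e-billing.net",
    body="Your account is suspended. Act now: https://examp1e-billing.net/fix",
    attachments=["invoice.pdf.exe"],
))
```

Even a toy checker like this illustrates the point of the list: each red flag is cheap to explain to customers, and scams succeed precisely when none of the obvious ones appear.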
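On the email side, the branded trust mark mentioned above is delivered through the BIMI standard, which requires a DMARC policy at enforcement. As a minimal sketch (assuming the third-party dnspython library is installed, and using example.com as a placeholder domain), this is how one might inspect the DNS records involved:

```python
# Sketch: inspect a domain's DMARC and BIMI DNS records.
# Requires the third-party dnspython package (pip install dnspython).
# "example.com" is a placeholder; substitute a real brand domain.
from typing import Optional
import dns.resolver

def get_txt_record(name: str) -> Optional[str]:
    """Return the concatenated TXT record at `name`, or None if absent."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        return b"".join(rdata.strings).decode()
    return None

domain = "example.com"

# DMARC lives at _dmarc.<domain>; p=quarantine or p=reject is the
# enforcement level BIMI requires, e.g.
# "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
print("DMARC:", get_txt_record(f"_dmarc.{domain}"))

# BIMI lives at <selector>._bimi.<domain> ("default" is the usual
# selector); l= points to the brand's SVG logo and a= to its
# Verified Mark Certificate, e.g.
# "v=BIMI1; l=https://example.com/logo.svg; a=https://example.com/vmc.pem"
print("BIMI: ", get_txt_record(f"default._bimi.{domain}"))
```

Receiving mail providers run these lookups automatically; publishing the records is what allows a brand’s logo and verified mark to appear next to its messages.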

The Strategic Advantage of Getting This Right

The businesses that will thrive in this environment are those that maintain identity trust (the ability to recognize a user or interaction within a digital environment) while effectively combating increasingly sophisticated threats. Fraud prevention is no longer just about protecting against losses; it’s a critical part of the customer experience. When customers feel safe, they shop confidently.

By tackling users’ AI blind spots while maintaining trust, companies gain a competitive edge. The AI revolution has introduced incredibly capable tools, but it has also created unexpected vulnerabilities. Addressing this challenge requires more than different tools; it demands a fundamental rethinking of how we maintain trust when seeing is no longer enough to believe.


This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro


