Cybercriminals are adopting AI tools to increase the speed, efficiency, scale, and effectiveness of their attacks. And these new AI-powered attacks are having a very real impact on organizations.
Deepfake scams, which might involve fraudulent voicemails or video calls, have resulted in millions of dollars in losses. In one case, a deepfake of the CFO of a multinational business duped a finance worker into paying $25 million to fraudsters. Meanwhile, AI-generated phishing attacks are tricking more people into disclosing credentials, and AI-enhanced malware is evading existing defenses.
AI is also fueling misinformation campaigns, data poisoning, and model manipulation, potentially compromising AI-driven systems. Gartner has recently ranked AI-assisted misinformation as a top cyber risk.
We’ll see increasingly sophisticated AI-based attacks as cybercriminals reinvest more of their profits into new technology. Many are already using gains from low-cost attacks to fund R&D for higher-cost, higher-yield schemes. And attackers have a structural advantage: they can concentrate their spending on a single technique, while the organizations they target must spread security budgets across every possible vector.
To explore how AI is changing the cybersecurity landscape, I spoke with Khalid Kark, Cloudflare’s field CIO for the Americas, and Daniel Kendzior, the global data and AI security practice leader at Accenture. We agreed that most traditional security tools are ineffective against AI-enhanced attacks. To bolster resilience, organizations have to transform their approach to security. Instead of relying on passive defenses, they need to take a more active, preventive approach — one that capitalizes on AI to combat AI.
As attackers continue to adopt AI tools, we all need to be ready for more polished versions of familiar social engineering schemes as well as larger-scale bot-based attacks. Meanwhile, we will likely see a variety of new tactics, including new types of identity fraud. Boosting awareness of these changes — across our organizations — will be essential for building stronger security.
Some things don’t change. For attackers, social engineering is still the easiest and cheapest tactic. It’s also very successful. According to a report from Verizon, 68% of breaches occur because of human error — and many of those errors are tied to social engineering, such as phishing schemes.
But what we’re seeing is an increase in the sophistication and consistency of attacks. Attackers are using generative AI (GenAI) to create more convincing phishing emails, for example, without any of the spelling or grammatical errors that might signal a fraudulent message.
At the same time, attackers are increasingly using AI to create deepfakes. An employee might receive a voicemail that sounds like it’s from a manager, though it was really created by an AI model. The employee might be tricked into sharing credentials, approving transactions, or exposing sensitive data. And with AI, attackers can create these deepfake messages in seconds.
Unfortunately, it is extremely difficult to fight the cognitive biases that make us vulnerable to these attacks. We all make errors as we process and interpret information. For example, we tend to favor information that confirms our existing beliefs while ignoring information that doesn’t. This confirmation bias makes us more likely to fall for a deepfake: If we receive a voice message that sounds like it’s from our manager, we’ll tend to believe that it’s true.
In addition to creating better phishing emails and deepfakes to deceive employees, attackers are using AI to manufacture new identities. Synthetic identity fraud (SIF) involves the creation of hyper-realistic identities by blending real and fake data to bypass traditional verification systems. AI-generated personal details and automated credential stuffing make these identities increasingly difficult to detect.
Fraudulent identities pose major risks to heavily targeted industries such as financial services, healthcare, and government. Because SIF often lacks an immediate, identifiable victim, it can go unnoticed for long periods. That gives fraudsters time to build credit histories and successfully execute scams.
Bots are another kind of human stand-in with the potential for real harm. As the 2025 Cloudflare Signals Report shows, 28% of all application traffic observed by Cloudflare in 2024 came from bots, a percentage that has remained mostly steady over the previous four years. While bots can serve legitimate purposes — such as search engine indexing or availability monitoring — the vast majority of bots (93%, according to Cloudflare traffic analysis) are unverified and potentially malicious.
AI-powered bots are enabling attackers to launch large-scale, automated attacks with unprecedented efficiency. We’re seeing AI-powered bots generate 200 times more hits on a website than earlier attacks could. Bots are used not only to mount large distributed denial-of-service (DDoS) attacks but also to scrape sensitive data and intellectual property, conduct credential stuffing, and execute fraud at machine speed. AI models supercharge these capabilities by bypassing traditional CAPTCHAs and evading detection with adaptive behavior.
The challenge, then, is to separate the good from the bad — and block the bad.
Staying ahead of AI-powered threats requires a proactive — and AI-enhanced — approach to security. In my conversation with Khalid Kark and Daniel Kendzior, we touched on seven ways we can all strengthen our security posture to defend against these emerging threats.
Increase observability.
“A lot of vendors are adding AI to everything, like it’s a high-end garnish,” says Kendzior. AI is even being sprinkled into software you might already be using. Not all of those AI capabilities will be helpful — and some might create new vulnerabilities. To minimize any AI-related risks, we all need to adjust how we purchase software, gaining better visibility into where AI has been incorporated.
At the same time, enhancing observability of AI usage within our organizations will become more important. Consider that organizations typically have about 33% more APIs than they’ve cataloged. And those uncataloged APIs are potential vectors for attacks. Similarly, we often don’t know where and how AI is being used throughout the organization — and yet each of those instances of AI tools could leave us exposed.
By implementing a platform that can unify logs, analytics, alerts, and forensics, we can then apply AI capabilities to this data to identify risks and pinpoint their root causes.
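As a simple illustration of the idea, the sketch below compares the API endpoints observed in access logs against a known catalog to surface "shadow" APIs. The catalog and log entries are hypothetical stand-ins for what an API inventory tool and a unified log platform would actually supply.

```python
from collections import Counter

# Hypothetical inputs: a catalog of documented APIs and parsed access-log
# entries. In practice these would come from an API inventory tool and a
# unified logging platform.
cataloged_apis = {
    ("GET", "/api/v1/orders"),
    ("POST", "/api/v1/orders"),
    ("GET", "/api/v1/users"),
}

observed_requests = [
    ("GET", "/api/v1/orders"),
    ("POST", "/api/v2/export"),   # absent from the catalog: a "shadow" API
    ("POST", "/api/v2/export"),
    ("GET", "/api/v1/users"),
]

# Count traffic to endpoints that are absent from the catalog.
shadow_traffic = Counter(
    req for req in observed_requests if req not in cataloged_apis
)

for (method, path), hits in shadow_traffic.most_common():
    print(f"Uncataloged endpoint: {method} {path} ({hits} requests)")
```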
Detect and neutralize threats in real time.
As attackers use AI to increase the speed and volume of threats, we need a way to keep pace. Fortunately, AI can help us bolster cybersecurity. For example, AI can analyze vast data sets and identify anomalies that might indicate threats. AI tools can then automate responses to threats, addressing issues in real time.
Analyzing employee behavior — including access patterns, privilege escalations, and data exfiltration attempts — can help establish baselines and detect anomalies that might indicate insider threats. With these AI-assisted capabilities, we can spot insider risks before they escalate.
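As a minimal sketch of that baselining idea, the example below flags a user whose daily count of privileged actions jumps far above their own historical norm. Real behavioral-analytics platforms model many more signals; the numbers here are invented.

```python
import statistics

# Hypothetical per-user baseline: daily counts of privileged actions,
# as might be drawn from a behavioral-analytics store.
baseline = [3, 2, 4, 3, 2, 5, 3, 4, 2, 3]   # last 10 days
today = 27                                   # today's count

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

# Flag behavior more than 3 standard deviations above the user's own norm.
z_score = (today - mean) / stdev
if z_score > 3:
    print(f"Anomaly: {today} privileged actions vs. baseline ~{mean:.1f} (z={z_score:.1f})")
```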
Protect against AI-enhanced phishing and deepfakes.
As attackers adopt AI to improve phishing and to create deepfakes, we need to put more sophisticated controls and policies in place — including controls and policies that help us combat our own biases and errors. Let’s say an employee receives a text that appears to come from a manager, instructing the employee to wire money to a particular account. We can avoid damage if we have, first, a policy that restricts which roles are allowed to execute wire transfers and, second, process controls that ensure multiple checks are made before anything is executed.
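The hypothetical policy check below illustrates how those two controls might combine in code: a role restriction plus a required number of distinct approvers, so that no single employee acting on a single message can move money.

```python
from dataclasses import dataclass

# Hypothetical policy engine illustrating the controls described above:
# a role restriction plus a multi-party approval check before execution.
AUTHORIZED_ROLES = {"treasury_officer"}
REQUIRED_APPROVALS = 2

@dataclass
class TransferRequest:
    requester_role: str
    amount: float
    approvals: set[str]  # distinct approver IDs

def may_execute(req: TransferRequest) -> bool:
    if req.requester_role not in AUTHORIZED_ROLES:
        return False  # role-based policy: most employees cannot wire money at all
    if len(req.approvals) < REQUIRED_APPROVALS:
        return False  # process control: no one person can act on one message
    return True

# A texted "urgent" request from a spoofed manager fails both checks.
print(may_execute(TransferRequest("analyst", 25_000_000.0, set())))  # False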
Improve identity management.
Managing identity has always been difficult, but AI has made it much harder. “We’ve had clients that use remote registration for new customers and employees. And now in this deepfake-enabled world, you need to have some skepticism of whether you’re dealing with an actual person,” says Kendzior.
Zero Trust security plays a key role in improving identity management. With the right Zero Trust solution, organizations can prevent unauthorized individuals and AI bots from accessing enterprise resources while also streamlining access for authorized people.
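In spirit, a Zero Trust decision looks something like the simplified sketch below: identity, device posture, and context are all verified on every request, so a stolen credential alone is never enough. Actual Zero Trust platforms evaluate far richer signals than these three.

```python
from dataclasses import dataclass

# Simplified sketch of Zero Trust evaluation: every request is checked
# against identity, device posture, and context. Nothing is trusted by default.
@dataclass
class AccessRequest:
    user_verified: bool      # e.g., SSO plus phishing-resistant MFA passed
    device_compliant: bool   # e.g., managed, patched, disk-encrypted
    geo_allowed: bool        # request originates from an expected location

def allow(request: AccessRequest) -> bool:
    # Deny unless every signal checks out; re-evaluated on each request.
    return all((request.user_verified, request.device_compliant, request.geo_allowed))

print(allow(AccessRequest(True, True, True)))    # True: grant least-privilege access
print(allow(AccessRequest(True, False, True)))   # False: valid login, risky device
```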
Identify and block bad bots.
Bot management capabilities can separate good bots from bad. They can then prevent the bad bots from scraping data, stealing content, or slowing website performance.
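That decision can be reduced to a triage function like the illustrative one below. It assumes the bot-management service supplies a per-request bot score (lower meaning more likely automated, as in Cloudflare's scoring) and a flag for verified good bots; the thresholds are arbitrary.

```python
# Illustrative bot-triage logic. The score and verified-bot flag are assumed
# inputs from a bot-management service; the thresholds are arbitrary.
def triage(bot_score: int, verified_bot: bool) -> str:
    if verified_bot:
        return "allow"        # known-good crawlers, e.g., search engine indexers
    if bot_score < 30:
        return "block"        # likely automated: scraping, credential stuffing
    if bot_score < 60:
        return "challenge"    # uncertain: present a managed challenge
    return "allow"            # likely human

print(triage(bot_score=12, verified_bot=False))  # block
print(triage(bot_score=95, verified_bot=False))  # allow
```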
Address the human element.
There will always be a technology layer in combating cybersecurity threats. But stopping AI-enhanced threats also involves behavioral training. We need to educate employees about the phishing attempts, deepfakes, and other AI-enhanced attacks that are coming, so they can be more skeptical of the messages they receive.
Organizations must also work to create not just highly aware employees but also better digital citizens — because these AI-enhanced schemes reach people even after the working day is over. For example, they might receive texts telling them that they urgently have to pay a vehicle toll. If they mistakenly click on those links — or worse, pay the attackers — they are inadvertently funding future attacks that could impact their employers.
Collaborate with partners.
AI is helping cybercrime become an even bigger business — we’re no longer fighting individual hackers. In addition to adopting the latest AI-enhanced cybersecurity capabilities, we need to scale up our alliances so we have a better chance of protecting ourselves against organized attackers. The more we work with partners, the better our defenses will be.
The number, scale, and sophistication of AI-powered threats will continue to rise. And with each successful phishing attempt or data breach, cybercriminals will reinvest profits in more advanced technologies. Many organizations need to significantly revamp their defenses for this new era of AI-powered attacks — and they need to get started now.
Cloudflare’s connectivity cloud provides a growing array of tools and services that can help you safeguard your organization from AI-driven threats. The Cloudflare AI Gateway provides visibility into the cost, usage, latency, and overall performance of AI applications. And Firewall for AI enables you to protect apps powered by large language models (LLMs) against abuse. These tools work with Cloudflare’s deep portfolio of cloud-native security services to help you scale your defenses for larger, more sophisticated AI-based threats.
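For teams already calling an LLM provider directly, routing that traffic through AI Gateway can be as simple as changing the SDK’s base URL, as in the sketch below. The account ID and gateway name are placeholders, and the exact URL format should be confirmed against Cloudflare’s current documentation.

```python
from openai import OpenAI

# Route existing OpenAI traffic through an AI Gateway endpoint so that cost,
# usage, and latency become observable. ACCOUNT_ID and GATEWAY_NAME are
# placeholders for your own Cloudflare account and gateway.
client = OpenAI(
    api_key="YOUR_OPENAI_API_KEY",
    base_url="https://gateway.ai.cloudflare.com/v1/ACCOUNT_ID/GATEWAY_NAME/openai",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize our security posture."}],
)
print(response.choices[0].message.content)
```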
This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.
Learn more about AI-driven threats and other trends that will require a shift toward improving resilience in the 2025 Cloudflare Signals Report: Resilience at Scale.
Get the guide!
Mike Hamilton — @mike-hamilton-us
CIO, Cloudflare
After reading this article, you will be able to understand:
How cybercriminals are using AI to enhance their attacks
3 tactics most commonly employed today
7 ways to strengthen security for AI-based threats