There is no clearer way to say it: social engineering is going to get a lot, lot worse soon and far more successful than it is today. And that’s saying a lot. It’s already pretty bad.
As I’ve been saying for over 20 years…in hundreds of articles…social engineering is involved in more successful data breaches than any other single hacker method. Social engineering is involved in 70% – 90% of all successful data breaches, and any source telling you differently is either not counting the entire worldwide population of breaches or is commingling or sorting attack categories incorrectly.
Here are relevant links about those figures if you would like to see more details:
Note: According to Google’s Mandiant, unpatched software and firmware are involved in 33% of successful data breaches (https://blog.knowbe4.com/hands-on-defense-unpatched-software-causes-33-of-successful-attacks). Anecdotally, I think that figure has likely risen closer to 40% over the last two years, but I haven’t seen good data to back that up yet.
Unpatched software and firmware and social engineering are likely involved in 90% – 99% of all successful attacks.
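As a rough illustration of how those two figures can combine, here is a back-of-the-envelope union calculation. The overlap value is purely an assumption made up for the sketch; I am not presenting it as measured data on how often a single breach involves both vectors.

```python
# Back-of-the-envelope illustration. Only the 80% (midpoint of 70%-90%) and
# 33% figures come from the numbers cited above; the overlap is an assumption.
social_engineering = 0.80   # midpoint of the 70%-90% range for social engineering
unpatched = 0.33            # Mandiant's 33% figure for unpatched software/firmware
assumed_overlap = 0.15      # ASSUMED share of breaches involving BOTH vectors

# Inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B)
either_vector = social_engineering + unpatched - assumed_overlap
print(f"Breaches involving either vector: ~{either_vector:.0%}")  # ~98%
```

Depending on how much overlap you assume, the union lands somewhere in that 90% – 99% neighborhood.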
But for today’s post, I just want to concentrate on social engineering.
It’s bad, and it’s going to get a lot worse soon.
Why?
AI.
I’m not saying this lightly or just to get views. I’ve been discussing AI attacks non-stop since November 2022, when OpenAI released ChatGPT. And until a few weeks ago, when I told audiences about the various types of AI attacks coming their way, I was fond of saying, “AI attacks are coming, but how you’re likely to be compromised in the coming year does not involve AI.”
As of a few weeks ago, I no longer say that.
Instead, what I now say is, “How you are likely to be attacked by the end of this year, and certainly starting next year, is by something AI-enabled. By 2026, most attacks will be AI-enabled, and it will only become more so from there.”
I’ve written several articles on types of AI attacks coming, including this one: AI Attacks Are Coming in a Big Way Now!
Just assume that everything a hacker could do before, including social engineering and attacking unpatched vulnerabilities, will be done faster, better, or more pervasively by AI.
AI-enabled social engineering agents will soon be crafting every email, every SMS message, and every fraudulent voicemail left on your phone. The scripts these malicious agents use will be more personalized, more believable, and more likely to succeed. There are already AI social engineering bots that outperform their human counterparts. In one contest, the AI outperformed the human by 24%.
This means that every social engineer will be using AI-enabled tools and deepfakes within 6-12 months. That’s how long it takes for today’s AI technologies to filter into the hacking tools and phishing kits attackers buy. Within half a year to a year, every hacker worth their salt will be using AI-enabled tools to better attack and fool people.
Where will that take the current 70%-90% figure, which is already pretty bad?
No one knows.
But we can say that when our AI-enabled agent (called AIDA) selects the simulated phishing emails sent to end users, instead of a human administrator choosing them, it fools two to three times as many people. That’s a huge jump. Early on, the improvement was only 7%. Now it’s 200% – 300%. AI is getting better over time.
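To make the idea of “the agent picks the simulation per user” concrete, here is a minimal sketch of what that kind of selection policy can look like. This is not AIDA’s implementation or API; the template data, field names, and scoring logic below are all assumptions made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    department: str
    past_clicked_topics: set[str]   # simulation topics this user fell for before

@dataclass
class Template:
    subject: str
    topic: str

# Hypothetical simulation templates a training platform might hold.
TEMPLATES = [
    Template("Your package could not be delivered", "shipping"),
    Template("Updated payroll form requires your signature", "hr"),
    Template("Security alert: unusual sign-in detected", "it"),
]

def pick_simulation(user: User) -> Template:
    """Toy selection policy: prefer topics the user has fallen for before,
    otherwise fall back to a department-relevant lure."""
    for t in TEMPLATES:
        if t.topic in user.past_clicked_topics:
            return t
    dept_map = {"HR": "hr", "IT": "it"}
    preferred = dept_map.get(user.department, "shipping")
    return next((t for t in TEMPLATES if t.topic == preferred), TEMPLATES[0])

alice = User("Alice", "HR", past_clicked_topics={"shipping"})
print(pick_simulation(alice).subject)  # -> the shipping lure she clicked before
```

The point of the statistic above is simply that a per-user, personalized choice, even a crude one like this, beats a one-size-fits-all pick by a human administrator; a production agent would learn those preferences from far richer signals.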
Whatever the increase is, it will be an increase. The 70% – 90% figure is going to rise.
There will be more successful social engineering.
What is a defender supposed to do?
Well, do what you should already be doing to decrease human risk. That means security awareness training. That means AI-enabled tools. That means creating and implementing policies that make social engineering less likely to succeed.
Go here for our Human Risk Management (HRM) approach: www.knowbe4.com/.
Even if the scam is AI-enabled, the scammer still has to use words or text that encourage the potential victim to perform an action against their own self-interest. Use HRM and AIDA (https://www.knowbe4.com/products/aida) to counter AI social engineering threats.
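Because the attacker still has to ask for that action, defenders can key on the ask itself. Here is a toy heuristic sketch of that idea (not a KnowBe4 or AIDA feature; the phrase lists and the pairing rule are assumptions) that flags messages combining urgency cues with a request to act:

```python
import re

# ASSUMED example cue lists; real lures vary far more widely than this.
URGENCY_CUES = [r"\burgent\b", r"\bimmediately\b", r"\bwithin 24 hours\b",
                r"\baccount will be (suspended|closed)\b"]
ACTION_CUES = [r"\b(verify|confirm) your (password|account)\b",
               r"\bclick (the|this) link\b", r"\bwire transfer\b",
               r"\bgift cards?\b", r"\bupdate your payment\b"]

def looks_like_social_engineering(text: str) -> bool:
    """Toy heuristic: flag text that pairs urgency with a request to act.
    Real detection needs far more than keyword matching, AI-written or not."""
    t = text.lower()
    urgency = any(re.search(p, t) for p in URGENCY_CUES)
    action = any(re.search(p, t) for p in ACTION_CUES)
    return urgency and action

msg = "Urgent: your account will be suspended. Click this link to verify your password."
print(looks_like_social_engineering(msg))  # True
```

Even an AI-crafted lure has to contain that pairing of pressure and request somewhere, which is why training people to spot the ask, and not just the spelling mistakes, still works.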
However you do it, make your co-workers (and family and friends) understand that the wave of advanced AI attacks is coming soon…by the end of this year and certainly into next. What we have been anticipating for a few years is almost here. It’s time to educate, tool up, and be prepared.