We Need to Teach Our AIs to Securely Code


I have been writing about the need to better train our programmers in secure coding practices for decades, most recently here and here.

At least a third of data compromises involve exploited software and firmware vulnerabilities, and we are on our way to having over 47,000 separate, publicly known vulnerabilities this year. That is at least 130 new vulnerabilities discovered and publicly reported every day, day after day. That is a lot of exploitation. That is a lot of patching.

And until now, what I have said is that we need to: 

  1. Better train our coders in secure coding practices
  2. Programming curricula need to teach secure coding practices
  3. Employers need to require that their programmers have secure coding skills

Well, that is all old news now. We no longer need it.

What we need now is to teach AI how to code more securely. 

Out of all the productivity gains that have come with AI, its ability to write code (and/or assist developers in writing code) is easily the biggest productivity development to come out of the current level of AI maturity. Almost every coder alive is using AI to code, and if they are not, they will be. The productivity gains are very impressive. My coder friends say they are experiencing at least a 30%–40% productivity increase by using AI. Even my programmer friends who were originally AI skeptics have come around. Coding is largely an AI-driven world, although humans still need to be in the loop.

The time to train our programmers in secure coding has passed.

If AI is doing most of the coding, it is time for AI to be forced to do secure coding. And right now it is not doing it well. Every study I have seen on the matter shows that AI is as bad as or worse than human programmers at secure coding. Here are some examples:

Early on, I had great hope that AI might finally be the solution to our security vulnerability problems. Sure, I expected AI-produced code to have some level of security vulnerabilities, but surely automated code could avoid the easy stuff and be constantly improved to remove remaining vulnerabilities. I thought that in short order most security vulnerabilities in software, services and firmware would be a thing of the past.

Boy, was I wrong!

It turns out the existing crop of AIs that assist with code development is apparently as bad as or worse than humans. I guess on one level that makes sense – garbage in, garbage out. How can AI trained purely on error-filled human code somehow be expected to produce fewer security vulnerabilities?

But how hard can it be? Take your AI code generation algorithms and tell them not to introduce the known common security vulnerabilities. Tell them to avoid weak programming constructs, to always perform input validation, to never put hard-coded credentials in programming code, and to avoid any coding situation covered in the OWASP Top 10.
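To make the kinds of mistakes mentioned above concrete, here is a minimal sketch in Python, using a hypothetical SQLite-backed user lookup of my own invention (not from any particular study or AI tool). The insecure function shows two classic OWASP Top 10 style flaws that AI-generated code is often flagged for, a hard-coded credential and SQL built by string concatenation; the secure function adds input validation and a parameterized query instead.

```python
import os
import sqlite3

DB_PASSWORD = "hunter2"  # INSECURE: hard-coded credential in source code


def find_user_insecure(conn, username):
    # INSECURE: attacker-controlled input is concatenated straight into
    # the SQL text, enabling classic SQL injection.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()


def find_user_secure(conn, username):
    # Input validation: reject anything that is not a simple identifier.
    if not username.isalnum():
        raise ValueError("invalid username")
    # Parameterized query: the driver binds the value safely, so the
    # input can never change the structure of the SQL statement.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()


def get_db_password():
    # Credentials come from the environment (or a secrets manager),
    # never from the source code itself.
    return os.environ.get("DB_PASSWORD", "")
```

With a two-row `users` table, the payload `"alice' OR '1'='1"` makes the insecure version return every row, while the secure version rejects it outright, which is exactly the sort of contrast we need AI code generators to internalize.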

I realize that it must be harder than it sounds, and certainly every company and person in the secure coding field is already on top of this. I am a late arrival. 

I admit it. I am preaching to the choir. But what I am revealing here is my new understanding. I am late to this understanding, and I am acknowledging it here. After decades of calling for humans to be trained further in secure coding, I now recognize that that time has (possibly) passed, and it is time to mostly concentrate on getting our coding AIs up to speed.

And this gives me hope.

After many decades of trying to teach humans to code more securely, and either failing or doing it poorly, it is time to hand the task over to automated tools. If we can train our AIs to avoid common security vulnerabilities, one day the number of entries on our running list of new CVEs might start to go down instead of up.

For some unknown set of reasons, we have not been able to give our human programmers secure coding skills, at least not in the right amounts. Times have changed. Technologies have shifted. It is time to focus on training the AIs in secure coding. 

And astute readers will realize that the future of computer security is even more of the same. Where we once mostly focused on human training, we will increasingly focus over time on better training the AI agents that humans use. 

Our AI agents are quickly becoming an extension of ourselves and only by better educating our AIs will we better protect ourselves.
