AWS CISO: In AI gold rush, folks forget application security

RSAC As corporations rush full tilt to capitalize on the AI craze and bring machine-learning-based apps to market, they aren’t paying enough attention to application security, says AWS Chief Information Security Officer Chris Betz.

“Companies forget about the security of the application in their rush to use generative AI,” Betz told The Register during an interview at the RSA Conference in San Francisco last week.

There need to be safeguards and other protections around these advanced neural networks, from training to inference, to avoid them being exploited or used in unexpected and unwanted ways, we’re told: “A model doesn’t stand on its own. A model exists in the context of an application.”

Betz described securing the AI stack as a cake with three layers. The bottom layer is the training environment, where the large language models (LLMs) upon which generative AI applications are built are trained. That training process needs to be robust to ensure you’re not, among other things, putting garbage in and getting garbage out.

“How do you make sure you’re getting the right data, that that data is protected, that you’re training the model correctly, and that you have the model working the way that you want,” Betz said.

The middle layer provides access to the tools needed to run and scale generative AI applications. 

“You spend all this time training and fine tuning the model. Where do you run the model? How do you protect the model? These models are really interesting because they get handed some of the most sensitive data that a company has,” Betz said.

So it’s imperative that the right data makes it into and out of the LLM, and that the data is protected throughout this process, he explained.

Securing the top layer — the applications using LLMs or those built on top of AI platforms — sometimes gets lost in the push to market.

“The first two layers are new and novel for customers,” Betz added. “Everybody’s learning as they go. But there’s a rush to get these applications out.” That rush leaves the top layer vulnerable.

During the annual cybersecurity event, AWS and IBM released a study based on a survey of 200 C-level executives conducted in September 2023. It found 81 percent of respondents said generative AI requires a new security governance model. Similarly, 82 percent said secure and trustworthy AI is essential to the success of their businesses.

However, only 24 percent of today’s gen-AI projects have a security component, according to that survey, suggesting the C-suite isn’t prioritizing security.

“That disparity, I think, is part of that race to the market,” Betz said. “And as I’ve talked with customers, and as I’ve seen public data, the places where we’re seeing the security gaps first are actually at the application layer. It’s the traditional technology where we’ve got people racing to get solutions out, and they are making mistakes.” ®
