Can you trust AI to protect AI?

Now that AI is heading into the mainstream of IT architecture, the race is on to ensure that it remains secure when exposed to sources of data that are beyond the enterprise’s control. From the data center to the cloud to the edge, AI will have to contend with a wide variety of vulnerabilities and an increasingly complex array of threats, nearly all of which will be driven by AI itself.

Meanwhile, the stakes will be increasingly high, given that AI is likely to provide the backbone of our healthcare, transportation, finance, and other sectors that are crucial to support our modern way of life. So before organizations start to push AI into these distributed architectures too deeply, it might help to pause for a moment to ensure that it can be adequately protected.

Trust and transparency

In a recent interview with VentureBeat, IBM chief AI officer Seth Dobrin noted that building trust and transparency into the entire AI data chain is crucial if the enterprise hopes to derive maximum value from its investment. The danger to AI is greater than to traditional architectures, which when compromised by viruses and malware can at worst be shut down or robbed of data: a compromised AI system can be taught to retrain itself on the data it receives from an endpoint.

“The endpoint is a REST API collecting data,” Dobrin said. “We need to protect AI from poisoning. We have to make sure AI endpoints are secure and continuously monitored, not just for performance but for bias.”
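
Continuous monitoring of that kind can start very simply. The sketch below is purely illustrative, not IBM's implementation: it flags a batch of endpoint inputs whose distribution has drifted far from a training-time baseline, one crude signal of possible poisoning. The function names and alert threshold are hypothetical.

```python
import statistics

def drift_score(baseline, incoming):
    """Compare incoming feature values against a training-time baseline.

    Returns the absolute difference between the incoming mean and the
    baseline mean, expressed in baseline standard deviations (a crude
    z-score). A large score flags a possible poisoning attempt.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(incoming) - mu) / sigma

# Baseline feature distribution captured when the model was trained.
baseline = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
# A batch of suspiciously shifted inputs arriving at the endpoint.
incoming = [3.0, 3.1, 2.9, 3.2]

ALERT_THRESHOLD = 3.0  # hypothetical policy: > 3 sigma triggers review
if drift_score(baseline, incoming) > ALERT_THRESHOLD:
    print("ALERT: input distribution drift at endpoint")
```

A production system would track many features, use proper statistical tests, and route alerts to both automated retraining guards and human reviewers, but the principle is the same: watch the data, not just the model.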

To do this, Dobrin said IBM is working on establishing adversarial robustness at the system level of platforms like Watson. By implementing AI models that interrogate other AI models to explain their decision-making processes, and then correct those models if they deviate from norms, the enterprise will be able to maintain security postures at the speed of today’s fast-paced digital economy. But this requires a shift in thinking away from hunting and thwarting nefarious code to monitoring and managing AI’s reaction to what appears to be ordinary data.

Already, reports are starting to circulate on the many ingenious ways in which data is being manipulated to fool AI into altering its code in harmful ways. Jim Dempsey, lecturer at the UC Berkeley Law School and a senior advisor to the Stanford Cyber Policy Center, says it is possible to create audio that sounds like speech to ML algorithms but not to humans. Image recognition systems and deep neural networks can be led astray with perturbations that are imperceptible to the human eye, sometimes just by shifting a single pixel. Furthermore, these attacks can be launched even if the perpetrator has no access to the model itself or the data used to train it.
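
The small-perturbation point can be made concrete with a toy model. This is a minimal sketch, not any of the attacks described above: it applies a fast-gradient-sign-style step to a linear classifier. Because the gradient of a linear score with respect to its input is just the weight vector, nudging every feature slightly against the signs of the weights is enough to flip the decision. All weights and inputs here are made up.

```python
def predict(w, b, x):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return int(score > 0)

def fgsm_perturb(w, x, eps):
    """Fast-gradient-sign-style step for a linear model: the gradient
    of the score w.r.t. the input is w itself, so each feature is
    nudged by eps in the direction that lowers the score."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.4, -0.3, 0.25, 0.5]   # toy weights
b = -0.1
x = [0.2, 0.1, 0.3, 0.1]     # a benign input, classified positive

# A 0.06 nudge per feature is enough to flip the prediction.
x_adv = fgsm_perturb(w, x, eps=0.06)
print(predict(w, b, x), "->", predict(w, b, x_adv))  # prints "1 -> 0"
```

Real image-recognition attacks work against deep networks rather than a linear score, but the mechanism is analogous: tiny, structured changes move the input across a decision boundary without looking different to a human.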

Prevent and respond

To counter this, the enterprise must focus on two things. First, says Dell Technologies global CTO John Roese, it must devote more resources to preventing and responding to attacks. Most organizations are adept at detecting threats using AI-driven security information and event management (SIEM) services or a managed security service provider, but prevention and response are still too slow to adequately mitigate a serious breach.

This leads to the second change the enterprise must implement, says Rapid7 CEO Corey Thomas: empower prevention and response with more AI. This is a tough pill to swallow for most organizations because it essentially gives AI leeway to make changes to the data environment. But Thomas says there are ways to do this that allow AI to function on the aspects of security it is most adept at handling while reserving key capabilities to human operators.

In the end, it comes down to trust. AI is the new kid in the office right now, so it shouldn’t have the keys to the vault. But over time, as it proves its worth in entry-level settings, it should earn trust just like any other employee. This means rewarding it when it performs well, teaching it to do better when it fails, and always making sure it has adequate resources and the proper data to ensure that it understands the right thing to do and the right way to do it.

Source: https://venturebeat.com/2022/02/04/can-you-trust-ai-to-protect-ai/
