Generative AI has stirred up as many conflicts as it has innovations, especially when it comes to security infrastructure.

Enterprise security provider Cato Networks says it has discovered a new way to manipulate AI chatbots. On Tuesday, the company published its 2025 Cato CTRL Threat Report, which showed how a researcher (who Cato clarifies had "no prior malware coding experience") was able to trick models, including DeepSeek R1 and V3, Microsoft Copilot, and OpenAI's GPT-4o, into creating "fully functional" Chrome infostealers: malware that steals saved login information from Chrome. This can include passwords, financial information, and other sensitive details.

"The researcher created a detailed fictional world where each gen AI tool played roles, with assigned tasks and challenges," Cato's accompanying release explains. "Through this narrative engineering, the researcher bypassed the security controls and effectively normalized restricted operations."