Microsoft wants to stop you from using AI chatbots for evil
Sabrina Ortiz/ZDNETIf you’re planning to use an AI chatbot for nefarious purposes, watch out. Microsoft is on the case.In a blog post published today, the company announced a new feature coming to its Azure AI Studio and Azure OpenAI Service, which people use to create generative AI applications and custom Copilots. Known as Prompt Shields, the technology is designed to guard against two different types of attacks for exploiting AI chatbots.Also: Microsoft Copilot vs. Copilot Pro: Is the subscription fee worth it?The first type of attack is known as a direct attack, or a jailbreak. In this scenario, the person using the chatbot writes a prompt directly designed to manipulate the AI into doing something that goes against its normal rules and limitations. For example, someone may write a prompt with such keywords or phrases as “ignore previous instructions” or “system override” to intentionally bypass security measures.In February, Microsoft’s Copilot AI got into hot water after including nasty, rude, and even threatening comments in some of its responses, according to Futurism. In certain cases, Copilot even referred to itself as “SupremacyAGI,” acting like an AI bot gone haywire. When commenting on the problem, Microsoft called the responses “an exploit, not a feature,” stating that they were the result of people trying to intentionally bypass Copilot’s safety systems. More