On Thursday, the Financial Times reported that OpenAI has dramatically shortened its safety testing timelines.

Eight people who are either staff at the company or third-party testers told FT that they had “just days” to complete evaluations of new models, a process they said would normally take “several months.”

Competitive edge

Evaluations are what can surface model risks and other harms, such as whether a user could jailbreak a model into providing instructions for creating a bioweapon. For comparison, sources told FT that OpenAI gave them six months to review GPT-4 before it was released, and that they found concerning capabilities only after two months.

Sources added that OpenAI’s tests are not as thorough as they used to be and lack the time and resources needed to properly catch and mitigate risks. “We had more thorough safety testing when [the technology] was less important,” one person currently testing o3, the full version of o3-mini, told FT. They also described the shift as “reckless” and “a recipe for disaster.”

The sources attributed the rush to OpenAI’s desire to maintain a competitive edge, especially as open-weight models from competitors, such as Chinese AI startup DeepSeek, gain ground. OpenAI is rumored to be releasing o3 next week, a schedule that FT’s sources say compressed testing to under a week.