“When you are 50 or 70 times faster than the competition, you can do things they can’t do at all,” says Cerebras CEO Andrew Feldman. Tiernan Ray/ZDNET

AI computer pioneer Cerebras Systems has been “crushed” with demand to run DeepSeek’s R1 large language model, says company co-founder and CEO Andrew Feldman.

“We are thinking about how to meet the demand; it’s big,” Feldman told me in an interview via Zoom last week.

DeepSeek R1 is heralded by some as a watershed moment for artificial intelligence because the cost of pre-training the model can be as little as one-tenth that of dominant models such as OpenAI’s o1, while producing results as good or better.

The impact of DeepSeek on the economics of AI is significant, Feldman indicated. But the more profound result, he argued, is that it will spur even larger AI systems.

“As we bring down the cost of compute, the market gets bigger and bigger and bigger,” said Feldman.

Numerous AI cloud services rushed to offer DeepSeek inference after the model became a sensation, including Cerebras but also much larger firms such as Amazon’s AWS. (You can try Cerebras’s inference service here.)

Cerebras’s edge is speed. According to Feldman, running inference on the company’s CS-3 computers produces output 57 times faster than other DeepSeek service providers. Cerebras also highlights its speed relative to other large language models: in a demo of a reasoning problem, DeepSeek running on Cerebras finishes in a second and a half, while OpenAI’s o1 mini takes a full 22 seconds to complete the same task.