I put DeepSeek AI’s coding skills to the test – here’s where it fell apart
DeepSeek exploded into the world's consciousness this past weekend. It stands out for three powerful reasons:

- It's an AI chatbot from China, rather than the US.
- It's open source.
- It uses vastly less infrastructure than the big AI tools we've been looking at.

Given the US government's concerns over TikTok and possible Chinese government involvement in that code, a new AI emerging from China is bound to generate attention. ZDNET's Radhika Rajkumar did a deep dive into those issues in her article Why China's DeepSeek could burst our AI bubble.

Also: The best AI for coding in 2025 (and what not to use)

In this article, we're avoiding politics. Instead, I'm putting DeepSeek R1 through the same set of AI coding tests I've thrown at 10 other large language models. The short answer is this: impressive, but not perfect. Let's dig in.

Test 1: Writing a WordPress plugin

This test was actually my first test of ChatGPT's programming prowess, way back in the day. My wife needed a WordPress plugin that would help her run an involvement device for her online group.

Also: How to use ChatGPT to write code: What it does well and what it doesn't

Her needs were fairly simple. The plugin needed to take in a list of names, one name per line. It then had to sort the names and, if there were duplicate names, separate them so they weren't listed side by side.

I didn't really have time to code it for her, so on a whim I decided to give the AI the challenge. To my huge surprise, it worked.

Since then, it's been my first test for AIs when evaluating their programming skills. It requires the AI to know how to set up code for the WordPress framework and to follow prompts clearly enough to create both the user interface and program logic.
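The core logic the plugin has to get right — sort the names, then keep duplicates from landing next to each other — can be sketched independently of WordPress. Below is a minimal Python sketch (the real plugin would be PHP; the function name `sort_and_separate` and the greedy heap approach are my illustration, not the article's or the AI's actual code):

```python
from collections import Counter
import heapq

def sort_and_separate(names):
    """Return the names in near-alphabetical order, rearranged so that
    no two identical names sit side by side (when that is possible).

    Greedy strategy: always emit the name with the most copies left,
    holding back whatever name was just emitted for one round.
    """
    counts = Counter(names)
    # Min-heap of (-count, name): most-frequent first, alphabetical tiebreak.
    heap = [(-c, n) for n, c in counts.items()]
    heapq.heapify(heap)

    result = []
    held_back = None  # entry withheld because it was just emitted
    while heap:
        count, name = heapq.heappop(heap)
        result.append(name)
        if held_back:
            heapq.heappush(heap, held_back)
        # Re-queue this name next round only if copies remain.
        held_back = (count + 1, name) if count + 1 < 0 else None
    return result

names = ["Bob", "Alice", "Bob", "Carol"]
print(sort_and_separate(names))  # e.g. ['Bob', 'Alice', 'Bob', 'Carol']
```

A plain sort alone would leave the two "Bob" entries adjacent; the hold-back step is what enforces the separation requirement in the prompt.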