While we didn't hear much about Siri or Apple Intelligence during the 2025 Apple Event that launched new iPhones, AirPods, and Apple Watches, two huge AI features were announced that have largely slipped under the radar. That's mostly because they were presented as great new features rather than dressed up in overhyped AI marketing language.

Nevertheless, I got to demo both features at Apple Park earlier this month, and my first impression was that both are nearly fully baked and ready to start improving daily life. Those are my favorite kinds of features to talk about.

Also: 5 new AI-powered features that flew under the radar at Apple's launch event

Here's what they are:

1. A selfie camera that automatically frames the best shot

Apple has implemented a new kind of front-facing camera. It uses a square sensor and increases the resolution from 12 megapixels on previous models to 24MP. Because the sensor is square, it actually outputs 18MP images, presumably because cropping a standard 4:3 frame out of the square keeps three-quarters of the pixels (24 x 3/4 = 18MP) in either orientation. The real trick is that it can output in either vertical or horizontal format.

In fact, you no longer have to turn the phone to switch between vertical and horizontal modes. You can keep the phone in one hand, in whichever position you prefer, tap the rotate button, and the shot will flip from vertical to horizontal and vice versa. And because it's an ultrawide sensor with double the megapixels, it takes equally crisp photos in either orientation.

Now, here's where the AI comes in. You can set the front-facing camera to Auto Zoom and Auto Rotate. It will then automatically find the faces in your shot, widen or tighten the framing, and decide whether a vertical or horizontal orientation will work best to fit everyone in the picture. Apple calls this its "Center Stage" feature, the same term it uses for keeping you centered on screen during video calls on the iPad and Mac.

Also: Every iPhone 17 model compared

The feature technically uses machine learning, but it's still fair to call it an AI feature. The Center Stage branding is a little confusing, though, because the selfie camera on the iPhone 17 uses it for photos, while the feature on the iPad and Mac is for video calls. The iPhone version is also aimed at photos with multiple people, while the iPad/Mac version is primarily used with just you in the frame.

Still, after trying it on various demo iPhones at Apple Park after Tuesday's keynote, it's easy to call this the smartest and best selfie camera I've seen. I have no doubt that other phone makers will start copying it in 2026. And the best part is that Apple didn't limit the feature to this year's high-end iPhone 17 Pro and Pro Max; it's also on the standard iPhone 17 and the iPhone Air. That's great news for consumers, who took 500 billion selfies on iPhones last year, according to Apple.

2. Live Translation in AirPods Pro 3

I've said this many times before, but language translation is one of the best and most accurate uses for large language models. In fact, it's one of the best things you can do with generative AI. It has enabled companies like Apple, which lags far behind Google and others in language translation, to take big strides forward in building translation features into key products. While Google Translate supports 249 languages, Apple's Translate app supports 20. Google Translate has been around since 2006, while Apple's Translate app launched in 2020.
Also: AirPods Pro 3 vs. AirPods Pro 2: Here's who should upgrade

Nevertheless, while Google has been doing demos for years showing real-time translation on its phones and earbuds, none of those features has ever worked very well in the real world. Google again made a big deal about real-time translation during its Pixel 10 launch event in August, but the feature hiccuped even during the onstage demo.