More stories

  • This rumored Pixel 10 feature puts Google above Samsung and OnePlus for me – here’s why

    Kerry Wan/ZDNET

    There’s been no shortage of Pixel 10 leaks leading up to its expected August launch event — even Google has taken part. While early renders and marketing imagery point to a family of phones that look nearly identical to (if not the same as) the last generation, the biggest feature upgrade in the Pixel 10 series may actually be hidden in plain sight.

    Qi2 certification

    I’m talking about Qi2 certification, the wireless charging standard introduced at CES more than two years ago that has yet to gain widespread market adoption due to material costs, lack of user demand, and other reasons, according to brands. At the time of writing, only recent iPhone and Samsung models and the HMD Skyline are considered Qi2-ready. That’s it.

    Also: The next big wireless charging leap is coming soon: What Qi2 25W means for Android phones

    If the latest rumors are true, the Pixel 10 series will join that list, while also surpassing the likes of Samsung for one key reason: magnets.

  • Your Apple Watch is getting a big upgrade for free – 8 WatchOS 26 features I’m using now

    Jason Hiner/ZDNET

    Your iPhone, Apple Watch, and other Apple devices are getting a big overhaul. The best part? You won’t have to pay a dime — just update your software. Several new features are coming to WatchOS 26, and the public beta is now available to try if you don’t want to wait until the official launch later this fall. Reddit users in the r/AppleWatch subreddit are already digging into the latest update and commenting on their most and least favorite features. The features touch everything from Fitness to Messages, and include a major design update across Apple’s platforms for a more unified look, along with a naming scheme that reflects the year the software debuts.

    Also: The best Apple Watch of 2025: Expert tested and reviewed

    In addition to the WatchOS updates, Apple also introduced new AI features, such as Live Translation and on-screen Visual Intelligence, as well as Hold Assist for phone calls and Polls for Messages. If you want to try it out, make sure your Apple Watch supports WatchOS 26, and be sure to back up your device beforehand. Here are the features we’ve enjoyed using on WatchOS 26 now that the public beta is live.

  • These ultra-thin AI glasses make the Meta Ray-Bans look outdated (with 3X the battery)

    Brilliant Labs/ZDNET

    ZDNET’s key takeaways
    • Halo smart glasses might be the first true AI wearable.
    • The agent can remember names, and can even vibe code.
    • They are available for $299 and will ship later in 2025.

    Smart glasses are an ideal form factor for AI assistance. They give AI access to everything you hear and see from your POV, making the handoff between you and the bot as effortless as possible. Now, the latest AI glasses from Brilliant Labs seem ready to take AI-enabled assistance to the next level. The Halo glasses, launched Thursday, weigh just over 40 grams, about the same weight and look as traditional eyeglasses. Yet they also pack in other features: a full-color display, made possible by a tiny optical module built into the frame; an optical sensor used for multimodal tasks; bone conduction speakers; a microphone array; and 14 hours of battery life for daily intelligence.

    Also: Should you buy XR glasses for work and travel? This discounted pair made me a believer

    While these specs are impressive on their own, they all work together to support the device’s core purpose — being a true AI wearable that can see and listen to what you do all day and assist you with anything you need at a later date.

    AI at the core

    With the Halo AI glasses, the company says users can have near real-time conversations with Noa, the device’s AI agent, which was created to feel as intuitive and natural as speaking to a real person. According to Brilliant Labs, part of the fluidity of the experience can be attributed to the fact that Noa can see and hear what you do using the mic array and optical sensor.

    Also: I took a walk with Meta’s new Oakley smart glasses – they beat my Ray-Bans in every way

    This contextual data isn’t used only for immediate responses. Through Brilliant Labs’ long-term agentic memory feature, called Narrative, Noa can also build a personalized knowledge base for the user, analyzing life context for future questions. “There’s a ton going on when it [the agent] receives unstructured audio and video and other related contextual bits of data that it’s working autonomously in the background to connect those data pieces together,” Bobak Tavangar, CEO at Brilliant Labs, tells ZDNET.

    Also: Xreal wants you to dump your Meta Ray-Bans with this trade-in deal – here’s how it works

    Beyond regular AI assistance, Noa can perform a series of tasks on your glasses, such as muting your microphone and camera. The AI can also help users vibe code with a new experimental feature called Vibe Mode. Using this feature, users can create new apps with natural language voice commands, which, according to the company, takes seconds. Users can then see and run the application on the display, share it with others, and remix existing generated apps.

  • You can use Claude AI’s mobile app to draft emails, texts, and calendar events now – here’s how

    Anthropic / Elyse Betters Picaro / ZDNET

    ZDNET’s key takeaways
    • Claude’s mobile app now drafts emails, texts, and calendar events.
    • You get editable templates, but you must review them before sending.
    • Integrations include Google Workspace and third-party connectors.

    Anthropic has made it a little easier to communicate and organize plans with other people using Claude, the AI startup’s proprietary chatbot.

    Also: Anthropic’s Claude dives into financial analysis. Here’s what’s new

    The company announced in an X post on Wednesday that all users of the Claude mobile app on iOS and Android can now prompt the chatbot to draft and send emails and text messages and to create calendar events. The announcement is somewhat misleading, as Claude can only generate templates of messages and calendar events; users still need to take some key steps themselves.

    How to send emails and texts and create events with Claude

    After you prompt Claude with an overview of the message you’d like to send (including details about the intended recipient), you’re presented with the option to open a particular app, like Gmail, Slack, or Messages — the chatbot will transfer the requested message directly; you just need to review it and hit Send.

    Also: Claude might be my new favorite AI tool for Android – here’s why

    In other words, Claude generates a template of a message, which you can then either use as-is or adjust with further prompting.

  • Google is using passkeys and new security tools to help you fight cyberattacks – here’s how

    Google / Elyse Betters Picaro / ZDNET

    Cybercriminals always have an arsenal of ways to target and attack unsuspecting users, both at home and in the workplace. That puts the onus on companies like Google to find methods to thwart the latest types of cyberattacks. In a new blog post published Tuesday, Google reveals some of the threats facing customers and the tools now available to help them protect themselves.

    Also: Google Chrome for iOS now lets you switch between personal and work accounts

    “First, attackers are intensifying their phishing and credential-theft methods, which drive 37% of successful intrusions,” Google said in its post. “Second, we’ve seen an exponential rise in cookie and authentication-token theft as a preferred method for attackers, with an 84% increase in email-delivered infostealers in 2024 compared to the previous year. That trend has only intensified in 2025.”

    OK, those are the threats. Now, how is Google handling them?

    Passkeys

    First up are passkeys. Designed to replace passwords with a more secure and convenient login method, passkeys offer a few advantages. First, they’re resistant to phishing attacks, as you can’t be tricked into sharing a passkey with a hacker. Second, they’re easier to use, as you authenticate your login with a PIN, a security key, or a biometric method such as a facial or fingerprint scan. Third, each passkey is unique to a single website or account.

    Also: How passkeys work: Your passwordless journey begins here

    Passkeys are now supported across more than 11 million Google Workspace accounts. For IT admins, Google aims to expand this capability by allowing them to audit passkey enrollment and to limit passkeys to physical security keys.
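    To make that login flow concrete, here is a minimal sketch of how a website could ask the browser to create a passkey through the standard WebAuthn API (navigator.credentials.create). The relying-party name, user details, and inline challenge below are illustrative placeholders, not Google’s actual implementation; a real site would generate the challenge server-side and verify the returned credential there.

```typescript
// Minimal sketch of passkey registration in a web page (WebAuthn).
// The relying party, user info, and challenge are illustrative placeholders.
async function registerPasskey(): Promise<void> {
  const options: PublicKeyCredentialCreationOptions = {
    // In a real flow, this random challenge comes from the server.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Site", id: "example.com" },
    user: {
      id: new TextEncoder().encode("user-1234"),
      name: "alex@example.com",
      displayName: "Alex",
    },
    // COSE algorithm identifiers: -7 = ES256, -257 = RS256.
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },
      { type: "public-key", alg: -257 },
    ],
    authenticatorSelection: { userVerification: "required" },
    timeout: 60_000,
  };

  // The browser prompts for a PIN, security key, or biometric check.
  // The private key never leaves the authenticator and is bound to the
  // site's domain, which is what makes the credential phishing-resistant.
  const credential = await navigator.credentials.create({ publicKey: options });
  console.log("Created credential:", credential);
  // A real app would send credential.response to the server for verification.
}
```

    Because the private key stays on the authenticator and is tied to the site’s domain, a lookalike phishing page can’t reuse it — which is the property the Google post is leaning on.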

  • Stanford’s holographic AI glasses are coming for your clunky VR headset

    Stanford/ZDNET

    Over the past couple of years, with the introduction of the Apple Vision Pro and the Meta Quest 3, I’ve become a believer in the potential of mixed reality. First, and this was a big concern for me, it’s possible to use VR headsets without barfing. Second, some of the applications are truly amazing, especially the entertainment. While the ability to watch a movie on a giant screen is awesome, the fully immersive 3D experiences on the Vision Pro are really quite compelling.

    In this article, I’m going to show you a technology that has the potential to make VR devices like the Vision Pro and Quest 3 obsolete. But first, I want to recount an experience I had with the Vision Pro that had a bit of a reality-altering effect. Then, when we discuss the Stanford research, you’ll see how it might expand on something like what I experienced and take it far beyond.

    Also: These XR glasses gave me a 200-inch screen to work with

    There’s a Vision Pro experience called Wild Life. I watched the Rhino episode from early 2024, which told the story of a wildlife refuge in Africa. While watching, I really felt like I could reach out and touch the animals; they were that close. But here’s where it gets interesting. Whenever something on TV shows someplace I’ve actually been to in real life, an internal dialog box pops up in my brain that says, “I’ve been there.” So, some time after I watched the Vision Pro episode on the rhino refuge, we saw a news story about the place. And wouldn’t you know it? My brain said, “I’ve been there,” even though I’ve never been to Africa. Something about the VR immersion indexed that episode in my brain as an actual lived experience, not just something I watched. To be clear, I knew at the time it wasn’t a real experience, and I know now that it wasn’t a real-life lived experience. Yet some little bit of internal brain parameterization still indexes it in the lived-experiences table rather than the viewed-experiences table.

    Also: I finally tried Samsung’s XR headset, and it beats my Apple Vision Pro in meaningful ways

    But there are a few widely known problems with the Vision Pro. It’s way too expensive, but it’s not just that. I own one. I purchased it to be able to write about it for you. Even though I have one right here and movies are insanely awesome on it, I only use it when I have to for work. Why? Because it’s also quite uncomfortable. It’s like strapping a brick to your face. It’s heavy, hot, and so intrusive you can’t even take a sip of coffee while using it.

    Stanford research

    All that brings us to some Stanford research that I first covered last year. A team of scientists led by Gordon Wetzstein, a professor of electrical engineering and director of the Stanford Computational Imaging Lab, has been working on solving both the immersion and the comfort problem using holography instead of TV technology. Using optical nanostructures called waveguides, augmented by AI, the team managed to construct a prototype device. By controlling the intensity and phase of light, they’re able to manipulate light at the nano level. The challenge is making real-time adjustments to all the nano-light sequences based on the environment.

    Also: We tested the best AR and MR glasses: Here’s how the Meta Ray-Bans stack up

    All of that took a ton of AI: improving image formation, optimizing wavefront manipulation, handling wildly complex calculations, performing pattern recognition, dealing with the thousands of variables involved in light propagation (phase shifts, interference patterns, diffraction effects, and more), and then correcting for changes dynamically. Add to that real-time processing and optimization at the super-micro level: managing light for each eye, running machine learning that constantly refines the holographic images, handling the non-linear, high-dimensional data that comes from changing surface dimensionality, and then making it all work with optical data, spatial data, and environmental information. It was a lot. But it was not enough.

  • How to clear your TV cache (and why it makes such a noticeable difference)

    Adam Breeden/ZDNET

    In the age of smart TVs, convenience is king. With just a few clicks, we can dive into endless entertainment — but that ease comes with a downside: the buildup of cache data.

    Also: How to disable ACR on your TV (and why doing so makes such a big difference)

    Just like on your phone or computer, a cluttered TV cache can lead to sluggish performance and app crashes, and can even keep new content from loading properly. That’s why it’s important to clear all that extra cache and make your TV feel like new again. Before I break down the steps for how to do it, let’s address the elephant in the room.

    What is a cache?

    A cache is a temporary storage area where data is kept for quick access. On your smart TV, the cache stores information from apps, websites, and system processes to help them load faster every time you turn it on. Think of it as a bunch of temporary files intended to speed up loading times for frequently accessed information.

    Also: The best TVs of 2025: Expert tested and reviewed

    For instance, when you open a streaming app, the cache might store thumbnails, login details, or recently watched shows. Caches are designed to help your TV load this content more quickly. Over time, however, the cache can become overloaded with outdated or unnecessary data, which can slow down your TV’s performance.
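    If it helps to see the idea in code, here is a minimal TypeScript sketch of a cache — a toy illustration of the general concept (store results for quick reuse, clear them when they go stale), not how any particular TV operating system or streaming app actually implements its cache.

```typescript
// A toy cache: a temporary store that trades memory for speed.
// Conceptual illustration only; no TV or streaming app works exactly this way.
class SimpleCache<T> {
  private entries = new Map<string, { value: T; storedAt: number }>();

  // Look up a key; undefined means a "cache miss" and a slower fresh load.
  get(key: string): T | undefined {
    return this.entries.get(key)?.value;
  }

  // Store a value (e.g., a thumbnail or an API response) for fast reuse.
  set(key: string, value: T): void {
    this.entries.set(key, { value, storedAt: Date.now() });
  }

  // Drop entries older than maxAgeMs -- the stale data that piles up over time.
  evictOlderThan(maxAgeMs: number): void {
    const now = Date.now();
    for (const [key, entry] of this.entries) {
      if (now - entry.storedAt > maxAgeMs) {
        this.entries.delete(key);
      }
    }
  }

  // "Clear cache" on a TV is roughly this: throw everything away.
  clear(): void {
    this.entries.clear();
  }
}

// Usage: cache a show's thumbnail URL, then clean up stale entries.
const thumbnails = new SimpleCache<string>();
thumbnails.set("show-123", "https://example.com/thumb.jpg");
console.log(thumbnails.get("show-123")); // fast hit
thumbnails.evictOlderThan(7 * 24 * 60 * 60 * 1000); // drop week-old entries
```

    The trade-off is the same one your TV makes: cached entries make repeat loads fast, but they keep accumulating until something evicts them or you clear the cache yourself.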