More stories

  • I test wearable tech for a living. These are my favorite products of 2024

    Nina Raemont/ZDNET

    The new year is around the corner, but before we start thinking about all the technology that will emerge in 2025, let’s take a look back and remember this year’s greatest hits. I spend 40 hours a week testing products, writing reviews, and curating best lists (like the one you’re about to read).

    To compile a list of the best products released in 2024, I considered a few things. The first is my pure enjoyment of the product; that is, how badly did I want to keep using it, even after testing ended?

    Also: Everything you need to host a holiday party

    The second is how transformative or innovative the product is in its respective space, like sleep tech, health tech, or audio tech. The third is how much value the product packs for its price. Sure, a few of these products might be expensive, but I’m including them because they are the crème de la crème at their price points.

    This was a big year for wearable health tech, from sleep earbuds that actually put me to sleep to smart rings that track my activity. But other products, like a great pair of earbuds for working out and a stellar portable speaker, also made the list.

  • No Wi-Fi? Dial 1-800-ChatGPT for the AI assistance you need

    Screenshot by Sabrina Ortiz/ZDNET

    With the holiday season upon us, many companies are finding ways to take advantage through deals, promotions, or other campaigns. OpenAI has found a way to participate with its “12 days of OpenAI” event series.

    On Wednesday, OpenAI announced via an X post that starting on Dec. 5, the company would host 12 days of live streams and release “a bunch of new things, big and small,” according to the post.

    Also: I’m a ChatGPT power user – here’s why Canvas is its best productivity feature

    Here’s everything you need to know about the campaign, as well as a roundup of every day’s drops.

    What are the ’12 days of OpenAI’?

    OpenAI CEO Sam Altman shared more details about the event, which kicked off at 10 a.m. PT on Dec. 5 and runs daily for 12 weekdays, each with a live stream featuring a launch or demo. The launches will be either “big ones” or “stocking stuffers,” according to Altman.

    🎄🎅starting tomorrow at 10 am pacific, we are doing 12 days of openai. each weekday, we will have a livestream with a launch or demo, some big ones and some stocking stuffers. we’ve got some great stuff to share, hope you enjoy! merry christmas. — Sam Altman (@sama) December 4, 2024

    What’s dropped so far?

Wednesday, December 18

Have you ever wanted to use ChatGPT without a Wi-Fi connection? Now, all you have to do is place a phone call. Here’s what OpenAI released on the 10th day:

  • By dialing 1-800-ChatGPT, you can now access the chatbot via a toll-free number. OpenAI encourages users to save ChatGPT in their contacts for easy access.
  • Users can call from anywhere in the US; in other countries, users can message ChatGPT on WhatsApp.
  • Users get 15 minutes of free ChatGPT calls per month.
  • In WhatsApp, users enter a prompt via text as they would with any other contact; the exchange is a plain text-message conversation.
  • The phone call feature works on any phone, from a smartphone to a flip phone, even a rotary phone. The presenters said it is meant to make ChatGPT accessible to more users.

Tuesday, December 17

The releases on the ninth day, dubbed “Mini Dev Day,” all focus on developer features and updates. These launches include:

  • The o1 model is finally out of preview in the API, with support for function calling, structured outputs, developer messages, vision capabilities, and lower latency, according to the company.
  • o1 in the API also features a new parameter, “reasoning effort,” which lets developers tell the model how much effort to put into formulating an answer, helping with cost efficiency.
  • OpenAI also introduced WebRTC support for the Realtime API, which makes it easier for developers “to build and scale real-time voice products across platforms.”
  • The Realtime API also got a 60% audio token price drop, support for GPT-4o mini, and more control over responses.
  • The fine-tuning API now supports Preference Fine-Tuning, which allows users to “optimize the model to favor desired behavior by reinforcing preferred responses and reducing the likelihood of unpreferred ones,” according to OpenAI.
  • OpenAI also introduced new Go and Java SDKs in beta.
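As a rough sketch of how the new reasoning-effort parameter might be used, the snippet below assembles a Chat Completions request body. The field name (`reasoning_effort`), the accepted values (`low`/`medium`/`high`), and the bare model name `o1` are assumptions inferred from the announcement, not a verified SDK signature; treat this as an illustration of the idea, not a definitive API call.

```python
import json

def build_o1_request(prompt: str, effort: str = "medium") -> dict:
    """Build a hypothetical Chat Completions payload for o1.

    `effort` caps how much reasoning the model spends on the answer;
    lower values trade answer depth for latency and cost, per the
    announcement. Field names here are assumptions, not confirmed API.
    """
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unsupported reasoning effort: {effort!r}")
    return {
        "model": "o1",
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

# Print the JSON body a client would POST to the endpoint.
payload = build_o1_request("Summarize WebRTC in two sentences.", effort="low")
print(json.dumps(payload, indent=2))
```

The point of exposing effort as a request parameter is cost control: a developer can route simple queries through a low-effort call and reserve high-effort reasoning for the hard ones.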
An “AMA” (ask me anything) session with the presenters will be held on the OpenAI GitHub platform for an hour after the live stream.

Monday, December 16

The drops for the second Monday of the 12 days of OpenAI series all focused on Search in ChatGPT. The AI search engine is available to all users starting today, including free users who are signed in, anywhere they can access ChatGPT. The feature was previously only available to ChatGPT Plus users. The search experience, which lets users browse the web from ChatGPT, got faster and better on mobile and now has an enriched map experience. The upgrades include image-rich visual results.

Search is also integrated into Advanced Voice Mode, meaning you can now search as you talk to ChatGPT. To activate this feature, just start Advanced Voice the way you normally would and ask your query aloud. ChatGPT will then answer verbally, pulling from the web.

OpenAI also teased developers, saying, “Tomorrow is for you,” and calling the upcoming livestream a “mini Dev Day.”

Friday, December 13

One of OpenAI’s most highly requested features has been a way to organize and better keep track of your conversations. On Friday, OpenAI delivered a new feature called “Projects.”

Projects is a new way to organize and customize your chats in ChatGPT, part of a continuing effort to optimize the core ChatGPT experience. When creating a Project, you can collect a title, a customized folder color, relevant project files, instructions for ChatGPT on how it can best help you, and more in one place. Within a Project, you can start a chat and add previous chats from the sidebar, and ChatGPT can answer questions using the Project’s context in a regular chat format. Chats are saved in the Project, making it easier to pick up conversations later and know exactly where to look. Projects is rolling out to Plus, Pro, and Teams users starting today.
OpenAI says it’s coming to free users as soon as possible; Enterprise and Edu users will see it roll out early next year.

Thursday, December 12

When the live stream started, OpenAI addressed the elephant in the room: the company’s live stream had gone down the day before. OpenAI apologized for the inconvenience and said its team is working on a post-mortem to be posted later.

Then it got straight into the news with another highly anticipated announcement: Advanced Voice Mode now has screen-sharing and visual capabilities, meaning it can assist with the context of what it is viewing, whether from your phone camera or what’s on your screen. These capabilities build on what Advanced Voice could already do very well: engaging in casual conversation as a human would. The natural-sounding conversations can be interrupted, span multiple turns, and follow non-linear trains of thought. In the demo, the user gets directions from ChatGPT’s Advanced Voice on how to make a cup of coffee; as the demoer goes through the steps, ChatGPT verbally offers insights and directions.

There’s another bonus for the Christmas season: users can access a new Santa voice. To activate it, all users have to do is click the snowflake icon. Santa is rolling out today everywhere users can access ChatGPT voice mode. The first time you talk to Santa, your usage limit resets, even if you have already reached it, so you can have a conversation with him.

Video and screen sharing are rolling out in the latest mobile apps starting today and throughout next week to all Team users and most Pro and Plus subscribers. Pro and Plus subscribers in Europe will get access “as soon as we can,” and Enterprise and Edu users will get access early next year.

Wednesday, December 11

Apple released iOS 18.2 on Wednesday. The release includes integrations with ChatGPT across Siri, Writing Tools, and Visual Intelligence.
As a result, the live stream focused on walking through the integration.

Siri can now recognize when you ask questions outside its scope that could benefit from being answered by ChatGPT instead. In those instances, it will ask whether you’d like to process the query with ChatGPT. Before any request is sent to ChatGPT, a message notifying the user and asking for permission will always appear, placing as much control as possible in the user’s hands.

Visual Intelligence is a new feature for the iPhone 16 lineup that users can access by tapping the Camera Control button. Once the camera is open, users can point it at something and search the web with Google, or use ChatGPT to learn more about what they are viewing or perform other tasks, such as translating or summarizing text.

Writing Tools now features a new “Compose” tool, which lets users create text from scratch by leveraging ChatGPT. With the feature, users can even generate images using DALL-E.

All of the above features are subject to ChatGPT’s daily usage limits, the same way users reach limits on the free version of the model in ChatGPT. Users can choose whether to enable the ChatGPT integration in Settings.

Read more about it here: iOS 18.2 rolls out to iPhones: Try these 6 new AI features today

Tuesday, December 10

Canvas is coming to all web users, regardless of plan, in GPT-4o, meaning it is no longer available only in beta for ChatGPT Plus users. Canvas is now built into GPT-4o natively, so you can simply call on Canvas instead of having to use the toggle on the model selector. The Canvas interface is the same as what beta users saw in ChatGPT Plus, with a tab on the left-hand side showing the Q&A exchange and a right-hand tab showing your project, displaying all of the edits as they happen, as well as shortcuts.

Canvas can also be used with custom GPTs. It is turned on by default when creating a new one, and there is an option to add Canvas to existing GPTs.
Canvas can also run Python code directly, allowing ChatGPT to execute coding tasks such as fixing bugs.

Read more about it here: I’m a ChatGPT power user – and Canvas is still my favorite productivity feature a month later

Monday, December 9

OpenAI teased the third-day announcement as “something you’ve been waiting for,” followed by the much-anticipated drop of its video model, Sora. Here’s what you need to know:

  • Known as Sora Turbo, the new video model is smarter and cheaper than the version previewed in February.
  • Access is coming to the US later today; users need only a ChatGPT Plus or Pro subscription.
  • Sora can generate video-to-video, text-to-video, and more.
  • ChatGPT Plus users can generate up to 50 videos per month at 480p resolution, or fewer videos at 720p. The Pro plan offers 10x more usage.
  • Sora features an explore page where users can view each other’s creations. Users can click on any video to see how it was created.
  • A live demo showed the model in use. The demoers entered a prompt and picked the aspect ratio, duration, and even presets. I found the live demo video results to be realistic and stunning.
  • OpenAI also unveiled Storyboard, a tool that lets users generate inputs for every frame in a sequence.

Friday, December 6

On the second day of “shipmas,” OpenAI expanded access to its Reinforcement Fine-Tuning Research Program. The program allows developers and machine learning engineers to fine-tune OpenAI models to “excel at specific sets of complex, domain-specific tasks,” according to OpenAI. Reinforcement Fine-Tuning is a customization technique in which developers define a model’s behavior by inputting tasks and grading the output.
The model then uses this feedback as a guide to improve, becoming better at reasoning through similar problems and enhancing overall accuracy.

OpenAI encourages research institutes, universities, and enterprises to apply to the program, particularly those that perform narrow sets of complex tasks, could benefit from the assistance of AI, and work on tasks that have an objectively correct answer. Spots are limited; interested applicants can apply by filling out this form. OpenAI aims to make Reinforcement Fine-Tuning publicly available in early 2025.

Thursday, December 5

OpenAI started with a bang, unveiling two major upgrades to its chatbot: a new tier of ChatGPT subscription, ChatGPT Pro, and the full version of the company’s o1 model.

  • Using Windows 11? Change these 4 settings to keep your PC running smoothly

    Kyle Kucharski/ZDNET

    Windows 11 has been around for a few years now, and if you were an early adopter like I was, there’s a good chance your computer has slowed down significantly since then. There are several reasons a device’s performance may plummet: too many apps taking up resources, unoptimized settings, or even a virus infecting the system. The easiest thing you can do in the short term is restart your computer; a simple reboot clears the RAM and re-establishes connections.

    Also: The ultimate Windows 11 upgrade guide: Everything you need to know

    But to enjoy improved performance over the long term, you’ll need to change how your computer operates. The advice below covers things you can do right now to enhance your Windows 11 experience. And you won’t need to go into the device’s BIOS or download some random app from an unverified source; these changes can be made right from the system menus.

    1. Download the latest updates

  • US may ban world’s most popular routers and modems – what that means for you

    Bloomberg/Getty Images

    The US may soon ban the world’s most popular router over national security fears. According to a report from the Wall Street Journal, Chinese-owned TP-Link is currently under investigation by the US Justice, Commerce, and Defense departments because of its link to several high-profile hacking incidents. The move comes as the US government […]

  • Ham radio is still a disaster lifeline, even in the iPhone era – here’s why

    Bloomberg/Contributor/Getty Images

    When I was a kid living near Grantsville, West Virginia, some of my neighbors were into amateur (ham) radio. I found their analog electronics, their antennas, and their mastery of Morse code fascinating. They were into it because they talked with people thousands of miles away, played with tech, and knew they could […]

  • This $1 phone scanner app can detect Pegasus spyware. Here’s how

    PerlaStudio/Getty Images

    Between unencrypted messaging hacks, data breaches, and AI scam calls, smartphone-centered security threats appear to be everywhere. iVerify found that one type of spyware is trying to make a comeback.

    Also: Why you should power off your phone once a week – according to the NSA

    Last week, the mobile security firm resurfaced findings from its spyware […]

  • iOS 18.3 to bring Home support for robot vacuums, beta code shows

    Maria Diaz/ZDNET

    During the iOS 18 unveiling at WWDC 2024, Apple announced plans to give the Apple Home app support for the core functionality of robot vacuums by the end of 2024, but those plans were delayed. Now that iOS 18.2 is live, new clues indicate that the next big iOS release, 18.3, could finally bring Apple Home support for robot vacuums. Along with delaying robot vacuum support for HomeKit, Apple had already delayed several Apple Intelligence features expected with iOS 18, which has been generally available since September, until the launch of version 18.2 a few days ago.

    Also: The best robot vacuums for 2024: Expert tested and reviewed

    The Home app’s iOS 18.3 beta 1 code features a list of values referencing robot vacuums, indicating that this is the iOS version that will give Apple Home robot vacuum control. This timeline supports what’s currently visible on Apple’s HomeKit page, which says the robot vacuum “feature will be available in early 2025.”