More stories

  • IBM’s new enterprise AI models are more powerful than anything from OpenAI or Google

    IBM is zooming along with new open-source Granite large language model (LLM) releases every few months. Granite 3.1 is the latest generation, building on the success of Granite 3.0. The family offers enhanced capabilities and performance optimized for business applications.

    Also: Gemini Advanced users can now access Google’s most experimental model

    The Granite 3.1 models boast an impressive 128K-token context window, a substantial increase from their predecessors. This expansion allows the models to process and understand much larger amounts of text (equivalent to approximately 85,000 English words), enabling more comprehensive analysis and generation tasks. By comparison, OpenAI’s GPT-3, which ignited the AI revolution, could handle only about 2,000 tokens.

    Outperforming the competition

    Big Blue claims its new Granite 8B Instruct model outperforms rivals such as Google Gemma 2, Meta Llama 3.1, and Qwen 2.5 on Hugging Face’s Open LLM Leaderboard benchmarks.

    Also: Want generative AI LLMs integrated with your business data? You need RAG

    The Granite 3.1 family includes dense models and Mixture of Experts (MoE) variants. IBM states its Granite 2B and 8B models are text-only dense LLMs trained on more than 12 trillion tokens of data. The dense models are designed to support tool-based use cases and retrieval-augmented generation (RAG), streamlining code generation, translation, and bug fixing. The MoE models are trained on more than 10 trillion tokens of data. IBM claims these models are ideal for on-device deployments with low latency. More
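    The 128K-tokens-to-85,000-words figure above implies roughly two-thirds of an English word per token. A minimal sketch of that back-of-the-envelope conversion (the 0.66 words-per-token ratio is a rough heuristic for English prose, not an IBM-published constant; real tokenizers vary):

    ```python
    # Rough estimate of how many English words fit in an LLM context window.
    WORDS_PER_TOKEN = 0.66  # assumed heuristic for English text

    def context_in_words(context_tokens: int) -> int:
        """Convert a context window size in tokens to an estimated word count."""
        return round(context_tokens * WORDS_PER_TOKEN)

    print(context_in_words(128_000))  # Granite 3.1's window: ~84,480 words
    print(context_in_words(2_000))    # early GPT-3's window: ~1,320 words
    ```

    The same heuristic run in reverse explains why an 85,000-word corpus just fits: 85,000 / 0.66 is about 129K tokens.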

  • A hidden Google Maps feature is making people emotional – here’s why

    A viral trend is currently sweeping TikTok, where people are revisiting the past by looking up old versions of their homes through Google Street View. As they do, many are spotting long-gone family members on front porches or mowing the lawn, catching glimpses of their younger selves riding a bike in the driveway, or seeing beloved trees that were cut down years ago.

    Also: Google Maps adds more incident reports to your drive, thanks to Waze users

    While this Google Maps feature isn’t new, it has become an emotional and nostalgic internet sensation, inspiring others to open the app and see what moments were captured in past years by Street View cars quietly passing their homes.

    How to find old Google Maps Street View images of your home

    Google Maps makes it easy to see images of your home from the past, thanks to a “See More Dates” feature in Street View. If you’re ready to join this TikTok trend and look at old photos of your house from as far back as 2007, here’s a step-by-step guide. More

  • Fire TV just got three new accessibility features for people with disabilities

    Fire TV users with impairments that limit their hearing or vision might soon find life easier: Amazon has introduced three new features to make Fire TVs more accessible for people with disabilities.

    Also: Turn your AirPods Pro 2 into hearing aids: Testing and tracking hearing health

    First up is an expansion of Audio Streaming for Hearing Aids (ASHA). Amazon Fire devices already support ASHA, which lets you send TV audio directly to your hearing aids. Now, Amazon is adding a feature called Dual Audio, which allows users to hear the set’s built-in speakers and the audio sent to a hearing aid simultaneously, meaning everyone can listen together. More

  • 6 ways to deal with mental fatigue at work

    The holiday season is an opportunity to unwind and relax with family and friends. Everyone should step away from the daily grind and forget about work sometimes. However, mental fatigue remains a big issue, especially at this time of year. What’s more, research suggests it has a measurable effect on behavior, […] More

  • Gemini Advanced users can now access Google’s most experimental model

    With the year coming to an end, Google has been on an AI launch streak, releasing Veo 2, Whisk, a redesign of Google Labs, Project Mariner, and more. One of the biggest standouts was its Gemini 2.0 family of models, and users can get started with it today. Also: You can interview […] More

  • You can turn your Instagram profile into a digital business card – here’s how

    Instagram has introduced several major changes and updates over the last few months, including new features and privacy protections. A couple of months ago, the social media site announced profile cards: users no longer need to type in handles to find a mutual or new connection; instead, you can set up a profile card that makes you easily findable with a custom QR code.

    Also: Instagram just beat TikTok to new feature creators will love

    The new feature is not unlike how digital payment apps, such as Venmo and Cash App, let users share profiles. Instagram’s version is a two-sided profile card, with one side for the scannable QR code and the other featuring your IG handle, profile picture, short bio, and a song of your choice. Instagram compares the update to a digital business card that you can design to house contact links and showcase your bio “in one sleek digital package.” It’s a great way to network and “make new connections,” Instagram added, and, by adding a favorite song, to attract new friends with similar interests. More

  • I test wearable tech for a living. These are my favorite products of 2024

    The new year is around the corner, but before we start thinking about all the technology that will emerge in 2025, let’s take a look back and remember this year’s greatest hits. I spend 40 hours a week testing products, writing reviews, and curating best-of lists (like the one you’re about to read). To compile a list of the best products released in 2024, I considered a few things. The first is my pure enjoyment of the product; that is, how badly did I want to continue using it, even after testing ended?

    Also: Everything you need to host a holiday party

    The second is how transformative or innovative the product is in its respective space, like sleep tech, health tech, or audio tech. The third is how value-packed the product is for its price. Sure, a few of these products might be expensive, but I’m including them because each is the crème de la crème at its price point. This was a big year for wearable health tech, from sleep earbuds that actually put me to sleep to smart rings that track my activity. But other products, like a great pair of earbuds for working out and a stellar portable speaker, also made the list.

    Oura Ring 4 More

  • No Wi-Fi? Dial 1-800-ChatGPT for the AI assistance you need

    With the holiday season upon us, many companies are finding ways to take advantage through deals, promotions, or other campaigns. OpenAI has found a way to participate with its “12 days of OpenAI” event series. On Wednesday, OpenAI announced via an X post that starting on Dec. 5, the company would host 12 days of live streams and release “a bunch of new things, big and small,” according to the post.

    Also: I’m a ChatGPT power user – here’s why Canvas is its best productivity feature

    Here’s everything you need to know about the campaign, as well as a round-up of each day’s drops.

    What are the ’12 days of OpenAI’?

    OpenAI CEO Sam Altman shared more details about the event, which kicked off at 10 a.m. PT on Dec. 5 and will occur daily for 12 weekdays, with each live stream featuring a launch or demo. The launches will be either “big ones” or “stocking stuffers,” according to Altman.

    🎄🎅starting tomorrow at 10 am pacific, we are doing 12 days of openai. each weekday, we will have a livestream with a launch or demo, some big ones and some stocking stuffers. we’ve got some great stuff to share, hope you enjoy! merry christmas.— Sam Altman (@sama) December 4, 2024

    What’s dropped so far?

    Wednesday, December 18

    Have you ever wanted to use ChatGPT without a Wi-Fi connection? Now, all you have to do is place a phone call. Here’s what OpenAI released on the 10th day: By dialing 1-800-ChatGPT, you can now access the chatbot via a toll-free number. OpenAI encourages users to save ChatGPT in their contacts for easy access. Users can call from anywhere in the US; in other countries, users can message ChatGPT on WhatsApp. Users get 15 minutes of free ChatGPT calls per month. On WhatsApp, users enter a prompt via text as they would with any other contact; that experience is purely text messaging. The phone call feature works on any phone, from a smartphone to a flip phone, even a rotary phone. The presenters said it is meant to make ChatGPT accessible to more users.

    Tuesday, December 17

    The releases on the ninth day, dubbed “Mini Dev Day,” all focus on developer features and updates. These launches include: The o1 model is finally out of preview in the API, with support for function calling, structured outputs, developer messages, vision capabilities, and lower latency, according to the company. o1 in the API also features a new parameter, “reasoning effort,” which lets developers tell the model how much effort to put into formulating an answer, helping with cost efficiency. OpenAI also introduced WebRTC support for the Realtime API, which makes it easier for developers “to build and scale real-time voice products across platforms.” The Realtime API also got a 60% audio token price drop, support for GPT-4o mini, and more control over responses. The fine-tuning API now supports Preference Fine-Tuning, which allows users to “optimize the model to favor desired behavior by reinforcing preferred responses and reducing the likelihood of unpreferred ones,” according to OpenAI. OpenAI also introduced new Go and Java SDKs in beta.
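    The “reasoning effort” knob described above maps to a field in the API request. A minimal sketch of what such a request body might look like (the `reasoning_effort` field name follows OpenAI’s published parameter; the payload is assembled locally here as a plain dictionary, with no API call made):

    ```python
    import json

    def build_o1_request(prompt: str, effort: str = "medium") -> dict:
        """Assemble a chat-completion request body for the o1 model.

        effort: "low", "medium", or "high" -- tells the model how much
        reasoning to spend before answering, trading cost for quality.
        """
        if effort not in ("low", "medium", "high"):
            raise ValueError(f"unknown reasoning effort: {effort}")
        return {
            "model": "o1",
            "reasoning_effort": effort,
            "messages": [{"role": "user", "content": prompt}],
        }

    body = build_o1_request("Summarize the 12 days of OpenAI.", effort="low")
    print(json.dumps(body, indent=2))
    ```

    Dialing effort down to "low" is the cost-efficiency lever the presenters described: cheaper, faster answers on tasks that do not need deep reasoning.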
An “AMA” (ask me anything) session with the presenters will be held for an hour after the live stream on the OpenAI GitHub platform.

Monday, December 16

The drops for the second Monday in the 12 days of OpenAI series all focused on Search in ChatGPT. The AI search engine is available to all users starting today, including free users who are signed in, anywhere they can access ChatGPT; the feature was previously only available to ChatGPT Plus users. The search experience, which allows users to browse the web from ChatGPT, got faster and better on mobile and now has an enriched map experience. The upgrades include image-rich visual results. Search is also integrated into Advanced Voice Mode, meaning you can now search as you talk to ChatGPT. To activate this feature, just start Advanced Voice the way you regularly would and ask your query aloud; it will answer verbally by pulling from the web. OpenAI also teased developers, saying, “Tomorrow is for you,” and calling the upcoming livestream a “mini Dev Day.”

Friday, December 13

One of OpenAI’s most highly requested features has been a way to better keep track of your conversations. On Friday, OpenAI delivered a new feature called “Projects.” Projects is a new way to organize and customize your chats in ChatGPT, part of the company’s ongoing effort to refine the core ChatGPT experience. When creating a Project, you can include a title, a customized folder color, relevant project files, instructions for ChatGPT on how it can best help you with the project, and more, all in one place. Within a Project, you can start a chat and add previous chats from the sidebar, and ChatGPT can answer questions using your project context in a regular chat format. Chats are saved in the Project, making it easier to pick up conversations later and know exactly where to look. Projects is rolling out to Plus, Pro, and Team users starting today.
OpenAI says it’s coming to free users as soon as possible; Enterprise and Edu users will see it rolled out early next year.

Thursday, December 12

When the live stream started, OpenAI addressed the elephant in the room: the company’s live stream had gone down the day before. OpenAI apologized for the inconvenience and said its team is working on a post-mortem to be posted later. Then it got straight into the news, another highly anticipated announcement: Advanced Voice Mode now has screen-sharing and visual capabilities, meaning it can assist with the context of what it is viewing, whether from your phone camera or what’s on your screen. These capabilities build on what Advanced Voice could already do very well: engaging in casual conversation as a human would. The natural conversations can be interrupted, span multiple turns, and follow non-linear trains of thought. In the demo, the user gets directions from ChatGPT’s Advanced Voice on how to make a cup of coffee; as the demoer goes through the steps, ChatGPT verbally offers insights and directions. There’s another bonus for the Christmas season: users can access a new Santa voice. To activate it, all users have to do is click on the snowflake icon. Santa is rolling out throughout today everywhere users can access ChatGPT voice mode. The first time you talk to Santa, your usage limits reset, even if you have already reached the limit, so you can have a conversation with him. Video and screen sharing are rolling out in the latest mobile apps starting today and throughout next week to all Team users and most Pro and Plus subscribers. Pro and Plus subscribers in Europe will get access “as soon as we can,” and Enterprise and Edu users will get access early next year.

Wednesday, December 11

Apple released iOS 18.2 on Wednesday. The release includes integrations with ChatGPT across Siri, Writing Tools, and Visual Intelligence.
As a result, the live stream focused on walking through the integration. Siri can now recognize when you ask questions outside its scope that could benefit from being answered by ChatGPT instead. In those instances, it will ask if you’d like to process the query using ChatGPT. Before any request is sent to ChatGPT, a message notifying the user and asking for permission will always appear, placing control in the user’s hands as much as possible. Visual Intelligence refers to a new feature for the iPhone 16 lineup that users can access by tapping the Camera Control button. Once the camera is open, users can point it at something and search the web with Google, or use ChatGPT to learn more about what they are viewing or to perform other tasks such as translating or summarizing text. Writing Tools now features a new “Compose” tool, which lets users create text from scratch by leveraging ChatGPT. With the feature, users can even generate images using DALL-E. All of the above features are subject to ChatGPT’s daily usage limits, the same way users would reach limits on the free version of ChatGPT. Users can choose whether to enable the ChatGPT integration in Settings.

Read more about it here: iOS 18.2 rolls out to iPhones: Try these 6 new AI features today

Tuesday, December 10

Canvas is coming to all web users, regardless of plan, in GPT-4o, meaning it is no longer available only in beta for ChatGPT Plus users. Canvas is now built into GPT-4o natively, so you can simply call on Canvas instead of going to the toggle on the model selector. The Canvas interface is the same as what beta users saw in ChatGPT Plus: a panel on the left-hand side shows the Q&A exchange, and a right-hand panel shows your project, displaying edits as they are made, along with shortcuts. Canvas can also be used with custom GPTs; it is turned on by default when creating a new one, and there is an option to add Canvas to existing GPTs.
Canvas can also run Python code directly, allowing ChatGPT to execute coding tasks such as fixing bugs.

Read more about it here: I’m a ChatGPT power user – and Canvas is still my favorite productivity feature a month later

Monday, December 9

OpenAI teased the third-day announcement as “something you’ve been waiting for,” followed by the much-anticipated drop of its video model, Sora. Here’s what you need to know: Known as Sora Turbo, the video model is smarter and cheaper than the February model that was previewed. Access is coming in the US later today for ChatGPT Plus and Pro subscribers. Sora can generate video-to-video, text-to-video, and more. ChatGPT Plus users can generate up to 50 videos per month at 480p resolution, or fewer videos at 720p; the Pro plan offers 10x more usage. Sora features an explore page where users can view each other’s creations; click on any video to see how it was created. A live demo showed the model in use: the presenters entered a prompt and picked an aspect ratio, duration, and even presets. I found the live demo video results to be realistic and stunning. OpenAI also unveiled Storyboard, a tool that lets users generate inputs for every frame in a sequence.

Friday, December 6

On the second day of “shipmas,” OpenAI expanded access to its Reinforcement Fine-Tuning Research Program. The program allows developers and machine learning engineers to fine-tune OpenAI models to “excel at specific sets of complex, domain-specific tasks,” according to OpenAI. Reinforcement Fine-Tuning refers to a customization technique in which developers define a model’s behavior by inputting tasks and grading the output.
The model then uses this feedback as a guide to improve, becoming better at reasoning through similar problems and enhancing its overall accuracy. OpenAI encourages research institutes, universities, and enterprises to apply to the program, particularly those that perform narrow sets of complex tasks, could benefit from the assistance of AI, and work on tasks that have an objectively correct answer. Spots are limited; interested applicants can apply by filling out this form. OpenAI aims to make Reinforcement Fine-Tuning publicly available in early 2025.

Thursday, December 5

OpenAI started with a bang, unveiling two major upgrades to its chatbot: a new tier of ChatGPT subscription, ChatGPT Pro, and the full version of the company’s o1 model. More
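The grading loop described for Reinforcement Fine-Tuning (define tasks, grade outputs, feed the scores back) can be sketched with a toy grader. Everything here is illustrative: the exact-match grader and the batch-scoring helper are hypothetical stand-ins, not OpenAI’s actual grading API.

```python
def grade(output: str, reference: str) -> float:
    """Toy grader: 1.0 for a case- and whitespace-insensitive exact match,
    0.0 otherwise. Real graders can award partial credit."""
    return 1.0 if output.strip().lower() == reference.strip().lower() else 0.0

def score_batch(samples: list[tuple[str, str]]) -> float:
    """Average grade over (model_output, reference_answer) pairs.
    This aggregate score is the signal a fine-tuning loop would optimize."""
    if not samples:
        return 0.0
    return sum(grade(out, ref) for out, ref in samples) / len(samples)

batch = [
    ("42", "42"),         # correct -> 1.0
    ("forty-two", "42"),  # wrong   -> 0.0
]
print(score_batch(batch))  # 0.5
```

This is why OpenAI recommends the program for tasks with objectively correct answers: an automatic grader like this only works when correctness can be checked mechanically.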