Back in March, Google started testing the ‘Jump Ahead’ button in the YouTube app for Android by making it available to a small number of people. Pressing this button skips the part of a video that most viewers skip. Well, 9to5Google says the company is now rolling out the Jump Ahead button more widely, making it available to Premium subscribers in the United States as an experimental feature.
With this feature enabled, when you double-tap the display to fast-forward a video, YouTube shows the Jump Ahead button in the bottom-right corner of the screen. Once you tap it, the app will say “Jumping over commonly skipped section” and it will “jump you to where most viewers typically skip ahead to.” According to Google, Jump Ahead “combines watch data and AI to help identify the next best point a viewer typically skips ahead to.”
Google says that the new feature is available only for English-language videos, and is “not available on every video.” To get the new feature, open the YouTube app and go to You » Settings » Try experimental new features, then tap ‘Try it out’ on the Jump Ahead banner. Hopefully, Google will make the feature available in more regions and on more platforms soon.
Former business smartphone company turned security firm BlackBerry has taken the wraps off its new AI-driven threat detection and response tool.
CylanceMDR, the company’s new Managed Detection and Response (MDR) solution, is powered by the Cylance AI platform together with security operations center analysts in order to provide round-the-clock coverage.
The newly introduced MDR, which was formerly known as CylanceGUARD, has been launched with three tiers – Standard, Advanced and On-Demand.
Blackberry launches AI-powered CylanceMDR
Nathan Jenniges, SVP and GM at BlackBerry Cybersecurity, commented: “CylanceMDR offers more than just industry-leading technology; you’re getting a true AI-driven MDR fuelled by proprietary threat intelligence.”
Jenniges continued to discuss the importance of having “the right team” of cybersecurity experts to support a good technological foundation. Jenniges added: “Our philosophy is to combine our technical excellence with our human expertise to provide unparalleled support to organisations of any size.”
The suite of services provided under the CylanceMDR solution includes onboarding, alert triage, investigation, managed threat hunting, digital forensics, comprehensive incident response and critical event management. BlackBerry is also so confident in its product that it’s offering a $1 million guarantee.
In addition to everything in the Standard and Advanced tiers, On-Demand adds tailored features exclusively for customers with their own security teams who are seeking extra support.
In its announcement, the company cited independent research, revealing that Cylance AI threat detection “acted up to 13 times faster, preventing 98% of attacks earlier in the kill chain.” BlackBerry also believes that its solution could prove up to 85% cheaper than building an in-house SOC, with CylanceMDR reporting a 3x ROI in another third-party study.
More information about BlackBerry’s latest take on detection and response can be found on the firm’s website.
Big things could be coming to Apple’s Safari web browser, as a new report claims it’s in line for a serious overhaul that could transform it into the best web browser around.
It’s just the latest in a long list of other Apple apps expected to see new artificial intelligence (AI) features in iOS 18 and macOS 15, both of which are expected to debut this summer at WWDC 2024.
According to AppleInsider, Safari is going to be revamped in several key ways. That includes changes to the user interface, “advanced content blocking features,” plus a new tool called Intelligent Search that uses AI to level up your browsing experience.
The latter feature looks to be a headline grabber. It appears to use Apple’s in-house on-device AI – dubbed Ajax – to find key topics on a web page and condense them down into a handy readable summary. That could be a nifty tool for quickly getting the gist of a page when you’re in a hurry.
Elsewhere, there’s apparently a new feature called Web Eraser that lets you select specific parts of a web page and easily remove them. For instance, you might want to erase a banner ad or an image without breaking the rest of the page. Safari will remember your changes and keep them in place when you next visit the page, although there could be an option to revert them if you want.
A total AI overhaul
There’s one more piece to the puzzle: AppleInsider believes that Apple will consolidate a bunch of settings from different Safari menus and group them together behind a single button in the app’s address bar. This could include the new AI and Web Eraser tools, as well as options for controlling zoom levels, privacy settings, extension shortcuts and more.
These changes are all expected to debut in Safari 18, which will come to iOS 18 and macOS 15 and should make an appearance later in the year. They could be one of many AI-inspired alterations coming to Apple’s apps this June, with WWDC set to be packed full of AI announcements and new software features.
Beyond this summer, AppleInsider says that Apple is working on another AI-enhanced feature called Visual Search that would let you look up consumer products in images. Like the existing Visual Lookup feature, it might be implemented system-wide, which could see it loaded into Safari, the Photos app, and more. However, it’s not expected to launch until 2025.
This year, though, could be a massive one for Apple’s apps and software. On the iPhone, iOS 18 is expected to be one of the largest overhauls in the operating system’s history, and macOS 15 might not be far behind. We’ve rounded up everything you need to know about what’s coming to WWDC 2024, and with the show just a few weeks away, there’s a lot to look forward to.
Copyright is something of a minefield right now when it comes to AI, and there’s a new report claiming that Apple’s generative AI – specifically its ‘Ajax’ large language model (LLM) – may be one of the only ones to have been both legally and ethically trained. It’s claimed that Apple is trying to uphold privacy and legality standards by adopting innovative training methods.
Copyright law in the age of generative AI is difficult to navigate, and it’s becoming increasingly important as AI tools become more commonplace. One of the most glaring issues that comes up, again and again, is that many companies train their large language models (LLMs) using copyrighted works, typically not disclosing whether they license that training material. Sometimes, the outputs of these models include entire sections of copyright-protected works.
The justification some of these companies give for training their LLMs on copyrighted material so widely is that, not dissimilar to humans, these models need a substantial amount of information (called training data for LLMs) to learn and generate coherent and convincing responses – and as far as these companies are concerned, copyrighted materials are fair game.
Many critics of generative AI consider it copyright infringement when tech companies use works in the training and output of LLMs without explicit agreements with copyright holders or their representatives. Still, this criticism hasn’t put tech companies off doing exactly that, and it’s assumed to be the case for most AI tools, fueling growing resentment towards the companies in the generative AI space.
The forest of legal battles and ethical dilemmas in generative AI
A growing number of legal challenges have been mounted against these tech companies. OpenAI and Microsoft were sued by the New York Times for copyright infringement back in December 2023, with the publisher accusing the two companies of training their LLMs on millions of New York Times articles. In September 2023, OpenAI and Microsoft were also sued by a number of prominent authors, including George R. R. Martin, Michael Connelly, and Jonathan Franzen. In July 2023, over 15,000 authors signed an open letter directed at companies such as Microsoft, OpenAI, Meta, Alphabet, and others, calling on leaders of the tech industry to protect writers and to properly credit and compensate authors for their works when using them to train generative AI models.
In April of this year, The Register reported that Amazon was hit with a lawsuit by an ex-employee alleging she faced mistreatment, discrimination, and harassment, and in the process she testified about her experience with issues of copyright infringement. The employee alleges that she was told to deliberately ignore and violate copyright law to improve Amazon’s products and make them more competitive, and that her supervisor told her that “everyone else is doing it” when it came to copyright violations. AppleInsider echoes this claim, stating that this seems to be an accepted industry standard.
As we’ve seen with many other novel technologies, the legislation and ethical frameworks always arrive after an initial delay, but it looks like this is becoming a more problematic aspect of generative AI models that the companies responsible for them will have to respond to.
The Apple approach to ethical AI training (that we know of so far)
It looks like at least one major tech player might be taking a more careful and considered route to avoid as many legal (and moral!) challenges as possible – and somewhat surprisingly, it’s Apple. According to AppleInsider, Apple has been diligently pursuing licenses for major news publications’ works when looking for AI training material. Back in December, Apple sought to license the archives of several major publishers to use as training material for its own LLM, known internally as Ajax.
It’s speculated that Ajax will power basic on-device functionality in future Apple products, while Apple might instead license software like Google’s Gemini for more advanced features, such as those requiring an internet connection. AppleInsider writes that this would allow Apple to avoid certain copyright infringement liabilities, as Apple wouldn’t be responsible for copyright infringement committed by, say, Google Gemini.
A paper published in March detailed how Apple intends to train its in-house LLM: on a carefully chosen selection of images, image-text pairs, and text-based input. In its methods, Apple prioritized better image captioning and multi-step reasoning while also paying attention to preserving privacy. The last of these is made all the more possible by the Ajax LLM being entirely on-device and therefore not requiring an internet connection. There is a trade-off, though: this means Ajax won’t be able to check for copyrighted content and plagiarism itself, as it won’t be able to connect to online databases that store copyrighted material.
There is one other caveat that AppleInsider reveals about this when speaking to sources who are familiar with Apple’s AI testing environments: there don’t currently seem to be many, if any, restrictions on users utilizing copyrighted material themselves as the input for on-device test environments. It’s also worth noting that Apple isn’t technically the only company taking a rights-first approach: Adobe’s Firefly art AI tool is also claimed to be completely copyright-compliant, so hopefully more AI startups will be wise enough to follow Apple and Adobe’s lead.
I personally welcome this approach from Apple, as I think human creativity is one of the most incredible capabilities we have, and it should be rewarded and celebrated – not fed to an AI. We’ll have to wait to know more about what Apple’s regulations regarding copyright and AI training look like, but I agree with AppleInsider’s assessment that this definitely sounds like an improvement – especially since some AIs have been documented regurgitating copyrighted material word-for-word. We can look forward to learning more about Apple’s generative AI efforts very soon, as AI is expected to be a key focus of its developer-focused software conference, WWDC 2024.
Blackmagic Design released its annual NAB 2024 update and announced over a dozen new products, including a new version of its popular DaVinci Resolve editing suite. Other key products include the Micro Color Panel for DaVinci Resolve on iPad, a 17K 65mm camera and the Pyxis 6K cube camera.
DaVinci Resolve 19
DaVinci Resolve has become a popular option for editors who don’t want to pay a monthly subscription for Adobe’s Premiere Pro, and is arguably more powerful in some ways. The latest version 19 takes a page from its rival, though, with a bunch of new AI-powered features for effects, color, editing, audio and more.
Starting with the Edit module, a new feature lets you edit clips using text instead of video. Transcribing clips opens a window showing text detected from multiple speakers, letting you remove sections, search through text and more. Other features include a new trim window, a fixed playhead (reducing zooming and scrolling), and a window that makes changing audio attributes faster.
The Color tool introduces “Color Slice,” a way to adjust an image based on six vectors (red, green, blue, yellow, cyan and magenta) along with a special skin tone slider. For instance, you can adjust any of those specific colors, easily changing the levels of saturation and hues, while seeing and adjusting the underlying key. The dedicated skin slider will no doubt make it attractive for quick skin tone adjustments.
Another key feature in Color is “IntelliTrack,” powered by a neural engine AI, which lets you quickly select points to track to create effects or stabilize an image. Blackmagic also added a new Lightroom-like AI-powered noise reduction system that quickly removes digital noise or film grain from images with no user tweaking required.
“Film Look Creator” is a new module that opens up color grading possibilities with over 60 filmic parameters. It looks fairly easy to use, as you can start with a preset (default 65mm, cinematic, bleach bypass, nostalgic) and then tweak parameters to taste. Another new trick is “Defocus Background,” letting users simulate a shallow depth of focus via masking in a realistic way (unlike smartphones), while Face Refinement tracks faces so editors tweak brightness, colors, detail and more.
The Fusion FX editor adds some new tools that ease 3D object manipulation, and on the audio (Fairlight) side, BMD introduced the “Dialogue Separator FX” to separate dialogue, background or ambience. DaVinci Resolve 19 is now in open beta for everyone to try, with no word yet on a date for the full release. As usual, it costs $295 for the Studio version and the main version is free.
Micro Color Panel
BMD’s DaVinci Resolve for iPad proved to be a popular option for editors on the go, and now the company has introduced a dedicated control surface with the new Micro Color Panel. It’ll offer editors control that goes well beyond the already decent Pencil and multitouch input, while keeping a relatively low profile at 7.18 x 14.33 inches.
A slot at the top front lets you slide in your iPad, and from there you can connect via Bluetooth or USB-C. The company promises a “professional” feel to the controls, which consist of three weighted trackballs, 12 control dials and 27 buttons. With those, you can perform edits, tweak parameters like shadows, hues and highlights, and even do wipes and other effects.
“The old DaVinci Resolve Micro Panel model has been popular with customers wanting a compact grading panel, but we wanted to design an even more portable and affordable solution,” said Blackmagic Design President Grant Petty. It’s now on pre-order for $509.
Pyxis 6K camera
Blackmagic Design is following rivals like RED, Sony and Panasonic with a new box-style camera, the Pyxis 6K full-frame camera. The idea is that you start with the basic brain (controls, display, CFexpress media slot and sensor), then use side plates or mounting screws to attach accessories like handles, microphones and SSDs. It’s also available with Blackmagic’s URSA Cine EVF (electronic viewfinder), which adds $1,695 to the price.
Its specs are very similar to the Blackmagic Cinema Camera 6K I tested late last year. The native resolution is 24 megapixels (6K) on a full 36 x 24mm sensor that allows for up to 13 stops of dynamic range with dual native ISO up to 25,600. It can record 12-bit Blackmagic RAW (BRAW) directly to CFexpress Type B cards or an SSD.
It also supports direct streaming to YouTube, Facebook, Twitch and others via RTMP and SRT, either over Ethernet or using a cellular connection. Since streaming is built into the camera, customers can see stream status and data rate directly in the viewfinder or LCD. The Pyxis 6K arrives in June for $2,995 with three mounts (Canon EF, Leica L and Arri PL).
Blackmagic URSA Cine 12K and 17K
Along with the Pyxis, Blackmagic introduced a pair of cinema cameras, the URSA Cine 12K and 17K models. Yes, those numbers represent the resolutions of the two cameras, with the first offering a full-frame 36 x 24mm sensor with 12K resolution (12,288 x 6,480, 17:9) at up to a fairly incredible 100 fps. The second features a 65mm (50.8 x 23.3mm) sensor with 17,520 x 8,040 resolution, offering up to 16 stops of dynamic range.
Both models will come with features like built-in ND filters, an optical low pass filter and BMD’s latest gen 5.0 color science. The URSA Cine 12K will come with 8TB of internal storage, or you can use your own CFexpress media. Other features include live streaming, a high-resolution EVF, V-battery support, wireless Bluetooth camera control and more. The URSA Cine 12K model is on pre-order for $14,995 or $16,495 with the URSA Cine EVF, with April availability. The URSA Cine 17K is under development, with no pricing or release yet announced.
After a handful of rumors and speculation suggested Meta was working on a pair of AR glasses, it unceremoniously confirmed that Meta AR glasses are on the way – doing so via a short section at the end of a blog post celebrating the 10th anniversary of Reality Labs (the division behind its AR/VR tech).
While not much is known about them, the glasses were described as a product merging Meta’s XR hardware with its developing Meta AI software to “deliver the best of both worlds” in a sleek wearable package.
We’ve collected all the leaks, rumors, and some of our informed speculation in this one place so you can get up to speed on everything you need to know about the teased Meta AR glasses. Let’s get into it.
Meta AR glasses: Price
We’ll keep this section brief as right now it’s hard to predict how much a pair of Meta AR glasses might cost because we know so little about them – and no leakers have given a ballpark estimate either.
Current smart glasses like the Ray-Ban Meta Smart Glasses, or the Xreal Air 2 AR smart glasses will set you back between $300 to $500 / £300 to £500 / AU$450 to AU$800; Meta’s teased specs, however, sound more advanced than what we have currently.
Meta’s glasses could cost as much as Google Glass (Image credit: Future)
As such, the Meta AR glasses might cost nearer $1,500 (around £1,200 / AU$2300) – which is what the Google Glass smart glasses launched at.
A higher price seems more likely given the AR glasses’ novelty, and the fact Meta would need to create small yet powerful hardware to cram into them – a combo that typically leads to higher prices.
We’ll have to wait and see what gets leaked and officially revealed in the future.
Meta AR glasses: Release date
Unlike price, several leaks have pointed to when we might get our hands – or I suppose eyeballs – on Meta’s AR glasses. Unfortunately, we might be waiting until 2027.
That’s according to a leaked Meta internal roadmap shared by The Verge back in March 2023. The document explained that a precursor pair of specs with a display will apparently arrive in 2025, with ‘proper’ AR smart glasses due in 2027.
In February 2024 Business Insider cited unnamed sources who said a pair of true AR glasses could be shown off at this year’s Meta Connect conference. However, that doesn’t mean they’ll launch sooner than 2027. While Connect does highlight soon-to-release Meta tech, the company takes the opportunity to show off stuff coming further down the pipeline too. So, its demo of Project Orion (as those who claim to be in the know call it) could be one of those ‘you’ll get this when it’s ready’ kind of teasers.
Obviously, leaks should be taken with a pinch of salt. Meta could bring the release of its specs forward, or push it back, depending on a multitude of technological factors – we won’t know until Meta officially announces more details. That said, the fact it has teased the specs suggests their release is at least a matter of when, not if.
Meta AR glasses: Specs and features
We haven’t heard anything about the hardware you’ll find in Meta’s AR glasses, but we have a few ideas of what we’ll probably see from them based on Meta’s existing tech and partnerships.
Meta and LG recently confirmed that they’ll be partnering to bring OLED panels to Meta’s headsets, and we expect Meta will bring OLED screens to its AR glasses too. OLED displays appear in other AR smart glasses, so it would make sense if Meta followed suit.
Additionally, we anticipate that Meta’s AR glasses will use a Qualcomm Snapdragon chipset just like Meta’s Ray-Ban smart glasses. Currently, that’s the AR1 Gen 1, though considering Meta’s AR specs aren’t due until 2027 it seems more likely they’d be powered by a next-gen chipset – either an AR2 Gen 1 or an AR1 Gen 2.
The AR glasses could let you bust ghosts wherever you go (Image credit: Meta)
As for features, Meta’s already teased the two standouts: AR and AI abilities.
What this means in actual terms is yet to be seen but imagine virtual activities like being able to set up an AR Beat Saber jam wherever you go, an interactive HUD when you’re navigating from one place to another, or interactive elements that you and other users can see and manipulate together – either for work or play.
AI-wise, Meta is giving us a sneak peek of what’s coming via its current smart glasses. That is, you can speak to Meta AI to ask it a variety of questions and for advice, just as you can with other generative AIs, but in a more conversational way, using your voice.
It also has a unique ability, Look and Ask, which is like a combination of ChatGPT and Google Lens. This allows the specs to snap a picture of what’s in front of you to inform your question – so you can ask it to translate a sign you can see, suggest a recipe using the ingredients in your fridge, or name a plant so you can find out how best to care for it.
The AI features are currently in beta but are set to launch properly soon. And while they seem a little imperfect right now, we’ll likely only see them get better in the coming years – meaning we could see something very impressive by 2027 when the AR specs are expected to arrive.
Meta AR glasses: What we want to see
A slick Ray-Ban-like design
The design of the Ray-Ban Meta Smart Glasses is great (Image credit: Meta)
While Meta’s smart specs aren’t amazing in every way – more on that down below – they are practically perfect in the design department. The classic Ray-Ban shape is sleek, they’re lightweight, super comfy to wear all day, and the charging case is not only practical, it’s gorgeous.
While it’s likely Ray-Ban and Meta will continue their partnership to develop future smart glasses – and by extension the teased AR glasses – there’s no guarantee. But if Meta’s reading this, we really hope that you keep working with Ray-Ban so that your future glasses have the same high-quality look and feel that we’ve come to adore.
If the partnership does end, we’d like Meta to at least take cues from what Ray-Ban has taught it to keep the design game on point.
Swappable lenses
We want to change our lenses Meta! (Image credit: Meta)
While we will rave about the design of Meta’s smart glasses, we’ll admit there’s one flaw that we hope future models (like the AR glasses) improve on: they need easily swappable lenses.
While a handsome pair of shades will be faultless for your summer vacations, they won’t serve you well in dark and dreary winters. If we could easily change our Meta glasses from sunglasses to clear lenses as needed then we’d wear them a lot more frequently – as it stands, they’re left gathering dust most months because it just isn’t the right weather.
As the glasses get smarter, more useful, and pricier (as we expect will be the case with the AR glasses) they need to be a gadget we can wear all year round, not just when the sun’s out.
Speakers you can (quietly) rave to
These open ear headphones are amazing, Meta take notes (Image credit: Future)
Hardware-wise, the main upgrade we want to see in Meta’s AR glasses is better speakers. Currently, the speakers housed in each arm of the Ray-Ban Meta Smart Glasses are pretty darn disappointing – they can leak a fair amount of noise, the bass is practically nonexistent and the overall sonic performance is put to shame by even basic over-ear headphones.
We know it can be a struggle to get the balance right with open-ear designs. But when we’ve been spoiled by open-ear options like the JBL SoundGear Sense – which have an astounding ability to deliver great sound while letting you hear the real world clearly (we often forget we’re wearing them) – we’ve come to expect a lot and are disappointed when gadgets don’t deliver.
The camera could also get some improvements, but we expect the AR glasses won’t be as content creation-focused as Meta’s existing smart glasses – so we’re less concerned about this aspect getting an upgrade compared to their audio capabilities.
MSI, which releases some of the best gaming PCs on the market, is launching several lines of desktops including the Aegis 14th series, Codex 14th series, and the newly released Vision 14th series. Each one features 14th-Gen Intel Core processors and Nvidia RTX 4000-series graphics cards, though the exact configurations differ.
The Vision Elite is the flagship PC that has a single model type, while the Codex and Aegis lines have two model types that differ in color and chassis design. There’s not too much information on the Codex and Aegis lines right now, but as more is revealed we will make sure to update you.
Vision Elite
This is the flagship gaming PC of the Vision Elite line, and it’s outfitted with the highest-end components, while its chassis features a panoramic tempered glass panel that shows off the internals, including the gorgeous RGB lighting.
Spec-wise, you’ll get an Intel Core i9-14900KF processor, Nvidia GeForce RTX 4090 graphics card, 32GB DDR5 RAM, 2TB M.2 NVMe SSD storage, and a 1000W power supply. It also supports Wi-Fi 7 and Bluetooth 5.4, alongside a 2.5G LAN port. This configuration will have an MSRP of $4,299.99 and can be found on the official MSI store.
Aegis series
The Aegis series features configurations with distinct faceplates that include mesh-like designs as well as venting through the aluminum side panel. This ensures great performance from the powerful hardware within by improving airflow throughout the system.
We don’t have any specific configuration or pricing information regarding the Aegis series. It comes in two different colors, white and black.
Codex series
The Codex series, like the Aegis line of gaming desktops, is also refreshed with two new chassis styles with improvements to airflow and design. It’s meant to evoke the look of a PC built from scratch using standardized parts, according to MSI.
We don’t have any specific configuration or pricing information regarding the Codex series, either, but you can expect it to feature current-gen parts including DDR5 and PCIe 5.0 support.
Google has announced that Circle to Search will roll out to its older and mid-range Pixel devices starting from March 27. Google CEO Sundar Pichai announced the AI feature is coming to “more Pixel and Samsung phones, foldables and tablets.”
Several older Samsung devices should also receive Circle to Search beginning this week via an update to One UI 6.1. As reported by Android Central, this will include last year’s Galaxy S23 series. Both Samsung’s Galaxy Z Fold 5 and Galaxy Z Flip 5, as well as the Galaxy Tab S9 series of tablets, will receive Circle to Search too.
It’s unlikely that older phones from the Pixel 5 line or any further back will get the feature as OS support for these models has ended.
To use Circle to Search, all you need to do is press and hold the home button to activate an overlay that lets you use your finger to circle the element you want to search for. The AI-powered search should then identify what’s been circled, and provide relevant search results for that subject.
It’s good to see Google not holding its cutting-edge AI features back from older and more affordable devices, and allowing more Pixel users to get their hands on Circle to Search without forcing them to upgrade.
As we get closer to the full launch of the Samsung Galaxy Ring, we’re slowly learning more about its many talents – and some fresh rumors suggest these could include planning meals to improve your diet.
Samsung calls its Food app an “AI-powered food and recipe platform”, as it can whip up tailored meal plans and even give you step-by-step guides to making specific dishes. The exact integration with the Galaxy Ring isn’t clear, but according to the Korean site, the wearable will help make dietary suggestions based on your calorie consumption and body mass index (BMI).
The ultimate aim is apparently to integrate this system with smart appliances (made by Samsung, of course) like refrigerators and ovens. While they aren’t yet widely available, appliances like Samsung’s Bespoke 4-Door Flex Refrigerator and Bespoke AI Oven include cameras and can suggest or cook recipes based on your dietary needs.
It sounds like the Galaxy Ring, and presumably smartwatches like the incoming Galaxy Watch 7 series, are the missing links in a system that can monitor your health and feed that info into the Samsung Food app, which you can download now for Android and iOS.
The Ring’s role in this process will presumably be more limited than that of smartwatches, whose screens can help you log meals and more. But the rumors hint at how big Samsung’s ambitions are for its long-awaited ring, which will be a strong new challenger in our best smart rings guide when it lands (most likely in July).
Hungry for data
During our early hands-on with the Galaxy Ring, it was clear that Samsung is mostly focusing on its sleep-tracking potential. It goes beyond Samsung’s smartwatches here, offering unique insights including night movement, resting heart rate during sleep, and sleep latency (the time it takes to fall asleep).
But Samsung has also talked up the Galaxy Ring’s broader health potential more recently. It’ll apparently be able to generate a My Vitality Score in the Samsung Health app (by crunching together data like your activity and heart rate) and eventually integrate with appliances like smart fridges.
This means it’s no surprise to hear that the Galaxy Ring could also play nice with the Samsung Food app. That said, the ring’s hardware limitations mean this will likely be a minor feature initially, as its tracking is more focused on sleep and exercise.
We’re actually more excited about the Ring’s potential to control our smart home than to integrate with appliances like smart ovens, but more features are never a bad thing – as long as you’re happy to hand over significant amounts of health data to Samsung.
Apple has begun testing a new AI-powered tool with a small group of advertisers that automatically decides where to place ads on the App Store, Business Insider reports.
Citing two individuals familiar with the matter, Business Insider claims that the AI-powered tool mirrors the functionalities of Google’s Performance Max and Meta’s Advantage+, which allow advertisers to specify their budget, target cost-per-acquisition, desired audiences, and geographical targets. Apple’s algorithm then automatically determines the most effective placement of ads across the App Store’s existing formats.
The initiative is said to be part of Apple’s broader ambition to refine and expand its advertising offerings. Currently, Apple provides a variety of ad formats within the App Store, including search tab ads, search results page ads, “you might also like” suggestions on app product pages, and ads on the “today” tab.
Business Insider speculates that the move toward AI-enhanced ad placements suggests a future where Apple could extend advertising to other apps and services within its ecosystem, such as Apple News, Stocks, and the recently launched Sports app. The website also today reported that a number of recent Apple hiring decisions may indicate plans to introduce an ad-supported Apple TV+ tier.
While the iPhone 16 Pro and iPhone 16 Pro Max are still around six months away from launching, there are already many rumors about the devices. Below, we have recapped the new features and changes expected so far. These are some of the key changes rumored for the iPhone 16 Pro models as of March 2024: Larger displays: The iPhone 16 Pro and iPhone 16 Pro Max will be equipped with larger 6.3-inch…
Apple appears to be internally testing iOS 17.4.1 for the iPhone, based on evidence of the software update in our website’s logs this week. Our logs have revealed the existence of several iOS 17 versions before Apple released them, ranging from iOS 17.0.3 to iOS 17.3.1. iOS 17.4.1 should be a minor update that addresses software bugs and/or security vulnerabilities. It is unclear when…
Earlier this week, Apple announced new 13-inch and 15-inch MacBook Air models, the first Mac updates of the year featuring M3 series chips. But there are other Macs in Apple’s lineup still to be updated to the latest M3 processors. So, where do the Mac mini, Mac Studio, and Mac Pro fit into Apple’s M3 roadmap for the year ahead? Here’s what the latest rumors say. Mac mini: Apple announced …
iOS 17.4 was released last week following over a month of beta testing, and the update includes many new features and changes for the iPhone. iOS 17.4 introduces major changes to the App Store, Safari, and Apple Pay in the EU, in response to the Digital Markets Act. Other new features include Apple Podcasts transcripts, an iMessage security upgrade, new emoji options, and more. Below, we…
Best Buy this weekend has a big sale on Apple MacBooks and iPads, including some of the first notable M2 iPad Pro discounts in months, alongside the best prices we’ve ever seen on MacBook Air, MacBook Pro, iPad Air, and more. Some of these deals require a My Best Buy Plus or My Best Buy Total membership, which start at $49.99/year. In addition to exclusive access to select discounts, you’ll get…