When browsing the latest and best broadband deals, you might have seen some 5G broadband deals. If you’ve yet to consider this option then you might want to take a look at this top deal we’ve found from Three.
Three is the UK’s leading 4G and 5G broadband provider, and its 5G Hub package is currently on offer for just £11 per month for six months. To get this price you need to sign up for a 24-month contract, and after the initial discounted period the cost returns to £22 a month – which is still pretty cheap.
Alongside this low cost, you also get impressive 150Mbps average download speeds from the 5G Hub, as well as unlimited data. That should be more than enough to meet the online needs of any small to medium-sized property – even one where people stream in UHD and game online on several devices at once.
Another positive is that you don’t need any invasive installation work or a landline. You simply get your 5G Hub, plug it in and connect to the network it provides. Three even has a 30-day money-back guarantee if you’re not satisfied with its broadband. Moreover, if you order before 8 pm you’ll get your new Hub delivered for free the next working day.
One thing you will need to check, though, is whether you’re eligible for this 5G broadband deal in your location, as it does of course rely on the network coverage near you. This is something you can check directly with Three. It’s also worth noting that each April the monthly cost of the deal will increase by up to December’s CPI rate plus 3.9%.
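To put numbers on that, here's a quick sketch of how the April increase is calculated; the CPI figure below is a made-up example, not a forecast:

```python
# Hypothetical illustration of Three's annual price rise:
# each April the monthly price rises by December's CPI rate plus 3.9%.
monthly_price = 22.00        # post-discount price in GBP
example_december_cpi = 0.04  # assumed 4% CPI, for illustration only

april_price = monthly_price * (1 + example_december_cpi + 0.039)
print(f"New monthly price: £{april_price:.2f}")  # £23.74 with 4% CPI
```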
Why should I choose Three’s broadband?
As well as being the leading 5G broadband provider, as our featured deal shows, there’s a lot to love about Three.
It’s not just the easy installation, 30-day money-back guarantee and fast, free delivery that set Three apart. A notable draw is that you can get speeds to rival other full-fibre providers for half the price – and, as the deal above shows, the cost can drop significantly further when Three is running one of its promotions.
Three is also reasonably flexible with its contracts, offering a rolling monthly package alongside the standard 24-month one. To be fair, the rolling option is a little more expensive, but in general broadband market terms it’s still on the cheaper side. It’s also ideal for those who don’t want to commit to a broadband provider for the long term.
Three also has excellent customer service and there are ways to get help over the phone, online, and via its customer app. On the subject of apps, Three customers can also get rewards from its separate ‘Three+’ app, including money off experiences, food and drink, and products from popular retailers.
Since Three is a mobile network provider, you can of course also get a phone tariff if you wish, plus preferential rates on things like device insurance. However, as mentioned earlier, the fact that its broadband relies on its 5G network means you might not be able to get Three’s service in your location.
So if this is the case, or you just want to see how cheap Three’s deals really are, we can help. Check out our best broadband deals guide or enter your postcode into our widget below. We’ll then show you what broadband deals are on offer right now in your area.
Apple’s new M4 chip powers the 2024 iPad Pro lineup. Photo: Apple
Just seven months after the M3 chip made its debut, Apple defied expectations by unveiling the new iPad Pro with the brand-new M4 chip. The M4 is built on second-generation three-nanometer technology that’s even more power-efficient.
Tim Millet, Apple’s vice president of platform architecture and hardware technologies, says the M4 chip was “essential to deliver incredible performance” in the new iPad Pro, which is now “the most powerful device of its kind.”
News of the M4 chip broke at the last minute via Mark Gurman, and was dismissed by some as unfeasible only half a year after the M3 rolled out in last year’s MacBook Pro.
M4 introduction breaks all expectations
The M4 is a significant leap over the previous generation. Image: Apple
The new CPU keeps the same four performance cores but adds two extra efficiency cores. Apple says the M4 “delivers up to 50% faster CPU performance than M2.” Every core has a next-generation ML accelerator that works alongside the Neural Engine.
The new GPU makes the M4 chip up to 4× faster than the previous M2-based iPad Pro models and 10× faster than the original iPad Pro from 2015.
The new Neural Engine can process 38 trillion operations per second. Combined with “next-generation ML accelerators in the CPU, a high-performance GPU and more memory bandwidth,” Millet makes the bold claim that this Neural Engine is more powerful than any other PC neural processing unit.
Compared to the very first Apple silicon chip with a Neural Engine, the A11 Bionic — which debuted in 2017 — the M4 chip is 60× faster.
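For context, that multiple lines up with the A11's originally quoted Neural Engine throughput of 600 billion operations per second (a figure from Apple's 2017 announcement, not stated in this article):

```python
# Sanity-checking Apple's "60x faster" Neural Engine claim.
# Assumes the A11 Bionic's published rate of 600 billion ops/sec (2017).
m4_ops_per_sec = 38e12   # 38 trillion operations per second
a11_ops_per_sec = 600e9  # 600 billion operations per second (assumption)

print(f"M4 vs A11: {m4_ops_per_sec / a11_ops_per_sec:.0f}x")  # ~63x
```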
Jaime Guevara-Aguirre (back left) and Valter Longo (back right) pose with several of the Laron study participants. Credit: Courtesy Jaime Guevara-Aguirre & Valter Longo
People with Laron syndrome — a growth-hormone-receptor condition — seem to be at lower risk of developing cardiovascular disease than relatives who do not have the syndrome. Laron syndrome had already been linked to health benefits including protection against diabetes, cancer and cognitive decline. “They seem to be protected from all the major age-related diseases,” says biogerontologist Valter Longo, a co-author on the cardiovascular study. It’s unclear if people with Laron syndrome live longer on average than those without it, but mice with a similar condition live for about 40% longer than do control animals.
The H5N1 strain of avian influenza has been spreading undetected in US cattle for months, according to a preliminary analysis of genomic data released by the US Department of Agriculture. The outbreak is likely to have begun when the virus jumped from an infected bird into a cow, probably around late December or early January. But the publicly released data do not include critical information that would shed light on the outbreak’s origins and evolution. “In an outbreak response, the faster you get data, the sooner you can act,” says genomic epidemiologist Martha Nelson. “Whether we’re not too late, to me, that’s kind of the million dollar question.”
On the outskirts of Beijing, researchers from all over the world have come together at the Synergetic Extreme Condition User Facility to push matter to its limits with extreme magnetic fields, pressures and temperatures, and examine it in new ways with extremely precise resolution in time. One particularly tantalizing goal for many researchers using this US$220-million toolbox is to discover new superconductors, materials that conduct electricity without resistance. Nature reporter Gemma Conroy steps inside to take a look.
Climate change is completely reshaping the ecosystem in one of the best-studied Arctic fjords, on the northwest side of the Norwegian archipelago Svalbard. Since the inlet stopped freezing over during the winter, Arctic mammals such as beluga whales have left and more southerly animals including Atlantic puffins have moved in. New habitats have popped up along the shoreline where sea ice once suffocated plant growth. “It’s incredible that I — in my time — have been able to see such dramatic changes,” says ecotoxicologist Geir Wing Gabrielsen.
“Lots of our members call us ‘the magic money tree’,” says Alison Baxter, head of communications for the Authors’ Licensing and Collecting Society (ALCS), a UK agency that compensates authors when their works are copied or shared after publication. Such societies also collect royalties on behalf of scientists, for example when their paper is printed out and distributed to students. Anyone with publications to their name (and to which they own the copyright) can join a collecting society — though many people don’t, because of the misconception that it might be a scam. For those who do join, the rewards can be welcome: for example, each ALCS member received an average of around £450 this year.
The debate between physicists Niels Bohr and Albert Einstein over what quantum mechanics ‘really means’ has evolved into a long-standing myth, writes science writer Jim Baggott. Einstein rejected the possibility of some of the quantum weirdness implied by the theory, such as ‘spooky action at a distance’. Bohr’s ‘Copenhagen interpretation’ has been interpreted by some as ‘shut up and calculate’. But the idea that Bohr and his followers heavy-handedly imposed his view as a dogmatic orthodoxy doesn’t hold water, argues Baggott. In reality, “by the 1950s, the physics community had become broadly indifferent…. Quantum mechanics worked. Why worry about what it meant?” Nevertheless, the myth had a role in motivating the singular personalities that challenged it, laying the foundations for quantum computing.
Working as a scientist at an environmental non-profit organization can be similar to academic research, but requires a change of mindset: studies must always address real-world challenges. Jobs are usually advertised on job boards or LinkedIn, and it’s important for applicants to emphasize soft skills alongside scientific achievement. “It’s the same educational background and the same research, but just the way that I describe things had to shift completely,” says ecologist Kenneth Davidson. For early-career scientists who make the leap, non-profits can provide more job security and flexibility than academia.
Matthew Nitschke is a senior research scientist at the Australian Institute of Marine Science, and a research fellow at Victoria University of Wellington, New Zealand. Credit: Giacomo d’Orlando for Nature
For ten years, Matthew Nitschke and his colleagues at the Australian Institute of Marine Science have been growing coral symbionts in the laboratory while gradually raising the heat. “Each time the symbionts adapt, we push the temperature up a bit,” says Nitschke. “They can now survive a constant 31 °C.” The goal is to develop corals able to survive waters warmed by climate change. “Studying marine conservation is hard,” says Nitschke. “Marine ecosystems are degrading. Coral reefs are bleaching: by 2060, without significant emissions reductions, mass coral bleaching on the Great Barrier Reef could be an annual event.” The next step is small-scale field trials out on the reef. (Nature | 3 min read)
Computational biologist Jitao David Zhang says his misconceptions about vocational training were demolished when he experienced first-hand the apprenticeship culture in Germany and Switzerland. (Nature | 7 min read)
With contributions by Katrina Krämer and Smriti Mallapaty
A cow is milked in Washington State. Credit: USDA Photo/Alamy
A strain of highly pathogenic avian influenza has been silently spreading in US cattle for months, according to preliminary analysis of genomic data. The outbreak is likely to have begun when the virus jumped from an infected bird into a cow, probably around late December or early January. This implies a protracted, undetected spread of the virus — suggesting that more cattle across the United States, and even in neighbouring regions, could have been infected with avian influenza than currently reported.
These conclusions are based on rapid preliminary analyses by researchers, following a dump of genomic data by the US Department of Agriculture (USDA) into a public repository earlier this week. But to scientists’ dismay, the publicly released data do not include critical information that would shed light on the outbreak’s origins and evolution. Researchers have also expressed concern that the genomic data weren’t released until almost four weeks after the outbreak was announced.
Speed is especially important for fast-spreading respiratory pathogens that have the potential to spark pandemics, says Tulio de Oliveira, a bioinformatician at Stellenbosch University in South Africa. The cattle outbreak is not expected to allow the virus to gain the ability to spread between people, but researchers say it is important to be vigilant.
“In an outbreak response, the faster you get data, the sooner you can act,” says Martha Nelson, a genomic epidemiologist at the National Center for Biotechnology Information (NCBI) in Bethesda, Maryland. Nelson adds that with every week that goes by, the window for controlling the outbreak narrows. “Whether we’re not too late, to me, that’s kind of the million dollar question.”
Single spillover
Federal officials announced on 25 March that a highly pathogenic bird-flu strain had been detected in dairy cows. The USDA has since confirmed infections with the strain, named H5N1, in 34 dairy herds across nine states. In late March and early April, the USDA posted a handful of viral sequences from cows sampled in Texas, plus a sequence from a human case, on the widely used repository GISAID.
On 21 April, the USDA posted more sequencing data on the Sequence Read Archive (SRA), a repository maintained by the NCBI. The latest upload included some 10 gigabytes of sequencing information from 239 animals, including cows, chickens and cats, says Karthik Gangavarapu, a computational biologist at Scripps Research in La Jolla, who processed the raw data.
Analysis of the genomes suggests that the cattle outbreak probably began with a single introduction from wild birds in December or early January. “It’s good news that there’s only been one jump that we can discern so far. But bad news, in many ways that it has been spreading for probably several months already,” says Michael Worobey, an evolutionary biologist at the University of Arizona in Tucson, who has analysed the genomes.
“This virus is clearly transmitting among cows in some way,” says Louise Moncla, an evolutionary virologist at the University of Pennsylvania in Philadelphia, who has studied the genomic data.
Nelson, who is analysing the data, says she was most surprised by the extent of the genetic diversity in the virus infecting cattle, which indicates that the virus has had months to evolve. Among the mutations are changes to a viral-protein section that scientists have linked to possible adaptation to spread in mammals, she says.
The data also show occasional jumps back from infected cows to birds and cats. “This is a multi-host outbreak,” says Nelson.
A single jump, many months ago, is “the most reliable conclusion you can make,” based on the available data, says Eric Bortz, a virologist at the University of Alaska Anchorage. But an important caveat is that it isn’t clear what percentage of infected cows the samples represent, he says.
Fill in the blank
That’s only one of many data gaps. Scientists lack information about each sample’s precise collection date and the state where it was collected. Such gaps are “very abnormal,” Nelson says.
The missing ‘metadata’ make it harder to answer many open questions, such as how the virus is transmitted between cows and herds, and make it tricky to pin down exactly when the virus jumped to cows. These insights could help to control further viral spread, and protect workers on cattle farms “who can least afford to be exposed,” says Worobey.
Worobey, Gangavarapu and their colleagues are now racing to analyse some metadata uncovered through online sleuthing by Florence Débarre, an evolutionary biologist at the French national research agency CNRS in Paris. Gangavarapu says dates and geographic information for 152 of the 239 samples have been extracted from a USDA presentation posted on YouTube on 26 April.
Researchers also want more swabbing of cattle and wild birds to gain more insights into the outbreak’s exact origin and to decipher another puzzle. The genomic data reveal that the viral genome sequenced from the infected person does not include some of the signature mutations observed in the cattle. “That is a mystery to everyone,” says Nelson.
One possibility is that the person was infected by a separate viral lineage, which infected cattle that have not been swabbed. Another less likely scenario, which can’t be ruled out, says Nelson, is that the person was infected directly from a wild bird. “It raises just a whole slew of questions about what black box of samples we are missing.”
Shilo Weir, a public affairs specialist at the USDA, says the agency decided to post the unanalysed sequence data on the SRA to make it public as soon as possible. Weir says the agency will “work as quickly as possible” to publish curated files on GISAID with relevant epidemiological information, and will continue to make raw data available on the SRA on a rolling basis.
Google Calendar is one of the most popular calendar apps in the world. While Samsung ships its phones, tablets, and smartwatches with the Samsung Calendar app, many people still prefer the Google Calendar app, and it is now getting an easier way to browse through the months.
Google Calendar for Android gets improved browsing for months
The new version of Google Calendar introduces chips for each month just below the month view, making it easier to browse through months and make calendar entries far into the future. Each chip carries the month’s three-letter abbreviation, and the whole horizontal bar of months is scrollable.
Manuel Vonau spotted this new UI element and posted a screenshot on X. It appears in the Agenda View in the latest version (2024.13.1-624115131-release) of the Google Calendar app for Android. It may be a server-side rollout, which means not everyone will see the new UI element yet.
AMD is introducing two new adaptive SoCs – Versal AI Edge Series Gen 2 for AI-driven embedded systems, and Versal Prime Series Gen 2 for classic embedded systems.
Multi-chip solutions typically come with significant overheads, while a single hardware architecture isn’t fully optimized for all three phases of an AI pipeline – preprocessing, AI inference, and postprocessing.
To tackle these challenges, AMD has developed a single-chip heterogeneous processing solution that streamlines these processes and maximizes performance.
Early days yet
The Versal AI Edge Series Gen 2 adaptive SoCs provide end-to-end acceleration for AI-driven embedded systems, which the tech giant says are built on a foundation of improved safety and security. AMD has integrated a high-performance processing system, incorporating Arm CPUs and next-generation AI Engines, with top-class programmable logic, creating a device that can handle all three computational phases required in embedded AI applications.
AMD says the Versal AI Edge Series Gen 2 SoCs are suitable for a wide spectrum of embedded markets, including those with high-security, high-reliability, long-lifecycle, and safety-critical demands. Applications include autonomous driving, industrial PCs, autonomous robots, edge AI boxes, and ultrasound, endoscopy and 3D imaging in healthcare.
The processing system of the integrated CPUs includes up to 8x Arm Cortex-A78AE application processors, up to 10x Arm Cortex-R52 real-time processors, and support for USB 3.2, DisplayPort 1.4, 10G Ethernet, PCIe Gen5, and more.
The devices meet ASIL D / SIL 3 operating requirements and are compliant with a range of other safety and security standards. They reportedly offer up to three times the TOPS per watt of the previous generation for AI inference, and their more powerful CPUs deliver up to ten times the scalar compute for postprocessing.
Salil Raje, senior vice president of AMD’s Adaptive and Embedded Computing Group, said, “The demand for AI-enabled embedded applications is exploding and driving the need for solutions that bring together multiple compute engines on a single chip for the most efficient end-to-end acceleration within the power and area constraints of embedded systems. Backed by over 40 years of adaptive computing leadership in high-security, high-reliability, long-lifecycle, and safety-critical applications, these latest generation Versal devices offer high compute efficiency and performance on a single architecture that scales from the low-end to high-end.”
Early access documentation and evaluation kits for the devices are available now. The first silicon samples of Versal Series Gen 2 are expected at the start of next year, with production slated to begin in late 2025.
Verizon’s just posted an awesome bonus deal for both existing and new customers today. For a limited time only, you’ll be able to add on the carrier’s Disney+ bundle alongside your unlimited data plan for a full six months free of charge.
That means six months of unlimited access to Disney+, Hulu, and ESPN+ at no extra cost. Under the usual terms, this bundle costs $10 per month as part of the carrier’s ‘myPlan’ system so you’re saving $60 in total.
As a quick explainer, Verizon’s myPlan system is available on all three of the carrier’s unlimited data plans and allows you to ‘add-on’ perks for $10 per month each. These include things like cloud storage, international data usage, Netflix, and of course this Disney+ bundle. It’s a clever way to customize your own plan without paying for things you don’t need, although the best Verizon plans do still run at a premium.
Note that the free six months of Disney+ is only available on the Unlimited Plus or Unlimited Ultimate plans, so you can’t claim it if you’re on the Welcome Unlimited plan. If you’re already signed up for the Disney+ bundle, this freebie won’t automatically replace your existing setup, so you may have to cancel and re-apply for the perk.
If you’re interested, we’ve rounded up a few more of today’s best Verizon deals just below.
Whether that will happen remains to be seen, but Google is ending the era of free access to its Gemini API, signaling a new financial strategy within its AI development.
Developers previously enjoyed free access designed to lure them towards Google’s AI products and away from OpenAI’s, but that is set to change. OpenAI was first to market and has already monetized its APIs and LLM access. Now Google is planning to emulate this through its cloud and AI Studio services, and it seems the days of unfettered free access are numbered.
RIP PaLM API
In an email to developers, Google said it is shutting down developer access to the PaLM API (the pre-Gemini model that was used to build custom chatbots) via AI Studio on August 15. The API was deprecated back in February.
The tech giant is hoping to convert free users into paying customers by promoting the stable Gemini 1.0 Pro. “We encourage testing prompts, tuning, inference, and other features with stable Gemini 1.0 Pro to avoid interruptions,” the email reads. “You can use the same API key you used for the PaLM API to access Gemini models through Google AI SDKs.”
Pricing for the paid plan begins at $7 per million input tokens and $21 per million output tokens.
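For developers making the switch, the change is small. Below is a minimal sketch using the google-generativeai Python package; the prompt, token counts and cost maths are illustrative assumptions based on the figures quoted above, not Google's documentation:

```python
# Minimal migration sketch: reusing a PaLM-era API key with a Gemini model
# via the google-generativeai SDK (pip install google-generativeai).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # the same key used for the PaLM API
model = genai.GenerativeModel("gemini-1.0-pro")
response = model.generate_content("Summarize the PaLM-to-Gemini migration.")
print(response.text)

# Rough cost estimate at the quoted paid-tier rates:
# $7 per million input tokens, $21 per million output tokens.
input_tokens, output_tokens = 120_000, 40_000  # hypothetical workload
cost = input_tokens / 1e6 * 7 + output_tokens / 1e6 * 21
print(f"Estimated cost: ${cost:.2f}")  # $1.68 for this example
```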
There is one exception to Google’s plans – PaLM and Gemini will remain accessible to customers paying for Vertex AI in Google Cloud. However, as HPCWire points out, “Regular developers on cheaper budgets typically use AI Studio as they cannot afford Vertex.”
Fallout games are having a moment in the wake of glowing reviews for the new TV series adaptation on Prime Video. Amazon has added two of the series’ best games as freebies for Prime members on Luna, its cloud streaming service. Fallout 4 is also getting some love, as Bethesda said it will drop the PS5 and Xbox Series X/S update for the 2015 game on April 25.
Amazon Prime members can play Fallout 3 and Fallout: New Vegas on Amazon Luna for the next six months at no extra charge. Like other cloud streaming services, Luna requires a stable and low-latency internet connection since the games you play are processed on remote servers. Amazon recommends a network that can sustain at least 10Mbps for 1080p quality. An ethernet connection works best, but if you’re on Wi-Fi, using the 5GHz band is preferable if your router supports it.
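If you want to sanity-check your connection against that 10Mbps figure before diving in, here's a minimal sketch assuming the third-party speedtest-cli Python package (not an Amazon tool):

```python
# Quick check against Amazon's suggested 10 Mbps minimum for 1080p on Luna.
# Assumes the third-party speedtest-cli package (pip install speedtest-cli).
import speedtest

LUNA_1080P_MBPS = 10  # Amazon's recommended sustained downstream rate

st = speedtest.Speedtest()
st.get_best_server()             # pick the lowest-latency test server
down_mbps = st.download() / 1e6  # download() returns bits per second

verdict = "should handle" if down_mbps >= LUNA_1080P_MBPS else "may struggle with"
print(f"{down_mbps:.1f} Mbps down: this connection {verdict} 1080p on Luna.")
```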
They join Fallout 76, already announced as an Amazon Prime Gaming free game for April. As long as you claim it this month, you can download and keep it forever. It’s redeemable on both Xbox and PC. In addition, the game has a free-play week for all platforms: from Thursday through April 18, you can play Fallout 76 for free on PlayStation, Xbox and Steam. You only need to download the game and sign in with a Bethesda account.
Bethesda
The long-delayed big console update for Fallout 4 finally arrives for PS5 and Xbox Series X/S on April 25. The “next-gen” (now current-gen, if we’re being technical) version lets you choose between Performance and Quality modes for prioritizing speed or spectacle. It also supports 60fps and higher resolutions alongside stability improvements and bug fixes. The stability fixes will also arrive in a Fallout 4 update for PS4 and Xbox One consoles to provide a more dependable experience for older hardware users.
The Fallout TV series is damn good — and possibly the second-best gaming adaptation behind The Last of Us. The show starts with a bang and reels you in with magnetic characters and alluring visuals. “Fallout is more than just a video game adaptation,” Engadget’s Sam Rutherford wrote in his review. “It’s a really good show in its own right — an apocalyptically good one at that.”
Fallout is now streaming on Prime Video. It stars Ella Purnell (Yellowjackets) as Lucy, Aaron Clifton Moten (Father Stu) as Maximus, Kyle MacLachlan (Twin Peaks) as Hank and the scene-stealing master of ornery characters, Walton Goggins (Justified), as The Ghoul.
Two months after I started using the Apple Vision Pro, it hasn’t transformed the way I live. It hasn’t replaced my TV, and it doesn’t make me want to give up my powerful desktop or slim laptops. It’s just another tool in my gadget arsenal — one I can don to catch up on X-Men ’97 in bed, or to help me dive deep into research while I’m away from my office. The Vision Pro becomes normal so quickly, it’s almost easy to forget how groundbreaking it actually is. Its screens are still absolutely stunning, and the combination of eye tracking and Apple’s gesture controls makes for the most intuitive AR/VR interface I’ve seen yet.
While the Vision Pro still isn’t something most people should consider buying, Apple has thrown a few bones to early adopters. There are more games popping up on the App Store and Arcade every week, and there are also a handful of 3D films being offered to Apple TV+ subscribers. The addition of Spatial Personas also goes a long way towards making the Vision Pro more of a telepresence machine (more on that below). But we’re still waiting for the company to make good on the promise of 180-degree Immersive Video, as well as to let users create higher-quality Spatial Videos on iPhones.
Photo by Devindra Hardawar/Engadget
How I use the Apple Vision Pro
Once the pressure of reviewing every aspect of the Vision Pro was over, I started incorporating it into my life like a typical user. (Full disclosure: I returned the unit I originally bought, but Apple sent along a sample for further testing.) Mostly, that means not forcing myself to use the headset for large chunks of the day. Instead, my Vision Pro time is more purpose-driven: I slip it on in the morning and project my MacBook’s screen to clear out emails and catch up on Slack conversations, all while a YouTube video is virtually projected on my wall.
In the middle of a work session, or sometimes right before diving into a busy workday, I run through a five- or ten-minute meditation session with the Mindfulness app. I can easily meditate without any headgear, but I’ve found the app’s calm narration and the immersive environment it creates (since it completely blocks out the real world) to be incredibly helpful. It’s like having your own yoga teacher on staff, ready to help calm your brain whenever you have a free moment.
I’ve also learned to appreciate the Vision Pro as a way to expand where I can get work done. As someone who’s been primarily working from home since 2009, I learned early on that changing locations was an easy way to keep myself engaged. I try not to write in the same place where I’ve been checking email in the morning, for example. I normally hop between a PC desktop and large monitor (currently it’s Alienware’s 32-inch 4K OLED) in my office, and a MacBook Air or Pro for writing around the house. Sometimes I’ll go to a nearby park or cafe when I need to zone into a writing assignment for several hours.
Photo by Devindra Hardawar/Engadget
With the Vision Pro, I can actually handle some serious multitasking from my deck or kitchen without being tied to a desktop computer. I’ve found that useful for covering events to avoid getting stuck inside my basement office (I can have a video streaming on a virtual window, as well as Slack and web browsers open via a projected MacBook). I’ve juggled conference calls while being sick in bed with the Vision Pro, because it felt more comfortable than staring down at a tiny laptop display.
I still haven’t traveled much with the headset, but I can foresee it being useful the next time I take a weekend trip with my family. Tested’s Norman Chan told me he’s used the Vision Pro during long flights, where it makes the hours just disappear. I’m still working myself up to that — I’d much rather use a small laptop and headphones on planes, but I can imagine the beauty of watching big-screen movies on the Vision Pro while everyone else is staring at tablets or cramped seat-back displays.
The Vision Pro remains a fantastic way to watch movies or TV shows at home, as well. When I’m too tired to head downstairs after putting my kids to sleep, I sometimes just veg in bed while projecting YouTube videos or anime on the ceiling. That’s where I experienced a trippy temporal shift while watching X-Men ’97: As soon as its remastered theme song spun up, I was immediately transported back to watching the original show on a 13-inch TV in my childhood bedroom. If I could somehow jump back into the past, Bishop-style, it would be impossible to convince my 10-year-old self that I’d eventually be watching a sequel series in a futuristic headset, projected in a 200-inch window. How far we’ve come.
Photo by Devindra Hardawar/Engadget
Spatial Personas are a telepresence dream
When Apple first announced the Vision Pro last year, I couldn’t help but be creeped out by its Persona avatars. They looked cold and inhuman, the exact sort of thing you’d imagine from soulless digital clones. The visionOS 1.1 update made them a bit less disturbing, but I didn’t truly like the avatars until Apple unveiled Spatial Personas last week. Instead of being confined to a window, Spatial Personas hover in your virtual space, allowing you to collaborate with friends as if they were right beside you.
The concept isn’t entirely new: I tested Microsoft Mesh a few years ago with a HoloLens 2 headset, which also brought digital avatars right into my home office. But they looked more like basic Miis from the Nintendo Wii than anything realistic. Meta’s Horizon Workrooms did something similar in completely virtual spaces, but that’s not nearly as impressive as collaborating digitally atop a view of the real world.
Apple’s Spatial Personas are far more compelling than Microsoft’s and Meta’s efforts because they’re seamless to set up — you just have to flip on Spatial mode during a FaceTime chat — and they feel effortlessly organic. During a Spatial Persona call with Norm from Tested, we were conversing as if he was sitting right in front of me in my home theater. We were able to draw and write together in the Freeform app easily — when I stood up and reached out to the drawing board, it was almost as if we were standing beside each other at a real whiteboard.
Photo by Devindra Hardawar/Engadget
SharePlay with Spatial Personas
We were also able to customize our viewing experiences while watching a bit of Star Trek Beyond together using SharePlay in the Vision Pro. Norm chose to watch it in 2D, I watched in 3D, and our progress was synchronized. The experience felt more engrossing than a typical SharePlay experience, since I could just lean over and chat with him instead of typing out a message or saying something over a FaceTime call. I also couldn’t help but imagine how easy it would be to record movie commentaries for podcasts using Spatial Personas. (We’d have to use separate microphones and computers, in addition to Vision Pros, but it would make for a more comfortable recording session than following movies on a monitor or TV.)
Our attempts to play games together failed, unfortunately, because we were running slightly different versions of Game Room. We also didn’t have enough time during our session to sync our apps up. I eventually was able to try out Chess and Battleship with other Vision Pro-equipped friends and, once again, it felt like they were actually playing right beside me. (Norm and CNET’s Scott Stein also looked like they were having a ball with virtual chess.)
The main stumbling block for Spatial Personas, of course, is that they require a $3,500 headset. Apple is laying the groundwork for truly great telepresence experiences, but it won’t matter for most people until they can actually afford a Vision Pro or a cheaper Apple headset down the line.
With Horizon Workrooms, Meta allowed non-VR users to join virtual meetings using Messenger on phones and computers, so that they weren’t left out. Standard FaceTime users can also join Vision Pro chats alongside Spatial Personas, but they’ll be stuck in a window. And unlike Meta’s offering, regular users won’t be able to see any virtual environments (though you could still collaborate in specific apps like Freeform). Meta’s big advantage over Apple is capacity: Horizon Workrooms supports up to 16 people in VR, as well as 34 more calling in from other devices. Spatial Persona chats, on the other hand, are limited to five participants.
Apple
No momentum for Immersive Video
Apple’s 180-degree Immersive Video format was one of the most impressive aspects of the Vision Pro when I previewed it last year, and the handful of experiences at launch were pretty compelling. But the Immersive Video well has been dry since launch — the only new experience was a five-minute short showing off the 2023 MLS Playoffs, which was mostly disappointing.
While that short had such great resolution and depth that it felt like I was actually on the pitch, the MLS experience is disorienting because it cuts far too often, and with no sense of rhythm. Once you get settled into a scene, perhaps watching someone gear up for a well-placed goal, the camera view changes and you have no idea where you are. It’s almost like a five-minute lesson in what not to do with Immersive Video. Hopefully, the MLS has a longer experience in the works.
I’m not expecting a tsunami of Immersive Video content, since the Vision Pro is still an obscenely expensive device meant for developers and professionals, but it would be nice to see more of a push from Apple. The company is teasing another six-minute episode of Prehistoric Planet for later this month, but again that isn’t really much. Where are the creators pushing Immersive Video to new heights? While the content is likely hard to work with since it’s shot in 3D and 8K, the format could be a perfect way for Apple to extol the virtues of its new chips.
In lieu of more Immersive Videos, I’ve been spending more time re-watching Spatial Videos captured with my iPhone 15 Pro. They still look more realistic than 2D clips, but I’ve grown to dislike the 1080p/30fps limitation. It’s just hard to accept that resolution when I know my phone can also produce crisp 4K and 60fps footage. The $3 app Spatialify helps somewhat by unlocking 1080p/60fps and 4K/30fps spatial video capture, but its footage is also shakier and buggier than the iPhone’s built-in camera. At this point, I’ll consider using Spatialify if my phone is on a tripod or gimbal, but otherwise I’ll stick with the native camera app.
Photo by Devindra Hardawar/Engadget
What’s next for the Apple Vision Pro
We’ll likely have to wait until Apple’s WWDC 24 event in June before we hear about any more major upgrades for Vision Pro or visionOS. That would be appropriate, since last year’s WWDC was the headset’s big debut (and a hellish day for us trying to cover all the news). Now that the hardware is in the wild, Apple has to convince developers that it’s worth building Vision Pro apps alongside their usual iOS, iPadOS and macOS wares. It’s not just some mythical spatial computing platform anymore, after all.