I went with the clamp since I knew it would be easy to hook onto my thin wooden side table or metal bed frame, and neither had a paint or finish that would be damaged by the clamp. Some folks also attach it to a headboard.
It was perfect for reading in bed or on the side of my couch. The Lamicall isn’t so long that I needed to add a loop to make it sit far enough away from my eye for comfortable reading, and usually I felt like I had just enough slack to perfectly place it within my preferred reading range. I could keep my Kindle’s text size tiny and put it right next to my face, or push it back farther if I wanted. It floated nicely above or near my head, whether I was lying in bed or sitting up on the couch while my son played nearby.
The base clamp is made of light plastic that you secure with a screw top sitting on top of the clamp, which I preferred to one that pinches on its own, especially since there are tiny grabby hands in my home. The clasp for the Kindle itself is also made of light plastic, but it still felt stable and secure. Plus, you can rotate that upper clamp to get the perfect angle.
The neck of the arm is the most resistant part of it: It does take a little effort to move and angle the arm, but that strength and resistance are what keeps it from falling forward or out of place while you read. Even with the resistance, this Kindle holder is still plenty adjustable and goes in any direction you like.
To store it, I usually just push it out of the way toward the wall from wherever it’s clamped. It isn’t foldable, nor does it break down, so if you want it out of sight when you aren’t using it you’ll need a closet or long enough space to store its 3-foot form. It was a little weird to see it floating alone in the living room, but I didn’t find it obtrusive when I used it as a bed stand and simply pushed it against the wall when I was done using it.
It’s designed to be a universal tablet holder, so it’s big enough to hold tablets up to the 11-inch iPad Pro. It can hold a Nintendo Switch, too, along with other popular e-readers. (If only I had this in 2020!) It’s not the right dimensions to hold a bulky Steam Deck by itself, but I still used it to help me prop up a Steam Deck and take weight off my hands and wrists, though it’s not stable enough to float like a Kindle or iPad. It’s able to hold up smartphones, too, and it was similarly comfortable to read with either a Kindle or my iPhone on the Lamicall stand.
Not Quite Hands-Free
Photograph: Nena Farrell
While it won’t fall out of place, the stand is easy to jostle, and I wouldn’t call it hands-free reading—at least not on its own.
Before we were obsessed with AI, the world was mesmerized by Aibo, a robotic dog whose name was short for “artificial intelligence robot” and was also pronounced the same as the Japanese word for “friend”.
Aibo was doing AI before it was cool, and it was a sophisticated consumer robot dog long before Spot was a gleam in Boston Dynamics’ eye. My history with the pint-size bot goes way back.
The history of Aibos (Image credit: Lance Ulanoff)
I’ve tested or tried virtually every Sony Aibo robot since its introduction 25 years ago. Somehow, I never owned one, perhaps because, much in the way I’m allergic to real dogs, I’m allergic to the price of this mechanical one. Ignoring the robot pup’s early and continuing innovation is impossible, even as Sony sometimes denied what it was.
When Aibo debuted on May 11, 1999, it wasn’t clear that Sony had any intention of broad commercial availability. The company produced roughly 5,000 Sony Aibo robots, selling just 2,000 in the US for approximately $3,000 each. They were rare but also a bit of a status symbol, especially after one appeared in the video for Janet Jackson’s 2000 hit Doesn’t Really Matter.
Hello, little friend
Soon after that first run, though, Sony execs arrived in my office (I was then at PCMag) with the official first-generation model, the ERS-111. Larger and dare I say cuter than the original Aibo, the ERS-111 was also more adept. Once again a trendsetter, Aibo rode a scooter across my conference room desk.
Sony often adamantly claimed that Aibo was not a robot dog, even though the ERS-111 had far more dog-like features: its snout was a bit more rounded, its ears stood up, and it replaced the almost antenna-like tail of the original with something that might look at home on a terrier.
Aibo ERS-111 (Image credit: Lance Ulanoff)
The missteps
It wasn’t just Aibo’s looks. From its earliest iterations, Aibo’s sophisticated motors produced lifelike movement, and the robot had enough autonomy to seem alive. There was a camera in the snout and some simple processing that let it respond to voices and touch, and carry out canned routines like riding a scooter. I do recall Aibo scooting itself right off the table; a side panel popped off, but it was otherwise fine.
Sony could never make Aibo more affordable because it stubbornly refused to skimp on the components or intelligence. The electronics giant tried to satisfy the urge for a more affordable Aibo with a pair of $800, far less compelling bear-like bots: one was called Macaroon, the other Latte. They did not catch on.
Macaroon was cute and cheaper but uninspiring. (Image credit: Lance Ulanoff)
In 2001, Sony made a brief detour with Aibo. The ERS-220 turned it into a terrifying cross between the original pup and RoboCop’s ED-209. Honestly, the less said about this misfire, the better.
Yes, it’s a dog robot
Sony returned to my offices in 2003 with a completely redesigned model, the ERS-7. Gone were most of the original Aibo’s sharp edges, as Sony finally embraced the “dog” label. It had floppy ears and a smooth Snoopy (circa 1970s) face. It could yap at you, follow you around, find its own charger, cuddle in your lap, and perform tricks based on the cards you showed it.
Sony Aibo ERS-7 (Image credit: Lance Ulanoff)
Sony also designed the ERS-7 to be a sort of watchdog and mobile entertainment system. I took my review unit home, tried to play MP3 music through its tiny speakers, and let it take videos and photos (416- by 320-pixel resolution!) that it could email to me (when it worked) or store on a removable Sony Memory Stick that I could look at later, assuming I had a Sony Vaio PC with a Memory Stick reader. Its AI allowed it to learn and change over time. Don’t get me wrong, this was no ChatGPT puppy, but the ERS-7 was still impressive for its time.
Battery life was never great on the Aibo and I vividly remember returning home one day to find it sprawled out and lifeless on the kitchen floor. Aibo had once again failed to find its charge base. I also recall us all being visibly upset at the discovery. We’d become attached to the little robot and couldn’t bear to see it in distress.
The desktop-control interface for the ERS-7 (Image credit: Lance Ulanoff)
Return of the dog king
The ERS-7 still listed for almost $2,000 and so it failed to catch on in the US. By 2006, Sony had discontinued Aibo.
More than a decade later, Sony revived Aibo as a redesigned, unmistakable puppy. With the adorability, features, and technology (new actuators, Wi-Fi, SLAM, a mobile app) cranked to 11, the ERS-1000 cost almost $3,000. Still, when I finally brought a test unit home, it felt like it was worth every dime.
The new Aibo could learn 100 different faces and, like a real dog, remember its interactions with individuals. Its movements were nothing but cute and I still remember how my wife – who initially expressed disinterest in the new Aibo – ended up many evenings sitting with it in her lap, as she absentmindedly stroked its plastic back and head.
The author and Sony Aibo ERS-1000 (Image credit: Lance Ulanoff)
25 years on, is Sony Aibo still a thing? Hard to say. Aside from this year’s Limited Edition Espresso Aibo, which added different-colored eyes and three years of AI cloud updates, there hasn’t been a major update or redesign since 2018. Worse yet, Sony appears to be treating the robot dog as a clearance item, reminding buyers that “Aibo is sold as FINAL SALE – NO RETURN.”
The most recent firmware update was a few months ago, but the release notes make no mention of richer AI or even dipping a paw into Large Language Models (LLMs) and Generative AI. Still, I like to imagine how access to ChatGPT, Gemini, and other LLMs might transform Aibo. Such a move could reduce Sony’s development costs and open the door to an updated and cheaper model. They could call it Aibo AI if it weren’t redundant.
Regardless, my affection for Aibo in virtually all of its iterations remains strong and I may someday save up to buy this iconic robot pup – assuming it’s still around.
Apple may be late to the generative AI party, but don’t count it out just yet. According to Bloomberg’s Mark Gurman and MacRumors’ Hartley Charlton, the company will use the M2 Ultra in its own servers – in its own data centers – to power its growing generative AI ambitions. Launched in June 2023, the chip – as used in the Mac Studio – remains the most complex piece of silicon Apple has ever released, with 24 CPU cores, up to 76 GPU cores, and a 32-core Neural Engine for AI acceleration.
The report mentions neither whether Apple plans to revive its defunct Xserve range of rack servers nor whether it will bring back its Mac OS X Server operating system. Both products have been mothballed for years, as Apple moved its focus away from the enterprise market at the beginning of the last decade. A separate article from the WSJ adds that Apple is using the internal code name ACDC (Apple Chips in the Data Center).
In that piece, authors Aaron Tilley and Yang Jie posit that Apple would use the formidable firepower of its data centers for training or for more complex inference, while lighter workloads (or those that require access to personal data) would be handled locally on the device itself, eliminating the need to run in the cloud.
This mirrors what x86 stalwarts AMD and Intel, in unison with Microsoft, have been advocating with the AI PC paradigm: big server chips (like Epyc and Xeon) working in tandem with smaller client processors (Ryzen or Core). The difference, of course, is that Apple is using an existing processor rather than a new one.
Another way to AI hegemony?
Which raises another question: did Apple plan this – having a jack-of-all-trades chip family – from the outset? Bear in mind that the M2 Ultra would probably be the only server processor in the world to have a GPU and an AI engine. Could it give way to a server-only version (the S1?) geared towards a data-center environment, with far more cores, no GPU, and far, far more memory?
All in all, though, there was never any doubt that sooner or later Apple would start dabbling in server processors. It was a matter of when, rather than if. Reports of Apple building its own servers date back as far as 2016, in line with Apple’s doctrine of owning the stack. In 2022, the company also looked to recruit an “upbeat and hard-working hardware validation engineer” to “develop, implement and complete hardware validation plans for its next generation hyperscale and storage server platforms”.
A year later, research carried out by analyst firm Structure Research found that Apple was planning to triple the critical power capacity of its data centers to accommodate its two billion active devices (and nearly one billion iOS users) and deliver more services.
Of course, hardware requires software, and Apple has been increasingly vocal over the past 12 months: releasing MLX, a machine learning framework designed specifically for Apple Silicon; offering a glimpse of an AI-enhanced Siri; and publishing OpenELM, a family of open-source language models.
It will be immensely instructive to see how Apple manages to do generative AI at scale using anything other than brute-force GPU power (à la Nvidia’s H100). This may well have a direct impact on the fate of another trillion-dollar company called Nvidia. WWDC, the annual Apple developer conference, takes place next month, and it will have AI written all over it.
Apple’s iPhone development roadmap runs several years into the future and the company is continually working with suppliers on several successive iPhone models concurrently, which is why we sometimes get rumored feature leaks so far ahead of launch. The iPhone 17 series is no different, and already we have some idea of what to expect from Apple’s 2025 smartphone lineup.
If you plan to skip this year’s iPhone 16, or if you’re just plain curious about what’s on the horizon, here are 10 rumored features that we are expecting to arrive in time for its successor, the iPhone 17 series, which is likely to be released in September 2025.
1. Under-Display Face ID
iPhone 17 Pro
The iPhone 17 Pro is expected to be the first iPhone to feature under-panel Face ID technology. The only external indication of the technology will likely be a circular cutout for the front-facing camera. Apple is then expected to adopt under-display cameras in 2027’s “Pro” iPhone models for a true “all-screen” appearance.
2. New Display Sizes
iPhone 17 & iPhone 17 Plus
This year’s iPhone 16 Pro and iPhone 16 Pro Max are rumored to be getting bigger display sizes, going from 6.12 and 6.69 inches to 6.27 and 6.86 inches, respectively. For 2025, Apple is also expected to bring the larger 6.27-inch display size to its standard iPhone model, while the equivalent “iPhone 17 Plus” model could adopt completely new display dimensions.
3. 120Hz ProMotion (Always-on Display)
iPhone 17 & iPhone 17 Plus
Apple intends to expand ProMotion to its standard models in 2025, allowing them to ramp up to a 120Hz refresh rate for smoother scrolling and video content when necessary. Notably, ProMotion would also enable the display on the iPhone 17 and iPhone 17 Plus to ramp down to a more power-efficient refresh rate as low as 1Hz, allowing for an always-on display that can show the Lock Screen’s clock, widgets, notifications, and wallpaper even when the device is locked.
4. Apple-Designed Wi-Fi 7 Chip
iPhone 17 Pro & iPhone 17 Pro Max
Apple’s premium 2025 models are expected to be equipped with an Apple-designed Wi-Fi 7 chip for the first time. Wi-Fi 7 support would allow the “Pro” models to send and receive data over the 2.4GHz, 5GHz, and 6GHz bands simultaneously with a supported router, resulting in faster Wi-Fi speeds, lower latency, and more reliable connectivity. The Wi-Fi chip would also allow Apple to further reduce its dependence on external suppliers like Broadcom, which currently supplies Apple with a combined Wi-Fi and Bluetooth chip for iPhones.
5. 48MP Telephoto Lens
iPhone 17 Pro Max
An upgraded 48-megapixel Telephoto lens on Apple’s largest premium device is expected to be optimized for use with Apple’s Vision Pro headset, which launched on February 2, 2024. (The current iPhone 15 Pro models feature 48-megapixel main, 12-megapixel ultra wide, and 12-megapixel telephoto lenses.) That would make 2025’s “Pro Max” the first iPhone to have a rear camera system composed entirely of 48-megapixel cameras, making it capable of capturing even more photographic detail.
6. 24MP Selfie Camera
All iPhone 17 Models
The iPhone 17 lineup will feature a 24-megapixel front-facing camera with a six-element lens, according to one rumor. The iPhone 14 and 15 feature a 12-megapixel front-facing camera with five plastic lens elements, and this year’s iPhone 16 lineup is expected to feature the same hardware. The upgraded resolution to 24 megapixels on the iPhone 17 will allow photos to maintain their quality even when cropped or zoomed in, while the larger number of pixels will capture finer details. The upgrade to a six-element lens should also slightly enhance image quality.
7. Scratch-Resistant Anti-Reflective Display
All iPhone 17 Models
The iPhone 17 will feature an anti-reflective display that is more scratch-resistant than Apple’s Ceramic Shield found on iPhone 15 models, according to one rumor. The outer glass on the iPhone 17 is said to have a “super-hard anti-reflective layer” that is “more scratch-resistant.” It’s not clear whether Apple is planning to adopt the Gorilla Glass Armor that Samsung uses in its Galaxy S24 Ultra, but the description of Corning’s latest technology matches the rumor.
8. More Memory
iPhone 17 Pro & iPhone 17 Pro Max
Apple’s Pro models next year will come with 12GB of RAM, claims Jeff Pu of investment firm Haitong. For comparison, the iPhone 15 Pro models have 8GB of RAM, while the iPhone 16 Pro models are also expected to have 8GB of RAM. Any such increase would allow for improved multitasking on the iPhone, as well as provide additional resources for any artificial intelligence features that require large language models to be resident in memory.
9. Smaller Dynamic Island
iPhone 17 Pro Max
Apple’s highest-end 2025 iPhone will feature a significantly narrower Dynamic Island, thanks to the device’s adoption of a smaller “metalens” for the Face ID system, claims Haitong’s Jeff Pu. Assuming that’s the case, it would be the first time that Apple has changed the Dynamic Island since it debuted on the iPhone 14 Pro in 2022.
10. iPhone 17 “Slim”
iPhone 17 Plus
Apple’s iPhone 17 Plus will feature a 6.55-inch display, according to analyst Ross Young of Display Supply Chain Consultants (DSCC). Responding to a claim by Jeff Pu that the iPhone 17 Plus will be replaced by an “iPhone 17 Slim,” Young said to expect a display roughly 2% smaller than the previous, current, and next-generation Plus models. Based on his logic, a smaller display would help differentiate the larger iPhone 17 model from the iPhone 17 Pro Max, while remaining larger than the iPhone 17 and iPhone 17 Pro.
Apple has lots of new products on the way and is officially discontinuing its ninth-generation iPad. But before the curtain falls on this reliable device, you can pick it up for a steal. Our favorite budget iPad comes with two years of AppleCare+ and is down to $298 from $398 — a 25 percent discount. This deal is for the 64GB model with Wi-Fi in either Silver or Space Gray.
Apple
If you’re looking for a more affordable entry point into the world of iPads, or want to grab one as a gift, the ninth-gen model gives you a solid balance of quality and cost. We gave it an 86 in our review when it debuted in 2021, thanks to updates like True Tone technology, which adjusts the display’s color based on ambient light. It also has a 12-MP front camera, Apple’s A13 Bionic chip and up to 10 hours of battery life while in use.
Take the strain, and three, two, one, pull! No, I’m not in the gym lifting weights, but in the woods with my Nikon DSLR, raising its optical viewfinder to my eye to compose a picture. It’s my D800‘s first outing in years, and it’s quickly reminding me why I was so happy to switch to mirrorless. At 31.7oz / 900g, combined with my Nikon 70-200mm AF-S f/2.8 VR lens (50.4oz / 1430g), it’s well over 80oz / 2300g, and being cumbersome isn’t even the worst part.
Don’t get me wrong, I’ll come away from this walk in my local woods, bursting as it is with fragrant bluebells and wild garlic, with some pictures I’m super-excited about (see below), but boy do I have to work that much harder to get the results I want. And since I don’t want to lug a tripod around, I can’t get the same degree of sharpness in my pictures in the dim conditions under a dense tree canopy.
There are aspects of the Nikon D800’s handling that I really enjoy and mixing up creative tools keeps me fresh as a photographer, but overall my mirrorless camera is a much more streamlined experience and I’m still glad that I made the leap from a Nikon DSLR to the Z6 II. Let’s look at where my DSLR struggles begin.
1. Carrying the gear
(Image credit: Future | Tim Coleman)
My Nikon D800 from 2012 is 50% heavier than the Nikon Z6 II I’m now used to, and also than the Z7 II that is arguably my DSLR’s modern-day equivalent. The 70-200mm f/2.8 F-mount lens is also heavier than the mirrorless Z-mount version, although not by much. Overall, the mirrorless version of my DSLR camera-and-lens pairing weighs approximately 20% less.
The DSLR body is also bulkier, and I notice this quickly with the chunkier handgrip. In some ways it actually balances better with the fairly large telephoto lens than my mirrorless camera does, but in practice I find myself wanting to put the DSLR down sooner.
When you’re repeating the motion of bringing the camera’s viewfinder up to your eye to compose a shot, the strain starts to take hold quite quickly.
2. Composing the shot
(Image credit: Future | Tim Coleman)
I like the D800’s optical viewfinder (OVF) a lot. It’s a bright, big display through which I can immerse myself in the scene. It’s also one less digital screen to look at, and I’m all for that.
However, what you don’t get with an OVF, like you do with a mirrorless camera’s electronic viewfinder (EVF), is exposure preview, which is supremely helpful as you go about taking photos. You get a bright display but potentially a very different looking final image, both in brightness and depth of field / bokeh.
That can cause a problem for me because I tend to fiddle with exposure compensation based on the mood I want in the picture. It’s all too easy to leave the camera at -2EV for a low-key effect and unwittingly carry on shooting dark pictures because the end result is not reflected in the OVF display. Overall, I prefer an optical viewfinder display for the feeling and an electronic viewfinder to meet my practical needs.
Another point regarding my D800 is that its screen is fixed, whereas my mirrorless camera has a tilt display, which is super helpful for shooting at low angles, as I often do, especially in scenarios like this. Some DSLRs, like the Nikon D850, also have a movable screen, but most don’t, and once you’re used to working with a tilting or swivel screen, it’s hard to go back to a fixed one.
3. Focusing issues
(Image credit: Future | Tim Coleman)
Focusing isn’t bad with the D800. It’s actually very good, but it’s not as refined as the Z6 II mirrorless camera. It’s evident as I pinpoint certain bluebells – the focus points simply aren’t small enough. I wrestle with autofocus as it hunts for the subject that’s right there, more so than with mirrorless.
If I were taking portraits today, I’d be much more relaxed with my mirrorless camera too, thanks to its reliable subject and eye detection autofocus, whereas my D800 has regular back-focusing issues.
I’ve also become accustomed to composing shots through the Z6 II’s LCD display, often instead of the viewfinder. If I try to do the same – focusing through the D800’s Live View – it is a significantly worse experience, too. Nikon DSLRs aren’t really designed to be used for photography with autofocus through Live View, though Canon DSLRs do a better job.
4. No image stabilization
When looking closely at the detail of the tree bark in sharp focus, there’s a subtle softness that comes with shooting handheld using a high-resolution DSLR like the D800. (Image credit: Future | Tim Coleman)
The single thing I miss the most when opting for my DSLR over mirrorless is in-body image stabilization, which in the Z6 II enables me to shoot handheld in more situations.
I remember when I first bought my D800 just how unforgiving its 36MP sensor was regarding camera shake and its resulting effect – softened detail. At the time, my golden rule for calculating the minimum acceptable shutter speed for sharp handheld shots was one over the focal length of your lens – for example, 1/200sec when shooting at 200mm.
That rule went out the window with the D800, then the highest-resolution full-frame camera ever made, and I would have to be conservative by around 2EV. At the same 200mm focal length, a faster-than-normal 1/1000sec was realistically as slow as I could go. Or I could bring out the tripod to eliminate camera shake.
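The arithmetic behind that adjustment is easy to sketch. Here’s a minimal illustration in Python (my own code, using the roughly two-stop penalty described above, not anything from Nikon):

```python
def min_handheld_shutter(focal_length_mm: float, penalty_stops: float = 0.0) -> float:
    """Slowest 'safe' handheld shutter speed, in seconds.

    Baseline reciprocal rule: shutter = 1 / focal length (1/200s at 200mm).
    Each penalty stop halves the exposure time again; a dense 36MP sensor
    like the D800's might warrant roughly two stops.
    """
    return (1.0 / focal_length_mm) / (2.0 ** penalty_stops)

print(min_handheld_shutter(200))       # 0.005   -> 1/200s, the classic rule
print(min_handheld_shutter(200, 2.0))  # 0.00125 -> 1/800s; rounding down to the
                                       # next common speed gives the 1/1000s above
```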
(Image credit: Future | Tim Coleman)
I don’t want a tripod for my shooting techniques where I need maximum portability, like this day in the woods, nor do I want to damage the woodland and bluebells – I need a light footprint. No, I’m going handheld all the way.
Now I’m in these woods shaded by a dense tree canopy, and the shutter speed I need with the 70-200mm lens requires a high ISO, even at f/2.8. Put simply, the quality of detail I can get in this scenario cannot match what my mirrorless camera delivers: equipped with image stabilization, it can shoot at slower shutter speeds and lower ISO because it compensates for camera shake.
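To put rough numbers on that trade-off (these are my own illustrative figures, not the author’s actual settings; Nikon rates the Z6 II’s in-body stabilization at around five stops):

```python
def iso_needed(base_iso: float, stabilization_stops: float) -> float:
    """At a fixed aperture, each stop of stabilization lets you double the
    exposure time, which halves the ISO needed for the same brightness."""
    return base_iso / (2.0 ** stabilization_stops)

# A dim scene needing ISO 6400 at 1/1000s unstabilized drops to ISO 400
# at roughly 1/60s if you can hold four stops' worth of slower shutter.
print(iso_needed(6400, 4))  # 400.0
```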
The photos I came away with using my DSLR
Visually, most woodlands are messy. You have to search long and hard for tidy compositions, such as a single tree standing out from the rest. Or you can embrace and work with the chaos.
I’ve intentionally used a telephoto lens and shot through branches and leaves to add layers, a sense of depth and to bring in those elements that you otherwise have to work so hard to avoid. And I’m certainly not about to cut away branches or rip up flowers to get the shot I want.
(Image credit: Future | Tim Coleman)
My overall experience bringing my DSLR back out of retirement was fine, but it has reminded me how mirrorless has evolved the camera experience for the better. Ultimately mirrorless is a more refined experience than a DSLR in just about every department.
Images are better, too. I haven’t been able to shoot handheld at ISO 100 under dense tree cover like I could with mirrorless, and there’s just an edge of softness in my pictures caused by subtle camera shake that I don’t have with mirrorless. I’m less concerned with my DSLR’s inferior corner sharpness and pronounced vignetting compared to mirrorless.
I’m not about to sell my DSLR – I’ll give it another run out soon. It’s just that I’ve been reminded of the extra dedication to the craft needed to come away with pictures I’m happy with. Owning both a DSLR and a mirrorless camera, opting for the DSLR feels like taking the hard path.
Apex Legends will soon offer a Solos mode for the first time since 2019, even though developer Respawn Entertainment said earlier this year it had no plans to let players run amok in the battle by themselves again. When the next season starts, Solos will replace the Duos mode for six weeks.
The game is designed and tuned for squads of three, but Respawn recently told reporters that it “wanted to acknowledge the growing interest in Solos from our players,” many of whom were looking for new ways to play the game. Running the mode for half of season 21 will give the developers a chance to gain plenty of feedback from players. Perhaps that could help them figure out if Solos could become a more permanent fixture.
“With growing demand from players and a desire on the team to explore the concept again with everything we’ve learned since the mode’s last appearance in 2019, Upheaval felt like the right time to reintroduce a Solos experience to Apex,” events lead Mike Button said.
To compensate for the lack of support from teammates, the revived Solos mode will have three unique features. If you’re eliminated in the first four rounds, you’ll be able to use a one-time respawn token to rejoin the action. Any unused tokens after the fourth circle closes are converted to Evo, which is used for shields and ability upgrades. The idea behind this, according to the developers, is to encourage players to be more engaged in the early going.
Respawn has also created a mechanic for Solos called Battle Sense. This gives you an audio and visual cue whenever an enemy is within 50 meters. Last but not least, you’ll heal passively when you’re out of combat. It’ll take a moment for the gradual health regeneration to start, but you can skip that initial timer by securing a kill. You’ll still be able to use med kits and such to heal manually. Respawn is making some other tweaks for Solos, including adding fully kitted-out weapons, adjusting circle sizes and reducing the lobby size from 60 to 50 players.
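Purely to make those rules concrete, here’s a rough sketch of the logic in Python (my own illustration, not Respawn’s code; the regen delay and rate are invented placeholders, since the article gives no figures):

```python
BATTLE_SENSE_RANGE_M = 50.0  # cue fires when an enemy is within 50 meters
REGEN_DELAY_S = 8.0          # placeholder delay before passive healing starts
REGEN_RATE_HP_S = 2.0        # placeholder health regained per second
MAX_HP = 100.0

class SoloPlayer:
    def __init__(self) -> None:
        self.hp = MAX_HP
        self.seconds_out_of_combat = 0.0

    def battle_sense(self, nearest_enemy_m: float) -> bool:
        """Battle Sense: audio/visual cue whenever an enemy is close enough."""
        return nearest_enemy_m <= BATTLE_SENSE_RANGE_M

    def on_damage(self) -> None:
        self.seconds_out_of_combat = 0.0  # combat resets the regen timer

    def on_kill(self) -> None:
        self.seconds_out_of_combat = REGEN_DELAY_S  # a kill skips the delay

    def tick(self, dt: float) -> None:
        """Advance dt seconds; heal passively once the delay has elapsed."""
        self.seconds_out_of_combat += dt
        if self.seconds_out_of_combat >= REGEN_DELAY_S:
            self.hp = min(MAX_HP, self.hp + REGEN_RATE_HP_S * dt)
```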
Respawn Entertainment/EA
Alongside some map, cosmetic, balance and ranked changes, there’ll be a new legend for players to check out. Alter hails from another dimension and that plays into her kit. She can create portals through walls, ceilings and floors.
The Void Passage ability can be fired from some distance away and it has a maximum depth of 20 meters, so it can’t go through mountains. After going through a portal, you’ll have a few seconds of safety to assess your surroundings and prepare for a fight if need be. Allies and enemies can use the portals too, so Void Passage can open up all kinds of opportunities for flanking and rotations.
With her passive ability, Alter is able to see death boxes through walls and snatch an item from one. Alter’s ultimate is called Void Nexus. This drops a device that you and your teammates can interact with remotely, even while knocked down. Doing so will teleport you back to the regroup point. However, enemies have a short window to follow you. Alter’s upgrades include the ability to see enemy health bars while moving through a portal.
You’ll be able to check out the revived Apex Legends Solos mode and play as Alter when the Upheaval season starts on May 7.
A bioluminescent octocoral, Iridogorgia magnispiralis. Credit: NOAA Office of Ocean Exploration and Research, Deepwater Wonders of Wake
An ancient group of glowing corals pushes back the origin of bioluminescence in animals to more than half a billion years ago. “We had no idea it was going to be this old,” says evolutionary marine biologist and study co-author Danielle DeLeo. Tiny crustaceans that lived around 270 million years ago were previously thought to be the earliest glowing animals. Genetic analysis and computer modelling revealed that octocorals probably evolved the ability to make light much earlier, around the time when the first animals developed eyes.
A virulent strain of the monkeypox virus might have gained the ability to spread through sexual contact. The strain, called clade Ib, has caused a cluster of infections in a conflict-ridden region of the Democratic Republic of the Congo (DRC). This isn’t the first time scientists have warned that the monkeypox virus could become sexually transmissible: similar warnings during a 2017 outbreak in Nigeria were largely ignored. The strain responsible, clade II, is less lethal than clade Ib, but ultimately caused an ongoing global outbreak that has infected more than 94,000 people and killed more than 180. “The DRC is surrounded by nine other countries — we’re playing with fire here,” says virologist Nicaise Ndembi.
The World Health Organization (WHO) has changed how it classifies airborne pathogens. It has removed the distinction between transmission by smaller virus-containing ‘aerosol’ particles and spread through larger ‘droplets’. The division, which some researchers argue was unscientific, justified WHO’s March 2020 assertion that SARS-CoV-2, the virus behind the COVID-19 pandemic, was not airborne. Under the new definition, SARS-CoV-2 would be recognized as spreading ‘through the air’ — although some scientists feel this term is less clear than ‘airborne’. “I’m not saying everybody is happy, and not everybody agrees on every word in the document, but at least people have agreed this is a baseline terminology,” says WHO chief scientist Jeremy Farrar.
The development of lethal autonomous weapons, such as AI-equipped drones, is on the rise. “The technical capability for a system to find a human being and kill them is much easier than to develop a self-driving car,” says computer scientist and campaigner against AI weapons Stuart Russell. Some argue that accurate AI weapons could reduce collateral damage while helping vulnerable nations to defend themselves. At the same time, observers are concerned that passing targeting decisions to an algorithm could lead to catastrophic mistakes. The United Nations will discuss AI weapons at a meeting later this year — potentially a first step towards controlling the new threat.
In early April, the European Court of Human Rights ruled in favour of a group of more than 2,500 Swiss female activists aged 64 or over who argued that Switzerland was doing too little to protect them as a group particularly vulnerable to health effects stemming from climate change. “This marks the first time that an international human-rights court has linked protection of human rights with duties to mitigate global warming, clarifying once and for all that climate law and policy do not operate in a human-rights vacuum,” says legal scholar Charlotte Blattner, who advised the court. “The ruling is bound to alter the course of climate protection around the world.”
When US scientists needed a place to test the first birth-control pill, they looked to Puerto Rico. But many of the working-class women who took the pill were unaware that they were part of a clinical trial. Debilitating side effects were dismissed as psychosomatic. And when the final product came onto the market, it was too expensive for women like them to afford. The play Las Borinqueñas revisits this complicated history. “It’s a long-overdue tribute and, most importantly, a reminder to remain vigilant against abuse and disrespect in studies involving human participants,” writes Nature reporter Mariana Lenharo in her review.
Winning image: Glaciologist Richard Jones captured the moment a crew member on RV Polarstern prepared to rescue a measuring device trapped in ice. Credit: Richard Jones
This image, taken on top of the icebreaker research vessel Polarstern, shows the delicate process of retrieving an ocean-monitoring instrument called a CTD (short for conductivity, temperature, depth) that had become trapped under sea ice off the coast of northeastern Greenland. CTDs, which are anchored to the sea floor, measure how properties such as salinity and temperature vary with depth. The photo is the winner of Nature’s 2024 Working Scientist photography competition. See the rest of the winning images from the competition here.
Yesterday we told you that NASA had reconnected with its spacecraft Voyager 1, the first human-made object to leave the Solar System. But was it really the first? A reader question sparked a debate in the newsroom about whether that accolade should rightfully go to Pioneer 10.
A lot depends on how we define the edge of the Solar System, explains Nature reporter Sumeet Kulkarni. “But I think it’s safe to say Voyager 1 left it first,” he says. The craft overtook Pioneer 10 in 1998, and left the heliosphere — the reach of the Sun’s influence — in 2013, by which time it was travelling much faster than Pioneer 10.
With contributions by Katrina Krämer and Sarah Tomlin
Millions of devices are still infected with the PlugX malware, despite its creators abandoning it months ago, experts have warned.
Cybersecurity analysts at Sekoia managed to obtain the IP address associated with the malware’s command & control (C2) server and observed connection requests over a six-month period.
During the course of the analysis, infected endpoints attempted around 90,000 connection requests every day, adding up to roughly 2.5 million unique connecting IP addresses in total. The devices were located in 170 countries, though just 15 of those countries accounted for more than 80% of total infections, with Nigeria, India, China, Iran, Indonesia, the UK, Iraq, and the United States making up the top eight.
Still at risk
While at first it might sound like there are many infected endpoints around the world, the researchers stressed that the numbers might not be entirely precise. The malware’s C2 does not assign unique identifiers to victims, which muddies the results, as many compromised workstations can exit through the same IP address.
Furthermore, if any of the devices use a dynamic IP system, a single device can be perceived as multiple ones. Finally, many connections could be coming in through VPN services, making country-related statistics moot.
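A toy example shows why per-IP counting cuts both ways (my own illustration, not Sekoia’s methodology):

```python
# Each event pairs the source IP a sinkhole would see with the actual device.
events = [
    ("203.0.113.7", "ws-01"),      # three workstations behind one NAT
    ("203.0.113.7", "ws-02"),      # gateway collapse into a single IP
    ("203.0.113.7", "ws-03"),      # (undercount)...
    ("198.51.100.1", "laptop-A"),  # ...while one laptop on a dynamic IP
    ("198.51.100.9", "laptop-A"),  # shows up as two "devices" (overcount)
]

unique_ips = {ip for ip, _ in events}
unique_devices = {dev for _, dev in events}
print(f"{len(unique_ips)} IPs observed vs {len(unique_devices)} real devices")
# -> 3 IPs observed vs 4 real devices
```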
PlugX was first observed in 2008 in cyber-espionage campaigns mounted by Chinese state-sponsored threat actors, the researchers said. The targets were mostly organizations in government, defense, and technology sectors, located in Asia. The malware was capable of command execution, file download and upload, keylogging, and accessing system information. Over the years, it grew additional features, such as the ability to autonomously spread via USB drives, which makes containment today almost impossible. The list of targets also expanded towards the West.
However, after the source code leaked in 2015, PlugX became more of a “common” malware, with many different groups, both state-sponsored and financially motivated, using it, which is probably why the original developers abandoned it.
A bioluminescent octocoral, Iridogorgia magnispiralis. Credit: NOAA Office of Ocean Exploration and Research, Deepwater Wonders of Wake
Some 540 million years ago, an ancient group of corals developed the ability to make its own light1.
Scientists have previously found that bioluminescence is an ancient trait — with one group of tiny crustaceans first making their own light an estimated 267 million years ago. But this new finding pushes back the origins of bioluminescence even further by around 270 million years.
“We had no idea it was going to be this old,” says Danielle DeLeo, an evolutionary marine biologist at Florida International University in Miami, who led the study, which was published on 24 April in Proceedings of the Royal Society B. “The fact that this trait has been retained for hundreds of millions of years really tells us that it is conferring some type of fitness advantage.”
Bioluminescence has evolved independently at least 100 times in animals and other organisms. Some glowing species, such as fireflies, use their light to communicate in the darkness. Other animals, including anglerfish, use it as a lure to attract prey, or to scare away predators.
However, it’s not always clear why bioluminescence evolved. Take octocorals. These soft-bodied organisms are found in both shallow water and the deep ocean, and they produce an enzyme called luciferase that breaks down a chemical called luciferin to make light. But whether glowing octocorals use their light to attract zooplankton as prey, or for some other purpose, is unclear.
First light
Searching for answers, DeLeo and her colleagues analysed a large data set of genetic sequences and the sparse octocoral fossil record to reconstruct the animals’ evolutionary history. They then used a computer model to determine how likely it was that ancestral species were bioluminescent.
The model revealed that the common ancestor of all octocorals — which lived around 540 million years ago — was probably bioluminescent. The finding suggests that luciferase-based bioluminescence evolved early and was lost by the non-bioluminescent descendants of ancient glowing octocorals.
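As a rough illustration of how this kind of inference works, here is a toy ancestral-state reconstruction using Fitch parsimony (my own minimal example; the study itself applied statistical models to a far larger octocoral phylogeny):

```python
# Tree as nested pairs; each leaf is ("name", {trait}): {1} glows, {0} doesn't.
tree = (
    (("sp_A", {1}), ("sp_B", {1})),
    (("sp_C", {0}), ("sp_D", {1})),
)

def fitch(node):
    """Return (possible_states, implied_changes) for a subtree, bottom-up."""
    if isinstance(node[0], str):  # leaf: ("name", {state})
        return node[1], 0
    (left, lc), (right, rc) = fitch(node[0]), fitch(node[1])
    shared = left & right
    if shared:                    # children agree: keep the intersection
        return shared, lc + rc
    return left | right, lc + rc + 1  # children disagree: one implied change

states, changes = fitch(tree)
print(states, changes)  # {1} 1 -> the root most parsimoniously glowed
```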
The study shows that bioluminescence has been around since at least the Cambrian period (around 540 million to 485 million years ago), when the first animal species developed eyes. That’s surprising, says evolutionary biologist Todd Oakley, at the University of California, Santa Barbara, because bioluminescence is a trait that “tends to blink on and off” across evolutionary time.
Luciferase is just one way animals make light. Other organisms use different chemistry to get their telltale glow. In the case of octocorals, the luciferase system could have evolved for the production of an antioxidant, says DeLeo. Later, the light-generating aspect of the reaction would have become useful for communication.
In any case, the deep origin of bioluminescence suggests that it could be one of the oldest forms of communication on Earth, she says. “If you’re producing light — whether or not it’s intentional — you are signalling other animals,” she says. “Like, ‘Hey! I’m over here!’”