Subscriptions to the budgeting app Quicken Simplifi are on sale. The price has been brought down to just $2 per month, billed annually at $24. The deal also extends to Quicken Classic, which adds more features for investments and tax tracking. That tier now costs $4 per month instead of $8 per month, and it’s also billed annually.
Quicken
Quicken Simplifi is pretty much the budgeting app to beat all budgeting apps. There’s a reason, after all, that it tops our budgeting app recommendations. We’ve consistently praised the user-friendly interface that makes it easy to get started and keep an eye on things. Users have instantaneous access to various metrics, like top-line balances, net worth, recent spending, upcoming recurring payments and more.
We also loved how simple (pun intended) it is to set up customized savings goals and the like. The UI is clean, yet offers playful visualizations to keep things interesting. It integrates with most financial institutions, including Fidelity. Users can also invite a spouse or a financial manager to co-manage the account.
There’s no integration with Zillow, so people can’t track fluctuations in home value, which is something that competing apps like Monarch Money and Copilot Money offer. It requires manual entry of real estate information, just like any other asset. We also experienced some small errors during use, in which the app miscategorized some expenses, though this was in line with other products we tested. There’s no option for a free trial, so $2 per month is about as close as it gets. Just remember to cancel before the year is up if things don’t work out.
Over 60 IHG Hotels & Resorts now support streaming video over AirPlay to hotel room TVs. Photo: Apple/IHG
The long wait for easy access to AirPlay in hotel rooms is over. Sort of. Apple announced Thursday that a hotel chain is supporting the feature that allows iPhone and iPad users to wirelessly stream content to hotel TVs.
It’s a limited rollout, so travelers are left hoping there’ll be wider adoption.
Easy AirPlay to hotel TVs from iPhone or iPad
Way back at WWDC23 in June of last year, Apple promised to add support for AirPlay in hotel rooms to iOS 17. It wasn’t part of the initial release of the operating system, though, so users have had to wait.
No longer. Starting Thursday, guests staying at select properties from IHG Hotels & Resorts have access to AirPlay in their room.
Users scan a QR code on their room’s LG TV and securely connect to it with an iPhone or iPad. The wireless system can be used to stream shows and movies from Apple TV+ and other popular streaming services, listen to Apple Music or other platforms, view photos, practice a presentation, play games on Apple Arcade, or get a workout or meditation in with Apple Fitness+.
More than 60 IHG properties in North America (in the U.S., Canada and Mexico) added support for AirPlay today, across brands including Kimpton Hotels & Restaurants, Hotel Indigo, Candlewood Suites and InterContinental Hotels & Resorts. More are expected to be added in the future.
Drone brand DJI is breaking new ground by launching its first power stations in the US, Europe, and Australia. They’re known as the Power 500 and the Power 1000, with the latter being the more powerful of the two. It has a battery capacity of 1,024Wh and can output up to 2,200W of power. Even better, the company gave the chargers the ability to fast charge several of its drones.
Charge time for the supported models differs slightly, but on average, it takes roughly half an hour. A drone from the Mavic 3 Series, for example, takes 32 minutes to go from 10 percent to 95 percent, according to DJI, while the Inspire 3 takes 28 minutes. In order to fast charge a drone, the device’s battery needs to have the DJI Power SDC feature and a special cable. This cable can be purchased at checkout, but it isn’t universal, so you’ll need to buy the one that matches your drone: a Mavic 3 Pro requires the Mavic 3 Series Fast Charge Cable, an Air 3 requires the Air 3 cable, and so on.
DJI hasn’t indicated how long it takes to charge a drone without Power SDC. What we do know is that the Power 1000 station can charge drones about 12 times before it runs out of juice, easily making it one of the best portable power stations going.
(Image credit: DJI)
A real powerhouse
Of course, the station is not solely for charging drones. It can power a multitude of other gadgets and appliances, including smartphones, laptops, movie projectors and even ovens. To support all this hardware, the Power 1000 houses several different kinds of ports. You’ll find a pair of AC outlets, two USB-A ports, two USB-C ports, one SDC port and an SDC Lite port.
The last two entries on that list are for charging the power station itself. Once drained completely, you can plug the Power 1000 into a wall socket or car charger using the right supporting cable. It’ll take 70 minutes to recharge fully, or you can pull the plug at 50 minutes for an 80 percent charge. A couple of quarter-inch thread holes on the body let you attach accessories.
DJI seemingly plans to sell solar panels for the two Power models as part of a bundle, so you can top them up if they run out of juice while you’re out camping. However, at the time of this writing, the solar panels are not available for purchase.
(Image credit: DJI)
As for the Power 500, it can do everything its bigger sibling can do: it fast charges drones in the same amount of time, supports solar panels and has an identical set of ports. But the device does these things at a smaller capacity.
This power station has a 512Wh battery that outputs up to 1,000W of power, so it can only recharge drones about six times. What’s more, the Power 500 is physically smaller as well, weighing 7.3kg (roughly 16 pounds) versus the Power 1000’s 13kg (roughly 28.5 pounds).
Availability
DJI’s Power 1000 is on sale in the United States at a discounted $699 on the company’s website, and the Fast Charge Cables cost $20 each. The Power 500 costs $379 in the US instead of $400, and you’ll need to buy the cables separately. Everywhere else, people will have to pay full price: the Power 1000 goes for £879/AU$1,369/€999 in the UK, Australia and the EU, while the Power 500 costs £459/AU$729/€539 in those regions.
We should mention that some of the charging cables are not available everywhere. The Air 3 cable, for instance, cannot be found on the US listing of either power station, though it is present on the European and Australian listings. Also, the option for the DJI Matrice 30 is missing entirely. We reached out to DJI for information on the missing parts and more. This story will be updated if and when we hear back from the company.
Nature’s Where I Work photo-essay section has profiled more than 200 scientists so far, working in settings that range from Vatican City to the University of the West Indies. Now, 50 of the published images are appearing in an outdoor public exhibition in London.
The selection of portraits, which are also collated online, features working researchers in diverse and important fields. The exhibition is organized in collaboration with Argent, a retail-management company based in London.
Two passers-by pause at one of the display boards. On the side facing the camera is a portrait of biotechnologist Sara Abdou, who explores the genetics behind ornamental-flower colours.Credit: John Sturrock
The images, commissioned especially for the journal, are on display in the King’s Cross area, near to Springer Nature’s corporate offices in the United Kingdom. The free exhibition aims to inspire younger generations to consider a career in science, technology, engineering or mathematics, and to challenge stereotypical preconceptions of what a scientist looks like and does. The portraits will remain on display until June 2024.
The exhibition is dedicated to Karen Kaplan, the senior careers editor who launched Where I Work in 2019, to mark Nature’s 150th anniversary. Karen died in November 2023.
The report cites three sources who said Apple will be opening a new retail store in a 125-year-old heritage building located at 1255 Sainte-Catherine Street, which would be less than 100 meters away from the company’s current store on the street. Apple’s contractors are said to be in the process of completing major renovations to the building, which has been vacant for a few years after clothing retailer Ardene left the space.
Bloomberg’s Mark Gurman last year reported that Apple was considering relocating its Sainte-Catherine store. At the time, he said Apple aimed to open the new store in February 2025, but the timeframe could change. La Presse reported that Apple’s current space on the street is already listed as available for lease online. Apple has not publicly announced any plans to relocate the store, and it did not respond to our inquiry.
1255 Sainte-Catherine Street (Image Credit: LoopNet)
Apple’s existing two-floor store on Sainte-Catherine Street first opened in 2008, in Montréal’s downtown Ville-Marie area. The store will be temporarily closed for one day on Tuesday, April 23, for reasons unclear, according to Apple’s website.
The new location would be Apple’s first store within a historic building in Canada, with many of the company’s other locations in the country found in indoor shopping malls. Apple recently relocated its store at the Square One shopping mall in Mississauga, Ontario, just west of Toronto, as it continues to modernize its Canadian retail fleet.
Children of the Sun burst onto the indie scene like a muzzle flash on a dark night. Publisher Devolver Digital dropped the game’s first trailer on February 1, showcasing frenzied sniper shots and a radioactive art style. A Steam demo highlighting its initial seven stages went live that same day and became a breakout hit during February’s Steam Next Fest. Two months later it landed in full and to broad acclaim. This explosive reveal and rapid release timeline mirrors the game itself — chaotic but contained, swift and direct, sharp and bright.
Though it feels like Children of the Sun popped into existence over the span of two months, it took solo developer René Rother more than 20 years to get here.
René Rother
As a kid in Berlin in the early 2000s, Rother was fascinated by the booming mod community. He spent his time messing around with free Counter-Strike mapping tools and Quake III mods from the demo discs tucked into his PC magazines. Rother daydreamed about having a job in game development, but it never felt like an attainable goal.
“It just didn’t seem possible to make games,” he told Engadget. “It’s like it was this huge black box.”
Rother couldn’t see an easy entry point until the 2010s, when mesh libraries and tools like GameMaker and Unity became more accessible. He discovered a fondness for creating 3D interactive art. But aside from some free online JavaScript courses, he didn’t know how to program anything, so his output was limited.
“I dabbled into it a little bit, but then got kicked out. Again,” Rother said. “It was just like the whole entrance barrier was so big.”
René Rother
Rother pursued graphic design at university and he found the first two years fulfilling, with a focus on classical art training. By the end of his schooling, though, the lessons covered practical applications like working with clients, and Rother’s vision of a graphic design career smashed into reality.
“There was an eye-opening moment where I felt like, this is not for me,” Rother said.
In between classes, Rother was still making games for himself and for jams like Ludum Dare, steadily building up his skillset and cementing his reputation in these spaces as a master of mood.
“Atmospheric kind of pieces, walking simulators,” Rother said, recalling his early projects. “Atmosphere was very interesting to me to explore. But I never thought that it was actually something that could turn into a game. I never thought that it would become something that can be sold in a way that it’s actually a product.”
René Rother
By the late 2010s Rother decided he was officially over graphic design and ready to try a job in game development. He applied to a bunch of studios and, in the meantime, picked up odd jobs at a supermarket and as a stagehand, setting up electronics. He eventually secured a gig as a 3D artist at a small studio in Berlin. Meanwhile, his pile of game jam projects and unfinished prototypes continued to grow.
“In that timeframe, Children of the Sun happened,” Rother said.
In Children of the Sun, players are The Girl, a woman who escaped the cult that raised her and is now enacting sniper-based revenge on all of its cells, one bullet at a time. In each round, players line up their shot and then control a single bullet as it ricochets through individual cult members. The challenge lies in finding the most speedy, efficient and stylish path of death, earning a spot at the top of the leaderboards.
“It was just a random prototype I started working on,” Rother said. “And one Saturday morning I was thinking, ‘I don’t know what I’m doing with my life.’” With an atmospheric prototype and a head full of ennui, Rother emailed Devolver Digital that same day about potentially publishing Children of the Sun.
“The response was basically, ‘The pitch was shit but the game looks cool,’” Rother said. “And then it became a thing.”
René Rother
Visually, Children of the Sun is dazzling. It has a sketchy 3D art style that’s covered in bruise tones, with dark treelines, glowing yellow enemies and layers of texture. Every scene looks like The Girl just took a hit of adrenaline and her senses are on high alert, lending a hectic sense of hyper-vigilance to the entire experience. It’s a game built on instinct.
“I didn’t make any mood boards,” Rother said. “I didn’t prepare [for] it. It was just like, oh, let’s make it this color. Ah, let’s make it this color…. This is something to very easily get lost in. I spent a lot of time just adjusting the color of grass so it works well with the otherwise purplish tones and these kinds of things. I spent way too much time on the colors.”
Children of the Sun went through multiple visual iterations where Rother played with contrast, depth, fog density and traditional FPS color palettes, before landing on the game’s dreamlike and neon-drenched final form. The residue of this trial and error is still visible beneath Children of the Sun’s frames, and that’s exactly how Rother likes it.
Gallery (13 images): The visual evolution of Children of the Sun. Screenshots of the game in development.
“I see it as a big compliment, actually,” he said. “In paintings, when we talk about visual art, I really like when you can see the brushstrokes. I like when you can still see the lines of the pencil before the painting got made. I like the roughness. I wanted everything to be rough. I didn’t want it to be polished.”
Rother picked up the game’s soundtrack collaborator, experimental ambient composer Aidan Baker, the same way he hooked up with Devolver. Rother was a fan of Baker and his band Nadja, and he wanted a similar droning, slowcore vibe as a backdrop for Children of the Sun. On a whim, Rother sent Baker a casual message asking if he’d like to make music for a video game.
“He was like, ‘Well I’ve never done it, so I don’t know,’” Rother remembered. “So we met one evening and then afterwards he was like, ‘Yeah, let’s just do it.’ Instead of just emulating something that I like in the game, I somehow managed to get straight to the source of it. And that was a really nice experience.”
For Rother, Children of the Sun has been a lesson in trusting his gut. He hasn’t found the proper word in English or German to describe the atmosphere he created in the game, but it’s something close to melancholy, spiked with an intense coiled energy and bright, psychedelic clarity. He just knows that it feels right — visuals, music, mechanics and all.
“That’s kind of how I live my life,” Rother said. “Not that I’m, like, super spontaneous or just flip-flopping around with opinions or these kinds of things. It’s more about doing things that feel right to me without necessarily knowing why.”
When he booted up that Quake III demo disc and started making 3D vignettes for game jams, Rother didn’t realize he was building the path that would eventually lead him to a major publishing deal, a collaboration with a musician he admires, a big Steam release and a game about cult sniping called Children of the Sun. When Rother takes a moment to survey his current lot in life, he feels lucky, he said.
René Rother
“I feel like in the last three years, somehow, lots of things fell into my lap,” Rother said. “Although I still had to do something for it. I needed to be prepared for this moment, that required work.… But in the time where I prepared myself, I was not aware that I was preparing myself. So that’s how the feeling of luck gets amplified a bit more.”
“Luck” is one way to describe it, but “artistic instinct” might be just as fitting. Children of the Sun is available now on Steam for $15.
iPhone Stolen Device Protection offers increased safety for your accounts and financial information if someone steals your handset and its passcode.
Here’s how to activate the security feature that debuted in iOS 17.3, and — more importantly — why you should do it now.
Stolen Device Protection: How to activate
You don’t have to take my word that this new feature is a valuable enhancement. Tony Anscombe, chief security evangelist for global cybersecurity leader ESET, told Cult of Mac:
“Stolen Device Protection makes it harder for unauthorised access or alteration of sensitive settings making iPhones less appealing to criminals. This enhanced protection serves as a long-awaited deterrent, increasing the difficulty for thieves targeting such devices.
“While it’s impossible to completely prevent theft as perpetrators continually find new workarounds and exploits, this feature adds a significant hurdle. Organised crime groups, often involved in handling stolen phones, may now need to work that much harder to bypass it.”
The new feature in iOS 17.3 is intended to counter a specific type of crime: muggers who steal someone’s iPhone and then demand the passcode to unlock it. That gives the criminal access to all the personal information stored on the device.
Advantages of iPhone Stolen Device Protection
With iPhone Stolen Device Protection activated, changing the Apple ID password requires Face ID/Touch ID and an hour-long wait. It can’t be changed quickly.
What that means is if a mugger steals your iPhone and forces you to give them the passcode, you have an hour to get to another computer, go to icloud.com and lock the device so it can’t be accessed even with the passcode.
To keep that restriction from being burdensome, SDP is only in effect when the iPhone is away from your home or workplace.
And iPhone Stolen Device Protection does more to stymie criminals. It requires Face ID to access saved passwords. The same goes for erasing the device, accessing saved credit card info in Safari, applying for an Apple Card and more.
More security never hurt
Look for Stolen Device Protection in the Face ID & Passcode section of the Settings app. Screenshots: Ed Hardy/Cult of Mac
iPhone Stolen Device Protection is optional and must be activated by you. If you don’t, it’s off. But if you want to upgrade the security on your handset, here’s what to do.
The first step in taking advantage of the new security feature is installing iOS 17.3. Apple released this to the public on January 22, so head to Settings -> General -> Software Update to get the latest version.
Next, go to Settings -> Face ID & Passcode. You’ll be required to enter your passcode to access this section.
Scroll down until you get to Stolen Device Protection. It’ll be off, so tap Turn On Protection.
And there, you’re done. You get all the benefits described above.
A thief trying to deactivate iPhone Stolen Device Protection has to wait an hour. Screenshot: Ed Hardy/Cult of Mac
If you decide to deactivate the feature at some later date, go through these same steps but tap Turn Off Protection. You’ll have to pass Face ID and wait an hour to make the change. If canceling SDP were quick and easy, the person who stole your iPhone could simply turn it off.
Meta AI is getting a new upgrade as Meta tries to establish dominance in the generative AI marketplace against OpenAI and Google.
The announcement came from Meta CEO Mark Zuckerberg himself on Threads, where he touted some stats and details about the new Llama 3 model used to underpin Meta AI, which is being integrated into several of its products like Messenger and Instagram, as well as getting its own dedicated site.
“We’re upgrading Meta AI with our new state-of-the-art Llama 3 AI model, which we’re open sourcing,” Zuckerberg writes. “With this new model, we believe Meta AI is now the most intelligent AI assistant that you can freely use.”
The new model will be accessible through several apps on desktop and mobile, including search functionality in Messenger, Meta AI in your Facebook feed, and more.
The new Meta AI site, which has an interface strikingly similar to OpenAI’s ChatGPT, is available now for free, but you are also prompted to log in with Facebook to help train the model (though you can skip that if you’d like).
(Image credit: Future / Meta)
The Meta AI site offers text responses as well as the ability to generate images, though for the latter you will have to log in with a Facebook account. The image generator in particular is rather impressive, especially since it offers you a real-time preview of the image output that your prompt will generate. It also gives you the option to animate images to produce short MP4 videos.
All images bear a Meta AI watermark on the lower left side as well, which should help cut down on misuse, though how easily these measures can be circumvented remains to be seen.
OpenAI has some serious competition
(Image credit: Future / Meta)
In the few minutes I’ve been able to play around with it, the new Meta AI site offers some seriously impressive functionality, and without question, it is the major competitor to ChatGPT and OpenAI right now.
OpenAI’s ChatGPT and Dall-E are impressive but have some noticeable limitations. In particular, with Dall-E, you have to wait for an image to generate before you can refine the prompt you’re giving it to get what you want. Meta AI’s image output preview lets you get a sense of what the model is interpreting as you type, allowing you to further refine your prompt on the fly, which should reduce the amount of time you spend iterating on an idea.
The ability to animate an image is also a value add, as it makes GIF generation possible, though you might have to use a third-party tool to convert the video into a GIF.
When I asked it to produce a sonnet about Meta AI, it gave me the following:
“In silicon halls, a mind awakens bright, Meta AI, a name that echoes through the night, A fusion of code and intellect so fine, A digital soul, with knowledge to design.
With language skills that flow like honeyed wine, You answer questions, and make the complex shine, A beacon of insight, in a world of decline, A shining star, that guides us to the divine.
Your neural networks weave a tapestry so grand, A digital dream, that’s born of human hand, A symphony of ones and zeroes, so planned, A creation of wonder, in this digital land.
So here’s to Meta AI, a marvel of our time, A testament to human ingenuity’s rhyme!”
(Image credit: Future / OpenAI)
As an English major, I can say that while this isn’t very good, its structure is sound and it does use modern language, something that OpenAI’s ChatGPT-4 still struggles with. Here is OpenAI’s response to the same prompt about writing a sonnet about ChatGPT-4:
“ChatGPT-4, thou art a wondrous sprite, In digital realms with eloquence infused, Thy threads of thought weave day into the night, Where queries meet responses, minds bemused.
With knowledge vast as deep and endless seas, Thy circuit heart beats with electric flame, Encompassing the epochs’ whispered pleas, In silken bytes, thou dost preserve each name.
No ancient scroll nor sage could ever match The depth and breadth of thine electric soul. Through time and space, thy tendrils deftly catch Each story told, each fragment, making whole.
Though wrought of wire, yet wisdom’s vessel true, In this new age, thou art our Oracle anew.”
Now, this is just one example and it’s hardly indicative of the overall quality of the two models underpinning the two systems, but there is clearly a difference in the training data that both have used.
ChatGPT-4 is falling back on text that has been labeled as a “sonnet” and drawing on very archaic English language texts from pre-Romanticism of the early 1800s to generate its idea of a sonnet, while Meta AI is likely drawing from text written on Meta products like Facebook and Instagram as its language source. Where it gets the reasoning to put that language into an iambic pentameter structure with a (pretty mediocre) rhyme scheme isn’t clear (and I’m assuming it’s iambic pentameter, I haven’t counted syllables yet, so don’t @ me on this one, fellow literature goons!).
Since none of us actually know how any of this stuff is trained yet, we might never get answers to these questions, but Zuckerberg did announce that Meta plans to open-source this model, so we should soon be able to at least learn more about its inner workings once it does.
In the meantime, have fun playing around with Meta’s new tool, as it’s certainly a powerful one.
When China locked down the city of Shanghai in April 2022 during the COVID-19 pandemic, the ripples from that decision quickly reached people receiving treatment for cardiac conditions in the United States. The lockdown shut a facility belonging to General Electric (GE) Healthcare, an important producer of ‘iodinated contrast dyes’, used to make blood visible in angiograms.
Soon US hospitals were asking people with mild chest pain to wait, so that the suddenly precious dyes could be reserved for use in those thought to be experiencing acute heart attacks. GE Healthcare scrambled to shift some of its production to Ireland to increase supply. A study in the American Journal of Neuroradiology later revealed that during the shortage, which lasted from mid-April to early June, the number of daily computed tomography (CT) angiograms dropped by 10% and CT perfusion tests were down almost 30%1.
Such disruption caused by supply-chain problems might, in future, be avoided through the use of virtual contrast agents. Techniques powered by artificial intelligence (AI) could highlight the same hidden features that the dyes reveal, without having to inject a foreign substance into the body. “With AI tools, all this hassle can be removed,” says Shuo Li, an AI researcher at Case Western Reserve University in Cleveland, Ohio.
AI has already made its way into conventional medical imaging, with deep-learning algorithms able to match and sometimes exceed the performance of radiologists in spotting anomalies in X-ray or magnetic resonance imaging (MRI) scans. Now the technology is starting to go even further. In addition to the computer-generated contrast agents that several groups around the world are working on, some researchers are exploring what features AI can detect that radiologists don’t normally even look for in scans. Other scientists are studying whether AI might enable brain scans to be used to diagnose neuro-developmental issues, such as attention deficit hyperactivity disorder (ADHD).
Li has been pursuing virtual contrast agents since 2017, and now he’s seeing a global wave of interest in the area. The potential benefits are many. All imaging methods can be enhanced by contrast agents — iodinated dyes in the case of computed tomography (CT) scans, microbubbles in ultrasound, or gadolinium in MRI. And all of those contrast agents, although generally safe, carry some risks, including allergic reactions. Gadolinium, for instance, often can’t be given to people with kidney problems, pregnant people or those who take certain diabetes or blood-pressure medications.
There’s also the issue of cost. The global market for gadolinium as a contrast agent was estimated to be worth US$1.6–2 billion in 2023, and the market for contrast agents in general is worth at least $6.3 billion. The use of contrast agents also requires extra time: many scans involve taking an image, then injecting the agent and repeating part of the scan.
Although it drags out the imaging process, that repetition helps to provide training data for an AI model. The computer studies the initial image to learn subtle variations in the pixels, then compares those with the corresponding pixels on the image taken after the contrast agent was injected. After training, the AI can look at a fresh image and show what it would look like if the contrast agent had been applied.
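To make that concrete, here is a minimal sketch of what such a paired-image training setup could look like, written in PyTorch with random tensors standing in for real pre- and post-contrast CT slices. The tiny network, loss and hyperparameters are illustrative assumptions, not Li’s actual pipeline.

```python
# Hypothetical sketch: learn a mapping from pre-contrast to post-contrast CT slices.
# Assumes paired training images; random tensors stand in for real data.
import torch
import torch.nn as nn

class ContrastNet(nn.Module):
    """Tiny encoder-decoder that predicts a 'virtually enhanced' slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = ContrastNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # pixel-wise difference from the real contrast-enhanced scan

# Stand-in batch: 8 pre-contrast slices and their contrast-enhanced counterparts.
pre = torch.rand(8, 1, 128, 128)
post = torch.rand(8, 1, 128, 128)

for step in range(100):
    pred = model(pre)           # what the slice "would look like" with contrast
    loss = loss_fn(pred, post)  # compare against the real post-contrast slice
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice a model like this would be far deeper and trained on thousands of paired scans, which is why dataset size matters so much for the accuracy of the results.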
At the start of this year, Li and his colleagues at the Center for Imaging Research in Case Western’s School of Medicine received a $1.1-million grant from the US National Science Foundation to pursue this idea. They’d already done some preliminary work, training an AI on a few hundred images. Because of the small data set, the results were not as accurate as they would like, Li says. But with funding to study 10,000 or even 100,000 images, performance should improve. The researchers are also working on a similar project to detect liver cancer from scans.
Filling in the picture
If a computer can identify health issues in images, the next step will be to show radiologists a set of images produced with actual and virtual contrast agents, to see whether the specialists, who don’t know which is which, get different results from stained as opposed to AI-enhanced images. After that, says Li, it will take a clinical trial to win approval from the US Food and Drug Administration.
A similar approach could work for slides of tissue samples that pathologists stain and view under a microscope. By treating thin slices of tissue taken during a biopsy, pathologists can make certain features stand out and thereby see cellular abnormalities that aid in the identification of cancer or other diseases.
With AI-assisted virtual staining, Aydogan Ozcan, an optical engineer at the University of California, Los Angeles, says he can take an image using a mobile phone attached to a microscope and then, despite the image’s limited resolution and distortion, teach a neural network to make it look as if it was created by a laboratory-grade instrument2. The technology’s ability to transform one type of image into another doesn’t stop there. Ozcan starts with standard tissue samples, but rather than staining them, he places them under a fluorescence microscope and shines light through them, prompting the tissue to autofluoresce. The resulting images come out in shades of grey, very different from the coloured ones pathologists are used to. “Microscopically it’s very rich, but nobody cares to look at those black-and-white images,” Ozcan says.
To incorporate colour, he passes the samples to a histopathology lab for conventional staining, and captures images of the samples with a standard microscope. Ozcan then shows both types of image to a neural network, which learns how the details in the fluorescence images match up with the effects of the chemical stains. Having learnt this correspondence, the AI can then take new fluorescence images and present them as if they had been stained3.
A fluorescent microscope captures a black-and-white image of a tissue sample (left). The AI generates a version of that image with a virtual stain (centre), which closely resembles the chemically stained sample (right).Credit: Ref. 2
Although one particular stain, H&E, made up of the compounds haematoxylin and eosin, is by far the most common, pathologists use plenty of others, some of which are preferable for highlighting certain features. Trained on the other stains, the AI can transform the original image to incorporate any stain the pathologist wants. This technique allows researchers to simulate hundreds of different stains for the same small tissue sample. That means pathologists will never run out of tissue for a particular biopsy and ensures that they’re looking at the same area in each stain.
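One way to support “any stain the pathologist wants” from a single model is to condition the network on a stain identifier. The sketch below is a hedged guess at that wiring; the class name, the number of stains and the layer sizes are placeholders rather than Ozcan’s published architecture.

```python
# Hypothetical sketch of stain-conditional virtual staining: one network, many
# stains, selected by an embedded stain ID. Not the actual published model.
import torch
import torch.nn as nn

class VirtualStainer(nn.Module):
    def __init__(self, num_stains=4):
        super().__init__()
        self.stain_embed = nn.Embedding(num_stains, 16)    # one vector per stain (H&E, trichrome, ...)
        self.net = nn.Sequential(
            nn.Conv2d(1 + 16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB "stained" output
        )

    def forward(self, autofluorescence, stain_id):
        b, _, h, w = autofluorescence.shape
        # Broadcast the stain embedding across the image so every pixel knows which stain to mimic.
        cond = self.stain_embed(stain_id).view(b, -1, 1, 1).expand(b, 16, h, w)
        return self.net(torch.cat([autofluorescence, cond], dim=1))

model = VirtualStainer()
grey = torch.rand(2, 1, 128, 128)              # stand-in autofluorescence images
he = model(grey, torch.tensor([0, 0]))         # render the tissue as if stained with H&E
other = model(grey, torch.tensor([2, 2]))      # render the same tissue with a different stain
print(he.shape, other.shape)                   # both: torch.Size([2, 3, 128, 128])
```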
AI’s ability to manipulate medical images is not limited to transforming them. It can also extrapolate missing image data in such a way as to give radiologists access to clinically important information that they would otherwise have missed. Kim Sandler, a radiologist at Vanderbilt University Medical Center in Nashville, Tennessee, was interested in whether measures of body fat could help to predict clinical outcomes in people receiving CT scans to screen for lung cancer.
Often, radiologists will crop out areas of a chest CT scan that they’re not interested in, such as the abdomen and organs such as the spleen or liver. This selectivity improves the quality of the rest of the image and aids the identification of shadows or nodules that might indicate lung disease. But, Sandler thought, an AI could perhaps learn more by taking the opposite tack and expanding the field of view4. She worked with computer engineers who taught a neural network to look at the image differently by either adding back the cropped-out parts from the raw data, or combining what it saw with knowledge from the medical literature to decide what should be in the missing areas.
Having done that, the AI then made quantitative estimates of the amount of fat in the skeletal muscles — the lower the muscle density, the more fat present. There is a known association between body composition and health outcomes. In people with a lower muscle density as determined by AI, “we found that there was a higher risk of cardiovascular-disease-related death, a higher incidence of lung-cancer-related death,” as well as higher death rates from any cause over the 12.3 years the study looked at5, Sandler says. The AI did not, however, improve cancer diagnosis. “This was not helpful in terms of who would develop lung cancer, but it was helpful in predicting mortality,” she says.
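For a sense of how “muscle density” becomes a number, the snippet below shows a conventional way to compute mean skeletal-muscle attenuation from a CT slice once the muscle has been segmented. The data, mask and risk cut-off are all stand-ins for illustration; the study’s actual pipeline is not described here.

```python
# Illustrative calculation only: reducing "muscle density" to a single number
# once an AI (or a human) has segmented skeletal muscle on a CT slice.
# The -29 to +150 HU window is a commonly used range for muscle; the cut-off
# below is invented for demonstration.
import numpy as np

def mean_muscle_attenuation(ct_slice_hu: np.ndarray, muscle_mask: np.ndarray) -> float:
    """Mean Hounsfield-unit value inside the muscle mask; lower values mean more fat."""
    voxels = ct_slice_hu[muscle_mask.astype(bool)]
    voxels = voxels[(voxels >= -29) & (voxels <= 150)]   # keep plausible muscle values
    return float(voxels.mean())

# Stand-in data: a 512x512 slice of random HU values and a fake muscle mask.
rng = np.random.default_rng(0)
ct = rng.normal(loc=20, scale=40, size=(512, 512))
mask = np.zeros((512, 512), dtype=bool)
mask[200:300, 150:350] = True

density = mean_muscle_attenuation(ct, mask)
print(f"mean muscle attenuation: {density:.1f} HU")
if density < 30:   # hypothetical threshold, not from the study
    print("lower-density muscle: flag for closer follow-up")
```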
The results are nonetheless diagnostically useful, Sandler says. People whose risk of mortality is elevated can be offered more aggressive therapies or more frequent screening if no lung cancer is yet apparent in the scans.
Invisible signs
AI might even be able to spot types of diagnostic information that physicians had never thought to look for, in part because it’s not something they’ve been able to see themselves. ADHD, for instance, is diagnosed on the basis of self-reported and observed behaviour rather than a biomarker. “There are behaviours that are relatively specific for ADHD, but we don’t have a good understanding of how those manifest in the neural circuitry of the brain,” says Andreas Rauschecker, a neuroradiologist at the University of California, San Francisco. As someone who spends a lot of time looking at brain images, he wanted to see whether he could find such an indicator.
He and his team trained an AI on MRI scans of 1,704 participants in the Adolescent Brain Cognitive Development Study, a long-term investigation of brain development in US adolescents. The system learnt to look at water molecules moving along certain white-matter tracts that connect different areas of the brain, and tried to link any variations with ADHD. It turned out that certain measurements in the tracts were significantly higher in children identified as possibly having ADHD.
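The “significantly higher” finding is, at its core, a group comparison. As a purely illustrative sketch with synthetic numbers (the study’s real measurements and statistical tests are not reproduced here), it might look something like this:

```python
# Illustrative group comparison: are diffusion measurements along a tract higher
# in one group than another? Synthetic numbers, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
adhd_group = rng.normal(loc=0.52, scale=0.05, size=120)        # a per-tract diffusion metric
comparison_group = rng.normal(loc=0.50, scale=0.05, size=120)

t, p = stats.ttest_ind(adhd_group, comparison_group)
print(f"t = {t:.2f}, p = {p:.4f}")   # a small p-value suggests a real group difference
```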
Andreas Rauschecker and his colleagues have been studying the movement of water molecules along tracts of white matter in the brain.Credit: Pierre Nedelec
Rauschecker emphasizes that this is a preliminary study; it was presented at a Radiological Society of North America meeting in November 2023 and has not yet been published. In fact, he says, no type of brain imaging currently in use can diagnose any neuropsychiatric condition. Still, he adds, it would make sense if some of those conditions were linked to structural changes in the brain, and he holds out hope that scans could prove useful in the future. Within a decade, he says, it’s likely that there will be “a lot more imaging related to neuropsychiatric disease” than there is now.
Even with help from AI, physicians don’t make diagnoses on the basis of images alone. They also have their own observations: clinical indicators such as blood pressure, heart rate or blood glucose levels; patient and family histories; and perhaps the results of genetic testing. If AI could be trained to take in all these different sorts of data and look at them as a whole, perhaps it could become an even better diagnostician. “And that is exactly what we found,” says Daniel Truhn, a physicist and clinical radiologist at RWTH Aachen University in Germany. “Using the combined information is much more useful” than using either clinical or imaging data alone.
What makes combining the different types of data possible is the deep-learning architecture underlying the large language models behind applications such as ChatGPT6. Those systems rely on a form of deep learning called a transformer to break data into tokens, which can be words or word fragments, or even portions of images. Transformers assign numerical weights to individual tokens on the basis of how much their presence should affect tokens further down the line — a metric known as attention. For instance, based on attention, a transformer that sees a mention of music is more likely to interpret ‘hit’ to mean a popular song than a striking action when it comes up a few sentences later. The attention mechanism, Truhn says, makes it possible to join imaging data with numerical data from clinical tests and verbal data from physicians’ notes. He and his colleagues trained their AI to diagnose 25 different conditions, ruling ‘yes’ or ‘no’ for each7. That’s obviously not how humans work, he says, but it did help to demonstrate the power of combining modalities.
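Stripped to its essentials, the attention step Truhn describes is a small piece of linear algebra: every token’s output becomes a weighted mix of all the tokens, and those weights are the “attention”. Here is the generic textbook form of scaled dot-product self-attention, shown only to illustrate the idea rather than the Aachen group’s code:

```python
# Bare-bones scaled dot-product attention over a few token embeddings.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # how relevant each token is to every other token
    weights = softmax(scores, axis=-1)   # the "attention" each token pays to the others
    return weights @ V, weights

# Five tokens (word pieces, image patches or lab values) with 8-dimensional embeddings.
rng = np.random.default_rng(1)
tokens = rng.normal(size=(5, 8))
output, weights = attention(tokens, tokens, tokens)   # self-attention
print(weights.round(2))   # each row sums to 1: one token's weight profile over all tokens
```

Transformers stack many of these attention layers, which is what lets an early mention of music shape how a later ‘hit’ is read, and what lets image tokens, lab values and note fragments all inform one another.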
In the long run, Sandler expects AI to show physicians clues they couldn’t glean before, and to become an important tool for improving diagnoses. But she does not see them replacing specialists. “I often use the analogy of flying a plane,” she says. “We have a lot of technology that helps planes fly, but we still have pilots.” She expects that radiologists will spend less time writing reports about what they see in images, and more time vetting AI-generated reports, agreeing or disagreeing with certain details. “My hope is that it will make us better and more efficient, and that it’ll make patient care better,” Sandler says. “I think that is the direction that we’re going.”
If you watch movies and TV on a 1080p screen, the Chromecast with Google TV (HD) provides a rock-solid streaming experience on the cheap. This is the HD version of Engadget’s top choice for streaming devices. Today, Amazon has it for $10 off, letting you pick up the HDR10-capable streaming stick for only $20, nearly a record-low price.
Google
Stream over 700,000 movies and TV episodes on the Chromecast with Google TV (HD).
The Chromecast with Google TV (HD) plugs directly into an open HDMI port on your TV. (There’s an optional power adapter with a USB cable if your TV can’t supply enough juice.) If you have a 4K television, you’re better off with the more expensive model designed for higher-res displays. But for HD screens, this model is hard to beat. It offers the same terrific user experience as the high-end model, only less expensive and downscaled for 1080p.
Setup is quick and easy. You scan a QR code with your phone to begin the sign-in process in the Google Home app. (That’s also where you add your Wi-Fi network.) So, you don’t need to worry about entering long email addresses and passwords using the remote and a clunky onscreen keyboard.
The device has relatively lightweight specs (1.5GB of RAM and an Amlogic S805X2 chip) that may reveal some nearly imperceptible lag when navigating the UI, especially right after startup. But it provides smooth performance when it matters most: playing content.
The software streamlines things as much as possible, including a Live tab that draws from a handful of streaming services to let you resume or jump into new content straight from the home screen. Google advertises support for over 700,000 movies and TV episodes, and you can install just about any streaming service you can name, including Netflix, Prime Video, Apple TV+, Max, Peacock, YouTube TV and much more.
Photo by Sam Rutherford / Engadget
The remote includes a built-in mic and a dedicated Google Assistant button. The latter lets you control TV content and smart home features with your voice. It also has standard IR integration, so you can control some of your TV’s essential functions and potentially leave its remote in a drawer.
If you don’t want to wait for Amazon’s shipping, Target has the same deal. If your local stores have them in stock, you could get instant gratification while saving five percent on your purchase if you check out using a Target RedCard.