A fertility treatment that has been used for 45 years is once again available in Alabama. In vitro fertilization (IVF) procedures in the state were halted after the Alabama Supreme Court ruled in February that embryos created using the technique have the same rights as children. A new state law shielding clinics from legal fallout has allowed IVF treatments to resume — but clinicians and scientists in the United States who work with human embryos are not fully reassured, and fear a growing number of legal and constitutional challenges.
Physicians are especially worried that officials might cap the number of embryos that can be created in each treatment cycle, which often entails the fertilization of several eggs. Lawmakers could also ban the freezing of backup embryos, which doctors say would result in less efficient and more expensive treatments.
The fact that IVF is so popular in the United States could protect the practice to some extent, says Hank Greely, director of the Center for Law and the Biosciences at Stanford University in California. But research using human embryos — which is already restricted or even banned in some states — might be an easier target for anti-abortion advocates, some of whom contend that life begins at conception and that discarding an embryo is akin to killing a child. “From a researcher’s perspective, there’s reason to be worried,” he says.
Until recently, though, IVF seemed protected, says Eli Adashi, a reproductive endocrinologist at Brown University in Providence, Rhode Island. “Because in so many ways you could look at IVF as a pro-life proposition, IVF was by and large left alone,” he says.
That changed after three couples in Alabama filed a lawsuit against a fertility clinic for the accidental destruction of their frozen embryos. The suit claimed that the loss violated the 1872 Wrongful Death of a Minor Act, a state law that allows family members to sue when their child dies owing to negligence.
People rally for IVF rights outside the Alabama State House after a state supreme court ruling led clinics to put IVF treatments on hold. Credit: Stew Milne/AP for RESOLVE: The National Infertility Association via Alamy
The Alabama Supreme Court ruled on 16 February that the act covers “all unborn children”, including embryos outside the uterus. The decision meant that the lawsuit was valid — and that clinics and doctors could be liable for the destruction of embryos created by fertility procedures. Clinics suspended IVF treatments, and the resulting backlash prompted lawmakers to quickly pass legislation on 6 March to provide immunity to providers and patients for the destruction of embryos.
Several states, including Alabama, have laws conferring rights to embryos. Because there is no federal law protecting IVF, state laws could potentially be targeted at the technique, which often involves discarding embryos, such as those with genetic abnormalities.
Complicated politics
The Alabama ruling was a warning shot, Greely contends. It signalled that some anti-abortion forces are now interested in protecting embryos outside the womb. If “you’ve just won this great victory in overturning Roe v. Wade, you’re going to be looking for what’s next”, he says.
Mary Szoch, the director of the Center for Human Dignity at the Family Research Council, an anti-abortion organization in Washington DC, didn’t directly answer a written question from Nature about whether anti-abortion organizations are pushing for restrictions on IVF in the United States. The council recognizes the value of the lives of children born as a result of the procedure, she says. However, “millions more lives have been lost as the result of human life being made in the laboratory”, she adds. “Society must stop viewing these embryos as mere products.”
It’s not clear how far anti-abortion groups will go to campaign to restrict IVF. These groups have consistently opposed the destruction of embryos for any reason, says Jennifer Holland, a historian at the University of Oklahoma in Norman. But they have been cautious about advocating against IVF because of concerns about whether “this erodes the kind of political support that they’ve gotten from the Republican Party”, Holland says. Many Republican leaders have openly supported IVF.
Eroding efficiency
Even if IVF is not banned, clinicians worry about the prospect of restrictions on disposal of embryos. Other countries have imposed such constraints: a law in Italy, for example, mandated that only three embryos could be produced per round of IVF, and required all embryos to be transferred “as soon as possible”. “It was very inefficient, and they finally overturned that,” says Eric Forman, a reproductive endocrinologist at Columbia University in New York City.
If embryo freezing is considered legally risky, “couples will limit the number of eggs retrieved or inseminated [per treatment cycle] to avoid any frozen embryos”, says Nanette Santoro, chair of obstetrics and gynecology at the University of Colorado in Aurora. That would make each round of IVF much less efficient, she notes, which could raise the number of cycles couples undergo, drive up costs and increase exposure to risks from the procedure and fertility drugs.
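Santoro’s efficiency point can be made concrete with a toy calculation. In the sketch below, the per-embryo success probability and the embryo counts are illustrative assumptions, not clinical figures; the model only shows how capping the embryos available per retrieval inflates the expected number of retrieval cycles.

```python
def expected_retrievals(embryos_per_retrieval: int, p_embryo: float = 0.35) -> float:
    """Toy model: each embryo independently leads to a pregnancy with
    probability p_embryo, and every embryo from one retrieval can be used
    (fresh or frozen). Returns the expected number of retrieval cycles,
    i.e. the mean of a geometric distribution."""
    p_cycle = 1 - (1 - p_embryo) ** embryos_per_retrieval  # P(a cycle succeeds)
    return 1 / p_cycle

# With five usable embryos per retrieval the expectation is ~1.1 cycles;
# restricted to a single embryo per retrieval it rises to ~2.9 cycles.
```

Because each retrieval carries its own cost, drug regimen and procedural risk, the near-tripling in this toy model is the mechanism behind the concerns quoted above.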
Forman is also concerned about potential restrictions on genetic testing of embryos, which helps providers to select embryos that are more likely to result in a viable pregnancy and avoid certain genetic conditions. “I worry that [would] result in fewer healthy babies from this technology,” he says.
Fears of restrictions
The study of human embryos is already heavily restricted in the United States. Since 1996, federal funding for research involving the creation or destruction of human embryos has been barred. In 11 states, human embryo research is banned. For scientists, the Alabama ruling sounded an alarm about the prospect of increased constraints.
“I’m concerned, obviously, about what the consequences of this decision are going to be,” says Ali Brivanlou, an embryologist at The Rockefeller University in New York City who conducts research involving human embryonic stem cells.
He says that he understands why people might find it easier to support IVF than human embryo research. With IVF, “you’re trying to help couples to have kids who otherwise would not have kids, so it’s easier to accept why this technology is important,” he says. That doesn’t take into consideration, however, “the fact that IVF could not exist without basic research and that most other aspects of medical practice are derived from the basic research approach”.
Sony will soon release a new pair of mid-range wireless headphones capable of delivering bassy audio on the same level as high-end models. At least, that’s what this latest leak from industry insider Roland Quandt would lead us to believe. He recently spilled the beans on what may be the Sony WH-ULT900N headphones, which Quandt claims are the successors to the WH-XB910N.
This information comes from German tech news site WinFuture, but we’re going to be using a translation provided by NotebookCheck. According to the report, the headphones will sport a feature called “Ultra Power Sound,” giving them a frequency range stretching from 5 Hz up to 20 kHz.
Having a low end of 5 Hz is particularly notable because it’s close to what you see on high-end headphones such as Sony’s WH-1000XM4. NotebookCheck points out this would allow the WH-ULT900N to output “more bassy sound than many of its competitors.”
(Image credit: Roland Quandt/WinFuture)
Images within the leak also give us a first look at the headphones’ design. At a glance, they resemble the high-end WH-1000XM4, albeit with a few tweaks: the USB-C ports on the outside of the cups are gone, replaced by what appears to be a speaker grille, and the company logo sits in a different position.
(Image credit: Roland Quandt/WinFuture)
Specs
Quandt’s leak goes on to list some of the WH-ULT900N’s specifications. Inside the cups will be 40mm drivers, and the battery will last up to “50 hours of music playback”. That number drops to 30 hours with ANC (active noise canceling) enabled. The headphones will use Sony’s own LDAC codec to deliver high-resolution audio over wireless connections.
With Bluetooth 5.2, the WH-ULT900N supports Bluetooth Multipoint. This will let future owners hop between simultaneously connected audio sources.
No word yet on when the Sony WH-ULT900N will launch, but when it does, the headphones should cost around $215/£170 and come in three colors: Black, White, and Forest Grey.
Apple’s M3 Ultra chip may be designed as its own standalone chip, rather than being made up of two M3 Max dies, according to a plausible new theory.
The theory comes from Max Tech’s Vadim Yuryev, who outlined his thinking in a post on X earlier today. Citing a post from @techanalye1 which suggests the M3 Max chip no longer features the UltraFusion interconnect, Yuryev postulated that the as-yet-unreleased “M3 Ultra” chip will not be able to comprise two Max chips in a single package. This means that the M3 Ultra is likely to be a standalone chip for the first time.
This would enable Apple to make specific customizations to the M3 Ultra to make it more suitable for intense workflows. For example, the company could omit efficiency cores entirely in favor of an all-performance core design, as well as add even more GPU cores. At minimum, a single M3 Ultra chip designed in this way would be almost certain to offer better performance scaling than the M2 Ultra did compared to the M2 Max, since there would no longer be efficiency losses over the UltraFusion interconnect.
Furthermore, Yuryev speculated that the M3 Ultra could feature its own UltraFusion interconnect, allowing two M3 Ultra dies to be combined in a single package for double the performance in a hypothetical “M3 Extreme” chip. This would enable superior performance scaling compared to packaging four M3 Max dies and open the possibility of even higher amounts of unified memory.
Little is currently known about the M3 Ultra chip, but a report in January suggested that it will be fabricated using TSMC’s N3E node, just like the A18 chip that is expected to debut in the iPhone 16 lineup later in the year. This means it would be Apple’s first N3E chip. The M3 Ultra is rumored to launch in a refreshed Mac Studio model in mid-2024.
A pig kidney is unpacked for transplant into 62-year-old Richard Slayman of Massachusetts. Credit: Massachusetts General Hospital
Early success in the first transplant of a pig kidney into a living person has raised researchers’ hopes for larger clinical trials involving pig organs. Such trials could bring ‘xenotransplantation’, the use of animal organs in human recipients, into the clinic.
The recipient of the pig kidney was a 62-year-old man with end-stage renal failure named Richard Slayman. He is recovering well after his surgery on 16 March, according to his transplant surgeon. The kidney was taken from a miniature pig carrying a record 69 genomic edits, which were aimed at preventing rejection of the donated organ and reducing the risk that a virus lurking in the organ could infect the recipient.
The case demonstrates that, at least in the short term, these organs are safe and function like kidneys, says Luhan Yang, chief executive of Qihan Biotech in Hangzhou, China, who is also a founder of the biotech firm that produced the pigs, eGenesis in Cambridge, Massachusetts. The company is in discussions with the US Food and Drug Administration (FDA) about planning clinical trials for its programmes for transplanted pig kidneys, livers and paediatric hearts, says Wenning Qin, a molecular biologist at eGenesis.
Hopes for full-scale tests
All US transplants of animal organs into living humans, including Slayman’s, received FDA approval as a ‘compassionate use’, granted in narrow cases when a person’s life is at risk and there are no other treatments. But Yang hopes that the new results will push the FDA towards approval of full-scale clinical trials. Xenotransplants can “provide hope and life for patients and their families”, Yang says.
The surgery also brings clinicians closer to relieving the shortage of life-saving human organs by using animal organs. In the United States alone, there are nearly 90,000 people waiting for a kidney transplant, and more than 3,000 people die every year while still waiting. “Even though organ donation rates have increased massively, we still need millions of organs to transplant into patients,” says Wayne Hawthorne, a transplant surgeon at the University of Sydney in Westmead, Australia.
“This is great news for the field,” says Muhammad Mohiuddin, a surgeon and researcher at the University of Maryland School of Medicine in Baltimore, who led the first pig-heart transplant in a living person. Mohiuddin, who is also president of the International Xenotransplantation Association, says clinical trials would produce much-needed rigorous data about the safety and efficacy of xenotransplantation.
The operation to give Slayman a pig kidney took four hours, says Tatsuo Kawai, one of the transplant surgeons who conducted the surgery. On his right side, Slayman retained a donated human kidney that Kawai had transplanted into him in 2018, but that had begun to fail. As a result, Slayman had resumed regular dialysis, but he developed complications that required frequent hospital visits, which made him a candidate for xenotransplantation.
Surgeons in Boston, Massachusetts, perform the first transplant of a pig kidney into a living person. Credit: Massachusetts General Hospital
Slayman’s newest kidney came from a pig that had undergone CRISPR–Cas9 genome editing by eGenesis’s scientists to modify 69 of the animal’s genes. Monkeys called cynomolgus macaques (Macaca fascicularis) that received the company’s pig organs with these same genomic edits survived for months to years1. Qin says she is hopeful that Slayman’s xenotransplanted kidney will survive for just as long or even longer, particularly because her team devised the edits with humans, not monkeys, in mind.
The edits included removal of three genes that contribute to the production of a protein on the surface of pig cells. The human immune system attacks cells bearing this protein, which it takes as the hallmark of a foreign invader. Seven genes were added because they produce human proteins that help to prevent organ rejection.
Antiviral measures
Another 59 genetic changes were made to inactivate viruses embedded in the pig genome. These changes address the risk that the viruses will become active once in the human body. So far, researchers have not seen this happen in transplants to living humans, people who are clinically dead or non-human primates, says Yang. But some laboratory experiments have shown that these viruses can be transmitted from pig tissue to human cells and to mice with compromised immune systems2.
The first genetically modified pig heart to be successfully transplanted into a living person turned out to be tainted with a latent virus, which might have contributed to the organ’s eventual failure3. A major concern for the FDA ahead of approving the operation was the risk that pig pathogens could infect the recipient, Kawai says. eGenesis tests its pigs on a regular basis for pathogens including porcine cytomegalovirus, which can linger quietly in its animal hosts, Qin says.
Before the procedure, the researchers collected and froze blood samples from Slayman, his family members and his surgeons. If Slayman develops an infection, researchers can test these stored samples to trace the source of the pathogen, says Kawai.
Slayman will continue to be tested regularly for pathogens, and if he develops symptoms, his family members and caregivers will also be tested.
These precautions are important because a healthy pig is very different to an immunocompromised individual, says Yang. Even though no viruses, bacteria or fungi were detected in the pigs prior to the transplant, they could still be present and grow in an immunocompromised person, she says. “We don’t know what we don’t know.”
Healthy kidney
Kidneys filter out toxic substances from the body, produce urine and help to control blood pressure. Once the surgeons restored blood flow to the transplanted pig organ, it immediately became pink and started to produce urine, says Kawai, a sign that the transplant had been successful.
Another metric of kidney health is the level in the blood of a chemical compound known as creatinine — high levels indicate that the kidney is not performing its waste-filtering role well. Kawai says that before the transplant, Slayman’s creatinine level was 10 milligrams per decilitre, but it had fallen to 2.4 by the fourth day. He hopes it will drop to 1.5, which is within the normal range.
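For readers tracking the numbers, the figures Kawai gives amount to a 76% fall in four days. The tiny helper below uses only the article’s values (10 and 2.4 mg/dL, with 1.5 as the target he mentions):

```python
def creatinine_drop_pct(before_mg_dl: float, after_mg_dl: float) -> float:
    """Percentage fall in serum creatinine between two measurements (mg/dL)."""
    return 100 * (before_mg_dl - after_mg_dl) / before_mg_dl

drop = creatinine_drop_pct(10.0, 2.4)  # 76.0% fall by day four
remaining = 2.4 - 1.5                  # still ~0.9 mg/dL above the stated target
```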
“It seems like so far this kidney is functioning the way that it is supposed to,” Mohiuddin says.
Slayman could be released from the hospital as early as tomorrow, Qin says. He is receiving immunosuppressive medications, and has so far shown no signs of organ rejection. Qin says that eGenesis’s goal is to find the right combination of genetic edits in pigs to make it unnecessary for organ recipients to take immunosuppressive drugs, which weaken the body’s ability to fight off pathogens.
“There was always a saying that xenotransplantation is around the corner, and will always be,” Qin says. “Well, now we have someone among us that carries a porcine kidney — it’s just amazing.”
March 22, 1993: Apple launches the PowerCD, the first device from the company that doesn’t require a computer to work.
A portable CD player that also works as an external CD drive for Macs, it offers a glimpse of the extremely lucrative path Apple will follow a decade later. However, the PowerCD itself will ultimately fail in the marketplace.
The bad good old days at Apple
While the 1990s proved the least-profitable period in Apple’s history, I’ve always had a soft spot for the company’s products from that era. Part of this is, admittedly, nostalgia for the Apple devices I saw growing up.
True, most of those products flopped. But it’s remarkable how many of them wound up, with a few tweaks, laying the foundations for Apple’s massive success in the following decade.
The PowerCD did exactly that. Launched for $499 in 1993 — the equivalent of more than $1,000 today — the standalone CD player also served as a Mac peripheral. Available alongside the AppleDesign Powered Speakers, the PowerCD reminded some of Sony’s portable Discman CD player. However, the PowerCD did much more. It could read Kodak photo CDs and data CDs as well as play regular audio discs. It even came with its own remote control.
PowerCD: Apple’s spin on the CD player
The PowerCD was a pretty neat product, despite its lack of success. Photo: Jonathan Zufi
The PowerCD, which worked without a computer when powered by six AA batteries, wasn’t actually manufactured by Apple. Instead, it was a rebranded Philips CDF 100 (which Kodak also sold as the PCD 880).
Still, Apple added a few neat touches that made the PowerCD memorable. If plugged into a Mac via SCSI, the device worked as a peripheral to provide an external CD-ROM drive. At a time when not all Macs came with CD drives, the device provided an easy option for upgrading a Mac’s functionality.
Sadly, like so many of the aforementioned great Apple products from the ’90s, the PowerCD failed to catch on with consumers. It wound up being the only product released by the Apple design subgroup, Mac Like Things, which Cupertino established following the launch of the Newton. Apple discontinued the PowerCD just a few years after the device’s launch.
The use of at-home diagnostic tests soared during the omicron wave of the coronavirus pandemic. Credit: Tang Ming Tung/Getty
During the COVID-19 health emergency, two strategies for detecting coronavirus infections were commonly adopted around the world.
Part of Nature Outlook: Medical diagnostics
Initially, in countries equipped with the necessary laboratory infrastructure, nasal swabs were analysed by polymerase chain reaction (PCR) — a method known for its sensitivity, but also for being slow and expensive. People often endured long waits for tests.
Subsequently, rapid antigen tests gained favour, owing to their speed, low cost and ease of use, despite being less precise at identifying positive cases.
It was a trade-off that public-health officials and individuals grappled with: balancing the need for timely information at an affordable price against the risk of false negatives.
But there was a third way. In countries including Israel, India, the United States and New Zealand, portable tests became available that combined the molecular precision of PCR with the expediency of rapid antigen kits (also known as lateral flow assays).
Like PCR, these ‘isothermal’ tests amplify small segments of the virus’s genetic material to detectable levels. However, they streamline the process by operating at a consistent temperature, eliminating the need for the repetitive heating and cooling cycles of PCR. This not only simplifies the equipment required and eliminates the need for centralized laboratories, but also accelerates the testing process from days to less than half an hour.
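The speed advantage of a single holding temperature can be seen in a back-of-the-envelope timing model. The hold and ramp times below are illustrative assumptions rather than specifications for any real instrument, and the days-long turnaround of laboratory PCR mostly reflects sample transport and queueing, which this sketch ignores; it captures only the instrument time.

```python
def pcr_instrument_minutes(cycles: int = 40, denature_s: float = 15,
                           anneal_s: float = 30, extend_s: float = 30,
                           ramp_s: float = 20) -> float:
    """Instrument time for thermal-cycling PCR: three temperature holds per
    cycle plus three ramps between them. All timings are illustrative."""
    per_cycle_s = denature_s + anneal_s + extend_s + 3 * ramp_s
    return cycles * per_cycle_s / 60

def isothermal_instrument_minutes(hold_min: float = 30) -> float:
    """An isothermal assay is a single incubation at one temperature,
    with no ramping overhead."""
    return hold_min

# Roughly 90 minutes of cycling versus a 30-minute hold in this comparison.
```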
“It is providing near-PCR-level sensitivity with antigen usability,” says Nathan Tanner, head of the applied molecular biology division at the firm New England Biolabs in Ipswich, Massachusetts, which produces kits for running these kinds of constant-temperature (isothermal) tests in research laboratories. The main downside, Tanner says, is price: isothermal tests generally cost about US$50 per sample. That’s roughly the same as PCR in most Western countries, but about 5–10 times the cost of rapid antigen assays.
Despite the premium price, these speedy genetic tests secured their place across diverse and critical settings during the pandemic. Care homes, schools, prisons, remote health clinics and even professional sports organizations — sectors in which people were willing to pay more for dependable results — adopted the technology.
Then came the omicron variant. This highly transmissible version of the coronavirus prompted a flood of COVID-19 cases and deaths, leading to a spike in global demand for accurate testing methods in late 2021 and into 2022. Developers of at-home molecular tests seized the moment, ramping up manufacturing capacity and launching intense advertising campaigns.
Daily usage of these test kits soared into the tens of thousands in countries such as the United States, where the at-home assays were available. But as infection rates declined, so did demand for these products. This downturn was further accelerated by initiatives from various national governments that provided free rapid antigen tests during the omicron surge. The market for more expensive COVID-19 diagnostics collapsed, forcing manufacturers of isothermal tests to shift their focus to other disease areas. Many failed and went out of business.
Consider the cautionary tale of Lucira Health in Emeryville, California — once a leader in isothermal diagnostics. Looking to carve out a new niche for its technology, Lucira pursued regulatory approval for a dual-purpose test designed to simultaneously identify and discriminate between COVID-19 and influenza. In August 2022, authorities in Canada gave this two-in-one test the go-ahead.
But regulators in the United States were slow to provide an approval. According to Lucira’s co-founder and former chief technology officer Debkishore Mitra, the US Food and Drug Administration (FDA) wanted to see extra clinical data, along with product design changes, “for reasons we did not understand”.
Flu season then arrived, and Lucira’s massive manufacturing infrastructure, built up during the omicron COVID-19 wave, sat largely idle. “It was a frustrating and confusing period of time,” says Mitra. Lucira ultimately ran out of money and filed for bankruptcy on 22 February 2023. A mere two days later, the FDA issued emergency authorization for the company’s combined flu and COVID-19 test.
“If this was not a tragedy, I would definitely consider it a comedy,” Mitra says.
Lucira’s efforts were not for naught, however. Although the company no longer exists, its test lives on, and is now marketed by the pharmaceutical giant Pfizer, which purchased Lucira’s assets at a bankruptcy auction in April 2023. For around $50, anyone can buy the Lucira by Pfizer COVID-19 & Flu Home Test. And that product could soon have competition.
Building on technological advances made in response to COVID-19, many companies are now developing isothermal genetic tests that can diagnose a wide array of respiratory diseases, sexually transmitted infections and more. These products aim to provide precise and prompt diagnostic information, enabling people to quickly seek appropriate medical treatment.
“We are in a new era,” says Wilbur Lam, a paediatric haematologist and biomedical engineer at Georgia Institute of Technology, Atlanta. “The pandemic has really brought point-of-care and at-home testing into its own.”
The challenge now, he adds, lies in pinpointing the most relevant clinical applications and, crucially, in establishing sustainable business models for diagnostic-test providers. Both are essential steps to ensure that these technologies continue to improve disease management in a post-pandemic world.
In the loop
Isothermal methods were developed in the early 1990s, shortly after the invention of PCR. But the main technique now in use emerged at the turn of the millennium. That’s when researchers at Eiken Chemical, a manufacturer of clinical diagnostic tools in Tokyo, described how to eliminate the need for thermal cycling1.
From left to right: the Lucira by Pfizer test and isothermal tests by the firms Detect and Aptitude. Credit: Nathan Frandino/REUTERS; Detect, Inc.; Black Bronstad
There were two key components to the method, known as loop-mediated isothermal amplification (LAMP). These were the use of more primers — short, single-stranded pieces of DNA that help to jump-start the gene-amplification process — and a special kind of DNA-extending enzyme.
A typical PCR reaction uses two primers, which require repeated bouts of heating and cooling to bind their targets and extend copied DNA strands. But the Eiken team demonstrated that increasing the number of primers and using a specialized enzyme allowed the LAMP method to extend DNA at a constant temperature. It worked best at around 65 °C, and produced a single, ladder-like block of DNA, with dumb-bell-shaped rungs that double back on themselves again and again.
There are other isothermal techniques, some of which are found in commercially available COVID-19 tests. But many are protected by intellectual-property rights, says Paul Yager, a bioengineer and diagnostics inventor at the University of Washington in Seattle. By comparison, the foundational patents surrounding LAMP have all expired. What’s more, LAMP works well with minimal sample preparation on crude specimens, such as nasal swabs. These advantages “seem to drive people into the arms of LAMP”, Yager says.
Even with the same core technology underpinning them, the LAMP-based tests on the market are not all the same. They differ in terms of proprietary reagents and in how assay results are identified. Methods for detecting LAMP readouts include fluorescent probes, pH-induced colour shifts, electrochemical assays and CRISPR-mediated recognition strategies. Despite these differences, all of the tests generally achieve comparable levels of accuracy and performance.
A more important distinction, therefore, lies in aspects of the device design that substantially affect the user experience. Although certain products require compact, reusable pieces of hardware to interpret results from disposable test cartridges, others — including the Lucira by Pfizer test — offer the convenience of fully integrated, single-use kits.
According to Mitra, Lucira adopted this all-in-one design strategy because it thought the up-front cost of equipment would turn off would-be buyers. “That was our vision from day one,” he says. At their high point during the pandemic, at-home test readers cost upwards of $250.
But prices have come down drastically — for around $50, it’s now possible to buy a machine from a company such as Aptitude Medical Systems in Goleta, California, and then spend just $25 on an individual test (less when buying in bulk). Aptitude’s platform also has another advantage: it’s compatible with saliva. Saliva samples are simpler to collect than nasal swabs, and so the likelihood of an error during sample acquisition is lower.
But even $25 exceeds what many people are willing to spend on a test, and not all medical-insurance companies cover the cost. Rapid antigen tests now retail for just $5 or less. And although certain at-risk groups, such as people with a compromised immune system, might be willing to shell out the extra for the diagnostic accuracy of isothermal tests, most people are not.
Economic considerations
Price sensitivity explains why the firm Detect, another isothermal-test developer that made waves in the early days of the pandemic, stopped offering its at-home COVID-19 diagnostic test about a year after its launch. The company, which is based in Guilford, Connecticut, instead opted to concentrate on making a LAMP-based platform that could be run in physicians’ offices rather than in people’s homes.
Although the technical aspects of testing in either setting are comparable, the commercial implications of this decision are considerable. Detect is able to leverage an established path for test reimbursement, particularly in the United States, where insurance companies seldom cover the expenses of at-home diagnostics but do reimburse tests ordered by physicians. “The economics just make a lot more sense,” says Eric Kauderer-Abrams, co-founder and chief executive of Detect.
Tests run in physicians’ offices can be less convenient for would-be users, however, especially those who are loath to seek medical attention. That is why many researchers continue to push for wider adoption of at-home molecular tests.
Enabling people to test at home holds particular promise for the diagnosis of sexually transmitted infections. Personal fears and societal taboos often present obstacles to effective screening and treatment for these infections. With at-home diagnostics, “people can do it in the privacy of their own homes”, says Deborah Dean, an infectious-disease specialist at the University of California, San Francisco, who previously collaborated with Lucira to study prototype LAMP tests for gonorrhoea and chlamydia2. “They don’t have the stigma of going to a clinic, and having everybody else in the waiting room wondering why they’re there.”
Juliet Iwelunmor sees opportunities to harness the power of LAMP testing in low-resource settings. A global-health researcher at Washington University School of Medicine in St. Louis, Missouri, Iwelunmor is leading an initiative to introduce LAMP testing for human papillomavirus (HPV), a leading cause of cervical cancer, in Nigeria, where she grew up. An estimated 3.5% of women in the country harbour HPV infections, but less than 15% of the population are ever tested. Iwelunmor’s goal is to reduce the per-test cost to below $5. “We’re trying to make LAMP as cheap and easy as possible,” she says.
Other efforts are aiming to bring LAMP-based assays to sub-Saharan Africa for two mosquito-borne viral diseases: Zika and chikungunya.
The benefits of LAMP testing can also be seen in countries with greater resources but fragmented health-care systems. A notable example is the Home Test to Treat programme, which launched in 2023 with funding from the US National Institute of Biomedical Imaging and Bioengineering (NIBIB) with the goal of distributing free Lucira by Pfizer test kits to vulnerable communities across the United States. Those who test positive for the viruses can then receive free telehealth consultations and, when appropriate, have antiviral treatments such as Paxlovid (nirmatrelvir and ritonavir) or Tamiflu (oseltamivir) delivered to their homes or local pharmacies.
Before the advent of at-home molecular testing, a definitive diagnosis of influenza relied on a PCR assay, with lab confirmation often required to initiate treatment with Tamiflu — a drug that is most effective when administered shortly after the onset of symptoms. Few people ever get tested, however, and antivirals are an underused weapon.
The distribution of Lucira by Pfizer tests, paired with telemedicine services, removes this barrier. “It allows them to receive care without going to a clinician’s office,” says Apurv Soni, a digital-health researcher at the UMass Chan Medical School in Worcester, Massachusetts, who is leading the analysis of the Home Test to Treat clinical data.
The dual-purpose LAMP test offers the unprecedented ability to quickly differentiate between two respiratory viruses that often present with similar symptoms, yet require distinct treatment approaches. “That’s a tremendous advantage,” Soni says. “You can pick up infections early on and initiate the appropriate treatment in a timely manner.”
So far, the Home Test to Treat programme has distributed LAMP tests to tens of thousands of study participants, identifying infections early and providing antiviral medication to many individuals — evidence, according to NIBIB director Bruce Tromberg, of the technology’s public-health benefit. But will that be enough to convince consumers and insurance providers to pick up the diagnostic tab when the government is not footing the bill? “This is one of the key questions,” Tromberg says. “Now that we’ve created a consumer awareness, will it be sustainable?”
Mitra left Lucira in November 2022 and no longer works on isothermal diagnostics. He now leads technology development at a cannabis-testing firm called Hound Labs, in Fremont, California. Despite his shift away from LAMP testing, he continues to closely monitor the industry that his innovations helped to unleash.
“My hope is that at-home testing becomes routine and regular beyond the pandemic,” Mitra says. Yet his tenure at Lucira has imbued him with a pragmatic outlook on the adoption of isothermal tests. He recognizes that the intricacies and bureaucratic hurdles in health-care systems often dictate the use of new technologies more than their clinical merits alone. “Technology,” Mitra observes, “never gets used in a vacuum.”
Microsoft recently brought in a new feature for DirectX 12 that could considerably improve gaming performance, and thanks to AMD, we have a clearer idea of what it might mean for frame rates in the future.
This is Work Graphs, a technology previously in testing but now officially introduced for DX12. The feature aims to reduce CPU bottlenecks in PC games by shifting some of the heavy processing work from the processor to the GPU. It also reduces communication between the CPU and the graphics card, cutting out overhead on that front, too.
As Tom’s Hardware noticed, AMD has announced that it’s using draw calls and mesh nodes as part of Work Graphs to usher in better gaming performance, and has provided us with an idea of what impact those advancements will (or should) have.
We are told that a benchmark using an RX 7900 XTX GPU produced a major performance gain for Work Graphs with mesh nodes compared with the frame rate without them: in the latter case, performance was 64% slower.
This is while “using the AMD procedural content Work Graphs demo with the overview, meadow, bridge, wall, and market scene views.”
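It is worth being careful with that framing, because "64% slower" is not the same claim as "64% faster". A quick illustrative calculation (our arithmetic, not a figure from AMD's post) shows what the quoted number implies:

```python
# If the baseline (no mesh nodes) is 64% slower than the Work Graphs
# result, the baseline runs at only 36% of the accelerated frame rate.
slower_fraction = 0.64
baseline_ratio = 1 - slower_fraction   # baseline fps as a fraction of accelerated fps
speedup = 1 / baseline_ratio           # accelerated fps relative to baseline

print(round(baseline_ratio, 2))  # 0.36
print(round(speedup, 2))         # 2.78
```

In other words, if the figure holds, the mesh-node path would be roughly 2.8x the baseline frame rate, a much bigger jump than a casual reading of "64%" suggests.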
AMD has a demo video to watch in its GDC 2024 blog post on this development.
Analysis: Managing expectations
This is exciting stuff on the face of it; comments from a couple of game developers have already tempted us with how Work Graphs could pan out. The feature has serious potential for faster fps, and this benchmark from AMD illustrates the gains that may be in the pipeline, may being the operative word.
As ever, a single benchmark on a particular test system is of limited use in gauging the wider implications of Work Graphs, so let’s keep a level head here. As we’ve mentioned previously, there could be a broad range of gains dependent on all sorts of factors, and we have to remember that Work Graphs itself does use resources to function – this isn’t a free boost.
But by all accounts, it could be a large step forward, and Work Graphs is an enticing prospect for the future of smoother gaming, there’s no denying that. More testing and benchmarking, please…
The age of humanoid robots could be a significant step closer thanks to a new release from Nvidia.
The computing giant has announced the launch of Project GROOT, its new foundational model aimed at helping the development of such robots in industrial use cases.
Revealed at Nvidia GTC 2024, the company says its new launch will enable robots to be smarter and more functional than ever before – and they’ll do so by watching humans.
We are GROOT?
Announcing the launch of Project GROOT (standing for “Generalist Robot 00 Technology”) on stage at Nvidia GTC 2024, Jensen Huang, the company’s founder and CEO, revealed that robots powered by the platform will be designed to understand natural language and emulate movements by observing human actions.
This will allow them to quickly learn coordination, dexterity and other skills in order to navigate, adapt and interact with the real world – and definitely not lead to a robot uprising at all.
Huang went on to show off a number of demos in which Project GROOT-powered robots carried out a range of tasks, from XXX, showing their potential.
“Building foundation models for general humanoid robots is one of the most exciting problems to solve in AI today,” Huang said. “The enabling technologies are coming together for leading roboticists around the world to take giant leaps towards artificial general robotics.”
The importance of Project GROOT is also highlighted by the fact that Nvidia has built a new computing system, Jetson Thor, designed specifically for humanoid robots.
The SoC includes a GPU based on the latest Nvidia Blackwell architecture, with a transformer engine capable of delivering 800 teraflops of AI performance, allowing robots to run multimodal generative AI models such as GR00T.
The company also revealed upgrades to its Nvidia Isaac robotics platform, designed to make robotic arms smarter, more flexible and more efficient than ever – making them a much more appealing choice for factories and industrial use cases across the world.
This includes new collections of pretrained robotics models, libraries and reference hardware aimed at enabling faster learning and greater efficiency.
In today’s era of data-driven decision-making, the marriage of machine learning and open banking data is transforming financial services. In recent years, we’ve seen its successful applications across various domains, including enhancing fraud protection through the analysis of extensive datasets with sophisticated algorithms that identify patterns indicative of fraudulent activity.
The technologies have also played pivotal roles in algorithmic trading, with real-time analysis of market trends, and in regulatory compliance, where they have helped financial institutions meet and navigate complex regulatory requirements. The financial services industry has shown itself to be dynamic, and the adoption of evolving technologies has played a core part in that evolution.
Now more than ever, machine learning and open banking data are also poised to shake up the lending landscape. The convergence of these technologies presents a real opportunity for lenders to better understand their customers, personalize their products as a result, and foster a more transparent and responsive lending ecosystem.
In this article, I delve into my thoughts on the three key ways machine learning technology is redefining the game for the lending industry, and where the opportunities are to offer mutually beneficial outcomes for both lenders and customers alike.
Nick Allen
Chief Technology Officer at Aro.
Marrying Open Banking data with machine learning
One prevailing trend that the lending industry can take advantage of is the increasing customer demand for personalized products. The fusion of machine learning and open banking data is becoming a linchpin for how lenders engage with their customers, increase satisfaction and build brand loyalty. Moreover, pairing open banking data with machine learning algorithms enables lenders to gain unparalleled, deeper insights into customer profiles. With access to rich insight drawn from roughly a hundred individual attributes (including data from utility payments, rental history, public records and spending habits), lenders can assess their customers’ creditworthiness more accurately and customize financial products to their specific needs and financial capabilities. For example, this could lead to the introduction of credit options that are presently unavailable, or even to lower interest rates for customers who connect their data and demonstrate sustainable affordability.
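As a purely hypothetical sketch of how such attribute-based assessment can work, the snippet below computes a logistic-style affordability score over a few open-banking attributes. The feature names, weights and example values are invented for illustration; they are not drawn from any real lender's model:

```python
import math

# Hypothetical weights over a handful of open-banking attributes.
# Positive weights raise the score; negative weights lower it.
WEIGHTS = {
    "on_time_utility_ratio": 2.0,   # share of utility bills paid on time
    "rent_to_income_ratio": -3.0,   # a heavier rent burden lowers the score
    "months_of_history": 0.05,      # longer open-banking history helps
}
BIAS = -1.0

def affordability_score(attributes: dict) -> float:
    """Map weighted attributes to a 0-1 score via the logistic function."""
    z = BIAS + sum(WEIGHTS[k] * attributes.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

applicant = {
    "on_time_utility_ratio": 0.95,
    "rent_to_income_ratio": 0.30,
    "months_of_history": 24,
}
print(round(affordability_score(applicant), 2))  # ≈ 0.77
```

Real systems would learn such weights from historical data across far more attributes, but the shape of the calculation, weighted evidence squashed into a probability-like score, is the same.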
What’s more, the incorporation of open banking data introduces a layer of transparency and accuracy to the credit matching experience. Not only do borrowers benefit from a more holistic evaluation that goes beyond the archaic credit scoring approach, but they also get a fairer representation of their financial standing, based on real-time, accurate data. This not only instills more confidence in the lending process, but also boosts financial inclusivity by offering opportunities to individuals who may have limited credit history despite exhibiting responsible financial behaviors.
Improved personalized credit matching
The adoption of machine learning and its advantages should also extend beyond lenders’ internal operations. While borrowers now anticipate tailored offerings from lenders that align precisely with their unique financial requirements and capabilities, achieving this high degree of customization demands more than just implementing the latest cutting-edge technology. In fact, it requires a nuanced understanding of borrowers’ behaviors and preferences, emphasizing the importance of a customer-centric approach that goes beyond assessing surface-level data. Advanced machine learning algorithms are now capable of evaluating customer profiles against available financial offers, boosting offer acceptance and completion rates. This approach levels the playing field for lenders, keeping the best interests of customers at the forefront.
Before now, many customers were excluded from accessing credit services through no fault of their own. Thin credit files, biased affordability calculations and one-size-fits-all credit decisioning have left many unable to access credit they can afford, or matched with unsuitable products. Machine learning algorithms, however, bring objectivity and speed to this process. Notably, they can streamline loan application processes by rapidly analyzing open banking data, improving both the overall customer experience and lenders’ efficiency. For instance, with machine learning, credit decisions have gone from taking days to a matter of hours.
In addition to individual credit assessments, machine learning algorithms empower lenders to stay ahead of dynamic market conditions. In particular, lenders with this technology can continually analyze market trends, customer preferences and other economic indicators in real time. These algorithms can be crucial to provide lenders with valuable insights for strategic decision making when it comes to developing products and risk management in times of economic downturn.
Empowering consumers to navigate the complexities of personal finance
The advantages of machine learning are not exclusive to lenders. It is also becoming a powerful tool to enhance financial literacy among customers. By analyzing their income and expenditure data, machine learning can provide customers with personalized insights into their financial health to highlight what they can afford, and ultimately enable them to make more informed borrowing decisions.
Financial literacy is the cornerstone of a responsible lending environment. As customers gain insights into what they can afford, they become more aware of their financial capabilities and potential risks. Machine learning, in this context, acts as an educational guide, promoting transparency and responsible borrowing practices. The result is a customer base that is more financially savvy and less susceptible to pitfalls associated with uninformed financial decisions.
Entering a new frontier of optimized credit matching
As these innovative approaches continue to gain traction in financial services, the integration of machine learning and open banking data is expected to bring about a more efficient and customer-centric lending ecosystem. Lenders equipped with a robust machine learning approach are those who will better serve their clients, offering tailored solutions, while customers gain the ability to make more informed financial choices, fostering a responsible and transparent lending ecosystem.
In the coming years, the marriage between machine learning and open banking data will continue to evolve further to unlock new possibilities for the lending sector and broader financial services industry. It’s an exciting time for the lending industry and with a focus on customer-centricity and the responsible use of data, we’ll see the lending landscape undergo welcomed change from lenders and consumers alike.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Conversing with your computer has been a dream of futurists and technologists for decades. When you look at 2004’s state of the art, it’s staggering to see how far we’ve come. There are now billions of devices in our hands and homes that listen to our queries and do their very best to answer them. But for all of the time, money and effort, chatbots of any stripe have not swallowed the world as their creators intended. They’re miraculous. They’re also boring. And it’s worth asking why.
Chatbot is a term covering a lot of systems, from voice assistants to AI and everything else in the middle. Talking to your computer in the not-so-good old days meant typing into a window and watching the machine attempt a facsimile of the act of conversation rather than the real thing. The old ELIZA (1964 to 1967) trick of restating user inputs in the form of a question helped sell this performance. And this continued even as far as 2001’s SmarterChild chatbot. The other branch of this work was to digitize the analog with voice-to-text engines, like Nuance’s frustrating but occasionally wonderful product.
In 2011, the ideas in that early work joined up to make Siri for the iPhone 4S, which was quietly built on Nuance’s work. Amazon founder Jeff Bezos saw Siri’s promise early and launched a large internal project to make a homegrown competitor. In 2014, Alexa arrived, with Cortana and Google Assistant following in subsequent years. Natural language computing was now available on countless smartphones and smart home devices.
Companies are largely reticent to be specific about the price of building new projects, but chat has been costly. Forbes reported in 2011 that buying the startup behind Siri cost Apple $200 million. In 2018, The Wall Street Journal quoted Dave Limp, who said Amazon’s Alexa team had more than 10,000 employees. A Business Insider story from 2022 suggested the company pegged more than $10 billion in losses on Alexa’s development. Last year, The Information claimed Apple is now spending a million dollars a day on AI development.
So, what do we use this costly technology for? Turning our smart bulbs on and off, playing music, answering the doorbell and maybe getting the sports scores. In the case of AI, perhaps getting poorly summarized web search results (or an image of human subjects with too many fingers). You’re certainly not having much in the way of meaningful conversation or pulling vital data out of these things. Because in pretty much every case, its comprehension sucks and it struggles with the nuances of human speech. And this isn’t isolated. In 2021, Bloomberg reported on internal Amazon data saying up to a quarter of buyers stop using their Alexa unit entirely in the second week of owning one.
The oft-cited goal has been to make these platforms conversationally intelligent, answering your questions and responding to your commands. But while they can do some basic things pretty well, like mostly understanding when you ask to turn your lights down, everything else isn’t so smooth. Natural language tricks users into thinking the systems are more sophisticated than they actually are. So when it comes time to ask a complex question, you’re more likely to get the first few lines of a Wikipedia page, eroding any faith in their ability to do more than play music or crank the thermostat.
The assumption is that generative AIs bolted onto these natural language interfaces will solve all of the issues presently associated with voice. And yes, on one hand, these systems will be better at pantomiming a realistic conversation and trying to give you what you ask for. But, on the other hand, when you actually look at what comes out the other side, it’s often gibberish. These systems are making gestures toward surface level interactions but can’t do anything more substantive. Don’t forget when Sports Illustrated tried to use AI-generated content that boldly claimed volleyball could be “tricky to get into, especially without an actual ball to practice with.” No wonder so many of these systems are, as Bloomberg reported last year, propped up by underpaid human labor.
Of course, the form’s boosters will suggest it’s early days and, like OpenAI CEO Sam Altman has said recently, we still need billions of dollars in more chip research and development. But that makes a mockery of the decades of development and billions of dollars already spent to get where we are today. But it’s not just cash or chips that’s the issue: Last year, The New York Times reported the power demands of AI alone could skyrocket to as much as 134 terawatt hours per year by 2027. Given the urgent need to curb power consumption and make things more efficient, it doesn’t bode well for either the future of its development or our planet.
We’ve had 20 years of development, but chatbots still haven’t caught on in the ways we were told they would. At first, it was because they simply struggled to understand what we wanted, but even if that’s solved, would we suddenly embrace them? After all, the underlying problem remains: We simply don’t trust these platforms, both because we have no faith in their ability to do what we ask them to and because of the motivations of their creators.
One of the most enduring examples of natural language computing in fiction, and one often cited by real-world makers, is the computer from Star Trek: The Next Generation. But even there, with a voice assistant that seems to possess something close to general intelligence, it’s not trusted to run the ship on its own. A crew member still sits at every station, carrying out the orders of the captain and generally performing the mission. Even in a future so advanced it’s free of material need, beings still crave the sensation of control.
To celebrate Engadget’s 20th anniversary, we’re taking a look back at the products and services that have changed the industry since March 2, 2004.