My colleague, Dr. Srinivas Mukkamala, recently made that assertion, and I’ve been repeating it ad infinitum—for good reason. Dr. Mukkamala made the wise point that those who collect, own, and control data have enormous influence over our world. Data informs everything from how much funding schools get to selecting the right lifesaving treatment for a critical patient.
You’ve likely heard the phrase, “garbage in, garbage out.” It’s pretty self-explanatory. If your inputs are flawed, your outputs will be flawed.
Daren Goeson, SVP Product Management for Secure Unified Endpoint Management at Ivanti
But what if you don’t know your inputs are flawed?
Data bias is an epidemic. Even as we move deeper into an AI-driven world, humans remain the nucleus of the source material. Spoiler alert: humans are flawed. Humans are biased. Therefore, data is flawed and biased. Data models are flawed and biased. Outputs are flawed and biased.
Depending on the context, the implications can be catastrophic. In some cases, those implications are quite literally life or death. At an enterprise level, flawed data tends to catalyze a vicious cycle: bad data informs a model; that model’s outputs are used to inform future models; bad data metastasizes. Repeat. Anyone who’s seen The Last of Us can support this narrative with a visualization of nefarious, out-of-control growth. (Sorry for that. But at least I got your attention!)
Long story short: Data accuracy is everything. Without complete, accurate data, you can only be reactive; you can’t be proactive.
From streamlining operations to driving strategic decisions, organizations rely on accurate and accessible data to stay competitive and meet evolving demands. The problem: it can be incredibly challenging to ensure that data is accurate and accessible, and that challenge weighs heavily on those attempting to navigate competitive asset management and service delivery.
Aside from all the fearmongering above, inaccurate data can lead to slower ticket resolution, increased security risks and higher costs, hindering your ability to provide market-competitive, efficient services.
The good news is that you don’t need to endure all the shenanigans from The Last of Us to combat the challenge of data accuracy and accessibility. To address these challenges, savvy organizations are discovering new and innovative solutions and best practices that shore up governance, elevate data reconciliation, and make the most of AI-driven insights. Here’s a primer for you:
Are your insights actually insightful?
Data reconciliation is at the heart of effective asset management and service delivery. It involves comparing and aligning data from multiple sources to identify discrepancies and ensure consistency. By reconciling data across various systems and sources, organizations can establish a single source of truth, enabling informed decision-making and actionable insights.
Data reconciliation is crucial in ensuring the accuracy and trustworthiness of AI-driven recommendations, which rely heavily on high-quality data for training and operation.
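To make reconciliation concrete, here is a minimal sketch, assuming two invented record sources (an asset-inventory export and a CMDB export) with made-up fields; it simply flags assets that appear in only one source or whose fields disagree:

```python
# Toy data reconciliation: compare asset records from two hypothetical
# sources and report discrepancies. All names and fields are illustrative.

inventory = {  # e.g. exported from an asset-inventory tool
    "LAPTOP-001": {"owner": "jdoe", "os": "Windows 11"},
    "LAPTOP-002": {"owner": "asmith", "os": "macOS 14"},
}

cmdb = {  # e.g. exported from a service-management database
    "LAPTOP-001": {"owner": "jdoe", "os": "Windows 10"},     # stale OS record
    "LAPTOP-003": {"owner": "bchan", "os": "Ubuntu 22.04"},  # missing above
}

def reconcile(a: dict, b: dict) -> list:
    """Return human-readable discrepancies between two record sets."""
    issues = []
    for asset_id in sorted(a.keys() | b.keys()):
        if asset_id not in a or asset_id not in b:
            issues.append(f"{asset_id}: present in only one source")
            continue
        for field in sorted(a[asset_id].keys() | b[asset_id].keys()):
            va, vb = a[asset_id].get(field), b[asset_id].get(field)
            if va != vb:
                issues.append(f"{asset_id}: '{field}' differs ({va!r} vs {vb!r})")
    return issues

for issue in reconcile(inventory, cmdb):
    print(issue)  # the discrepancy list becomes the work queue for cleanup
```

In a real pipeline the same comparison would run across many more sources, and resolving each flagged discrepancy is what turns the reconciled records into the single source of truth described above.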
Eliminating blind spots and data silos
Lack of visibility is bad data’s best friend: bad data will happily sneak through any blind spot available. By the same token, visibility keeps bad data out. To leverage AI and advanced analytics effectively, enterprises must first eliminate blind spots and break down data silos. Blind spots, or gaps in data visibility, can lead to significant overspending and inefficiencies.
Improving asset visibility and monitoring pays off in big ways, highlighting hidden costs and proactively reducing expenses. Similarly, consolidating data silos — disparate repositories of information — facilitates a holistic view of enterprise assets and operations. By combining asset inventory, management software, service maps and other data sources, enterprises can achieve a 360-degree view of assets and, ultimately, optimize performance.
Establishing data governance and privacy frameworks
So far, we’ve covered data reconciliation and eliminating blind spots. But that’s only half the battle. Robust data governance is critical to ensuring data integrity — and so is establishing privacy. Your inputs can be top-tier, but if there’s a risk of breaches or tampering (malicious or inadvertent) along the way, you’re out of luck—unless you have data governance and privacy frameworks in place.
At the simplest level, data governance includes the policies, procedures, and controls that govern data usage, ensuring data quality, integrity, and security. Data governance and privacy frameworks help enforce data standards, ensure compliance, safeguard consumer trust, enhance data quality and mitigate risks associated with data misuse or unauthorized access.
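Governance policies are far easier to enforce once they are codified. Below is a minimal, hypothetical sketch of that idea: a handful of invented data-quality and privacy rules expressed as executable checks that a record must pass before it enters a system of record.

```python
# Minimal sketch of data-governance controls expressed as executable
# validation rules. The rules and record fields are invented examples.
import re

RULES = [
    ("owner is assigned",
     lambda r: bool(r.get("owner"))),
    ("serial is well-formed",
     lambda r: re.fullmatch(r"[A-Z0-9]{7,12}", r.get("serial", "")) is not None),
    ("no payment data stored",  # privacy rule: card numbers don't belong here
     lambda r: "card_number" not in r),
]

def validate(record: dict) -> list:
    """Return the names of any governance rules this record violates."""
    return [name for name, check in RULES if not check(record)]

record = {"serial": "ABC1234XYZ", "owner": ""}
print(validate(record))  # -> ['owner is assigned']
```

The same pattern scales up in real governance tooling: rules live in one reviewed place, every record is checked on ingest, and violations are logged rather than silently admitted.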
AI and automation in service management
As enterprises embrace AI and automation, the role of IT teams is evolving. AI-powered enterprise service management (ESM) solutions are revolutionizing workflows, throughput and agility – enabling organizations to deliver efficient and responsive services.
By leveraging AI and automation, it’s simpler than ever to streamline IT operations, reduce manual tasks and enhance digital employee experience (DEX). From automating ticket resolution to empowering helpdesk specialists with AI-driven insights, AI and automation can take on the heavy burden of rote, routine tasks and free up IT specialists to focus on strategic initiatives.
Of course, that comes with a caveat: remember “garbage in, garbage out”? AI and automation only positively enhance operations if they’re based on platforms/solutions and models rooted in accurate, secure data. That means deploying a trusted, proven solution — ideally, a streamlined suite of solutions that can eliminate gaps and simplify administration.
If you made it this far, congrats: you get the summary of takeaways. If you remember nothing else, remember this:
Accurate data is essential for proactive, efficient service.
Inaccurate data leads to slower ticket resolution, increased security risks, and higher costs — at best.
Data reconciliation is crucial for accurate, trustworthy AI recommendations.
To leverage AI in service management, make sure you eliminate blind spots, remove data silos and establish data governance/privacy frameworks.
Data is only going to become more important. That means data accuracy is only going to become more important. Now’s the time to get ahead of it by establishing the infrastructure needed to make data a source of power, not a liability.
Dell has begun sending breach notification emails to some 49 million people whose data was apparently stolen in a recent cyberattack.
The type of information involved includes people’s names, postal addresses, and Dell hardware and order information, such as service tags, item descriptions, order dates, and warranty information.
“We are currently investigating an incident involving a Dell portal, which contains a database with limited types of customer information related to purchases from Dell,” the company said in the notification letter. “We believe there is not a significant risk to our customers given the type of information involved.”
Tangible risk
Dell has notified relevant authorities and brought in third-party cybersecurity experts to assess the damage. So far we don’t know if this was a simple data smash-and-grab, or a ransomware attempt.
The company believes the risk to its customers is not significant since financial and payment information, email addresses, and phone numbers were not stolen in this attack.
However, the risk of phishing or even major malware and ransomware attacks still exists, since threat actors can send out personalized letters with removable drives and deploy malicious code that way. It has happened in the past.
At the same time, there is a real chance that someone has already bought the database on the dark web.
A cybercriminal with the alias Menelik posted a new thread on a dark web forum, advertising a Dell database fitting the company’s description: “49 million customer and other information systems purchased from Dell between 2017-2024.” The thread was quickly deleted, which usually happens if someone buys the database.
Since the information has most likely already changed hands, if you are a Dell customer who purchased hardware between 2017 and 2024, it would be wise to be extra wary of any communication claiming to be from the company, especially anything that arrives by physical mail.
The U.S. Patent and Trademark Office (USPTO) kept an open, internet-accessible database of private postal addresses belonging to patent filers for more than eight months.
The U.S. government agency, responsible for handling patents and trademarks, sent a notification letter to affected individuals, explaining what had happened, and what it did following the discovery.
As reported by TechCrunch, which saw a copy of the letter, the USPTO was transitioning from an old IT system to a new one, and during the migration it “inadvertently exposed” a database containing sensitive filer data.
Unprotected databases
Filers are required to provide the addresses in order to prevent fraud. The addresses could not be found simply by searching the USPTO’s website, but anyone who opened a bulk dataset the agency publishes to help researchers would have found them. Roughly 14,000 addresses were exposed this way.
The USPTO was apparently the first to spot its own mistake, after which it “blocked access to the impacted bulk data set, removed files, implemented a patch to fix the exposure, tested our solution, and re-enabled access,” it said in the letter. The dataset was exposed between mid-August 2023 and mid-April 2024. The USPTO believes no threat actors found or stole the data.
Unprotected and misconfigured databases are one of the most common causes of data spills and leaks these days. Different companies, from both private and public sectors, are often found exposing sensitive customer and citizen data this way. In one notable example, the Brazilian government recently managed to inadvertently expose sensitive data on its entire population – more than 220 million people.
Android has always been great with files and file sharing. One of the times you most need access to your files is when switching from one device to another. Android has a built-in tool to migrate data from an old phone to a new one (Samsung has its own migration tool, Smart Switch), and it could get better in the future.
Android to get better and faster when setting up a new device
Google seems to be working on a new solution that merges two kinds of data migration/transfer methods to make new device setup much faster. Right now, there are three ways to import data when setting up a new phone: importing data from a Google account (cloud), importing data wirelessly, and importing data using a wired connection.
According to an Android Authority report, Google is working on a new data transfer method, ‘MultiTransportD2dTransport,’ which can use Wi-Fi and a USB cable at the same time. Transferring over both connections simultaneously should speed up data transfer and, with it, the new device setup process.
X user Assemble Debug spotted this new feature in the latest version (1.0.624892571) of Google’s Data Restore Tool app.
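The report doesn’t describe how Google schedules the two links, but the general pattern is easy to sketch: split the payload into chunks and let both transports drain a shared queue, so the faster link naturally carries more of the data. Everything below (speeds, chunk size, payload) is invented for illustration.

```python
# Toy model of a multi-transport transfer: two simulated links drain a
# shared chunk queue concurrently, so the faster link naturally moves more
# chunks. Speeds, chunk size, and payload are invented for illustration.
import queue, threading, time

CHUNK_SIZE = 1024 * 1024              # 1 MiB
payload = bytes(32 * CHUNK_SIZE)      # pretend this is the old phone's data

chunks: queue.Queue = queue.Queue()
for i in range(0, len(payload), CHUNK_SIZE):
    chunks.put((i, payload[i:i + CHUNK_SIZE]))

received = {}

def transport(name: str, seconds_per_chunk: float) -> None:
    while True:
        try:
            offset, data = chunks.get_nowait()
        except queue.Empty:
            return
        time.sleep(seconds_per_chunk)   # simulate link bandwidth/latency
        received[offset] = (name, len(data))

usb = threading.Thread(target=transport, args=("usb", 0.01))
wifi = threading.Thread(target=transport, args=("wifi", 0.03))
usb.start(); wifi.start(); usb.join(); wifi.join()

by_link = {}
for name, _ in received.values():
    by_link[name] = by_link.get(name, 0) + 1
print(by_link)  # e.g. {'usb': 24, 'wifi': 8} -- faster link carried more
```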
Moreover, Google is finally bringing a feature Samsung phones have had for years. The feature, called ‘Restore Anytime,’ will allow users to transfer data from one device to another at any time. Previously, users could import data only right after performing a factory reset on one of the devices. New data will be merged with the data already on the phone.
In classic Google fashion, there is a limitation on that feature as well: you can only resume importing data from the same device you previously imported from.
Nvidia continues to invest in AI initiatives, and ChatRTX is the latest beneficiary thanks to its most recent update.
ChatRTX is, according to the tech giant, a “demo app that lets you personalize a GPT large language model (LLM) connected to your own content.” That content comprises your PC’s local documents, files, folders, and so on; the app essentially builds a custom AI chatbot from that information.
Because it doesn’t require an internet connection, it gives users speedy access to answers that might otherwise stay buried under all those computer files. With the latest update, it has access to even more data and LLMs, including Google Gemma and ChatGLM3, an open, bilingual (English and Chinese) LLM. It can also search local photos, and it now has Whisper support, allowing users to converse with ChatRTX via AI-driven speech recognition.
Nvidia uses TensorRT-LLM software and RTX graphics cards to power ChatRTX’s AI. And because it’s local, it’s far more secure than online AI chatbots. You can download ChatRTX here to try it out for free.
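Nvidia hasn’t published ChatRTX’s internals beyond the above, but the general pattern (index local files, retrieve the most relevant passages, and hand them to an LLM as context) is easy to sketch. In this toy version, simple word overlap stands in for the real embedding search a tool like ChatRTX would use, and ask_llm() is a hypothetical placeholder for a locally hosted model:

```python
# Bare-bones sketch of the retrieval step behind a local-document chatbot:
# index local text files, score them against a question, and pass the best
# matches to a model as context. Word overlap stands in for real
# embeddings; ask_llm() is a hypothetical placeholder for a local LLM.
from pathlib import Path

def tokenize(text: str) -> set:
    return {w.lower().strip(".,;:!?") for w in text.split()}

def top_matches(question: str, folder: str, k: int = 3) -> list:
    q = tokenize(question)
    scored = []
    for path in Path(folder).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        scored.append((len(q & tokenize(text)), text))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for score, text in scored[:k] if score > 0]

def answer(question: str, folder: str) -> str:
    context = "\n---\n".join(top_matches(question, folder))
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
    return ask_llm(prompt)  # hypothetical call into a locally hosted LLM
```

Because every step runs on the local machine, nothing in the documents ever leaves the PC, which is the security property Nvidia is touting.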
Can AI escape its ethical dilemma?
The concept of an AI chatbot using local data from your PC, instead of training on (read: stealing) other people’s online works, is rather intriguing. It seems to solve the ethical dilemma of using copyrighted works without permission and hoarding them. It also seems to solve another long-term problem that’s plagued many a PC user — actually finding long-buried files in your file explorer, or at least the information trapped within them.
However, there’s the obvious question of how such a limited data pool could hamper the chatbot. Unless the user is particularly skilled at curating AI training material, the narrow corpus could become a serious issue over time. Of course, using it only to locate information on your PC is perfectly fine and most likely the proper use.
But the point of an AI chatbot is to have unique and meaningful conversations. Maybe there was a time in which we could have done that without the rampant theft, but corporations have powered their AI with stolen words from other sites and now it’s irrevocably tied.
It is deeply uncomfortable that data theft is currently the ingredient that makes chats well-rounded enough not to get trapped in feedback loops. That is exactly why Nvidia could represent a middle ground for generative AI: if fully developed, its approach could prove that we don’t need the ethical transgression to power and shape these models. Here’s hoping Nvidia can get it right.
US drug regulators dropped a bombshell in November 2023 when they announced an investigation into one of the most celebrated cancer treatments to emerge in decades. The US Food and Drug Administration (FDA) said it was looking at whether a strategy that involves engineering a person’s immune cells to kill cancer was leading to new malignancies in people who had been treated with it.
Bruce Levine, an immunologist at the University of Pennsylvania Perelman School of Medicine in Philadelphia who helped to pioneer the approach known as chimeric antigen receptor (CAR) T-cell therapy, says he didn’t hear the news until a reporter asked him for comments on the FDA’s announcement.
“Better get smart about it quick,” he remembers thinking.
Although the information provided by the FDA was thin at the time, the agency told reporters that it had observed 20 cases in which immune-cell cancers known as lymphomas had developed in people treated with CAR T cells. Levine, who is a co-inventor of Kymriah, the first CAR-T-cell therapy to be approved, started jotting down questions. Who were these patients? How many were there? And what other drugs had they received before having CAR-T-cell therapy?
The FDA has since documented more cases. As of 25 March, the agency had received 33 reports of such lymphomas among some 30,000 people who had been treated. It now requires all CAR-T therapies to carry a boxed warning on the drug’s packaging, which states that such cancers have occurred. And the European Medicines Agency has launched its own investigation. But many of the questions that Levine had in November remain unanswered. It is unclear how many, if any, of the observed cancers came directly from the manipulations made to the CAR T cells. A lot of cancer therapies carry a risk of causing secondary malignancies, and the treated individuals had received other therapies. As Crystal Mackall, a paediatric oncologist who heads the cancer immunotherapy programme at Stanford University in California, puts it: “Do you have a smoking gun?”
Scientists are now racing to determine whether the cellular therapy is driving these cancers or contributing in some way to their development. From the data available so far, the secondary cancers seem to be a rare phenomenon, and the benefits of CAR T cells still outweigh the risks for most prospective recipients. But it’s an important puzzle to solve so that researchers can improve and expand the use of these engineered cells in medicine. CAR-T-cell treatments were once reserved for people who had few other options for therapy. But the FDA has approved several of these treatments as a relatively early, second-line option for lymphoma and multiple myeloma. And some companies are working to expand the therapy’s repertoire to solid tumours, autoimmune diseases, ageing, HIV and more.
Aric Hall, a haematologist at the University of Wisconsin–Madison, says that despite the enthusiasm for CAR-T therapy, the technology is still new. “I used to joke that for the first ten years there were more review articles about CAR T than there were patients who had been treated by CAR T products,” he says. He adds that the risks might be rare, but as CAR-T therapy moves into a bigger pool of patients who aren’t desperately ill, the calculus could change. “The problem is rare risks become a bigger deal when patients have better options.”
Vector safety
Throughout the development of these blockbuster therapies, researchers had reason to think that CAR T cells could become cancerous. CAR-T therapies are personalized — created from a person’s immune cells. Their T cells are extracted and then genetically modified in the laboratory to express a chimeric antigen receptor — or CAR — a protein that directs the T cell to the specific cells it will kill. T cells sporting these receptors are made to multiply and grow in the lab, and physicians then infuse them back into the individual, where they start battling cancer cells. The six CAR-T-cell therapies currently approved in the United States and Europe target antigens on the immune system’s B cells, so they work only against B-cell malignancies — leukaemias, lymphomas and multiple myeloma. But researchers are aiming to develop CAR-T therapies that work on other kinds of cancer, and for other conditions.
The genetic engineering is the step that creates a risk of malignancy. All six FDA-approved CAR-T therapies rely on a retrovirus — typically a lentivirus, such as HIV, or a gammaretrovirus — to ferry the genetic information into the cell. Scientists remove the parts of the viral genome that allow the virus to replicate, making room for the gene they want the virus vector to carry. Once inside a cell, the virus inserts the gene for the CAR into the cell’s genome. But there isn’t a good way to control exactly where the gene goes. If it slips in near a gene that can promote cancer development and activates it, or if it deactivates a tumour-suppressing gene, that boosts the risk of causing a T-cell cancer (see ‘CAR-T concerns’).
This phenomenon, known as insertional mutagenesis, is a risk with most gene therapies. About 20 years ago, for example, groups in London and Paris treated 20 infants who had severe combined immunodeficiency syndrome (SCID) with a gene therapy that used a retrovirus. The therapy worked for most participants, but the retrovirus switched on cancer genes in some. That activation led to leukaemia in five of the participants; four recovered and one died.
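To get a feel for why a random insertion is risky but rarely catastrophic, consider a deliberately crude back-of-envelope simulation. The genome size below is roughly right for humans, but the number of cancer-associated genes and the ‘danger window’ around each are invented round numbers, and real retroviral integration is not uniform, so treat this purely as an intuition pump:

```python
# Deliberately crude Monte Carlo intuition for insertional mutagenesis:
# how often does a uniformly random insertion land near a cancer-associated
# gene? Genome size is roughly human; the gene count and danger window are
# invented round numbers, and real lentiviral integration is NOT uniform
# (it favors active genes), so this is an intuition pump only.
import random

GENOME = 3_000_000_000   # ~3 Gb haploid human genome
N_RISK_GENES = 500       # invented count of cancer-associated genes
WINDOW = 100_000         # invented +/- window around each gene

risk_sites = [random.randrange(GENOME) for _ in range(N_RISK_GENES)]

def insertion_is_risky() -> bool:
    pos = random.randrange(GENOME)
    return any(abs(pos - g) <= WINDOW for g in risk_sites)

trials = 20_000
hits = sum(insertion_is_risky() for _ in range(trials))
print(f"~{hits / trials:.2%} of random insertions land in a risky window")
# Analytically ~ 500 genes * 200 kb / 3 Gb ~= 3.3% per insertion. Rare per
# cell -- but an infusion contains many millions of modified cells, which
# is why the question of causality matters.
```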
As a result, scientists have reworked the vectors to make them safer, ensuring that their genes don’t recombine, for example. The FDA recommends that CAR-T products undergo testing to prove that the vectors cannot replicate. “The scrutiny we’ve been under has been tremendous,” says Hans-Peter Kiem, an oncologist at the Fred Hutchinson Cancer Center in Seattle, Washington, who has studied viral vectors for decades. Many felt confident about using viral vectors in CAR-T therapies, because T cells are difficult to prod towards malignancy, says Marco Ruella, a haematologist at the Perelman School of Medicine. “Truly the general feeling was that, in T cells, lenti- and retroviruses are extremely safe.”
Search for the smoking gun
When the FDA issued its warning in November, it wasn’t clear what specific reports had prompted the agency to act, or whether the link was causal. Levine recruited some of the biggest names in CAR-T therapies to co-write a commentary on the matter and discuss some of the questions he still had1. “I felt — we felt — that it was important to say, ‘Well, let’s take a step back for a minute and see what we really know,’” he says.
In January, the FDA released more information. In an article in the New England Journal of Medicine, Peter Marks and Nicole Verdun at the FDA’s Center for Biologics Evaluation and Research in Silver Spring, Maryland, revealed that the agency had received 22 reports of leukaemia out of more than 27,000 people treated with various CAR-T therapies2. In three secondary cancers that were sequenced, the agency found that the cancerous T cells contained the CAR gene, “which indicates that the CAR-T product was most likely involved in the development of the T-cell cancer”, the authors wrote.
According to Paul Richards, a spokesperson for the FDA, 11 further reports of secondary cancer have since come in, as of 25 March. None of the extra cases has been confirmed as having the CAR gene, but neither are any of the cases so far definitively CAR-negative, Richards said in an e-mail. In many instances, the agency didn’t have a sample of the secondary cancer to analyse; in others, the genomic analysis isn’t yet complete. He adds that certain reports, specifically those positive for the CAR gene, “strongly suggest” that T-cell cancer should be considered a risk of the therapy.
But even when the CAR gene is present, proving causality can be tricky. In one case study, researchers in Australia described3 a 51-year-old man who had been treated for multiple myeloma with a CAR-T therapy. The treatment was part of a clinical trial of Carvykti, made by Legend Biotech in Somerset, New Jersey, in partnership with the drug giant Johnson & Johnson. The treatment worked to clear his cancer, but five months later he developed an unusual, fast-growing bump on his nose. A biopsy revealed that it was T-cell lymphoma. When the team examined the cancerous cells, they found the gene for the CAR wedged into the regulatory region of a gene called PBX2.
The finding is provocative, Mackall says, but still not a smoking gun, in her opinion. The researchers found that the cancer cells also carried a mutation often seen in lymphomas, and the person had a genetic variant that put him at increased risk of developing cancer, even without the CAR insertion. It’s likely that the cells harvested to create the therapy contained some pre-cancerous T cells, says Piers Blombery, a haematologist at the Peter MacCallum Cancer Centre in Melbourne, Australia, who leads the diagnostic lab that assessed the tumour samples. Now, the team is looking at samples taken before the therapy to determine whether that’s the case.
Other people who have received Carvykti have developed secondary cancers, too. The FDA’s initial warning focused on T-cell malignancies. But long-term follow-up of participants who’d been in an early trial of Carvykti revealed that 10 out of 97 people developed either myelodysplastic syndrome (a kind of pre-leukaemia) or acute myeloid leukaemia (see go.nature.com/3q8vrym). Nine of them died. As a result, in December 2023, Legend Biotech added language to its boxed warning for Carvykti about the risk of secondary blood cancers.
Craig Tendler, head of oncology clinical development and global medical affairs at Johnson & Johnson Innovative Medicine in Raritan, New Jersey, says that the company looked for the CAR gene in cancer cells from these individuals, but didn’t find it. When the researchers looked at samples taken before the trial participants received treatment, they found pre-malignant cells with the same genetic make-up as the cancer cells. “So, it is likely that, in many of these cases, the prior therapies for multiple myeloma may have already predisposed these patients to secondary malignancy,” Tendler says. Then, it’s possible that the prolonged immune suppression related to the CAR-T treatment process nudged the cells to become cancerous.
When Ruella first saw the FDA warning, he immediately thought back to a 64-year-old man he had treated who, in 2020, developed a T-cell lymphoma 3 months after receiving CAR-T therapy for a B-cell lymphoma. Ruella and his colleagues identified the CAR gene in the biopsy taken from the man’s lymph node4. But it was at such low levels that it seemed unlikely it had integrated into the cancer cells, Ruella says. Instead, the genes could have come from CAR T cells that just happened to be circulating through that lymph node at the time the biopsy was taken. “We thought this was just an accidental finding,” Ruella says.
But after Ruella saw the FDA’s warning, he decided to revisit the case. He and his colleagues went back to a blood sample taken before the person received CAR-T therapy. The team assessed whether T cells with the same T-cell receptor as the lymphoma cells were present before treatment. They were, suggesting that the seeds of the lymphoma pre-dated the therapy. (The low number of cells made further analysis difficult.) Ruella adds that it’s possible the CAR-T treatment produced an inflammatory environment that allowed such seeds to become cancerous. “So this is not something that appears magically out of nowhere,” Ruella says.
Rare outcome
The good news is that these secondary cancers — CAR-driven or not — seem to be rare. After the FDA warning, Ruella and his colleagues also looked back at the files of people who had been treated with commercial CAR-T products at the University of Pennsylvania. Between January 2018 and November 2023, the centre treated 449 individuals who had leukaemias, lymphomas or multiple myeloma with CAR-T therapies4. Sixteen patients (3.6%) went on to develop a secondary cancer. But most of those were solid tumours, not the kind of cancer one would expect to come directly from the treatment. Only five of the treated patients developed blood cancers, and only one of those developed a T-cell cancer.
At the Mayo Clinic in Phoenix, Arizona, haematologist Rafael Fonseca and his colleagues also wondered whether the incidence of secondary cancers in people who had received CAR-T therapy differed from the incidence in those with the same cancers who had not. They combed through a data set containing medical records from 330 million people to find individuals who had been newly diagnosed with multiple myeloma or diffuse large B-cell lymphoma between 2018 and 2022. They then looked at how many of them developed T-cell lymphomas. The prevalence didn’t differ drastically from the 22 cases out of 27,000 people that the FDA had reported. The researchers published their findings on the online newsletter platform Substack (see go.nature.com/3u97s38). “We wanted to get it out as soon as possible because of the timeliness of what was going on,” Fonseca says.
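For a sense of scale, the FDA’s January figures work out to a fraction of a percent (a back-of-envelope rate only, ignoring follow-up time and underreporting):

```python
# Back-of-envelope rate from the FDA's January figures. Ignores follow-up
# time, underreporting, and competing risks: scale only.
reports, treated = 22, 27_000
rate = reports / treated
print(f"{rate:.4%} of treated patients, ~1 in {round(1 / rate):,}")
# -> 0.0815% of treated patients, ~1 in 1,227
```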
Since the FDA’s warning, Hall has started talking about the possibility of secondary cancers to individuals who are contemplating CAR-T therapy. He presents it as a real risk, but a rare one — and explains that it is dwarfed by the risk posed by their current cancer. “For my late-stage myeloma patient, the main risk is that the CAR T doesn’t work and they die of their myeloma,” he says. Mackall and others agree. “I don’t think anyone believes that this will change practice in any way at the current time,” she adds. “Most cancer therapies can cause cancer. This is one of the paradoxes of our business.”
But what about other diseases? Researchers have already tested CAR T cells as a therapy for the autoimmune condition lupus, with impressive results5. And more clinical trials of these therapies for other autoimmune diseases are likely to follow. If most of the secondary cancers seen in people treated with CAR T cells are related to the litany of treatments they received beforehand, people with these conditions might not all have the same risk. But even if the therapy is driving some cancers, many say the benefits might still be worth the risk. “Autoimmune diseases are not benign diseases,” said Marks in response to an audience question at an industry briefing in January (see go.nature.com/3jpk6qj). “Anyone who’s ever known somebody who’s had lupus cerebritis or lupus nephritis will know that those are potentially lethal diseases.”
CAR T cells also hold promise as a treatment for HIV infection, and a trial to test this idea kicked off in 2022. Researchers are also studying how CAR T cells could be used as a way to curb rejection of transplanted kidneys, or to clear out zombie-like senescent cells that have been implicated in ageing. The possibilities are continuously expanding.
As for whether the benefit of CAR-T therapy outweighs the risk of secondary cancers for these other indications, only time will tell.
The United States Federal Communications Commission (FCC) today announced [PDF] that it has fined AT&T, Verizon, and Sprint/T-Mobile $196 million collectively for illegally selling access to customer location information without consent.
Sprint and T-Mobile (now merged into T-Mobile) have been fined $12 million and $80 million, respectively. Verizon has been fined almost $47 million, and AT&T has been fined more than $57 million.
The FCC first began investigating the four major U.S. carriers in 2019 after they were found selling real-time location information from customer devices to third-party data aggregators, which led to that location data being sold a second time to private investigators, bounty hunters, law enforcement agencies, credit card companies, and more.
Following the investigation, the FCC confirmed that wireless carriers had violated federal law by sharing consumer location data. Fines were proposed back in 2020, but carriers were given an opportunity to provide evidence and legal arguments against the decision before the fines were formally imposed.
The fines vary based on the length of time that each carrier sold access to customer location information without safeguards, and the number of entities that were provided access. The FCC determined that carriers were obligated to protect the personal information of their customers, which they did not do.
“Our communications providers have access to some of the most sensitive information about us. These carriers failed to protect the information entrusted to them. Here, we are talking about some of the most sensitive data in their possession: customers’ real-time location information, revealing where they go and who they are,” said FCC Chairwoman Jessica Rosenworcel. “As we resolve these cases – which were first proposed by the last Administration – the Commission remains committed to holding all carriers accountable and making sure they fulfill their obligations to their customers as stewards of this most private data.”
The four carriers had different practices, but each carrier relied on contract-based assurances that the data aggregators purchasing the real-time location information would get consent from customers before accessing their location, which did not happen. Even after learning that data was being misused in this way, the FCC says the carriers “continued to sell access to location information without taking reasonable measures to protect it from unauthorized access.”
Due to its explosive growth, the management and storage of unstructured data is becoming increasingly challenging for organizations to contend with. This unprecedented expansion, however, is a double-edged sword: while the opportunities for leveraging this treasure trove abound, so do the issues in orchestrating it. Another major factor impacting data management is that, according to Gartner, by 2025, 75% of enterprise data will be created and processed at the edge – outside traditional centralized data centers or clouds. Today, companies across the globe are grappling with an increasing array of data-related problems, from cyber threats and compliance headaches to the intricacies of data sovereignty.
Enrico Signoretti, VP of Product and Partnerships at Cubbit
Navigating cybersecurity challenges in 2024: a closer look
At the forefront of cybersecurity concerns is data sovereignty. Despite major cloud providers’ best efforts to align with strict regulations such as NIS2, ISO 27001, and GDPR, the landscape remains fraught with complexities. For many organizations handling sensitive data, depending on cloud service providers inherently comes with a myriad of hurdles, particularly concerning the location of data storage (whether it resides within or outside national borders) and the jurisdiction under which the company operates, with the Cloud Act being a major issue.
Data independence and control have never been more critical. The market is flooded with cloud storage solutions, yet, once data is integrated within these systems, transferring it to alternative environments — be it other clouds, data centers, or on-premises — becomes arduous, leading to potential vendor lock-ins that hinder innovative hybrid and multi-cloud strategies.
The threat landscape is also evolving. On the one hand, we’re witnessing an uptick in regional disasters, ranging from data center fires to earthquakes, affecting service continuity. On the other, ransomware attacks are growing in sophistication, targeting both client-side and server-side vulnerabilities with unprecedented precision. For this reason, from the point of view of the user, a ransomware attack can be considered even worse than a natural disaster.
Cost considerations further complicate the scenario. Expansion efforts by cloud providers involving the construction of new physical sites not only exacerbate environmental and sustainability concerns, but also lead to spiraling costs. Additionally, the hidden fees imposed by some of the leading cloud storage providers — for egress, 90-day deletion policies, redundancy, and more — make cost predictability a considerable challenge. Often, these supplementary charges can equal or surpass the initial storage costs, effectively doubling the financial burden on organizations.
Centralized and distributed cloud: what’s new
At first glance, cloud solutions offered by hyperscalers might seem widely distributed. However, they often rely on a centralized infrastructure, with data housed within a few, albeit large, data centers.
Distributed cloud storage takes a fundamentally different approach by separating the control plane from the data itself. This facilitates data storage across multiple locations, both on-premises and across multiple cloud platforms, enhancing redundancy and resilience. This paradigm shift is game-changing for several reasons. Not only does it eliminate many traditional barriers and pave the way for more robust multi-cloud strategies, it also raises flexibility and resilience in data storage and management to a whole new level. Under this model, while the service provider maintains control over the control plane, the actual computing resources can be deployed and moved flexibly by the organization. Whether within a single public cloud ecosystem, over multiple cloud environments, or within a private data center, the essence of distributed cloud lies in its ubiquity.
Control and sovereignty, reimagined
One of the distributed cloud’s paramount benefits is the unprecedented degree of control it offers. Indeed, the distributed model eradicates the common issue of vendor lock-in, while also allowing organizations to precisely dictate the geographical perimeter where their data resides. This could mean having parts of your data securely stored in France, Italy, Germany, or literally any place you want, offering unprecedented levels of redundancy while complying with data localization requirements. Beyond data sovereignty, distributed cloud storage facilitates comprehensive independence over all facets of data management, ensuring that organizations can comply with evolving regional, European, and global regulations without relinquishing control to third-party providers and hyperscalers.
The significance of this cannot be overstated, especially in regions where data governance and digital sovereignty are key. In this context, the distributed cloud uniquely meets the need for sovereignty, cost control, and policy management, offering a balanced compromise between the on-premises and public cloud models. It combines the control over IT infrastructure traditionally associated with on-premises storage with the scalability and flexibility of public cloud services.
Applications and benefits of the distributed cloud
Distributed cloud storage technology is versatile, supporting a wide array of use cases, from backup and disaster recovery to fostering collaboration and housing expansive data lakes for AI and machine learning endeavors. Its latest developments unlock unprecedented resiliency through encryption, fragmentation, and replication across customizable storage networks, empowering MSPs and enterprises alike to build and deploy their own hyper-resilient, sovereign, 100% S3-compatible object storage network in minutes, with full control over data, infrastructure, and costs.
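As a concept sketch only (not any vendor’s actual pipeline), the encrypt-fragment-replicate idea looks like this. The XOR ‘cipher’, node names, and replica counts are toy stand-ins; production systems use real ciphers such as AES-GCM and usually erasure coding rather than plain replication:

```python
# Concept sketch of an encrypt -> fragment -> replicate object-storage
# pipeline. The XOR "cipher", node names, and counts are toy stand-ins:
# production systems use real ciphers (e.g. AES-GCM) and usually erasure
# coding rather than plain replication.
import itertools

NODES = ["node-fr", "node-it", "node-de", "node-es"]  # invented locations
REPLICAS = 2       # copies of each fragment
FRAG_SIZE = 4      # bytes per fragment, absurdly small for readability

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR keystream stand-in for a real cipher.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def fragment(ciphertext: bytes) -> list:
    return [ciphertext[i:i + FRAG_SIZE]
            for i in range(0, len(ciphertext), FRAG_SIZE)]

def place(fragments: list) -> dict:
    """Round-robin each fragment onto REPLICAS distinct nodes."""
    placement = {node: [] for node in NODES}
    for idx in range(len(fragments)):
        for r in range(REPLICAS):
            placement[NODES[(idx + r) % len(NODES)]].append(idx)
    return placement

ciphertext = toy_encrypt(b"contents of quarterly-report.pdf", b"secret-key")
print(place(fragment(ciphertext)))  # which fragment indices each node holds
```

Because no single node ever holds a readable copy of the object, losing (or seizing) one node discloses nothing and loses nothing, which is where the resiliency and sovereignty claims come from.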
For MSPs and VARs, this autonomy transforms them into independent object storage providers, enabling them to offer secure and compliant storage solutions, maintain direct customer relationships, and enjoy enhanced profit margins. Full customization also means that MSPs can craft tailor-made industry clouds designed to meet the specific requirements of the industries and regions in which their customers operate.
Enterprises, on the other hand, benefit from a hybrid model that combines the best aspects of cloud storage and on-premises solutions, minus the drawbacks.
The distributed cloud’s ability to tailor storage networks to meet specific national compliance requirements such as GDPR, ISO 27001, and CCPA further underscores its utility. Its architecture, designed to prevent any single point of failure, can ensure up to 15 nines of data durability and minimize the risks of downtime and data breaches, making it particularly suited to scenarios where cybersecurity, digital sovereignty, and independence are mission-critical.
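A figure like ‘15 nines’ is easier to parse with a toy model. If copy losses were independent (a strong assumption; real durability math must also account for repair speed and correlated failures), an object survives unless every copy fails, so durability climbs exponentially with the replica count. The 1% per-copy annual loss rate below is invented:

```python
# Toy durability model: with independent failures, an object with n copies
# is lost in a year only if all n copies fail before repair. p is an
# invented per-copy annual loss probability; real models must include
# repair times and correlated failures, so this is illustrative only.
import math

p = 0.01  # invented: 1% chance a given copy is lost in a year
for n in (1, 2, 3, 5, 8):
    loss = p ** n  # all n independent copies must fail
    print(f"{n} copies: P(loss/year) = {loss:.0e} "
          f"(~{round(-math.log10(loss))} nines of durability)")
# 8 copies -> P(loss/year) = 1e-16, comfortably past "15 nines" -- under
# these toy assumptions.
```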
Lastly, this model optimizes resources by reusing what is already present on companies’ premises and in data centers. This extends the storage hardware’s lifespan while reducing carbon footprint and electronic waste. This eco-friendly approach not only addresses environmental concerns but also aligns with the growing demand for sustainable IT solutions.
[Image: Monkeypox virus particles (artificially coloured). Credit: UK Health Security Agency/Science Photo Library]
A virulent strain of the monkeypox virus has gained the ability to spread through sexual contact, new data suggest. This has alarmed researchers, who fear a reprise of the worldwide mpox outbreak in 2022.
Evidence from past outbreaks indicates that this strain, called clade I, is more lethal than the one that sparked the 2022 global outbreak. Clade I has for decades caused small outbreaks, often limited to a few households or communities, in Central Africa. Sexually acquired clade I infections had not been reported before 2023.
But since then, a clade I strain with an apparent capacity for sexual transmission has caused a cluster of infections in a conflict-ridden region of the Democratic Republic of the Congo (DRC), in Central Africa. A preprint1 posted on 15 April reports that 241 suspected and 108 confirmed infections are connected to this outbreak — and these numbers are probably a vast undercount because of limited testing capacity. Almost 30% of the confirmed infections were in sex workers.
Adding to the challenges, the region is facing a humanitarian crisis, and the DRC is contending with the aggressive spread of other diseases, such as cholera. The combination means there is a “substantial risk of outbreak escalation beyond the current area”, says Anne Rimoin, an epidemiologist at the University of California, Los Angeles, who has worked on mpox outbreaks in the DRC since 2002.
Unheeded warnings
Monkeypox virus can cause painful, fluid-filled lesions on the skin and, in severe cases, death. (While the disease was renamed ‘mpox’ in 2022, the virus continues to be called ‘monkeypox virus.’) The virus persists in wild animals in several African countries, including the DRC, and occasionally spills into people.
The first large reported outbreak with human-to-human transmission, which occurred in 2017 in Nigeria, caused more than 200 confirmed and 500 suspected cases of the disease. Researchers warned at the time that the virus might have adapted to spread through sexual contact.
Their warnings were not heeded; in 2022, a global outbreak driven in part by sexual contact prompted the World Health Organization (WHO) to declare it a public health emergency. That ongoing outbreak is caused by a strain of monkeypox virus called clade II, which is less lethal than clade I, and has infected more than 94,000 people and killed more than 180.
Although mpox infections have waned globally since 2022, they have been trending upwards in the DRC: in 2023 alone, the country reported more than 14,600 suspected infections and more than 650 deaths. In September 2023, a new cluster of suspected cases arose in the DRC’s South Kivu province. This cluster especially concerns researchers, as it has been spreading largely among sex workers, suggesting that the virus has adapted to transmit readily through sexual contact.
This could lead to faster human-to-human spread, potentially with few symptoms, says Nicaise Ndembi, a virologist at the Africa Centres for Disease Control and Prevention who is based in Addis Ababa. “The DRC is surrounded by nine other countries — we’re playing with fire here,” he says.
Health officials are so concerned that representatives of the DRC and 11 nearby countries met earlier this month to plan a response and to commit to stepping up surveillance for the virus. Only about 10% of the DRC’s suspected mpox cases in 2023 were tested, due to limited testing capacity, meaning health officials “don’t have a full picture of what’s going on”, Ndembi says.
Genetic analyses of the virus responsible for the outbreak show mutations such as the absence of a large chunk of the virus’s genome, which researchers have previously noted as a sign of monkeypox viral adaptation. This has led the study’s authors to give a new name to the strain circulating in the province: clade Ib.
Making matters more fraught, South Kivu borders Rwanda and Burundi and is grappling with “conflict, displacement, food insecurity, and challenges in providing adequate humanitarian assistance”, which “might represent fertile ground for further spread of mpox”, the WHO warned last year.
Vaccines and treatment needed
In 2022, many wealthy countries offered vaccines against smallpox, which also protect against mpox, to individuals at high risk of contracting the disease. But few vaccine doses have reached African countries, where the disease’s toll has historically been highest.
While the DRC weighs regulatory approval for these vaccines, the United States has committed to providing the DRC with enough doses to inoculate 25,000 people, and Japan has said it will also provide vaccines, says Rosamund Lewis, technical lead for mpox at the WHO in Geneva, Switzerland. But a vaccination drive in the DRC would require hundreds of thousands — if not millions — of doses to inoculate individuals at high risk of infection, she says.
It’s not clear how much protection these vaccines will provide against clade I mpox, but Andrea McCollum, a poxvirus epidemiologist at the US Centers for Disease Control and Prevention in Atlanta, Georgia, says that data from tests in animals are promising. Researchers are also conducting a trial in the DRC of tecovirimat, an antiviral that is thought to be effective against mpox. Results are expected in the next year, McCollum says.
The WHO and CDC have helped to procure equipment that will allow for more rapid diagnosis of the disease in the DRC, especially in rural areas, Lewis says. She adds that the rapid mobilization of African health officials gives her hope that the outbreak can be controlled before clade Ib mpox starts spreading elsewhere.
UnitedHealth Group has issued an update on the data breach that recently struck its subsidiary, Change Healthcare.
The healthcare giant suffered a ransomware attack that knocked some of its services offline and disrupted pharmacies and other adjacent businesses across the United States.
In an update, UnitedHealth Group said that based on initial targeted data sampling to date, the company found “files containing protected health information (PHI) or personally identifiable information (PII), which could cover a substantial proportion of people in America.”
Ransomware fiasco
So far, there has been no evidence that the hackers stole materials such as doctors’ charts, or full medical histories.
The company further explained that the data review is ongoing and complex, and that it will likely take a few months to conclude the investigation, suggesting that the type of stolen data, as well as its scope, might change.
In the meantime, it set up a dedicated website http://changecybersupport.com/ where affected individuals can get more information and details. It also set up a dedicated call center, and is offering free credit monitoring and identity theft protection for two years.
The ransomware attack turned into something of a fiasco for both sides. The company was apparently attacked by an affiliate of the infamous ALPHV (BlackCat) ransomware-as-a-service (RaaS) operation. To address the problem and get its data back, the company paid the attackers $22 million in cryptocurrency. However, due to the nature of RaaS, the affiliate who breached Change never got the money; ALPHV took all of it and shut the entire operation down.
This also meant that Change never got its data back. In the meantime, a separate threat actor came forward, claiming to be in possession of the data, and asking for even more money.
UnitedHealth Group said that it’s monitoring the internet and the dark web, together with industry experts, to determine if any data made it online.
“There were 22 screenshots, allegedly from exfiltrated files, some containing PHI and PII, posted for about a week on the dark web by a malicious threat actor. No further publication of PHI or PII has occurred at this time,” the notification concludes.