American telecommunications behemoth AT&T has finally confirmed the authenticity of the 2021 data breach that spilled sensitive user information on the dark web, and has initiated a mass reset of user passcodes.
Roughly three years ago, privacy blog RestorePrivacy broke the news of a hacker selling sensitive data belonging to more than 70 million AT&T customers. The data allegedly contained people’s names, phone numbers, postal addresses, email addresses, social security numbers, and dates of birth.
While AT&T initially denied the breach, saying the data wasn’t from the company, the hacker, going by the name “ShinyHunters,” said the organization would likely keep denying it until they leaked it all.
Mass reset
Sure enough, last month a seller published the full database, affecting 73 million people – and TechCrunch analyzed it, confirming its authenticity and establishing that it contained user passcodes, which prompted the outlet to swiftly alert AT&T.
Passcodes are four-digit numbers that act as a second layer of security for accessing user accounts. Even though the leaked passcodes were encrypted, some researchers argued the protection could be worked around: there apparently isn’t enough randomness in the encrypted data, which means that, in theory, a threat actor could guess a passcode.
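To see why researchers were worried, consider the math: a four-digit passcode has only 10,000 possible values, so if the transform applied to it is deterministic and unsalted, an attacker who knows the scheme can precompute every possible output and invert any leaked value by lookup. The sketch below is purely illustrative and assumes a hypothetical scheme (the `fake_encrypt` stand-in); nothing is publicly known about how AT&T actually protected these values.

```python
# Hypothetical sketch: why a 4-digit passcode offers so little protection.
# The real encryption scheme behind the leaked data is unknown; here we assume
# a deterministic, unsalted transform purely for illustration.
import hashlib

def fake_encrypt(passcode: str) -> str:
    # Stand-in for whatever deterministic transform the data might have used.
    return hashlib.sha256(passcode.encode()).hexdigest()

# An attacker can precompute all 10,000 possible outputs...
lookup = {fake_encrypt(f"{n:04d}"): f"{n:04d}" for n in range(10_000)}

# ...and invert any leaked value instantly.
leaked_value = fake_encrypt("4821")   # pretend this came from the dump
print(lookup[leaked_value])           # recovers "4821"
```

If the scheme had used a per-user salt or a slow key-derivation function, this table attack would not work, which is exactly why the lack of randomness matters.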
It seems the threat is more than theoretical, as AT&T initiated a mass reset of the passcodes over the weekend.
“AT&T has launched a robust investigation supported by internal and external cybersecurity experts,” the company said in a statement published on Saturday. “Based on our preliminary analysis, the data set appears to be from 2019 or earlier, impacting approximately 7.6 million current AT&T account holders and approximately 65.4 million former account holders.”
“AT&T does not have evidence of unauthorized access to its systems resulting in exfiltration of the data set,” the statement said.
While the telco did confirm the breach, it says it still doesn’t know where the data came from – whether directly from its own servers or from one of its vendors.
Organizations today heavily rely on big data to drive decision-making and strategize for the future, adapting to an ever-expanding array of data sources, both internal and external. This reliance extends to a variety of tools used to harness this data effectively.
In the modern business environment, with an estimated 2.5 quintillion bytes of data generated daily, big data is undoubtedly pivotal in understanding and developing all aspects of an organization’s goals. However, known for its vast volume and rapid collection, big data can overwhelm and lead to analysis paralysis if not managed and analyzed objectively. But, when dissected thoughtfully, it can provide the critical insights necessary for strategic advancement.
The evolution of big data in business strategy
In the past, businesses primarily focused on structured data from internal systems, but today, they navigate a sea of unstructured data from varied sources. This transition is fueled by key market trends, such as the exponential growth of Internet of Things (IoT) devices and the increasing reliance on cloud computing. Big data analytics has become essential for organizations aiming to derive meaningful insights from this vast, complex data landscape, transcending traditional business intelligence to offer predictive and prescriptive analytics.
Driving this big data revolution are several market trends. The surge in digital transformation initiatives, accelerated by the global pandemic, has seen a significant increase in data creation and usage. Businesses are integrating and analyzing new data sources, moving beyond basic analytics to embrace more sophisticated techniques. Now, it is about refining data strategies to align closely with specific business goals and outcomes. The increasing sophistication of analytics tools, capable of handling the 5 Vs of big data – volume, variety, velocity, veracity, and value – is enabling businesses to tap into the true potential of big data, transforming it from a raw resource into a valuable tool for strategic decision-making.
Amy Groden-Morrison
VP of Marketing and Sales Operations, Alpha Software.
Practical applications of big data across industries
Big data’s influence is evident across various sectors, each utilizing it uniquely for growth and innovation:
Transportation: GPS applications use data from satellites and government sources for optimized route planning and traffic management, while aviation analytics systems process flight data (about 1,000 gigabytes per transatlantic flight) to enhance fuel efficiency and safety.
Healthcare: Wearable devices and embedded sensors are often employed to collect valuable patient data in real time, helping predict epidemic outbreaks and improve patient engagement.
Banking and Financial Services: Banks monitor the purchasing patterns of credit cardholders to detect potential fraud, and big data analytics are used for risk management and to optimize customer relationship management.
Government: Agencies like the IRS and SSA use data analysis to identify tax fraud and fraudulent disability claims, while the CDC uses big data to track the spread of infectious diseases.
Media and Entertainment: Companies like Amazon Prime and Spotify use big data analytics to recommend personalized content to users.
Implementing big data strategies within organizations requires a nuanced approach. First, identifying relevant data sources and integrating them into a cohesive analytics system is crucial. For instance, banks have leveraged big data for fraud detection and customer relationship optimization, analyzing patterns in customer transactions and interactions. Additionally, big data aids in personalized marketing, with companies like Amazon using customer data to tailor marketing strategies, leading to more effective ad placements.
The key lies in aligning big data initiatives with specific business objectives, moving beyond mere data collection to generating actionable insights. Organizations need to invest in the right tools and skills to analyze data, ensuring data-driven strategies are central to their decision-making processes. Implementing these strategies can lead to more informed decisions, improved customer experiences, and enhanced operational efficiency.
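As a rough illustration of the pattern-analysis idea behind fraud detection (and not any bank’s actual system), a hypothetical check might flag a new transaction that sits far outside a customer’s usual spending profile. All figures below are invented.

```python
# Minimal sketch of transaction anomaly flagging via a z-score on amounts.
# Real fraud systems combine many more signals (merchant, location, timing);
# this only illustrates the basic "deviation from the usual pattern" idea.
from statistics import mean, stdev

def is_suspicious(new_amount: float, history: list[float], threshold: float = 3.0) -> bool:
    """Flag a transaction more than `threshold` standard deviations from the
    customer's average historical spend."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(new_amount - mu) / sigma > threshold

history = [42.0, 18.5, 63.0, 29.9, 51.2, 38.4, 44.7, 27.3]   # toy purchase history
print(is_suspicious(36.0, history))     # False: in line with usual spend
print(is_suspicious(4999.0, history))   # True: far outside the usual pattern
```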
Navigating data privacy and security concerns
Addressing data privacy and security in big data is crucial, given the legal and ethical implications. With regulations like the GDPR imposing fines for non-compliance, companies must ensure adherence to legal standards. Some 81% of consumers say they are increasingly concerned about how their data is used online, highlighting the need for robust data governance. Companies should establish clear policies for data handling and conduct regular compliance audits.
For data security, a multi-layered approach is essential. Practices include encrypting data, implementing strong access controls, and conducting vulnerability assessments. Advanced analytics for threat detection and a zero-trust security model are also crucial to maintain data integrity and mitigate risks.
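As one concrete, minimal example of the "encrypting data" practice, the sketch below protects a record at rest with symmetric encryption via the third-party Python `cryptography` package. The library choice and key handling shown here are assumptions for illustration, not a recommendation of any particular stack.

```python
# Minimal sketch of encrypting a record at rest with symmetric encryption.
# Requires the third-party `cryptography` package (pip install cryptography).
# Key management (secrets managers, HSMs, rotation, access policies) is out
# of scope here but is the hard part in practice.
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # store in a secrets manager, never in source code
cipher = Fernet(key)

record = b'{"customer_id": 1234, "email": "user@example.com"}'
token = cipher.encrypt(record)   # ciphertext safe to write to disk or a database
print(cipher.decrypt(token))     # original record recovered only with the key
```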
Big data predictions and preparations
In the next decade, big data is set to undergo significant transformations, driven by advancements in AI and machine learning. IDC forecasts suggest the global data sphere will reach 175 zettabytes by 2025, underscoring the growing volume and complexity of data. To stay ahead, businesses must invest in scalable data infrastructure and enhance their workforce’s analytical skills. Adapting to emerging data privacy regulations and maintaining robust data governance will also be vital. With this proactive approach, businesses will be set to successfully utilize big data, ensuring continued innovation and competitiveness in a data-centric future.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
In today’s dynamic business landscape, data management stands as a critical cornerstone, directly influencing an organization’s agility and innovation capabilities. The digital age demands that companies reassess their data management strategies, particularly their reliance on traditional master data management (MDM) systems. These legacy systems, often entrenched because of the ‘sunk cost’ fallacy, hinder progress and adaptability, locking businesses into outdated practices that impede growth.
Rules-based MDM solutions, with their rigid frameworks and manual-intensive operations, are increasingly misaligned with the needs of modern data environments. They struggle to manage the diversity and volume of data generated today, leading to inefficiencies that can ripple through an organization, affecting everything from decision-making speeds to customer experience and the ability to capitalize on emerging opportunities.
The shift towards AI-powered data management through data products revolutionizes traditional MDM, offering a solution that transcends its limitations. Data products employ artificial intelligence (AI) and machine learning (ML) to automate and refine data processes, enhancing accuracy, efficiency, and scalability. The integration of AI technologies ensures that data management systems can evolve in tandem with the changing data landscape, ensuring businesses remain at the forefront of innovation.
The advantages of transitioning to AI-driven data management systems are manifold. Beyond improving data quality and operational efficiencies, these systems unlock the most accurate insights, facilitating more informed business decisions, optimizing operations, and enriching customer experiences. This strategic enhancement in data management capabilities is invaluable in driving a company’s growth and competitive edge.
Integrating data products into legacy MDM systems is transformative, yet it’s the partnership between AI and human intelligence that truly unlocks their potential. AI automates and streamlines data management, but human oversight ensures accuracy, ethics, and context. This synergy between human intuition and AI’s capabilities fosters innovation, enhances decision-making, and ensures responsible data use. Businesses embracing this collaborative approach will navigate the complexities of modern data environments more effectively, securing a competitive edge in the digital age.
Anthony Deighton
Data Products General Manager, Tamr.
Take, for example, the competitive landscape of retail: a large chain might grapple with significant challenges that hinder its efficiency and customer satisfaction. One common issue is inconsistent product data across platforms such as the website, mobile app, and in-store displays. This inconsistency can confuse customers and lead to inaccurate inventory management. Additionally, many retailers rely on basic customer demographics and purchase history for personalization, which often results in generic marketing campaigns that fail to engage customers on a deeper level. Another critical challenge is reactive inventory management, where manual forecasting and stock level assessments frequently result in either overstocking or understocking, negatively impacting both sales and profitability.
In contrast to traditional MDM solutions, AI-powered data products offer innovative solutions to these pervasive issues in the retail sector. For instance, AI-driven data management can dynamically unify and clean product data across various platforms, ensuring consistency on the website, mobile app, and in-store displays. This not only enhances the customer experience by providing accurate and coherent product information but also improves inventory management by enabling real-time tracking and updates.
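At its simplest, the "unify and clean" step comes down to deciding which records from different channels describe the same product. The toy sketch below uses basic string similarity for that matching; real AI-driven data products rely on trained models over many fields, so treat this only as an illustration of the underlying idea, with all product names invented.

```python
# Toy sketch of matching product records from two channels by name similarity.
# Production entity-resolution systems use trained ML models across many
# attributes; this only shows the core idea of linking records that refer to
# the same product.
from difflib import SequenceMatcher

website = ["Acme Trail Running Shoe - Blue, Size 10", "Acme Rain Jacket XL"]
app     = ["ACME trail running shoe blue sz 10", "Acme Waterproof Rain Jacket (XL)"]

def similarity(a: str, b: str) -> float:
    # Case-insensitive ratio of matching characters between two names.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for w in website:
    best = max(app, key=lambda a: similarity(w, a))
    if similarity(w, best) > 0.6:          # crude threshold for a likely match
        print(f"MATCH: {w!r}  <->  {best!r}")
```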
Moreover, AI-powered systems go beyond basic customer demographics and purchase history to offer advanced personalization. By leveraging machine learning algorithms, these systems can analyze a wide array of data points, including browsing behavior, social media interactions, and even environmental factors, to deliver highly personalized and engaging marketing campaigns. This level of personalization not only enhances customer engagement but also significantly increases the effectiveness of marketing efforts.
When it comes to inventory management, AI-powered data products transform the traditional reactive approach into a proactive strategy. Predictive analytics and machine learning enable more accurate forecasting of demand, taking into account not just historical sales data but also trends, seasonality, and external factors such as economic indicators and social trends. This results in optimized stock levels, reducing the risks of overstocking or understocking, and consequently, improving sales and profitability.
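To make the seasonality point concrete, the minimal sketch below forecasts the next twelve months of demand from two years of invented monthly sales by repeating last year’s seasonal shape and scaling it by year-over-year growth. Production systems use far richer models plus the external signals mentioned above; this is only the simplest possible baseline.

```python
# Minimal sketch of a seasonal-aware demand forecast: predict each coming month
# as last year's same-month value scaled by the recent year-over-year trend.
# All numbers are invented for illustration.

monthly_units = [120, 115, 140, 160, 210, 260, 300, 290, 220, 180, 150, 170,   # year 1
                 130, 125, 150, 175, 230, 285, 330, 315, 240, 195, 165, 185]   # year 2

def seasonal_naive_forecast(history: list[int], season_length: int = 12) -> list[int]:
    last_season = history[-season_length:]
    prior_season = history[-2 * season_length:-season_length]
    growth = sum(last_season) / sum(prior_season)        # year-over-year trend
    return [round(x * growth) for x in last_season]      # next 12 months

print(seasonal_naive_forecast(monthly_units))
```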
Furthermore, AI-driven solutions can provide valuable insights into customer behavior, market trends, and operational efficiencies through advanced analytics and data visualization tools. These insights can inform strategic decisions, enabling retailers to adapt more swiftly to market changes and customer needs.
Modernization made easy: Integrating AI into existing MDM
For businesses tethered to legacy MDM systems, the path forward doesn’t necessitate a complete overhaul. Integrating AI-driven solutions with existing infrastructures offers a pragmatic approach to modernization, allowing for incremental improvements without substantial disruption or the abandonment of previous investments. This methodical integration can bring about significant enhancements in data management practices, ensuring a smoother transition and immediate benefits.
Embarking on this transition requires a strategic approach, beginning with a thorough assessment of current data management needs and a careful selection of appropriate AI solutions. Companies must navigate potential challenges, including cultural shifts, skill development, and implementation hurdles, with a clear strategy and vision.
Looking to the future, data management must prioritize flexibility, scalability, and agility to support ongoing business growth and adaptability. Embracing AI-powered data products is not merely a tactical move but a strategic imperative to future-proof data management practices. By continuously evolving and adapting to new technologies and data sources, businesses can ensure they remain competitive in an ever-changing digital landscape.
As industries worldwide continue to evolve at an unprecedented pace, the shift from legacy MDM to AI-driven data management is not just a trend but a fundamental requirement for maintaining relevance and competitiveness. The adoption of AI-enhanced systems enables organizations to harness the vast potential of their data, resulting in better and more accurate insights. These insights facilitate faster decision-making, leading to operational efficiencies, improved customer experiences, and increased ROI. Companies that understand the urgency of this shift and act decisively will find themselves at the forefront of the new data-driven era.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
‘Data is the new oil’ is a phrase coined by British mathematician Clive Humby in 2006. It has become an overused expression, largely meaning that if your organization has access to vast amounts of data, you can use it to aid decision-making and drive results.
While there is truth in the idea that access to data can lead to better business intelligence insights, what companies actually need is access to ‘good’ data and the insights it yields. However, knowing what makes data valuable is something many still struggle with. With considerations often spanning quantity, age, source, or variety, failing to understand what type of data is genuinely good for the business makes it easy to get lost in data sets that are ultimately poor quality and bad for decision-making.
The big cost of the wrong big data
The cost of handling poor quality data is high: on average around $13 million per business, or 10-30% of revenue, which is a huge burden for a company of any size.
Companies have become used to making decisions based on big data sets. They use spreadsheet software to analyze the data and base decisions on that analysis. But this approach fuels the need for ever more data in which to spot trends of ‘statistical significance’. The challenge is that it’s difficult to truly scrutinize the source and authenticity of that information. Take consumer insights, for example. If a business acquires its data sets from a third party, can it ever be 100% certain that all of the information was provided by authentic respondents, with none of it coming from bots or from people who weren’t being entirely truthful?
Jonas Alexandersson
Co-founder and CXO, GetWhy.
For case studies on why large data sets don’t always mean more accurate results, we can look to politics. 2024 is due to be a monumental year, with both the US and UK set for elections, and political polling will once again have a role to play in predicting outcomes. However, polls are not always right. During both the 2016 and 2020 US elections, the polls leading up to the vote got some very big things wrong, in 2016 even predicting that Hillary Clinton was going to celebrate a huge win. Reasons for the wildly incorrect predictions include nonresponse bias, where Trump voters were less likely to interact with polls, skewing the results towards Clinton.
Similarly in the UK, the ‘Shy Tory factor’ has been cited in elections where the Conservative Party performed better than the polls predicted. In these cases, respondents said they were going to vote one way but ultimately did the opposite.
While a handful of such respondents in a large data set may not have much influence over the final analysis, the aforementioned election polls show what can happen when data isn’t truly reflective of the external world. For businesses that use such analysis to drive decision-making, acting on that information can cost them heavily.
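A quick simulation makes the nonresponse problem tangible. In the hypothetical scenario below (all numbers invented), one candidate’s supporters answer polls less often: a million-contact poll confidently lands on the wrong answer, while a thousand-person poll with unbiased response rates gets close to the truth.

```python
# Toy simulation of nonresponse bias: a huge but biased sample misses the true
# split, while a far smaller unbiased sample gets close. All numbers invented.
import random

random.seed(1)
TRUE_SUPPORT = 0.48            # true share supporting candidate A

def poll(n: int, response_rate_a: float = 1.0, response_rate_b: float = 1.0) -> float:
    responses = []
    for _ in range(n):
        supports_a = random.random() < TRUE_SUPPORT
        rate = response_rate_a if supports_a else response_rate_b
        if random.random() < rate:          # does this person answer the poll?
            responses.append(supports_a)
    return sum(responses) / len(responses)

# 1,000,000 contacts, but A's supporters answer only 70% as often as B's:
print(f"biased mega-poll:    {poll(1_000_000, response_rate_a=0.7):.3f}")
# 1,000 contacts with equal response rates:
print(f"small unbiased poll: {poll(1_000):.3f}")
```

The biased mega-poll lands well below the true 48% support, despite its enormous sample, while the small unbiased poll hovers near the right answer: more data does not fix a skewed collection process.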
Listening vs understanding
Relying on big data sets is also a sign that businesses are often set up to listen to, not understand, their consumers. This means that while they can use big data to see trends, they don’t understand why those trends exist. For instance, if an organization knows that consumers like the color blue but doesn’t seek any further information, it has just listened. In the short term this may prove successful, but if that trend suddenly shifts and consumers start liking green, it will be slow to react.
Now, if a business knows consumers like blue but goes a step further and discovers why, it will understand what actually influences them. Perhaps the preference for blue is a response to an event or a particular mood, and when an organization has that information, not only can it make decisions that are more empathetic towards consumers, it can also better prepare for any evolution in requirements.
Ushering in a new age of empathy
Empathy is critical at a moment when the world is facing challenging times. With various significant geopolitical events occurring, understanding consumers is one of many things that can help bring about a new age of empathy. Companies also have work to do to keep consumers onside, as there is growing distrust of brands driven by a number of factors. For instance, consumers are frequently exposed online to unfair practices, including fake reviews and data concerns around targeted advertising.
To break the cycle, businesses need to revisit how they discover insights. Collecting insights has typically involved huge investments of time and cost, and the resulting big, cumbersome data sets that reduce respondents to a number are no longer suitable in a world where people’s views are constantly shifting. Not only do they take too long to collect, but the data might be incorrect in the first place.
Organizations need to place more emphasis on understanding consumers. They need to know why people think a certain way, not just that they do. AI-driven qualitative insights enable businesses to quickly understand what audiences truly want. The AI can run survey tools with respondents from demographics across the globe and deliver analysis in hours, with the same quality as traditional methods. By then watching the recordings back, brands see not only what a respondent says, but how and why they say it.
Ultimately, bad data costs businesses a lot. Acting on inaccurate information can have significant repercussions, ranging from slightly unhappy consumers to complete failure. Companies have to do away with their old processes and adopt a new approach to insight collection. Bigger data sets don’t mean better insights; a more thoughtful, targeted approach does. And when businesses truly understand consumers, it drives empathetic decision-making, brand trust, and greater results.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
If someone were to infect your Meta Quest VR headset with malware, they could trick you into seeing things in the virtual world which weren’t real, experts have warned.
Academics from Cornell University recently published a paper describing the possibility of hijacking people’s VR sessions and controlling their interactions with internal applications, external servers, and more.
As per the paper, hackers could, in theory, insert what they call an “Inception Layer” between the VR Home Screen and the VR User/Server. For example, the victim could open their banking app in virtual reality, and see their usual balance, while being completely bankrupt in reality. The hackers could also, potentially, trick the victim into initiating a wire transfer, while being completely oblivious to what’s actually going on.
VR phishing
Things could get even crazier when you throw in generative AI, deepfakes, and other emerging technologies. People could end up thinking they were talking with their friends, coworkers, and bosses in VR, while being taken for all they have in the background.
While the threats sound ominous, it’s important to note that the researchers didn’t actually explore how these VR headsets might be compromised. Whether or not they have a vulnerability that could be exploited this way is unknown at this time. What’s more, there is no proof-of-concept and no known malware capable of pulling such an attack off.
Instead, the researchers were just interested in whether or not people would notice anything was amiss if such an infection did occur.
In the study, 27 people were tested to see whether they would notice anything strange during a session of Beat Saber. The only visual clue was a slight flickering on the home screen before the game launched. Ten people noticed the change, nine of whom attributed it to an innocuous system glitch.
In other words, prepare to read about elaborate phishing scams in the metaverse.
Electrical engineer Gilbert Herrera was appointed research director of the US National Security Agency in late 2021, just as an AI revolution was brewing inside the US tech industry.
The NSA, sometimes jokingly said to stand for No Such Agency, has long hired top math and computer science talent. Its technical leaders have been early and avid users of advanced computing and AI. And yet when Herrera spoke with me by phone about the implications of the latest AI boom from NSA headquarters in Fort Meade, Maryland, it seemed that, like many others, the agency has been stunned by the recent success of the large language models behind ChatGPT and other hit AI products. The conversation has been lightly edited for clarity and length.
Gilbert Herrera (Image credit: National Security Agency)
How big of a surprise was the ChatGPT moment to the NSA?
Oh, I thought your first question was going to be “what did the NSA learn from the Ark of the Covenant?” That’s been a recurring one since about 1939. I’d love to tell you, but I can’t.
What I think everybody learned from the ChatGPT moment is that if you throw enough data and enough computing resources at AI, these emergent properties appear.
The NSA really views artificial intelligence as being at the frontier of a long history of using automation to perform our missions with computing. AI has long been viewed as a way we could operate smarter and faster and at scale. And so we’ve been involved in research leading to this moment for well over 20 years.
Large language models were around long before generative pretrained transformer (GPT) models. But this “ChatGPT moment” – once you could ask it to write a joke, or once you can engage in a conversation – really differentiates it from other work that we and others have done.
The NSA and its counterparts among US allies have occasionally developed important technologies before anyone else but kept them a secret, like public key cryptography in the 1970s. Did the same thing perhaps happen with large language models?
At the NSA we couldn’t have created these big transformer models, because we could not use the data. We cannot use US citizens’ data. Another thing is the budget. I listened to a podcast where someone shared a Microsoft earnings call, and they said they were spending $10 billion a quarter on platform costs. [The total US intelligence budget in 2023 was $100 billion.]
It really has to be people that have enough money for capital investment that is tens of billions and [who] have access to the kind of data that can produce these emergent properties. And so it really is the hyperscalers [largest cloud companies] and potentially governments that don’t care about personal privacy, don’t have to follow personal privacy laws, and don’t have an issue with stealing data. And I’ll leave it to your imagination as to who that may be.
Doesn’t that put the NSA—and the United States—at a disadvantage in intelligence gathering and processing?
I’ll push back a little bit: It doesn’t put us at a big disadvantage. We kind of need to work around it, and I’ll come to that.
It’s not a huge disadvantage for our responsibility, which is dealing with nation-state targets. If you look at other applications, it may make it more difficult for some of our colleagues that deal with domestic intelligence. But the intelligence community is going to need to find a path to using commercial language models and respecting privacy and personal liberties. [The NSA is prohibited from collecting domestic intelligence, although multiple whistleblowers have warned that it does scoop up US data.]
More than four out of five organizations around the world (85%) suffered at least one data loss incident last year.
This is according to a new report from cybersecurity researchers Proofpoint, which says that most of the time, it’s not the computers’ fault – it’s ours.
Earlier this week, Proofpoint published its inaugural Data Loss Landscape report. This paper, which explores how current approaches to data loss prevention (DLP) are holding up against macro challenges, is based on a survey of 600 security professionals working in large enterprises, as well as data from the company’s Information Protection Platform, and Tessian.
The human factor is again to blame
According to the report, data loss is usually the result of poor interactions between humans and machines, with “careless users” much more likely to cause data incidents than compromised or otherwise misconfigured systems.
Proofpoint further claims that many organizations are happy to invest in DLP solutions, but these investments are “often inadequate”. Of all the organizations that suffered a data loss incident, almost nine in ten (86%) faced negative outcomes, such as business disruptions, or revenue losses (reported by more than half – 57% – of affected firms).
“Careless, compromised, and malicious users are and will continue to be responsible for the vast majority of incidents, all while GenAI tools are absorbing common tasks—and gaining access to confidential data in the process,” commented Ryan Kalember, chief strategy officer, Proofpoint. “Organizations need to rethink their DLP strategies to address the underlying cause of data-loss—people’s actions—so they can detect, investigate, and respond to threats across all channels their employees are using including cloud, endpoint, email, and web.”
Misconfigured databases – incidents in which employees, for example, forget to set a password on a major database – are one of the most common causes of data leaks.
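For teams wanting to catch this class of mistake in their own infrastructure, a simple check is to test whether a database they operate will hand out data without credentials. The sketch below uses MongoDB via the third-party `pymongo` package purely as an example; the host, port, and database choice are assumptions, and it should only ever be run against systems you own.

```python
# Defensive sketch: check whether a MongoDB instance you operate accepts
# unauthenticated access. Requires the third-party `pymongo` package.
# MongoDB is just one example of the kind of database that ends up exposed.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def allows_anonymous_access(host: str = "localhost", port: int = 27017) -> bool:
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        client.list_database_names()   # requires credentials when auth is enabled
        return True                    # listing succeeded: anonymous reads possible
    except OperationFailure:
        return False                   # authentication is being enforced
    except ServerSelectionTimeoutError:
        return False                   # host unreachable; nothing to conclude

if __name__ == "__main__":
    print("Anonymous access allowed:", allows_anonymous_access())
```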
Over the years, we’ve witnessed millions of people lose their sensitive information that way. Earlier this year, for example, Cybernews found an unprotected database holding sensitive information on the entire population of Brazil. Another example is a BMW security error that resulted in the leak of sensitive information belonging to its customers.
A hacker is selling a huge archive on the dark web, claiming it originated from a 2021 data breach at American telecommunications giant AT&T – however, the company denies the data came from its servers.
BleepingComputer reported that a threat actor with the alias ShinyHunters posted an ad on RaidForums offering for sale sensitive data belonging to 71 million AT&T customers.
The database contains people’s names, addresses, mobile phone numbers, birth dates, social security numbers, and other sensitive information. The publication analyzed a sample of the data and confirmed its authenticity. Whether or not the entire database is legitimate is impossible to determine at this time.
History of breaches
ShinyHunters’ starting price is $200,000, with incremental offers of $30,000. They said they would sell the database immediately for an offer of $1 million.
However, when approached by the publication, AT&T said the data wasn’t theirs: “Based on our investigation today, the information that appeared in an internet chat room does not appear to have come from our systems,” AT&T told BleepingComputer in 2021. ShinyHunters, on the other hand, responded that they “don’t care if they don’t admit”.
AT&T has a history of data breaches and malware incidents. Roughly a year ago, the company warned millions of its users that some of their sensitive data had been exposed in a supply chain cyberattack. Apparently, a marketing vendor was breached a few months earlier, resulting in the theft of AT&T’s data.
In that incident, nine million of its customers were affected, with hackers stealing customer proprietary network information from some wireless accounts. That includes, among other things, the number of lines on an account or wireless rate plan.
Earlier still, in July 2020, news broke that some AT&T employees had taken bribes to install malware on the company’s network. Two individuals, later identified as Muhammad Fahd and Ghulam Jiwani, were charged with paying over $1 million in bribes to several AT&T employees at the telecom’s Mobility Customer Care call center in Washington.
Data centers produce a lot of waste heat that could one day be recycled and used to heat millions of homes.
Now, French data center company Data4 has partnered with the University of Paris-Saclay to launch a project that aims to use data center heat to grow algae, which can then be recycled into energy. The pilot project, set to commence early in 2024, will be trialed in the Paris region.
This initiative, led by a diverse team of experts from various fields, is driven by the French administration “Conseil Départemental de l’Essonne” and the Foundation Université Paris-Saclay. The project comes as a response to the escalating environmental impact of data centers, which have seen a 35% annual increase in data storage worldwide.
A more efficient alternative
(Image credit: Data4)
The algae, grown using the data center’s waste heat and captured CO2, will be recycled into biomass to create new circular energy sources and will also be used in the production of bioproducts for other industries.
According to a feasibility study conducted with start-up Blue Planet Ecosystems, the carbon capture efficiency of this method can be 20 times greater than that of a tree.
Data4 says using the data center waste heat for the growth of algae is a more efficient alternative to the common practice of using it to warm nearby homes, which only utilizes 20% of the heat produced.
“This augmented biomass project meets two of the major challenges of our time: food security and the energy transition. This requires close collaboration between all the players in the Essonne region, including Data4, to develop a genuine industrial ecology project, aimed at pooling resources and reducing consumption in the region. Thanks to this partnership with the Fondation de l’Université Paris Saclay, we have the opportunity to draw on one of the world’s most prestigious scientific communities to work towards a common goal of a circular energy economy,” says Linda Lescuyer, Innovation Manager, Data4.
A newly discovered, Microsoft-branded SSD suggests the tech giant may be exploring – or at least has explored – new ways to optimize its data center storage.
The leaked images of a Microsoft Z1000 SSD show a 1TB NVMe M.2 drive, apparently boasting sequential read speeds of up to 2,400MB/s and write speeds of 1,800MB/s.
The Z1000 SSD, originally revealed by @yuuki_ans on X, is made up of a mix of components from various companies, including Toshiba NAND flash chips, Micron’s DDR4 RAM cache, and a controller from CNEX Labs, a company best known for its work with data center hyperscalers.
(Image credit: @yuuki_ans on X)
Up to 4TB capacity
Back in 2018, CNEX Labs closed a $23 million Series D funding round led by Dell Technologies Capital which also included Microsoft’s venture fund M12. This money was partially used to fund a proprietary, advanced CNX-2670 controller that delivered 550,000 IOPS, a 25% performance increase over previously available M.2 form-factor SSDs at the time. The CNEX Labs controller in the leaked photos is CNX-2670AA-0821.
The SSD offers 960GB of usable capacity from four 256GB Toshiba BiCS4 96-layer eTLC chips (the remainder of the raw NAND is likely reserved for over-provisioning) and features a 1GB DDR4 RAM cache made by Micron to boost performance.
The leaked “engineering sample”, produced on May 18, 2020, when much of the world was in Covid lockdowns, suggests the drive is part of a broader portfolio of SSD models. Its design allows for the addition of more DRAM and capacitors, hinting at larger versions.
As Tom’s Hardware notes “several unused solder pads are on both sides of the PCB, presumably for additional capacitors. This implies that there may be larger versions of the Z1000 with 2TB and perhaps even 4TB of room, given that more capacity would require more DRAM and capacitors to ensure data protection.”
This isn’t the first time Microsoft has experimented with hardware design for its data centers, having recently revealed its own-brand silicon hardware in order to help further the development and use of AI in businesses.