Microsoft, which has gone “all-in” on artificial intelligence, has developed a generative AI model designed expressly for U.S. intelligence services. Unlike other AI platforms, such as Microsoft’s own Copilot, this one will be “air gapped” and won’t require a potentially unsafe connection to the internet.
Bloomberg notes, “It’s the first time a major large language model has operated fully separated from the internet… Most AI models, including OpenAI’s ChatGPT rely on cloud services to learn and infer patterns from data, but Microsoft wanted to deliver a truly secure system to the US intelligence community.”
18 months of development
The tool will allow intelligence services to use AI for tasks such as analyzing vast swathes of classified data without the fear of data leaks or hacks that could potentially compromise national security.
William Chappell, Microsoft’s CTO for Strategic Missions and Technology, told Bloomberg that the company spent 18 months working on this special GPT-4-based tool, which will be able to read and analyze content, answer questions and write code without needing to go online. Just as importantly, it reportedly won’t learn from, or be trained on, the data it is fed.
At a security conference last month, Sheetal Patel, assistant director of the CIA for the Transnational and Technology Mission Center, said, “There is a race to get generative AI onto intelligence data, and I want it to be us.”
Third-party screen protectors designed for the upcoming Galaxy Z Fold 6 emerged in a series of photos earlier today, and they seemingly confirm a couple of notable design changes.
Although the Galaxy Z Fold 6 might not look all that different from its predecessor, it does seem to adopt sharper edges and flatter surfaces. A photo of a cover screen protector (via @UniverseIce) also seems to reconfirm that the external display does indeed have sharper corners and thinner bezels all around.
More importantly, this cover screen protector suggests that Samsung is still searching for the sweet spot when it comes to the width of the Z Fold series’ cover screen.
Galaxy Z Fold 6 to have a wider cover screen
According to this photo, the Galaxy Z Fold 6’s cover screen protector measures 60.2 mm between its side bezels, while the Galaxy Z Fold 5’s clocks in at a narrower 57.4 mm.
Although these screen protectors might not perfectly match the dimensions of the cover screens they’re supposed to protect, the measurements suggest that Samsung is indeed adjusting the width of the cover screen, as the quick comparison below shows.
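To put the leaked figures in perspective, here is the simple arithmetic they imply, written up as a throwaway Python snippet. The numbers come straight from the screen-protector photos discussed above; the variable names are ours, and the usual caveat applies that protector dimensions only approximate the panel itself.

```python
# Quick arithmetic on the leaked screen-protector widths quoted above
# (60.2 mm for the Z Fold 6 vs 57.4 mm for the Z Fold 5).
fold6_width_mm = 60.2
fold5_width_mm = 57.4

diff_mm = fold6_width_mm - fold5_width_mm
diff_pct = diff_mm / fold5_width_mm * 100

print(f"Extra width: {diff_mm:.1f} mm (about {diff_pct:.0f}% wider)")
# Extra width: 2.8 mm (about 5% wider)
```

In other words, the cover screen would gain roughly 2.8 mm, or about a five percent bump in width.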
The company may have realized that the Z Fold series’ cover screen is a bit too narrow to fully replace a standard Galaxy phone. And as a result, the Galaxy Z Fold 6 could have a slightly wider cover display than its predecessors. If that’s the case, the upcoming foldable flagship phone might be easier to use when folded.
It might not seem like much, but a few millimeters can make a big difference when you’re dealing with these types of mobile devices.
Furthermore, considering that the Galaxy Z Fold 6 might have flatter surfaces and square corners all around, the upcoming foldable flagship could feel vastly different from its predecessors, even if it might end up looking like another incremental design refresh.
Samsung is expected to announce the Galaxy Z Fold 6 in early July at the next Unpacked event, which should be hosted in Paris roughly two weeks before the 2024 Olympic Games begin.
Nvidia continues to invest in AI initiatives, and its latest update to ChatRTX is no exception.
ChatRTX is, according to the tech giant, a “demo app that lets you personalize a GPT large language model (LLM) connected to your own content.” That content comprises your PC’s local documents, files, folders and so on, and the app essentially builds a custom AI chatbot from that information.
Because it doesn’t require an internet connection, it gives users speedy access to answers that might otherwise be buried under all those computer files. With the latest update, it has access to even more data and LLMs, including Google’s Gemma and ChatGLM3, an open, bilingual (English and Chinese) LLM. It can also search your photos locally, and it now has Whisper support, allowing users to converse with ChatRTX through an AI automatic speech recognition model.
Nvidia uses its TensorRT-LLM software and RTX graphics cards to power ChatRTX’s AI. And because everything runs locally, it’s far more secure than online AI chatbots. ChatRTX is free to download from Nvidia’s website if you want to try it out.
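ChatRTX ships as a packaged app rather than a library, but the underlying idea of answering questions from files that never leave your machine can be sketched in a few lines. The Python snippet below is a deliberately simplified, hypothetical illustration of the retrieval step only; it is not Nvidia’s pipeline, which reportedly pairs retrieval with TensorRT-LLM models, and the folder name and question are made up.

```python
# Toy, offline "ask your files" retriever: rank local paragraphs by keyword
# overlap with a question. A real system like ChatRTX would use a proper
# retriever and a local LLM instead, but data never leaving the PC is the point.
from pathlib import Path

def load_paragraphs(folder: str) -> list[tuple[str, str]]:
    """Return (filename, paragraph) pairs for every .txt file in a folder."""
    pairs = []
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        pairs += [(path.name, p.strip()) for p in text.split("\n\n") if p.strip()]
    return pairs

def top_matches(question: str, paragraphs: list[tuple[str, str]], k: int = 3):
    """Score each paragraph by shared words with the question (a crude
    stand-in for the similarity search a real retriever performs)."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(p.lower().split())), name, p) for name, p in paragraphs]
    return sorted(scored, reverse=True)[:k]

if __name__ == "__main__":
    docs = load_paragraphs("./my_notes")  # hypothetical folder of local notes
    for score, name, para in top_matches("when is the project deadline?", docs):
        print(f"[{name}] overlap={score}\n{para}\n")
```

In a full pipeline, the best-matching passages would then be handed to a locally hosted model as context, which is the “personalize a GPT… connected to your own content” part of Nvidia’s description.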
Can AI escape its ethical dilemma?
The concept of an AI chatbot using local data from your PC, instead of training on (read: stealing) other people’s online works, is rather intriguing. It seems to sidestep the ethical dilemma of using copyrighted works without permission and hoarding them. It also seems to solve another long-standing problem that has plagued many a PC user: actually finding long-buried files in your file explorer, or at least the information trapped within them.
However, there’s the obvious question of how such a limited data pool could hold the chatbot back. Unless the user is particularly skilled at curating and training on that data, the narrow pool could become a serious limitation over time. Of course, using it simply to locate information on your PC is perfectly fine, and most likely the proper use.
But the point of an AI chatbot is to have unique and meaningful conversations. Maybe there was a time when we could have achieved that without the rampant theft, but corporations have powered their AI with words scraped from other sites, and now the two are irrevocably tied.
Given how ethically fraught it is that data theft has become the crucial ingredient for making chatbots well-rounded enough to avoid getting trapped in feedback loops, Nvidia’s approach could represent a middle ground for generative AI. If fully developed, it could prove that we don’t need that ethical transgression to power and shape these tools, so here’s hoping Nvidia gets it right.
Neuromorphic computing is about mimicking the human brain’s structure to deliver more efficient data processing, including faster speeds and higher accuracy, and it’s a hot topic right now. A lot of universities and tech firms are working on it, including scientists at Intel who have built the world’s largest “brain-based” computing system for Sandia National Laboratories in New Mexico.
Intel’s creation, called Hala Point, is only the size of a microwave, but boasts 1.15 billion artificial neurons. That’s a massive step up from the 50 million neuron capacity of its predecessor, Pohoiki Springs, which debuted four years ago. There’s a theme with Intel’s naming in case you were wondering – they’re locations in Hawaii.
Hala Point is ten times faster than its predecessor and 15 times denser, packing up to one million neurons onto a single chip, where Pohoiki Springs’ chips managed only 128,000.
Making full use of it
Equipped with 1,152 Loihi 2 research processors (Loihi is a volcano in Hawaii), the Hala Point system will be tasked with harnessing the power of vast neuromorphic computation. “Our colleagues at Sandia have consistently applied our Loihi hardware in ways we never imagined, and we look forward to their research with Hala Point leading to breakthroughs in the scale, speed and efficiency of many impactful computing problems,” said Mike Davies, director of the Neuromorphic Computing Lab at Intel Labs.
Since a neuromorphic system of this scale hasn’t existed before, Sandia has been developing special algorithms to ultimately make use of the computer’s full capabilities.
“We believe this new level of experimentation – the start, we hope, of large-scale neuromorphic computing – will help create a brain-based system with unrivaled ability to process, respond to and learn from real-life data,” Sandia lead researcher Craig Vineyard said.
His colleague, fellow researcher Brad Aimone added, “One of the main differences between brain-like computing and regular computers we use today – in both our brains and in neuromorphic computing – is that the computation is spread over many neurons in parallel, rather than long processes in series that are an inescapable part of conventional computing. As a result, the more neurons we have in a neuromorphic system, the more complex a calculation we can perform. We see this in real brains. Even the smallest mammal brains have tens of millions of neurons; our brains have around 80 billion. We see it in today’s AI algorithms. Bigger is far better.”
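Aimone’s point about parallelism is easier to see in code. The sketch below is a purely conceptual, NumPy-based toy, not how Loihi 2 is actually programmed (Intel provides its own Lava software framework for that): time still advances step by step, but within each step every one of the million toy neurons is updated at once rather than one after another.

```python
# Conceptual toy only: a vectorized leaky integrate-and-fire update, where one
# operation touches every neuron "in parallel" each time step. Real Loihi 2
# work targets Intel's neuromorphic silicon, not NumPy on a CPU.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 1_000_000                  # scale the population, not the code
threshold, leak = 1.0, 0.95

potentials = np.zeros(n_neurons)             # membrane potential per neuron
weights = rng.normal(0.0, 0.5, n_neurons)    # toy input weight per neuron

for step in range(100):                      # time still unfolds serially...
    inputs = rng.random(n_neurons)           # ...but each step updates the
    potentials = potentials * leak + weights * inputs   # whole population at once
    spikes = potentials >= threshold         # boolean spike vector
    potentials[spikes] = 0.0                 # reset the neurons that fired
```

Adding more neurons, as Hala Point does in hardware, widens that per-step parallel update rather than lengthening the serial loop, which is the “bigger is far better” argument in a nutshell.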
An independent team could replicate select experiments in a paper before publication, to help catch errors and poor methodology.
Could the replication crisis in scientific literature be addressed by having scientists independently attempt to reproduce their peers’ key experiments during the publication process? And would teams be incentivized to do so by having the opportunity to report their findings in a citable paper, to be published alongside the original study?
These are questions being asked by two researchers who say that a formal peer-replication model could greatly benefit the scientific community.
Anders Rehfeld, a researcher in human sperm physiology at Copenhagen University Hospital, began considering alternatives to standard peer review after encountering a published study that could not be replicated in his laboratory. Rehfeld’s experiments [1] revealed that the original paper was flawed, but he found it very difficult to publish the findings and correct the scientific record.
“I sent my data to the original journal, and they didn’t care at all,” Rehfeld says. “It was very hard to get it published somewhere where you thought the reader of the original paper would find it.”
The issues that Rehfeld encountered could have been avoided if the original work had been replicated by others before publication, he argues. “If a reviewer had tried one simple experiment in their own lab, they could have seen that the core hypothesis of the paper was wrong.”
Rehfeld collaborated with Samuel Lord, a fluorescence-microscopy specialist at the University of California, San Francisco, to devise a new peer-replication model.
In a white paper detailing the process [2], Rehfeld, Lord and their colleagues describe how journal editors could invite peers to attempt to replicate select experiments of submitted or accepted papers by authors who have opted in. In the field of cell biology, for example, that might involve replicating a western blot, a technique used to detect proteins, or an RNA-interference experiment that tests the function of a certain gene. “Things that would take days or weeks, but not months, to do” would be replicated, Lord says.
The model is designed to incentivize all parties to participate. Peer replicators — unlike peer reviewers — would gain a citable publication, and the authors of the original paper would benefit from having their findings confirmed. Early-career faculty members at mainly undergraduate universities could be a good source of replicators: in addition to gaining citable replication reports to list on their CVs, they would get experience in performing new techniques in consultation with the original research team.
Rehfeld and Lord are discussing their idea with potential funders and journal editors, with the goal of running a pilot programme this year.
“I think most scientists would agree that some sort of certification process to indicate that a paper’s results are reproducible would benefit the scientific literature,” says Eric Sawey, executive editor of the journal Life Science Alliance, who plans to bring the idea to the publisher of his journal. “I think it would be a good look for any journal that would participate.”
Who pays?
Sawey says there are two key questions about the peer-replication model: who will pay for it, and who will find the labs to do the reproducibility tests? “It’s hard enough to find referees for peer review, so I can’t imagine cold e-mailing people, asking them to repeat the paper,” he says. Independent peer-review organizations, such as ASAPbio and Review Commons, might curate a list of interested labs, and could even decide which experiments will be replicated.
Lord says that having a third party organize the replication efforts would be great, and adds that funding “is a huge challenge”. According to the model, funding agencies and research foundations would ideally establish a new category of small grants devoted to peer replication. “It could also be covered by scientific societies, or publication fees,” Rehfeld says.
It’s also important for journals to consider what happens when findings can’t be replicated. “If authors opt in, you’d like to think they’re quite confident that the work is reproducible,” says Sawey. “Ideally, what would come out of the process is an improved methods or protocols section, which ultimately allows the replicating lab to reproduce the work.”
Most important, says Rehfeld, is ensuring that the peer-replication reports are published, irrespective of the outcome. If replication fails, then the journal and original authors would choose what to do with the paper. If an editor were to decide that the original manuscript was seriously undermined, for example, they could stop it from being published, or retract it. Alternatively, they could publish the two reports together, and leave the readers to judge. “I could imagine peer replication not necessarily as an additional ‘gatekeeper’ used to reject manuscripts, but as additional context for readers alongside the original paper,” says Lord.
A difficult but worthwhile pursuit
Attempting to replicate others’ work can be a challenging, contentious undertaking, says Rick Danheiser, editor-in-chief of Organic Syntheses, an open-access chemistry journal in which all papers are checked for replicability by a member of the editorial board before publication. Even for research from a well-resourced, highly esteemed lab, serious problems can be uncovered during reproducibility checks, Danheiser says.
Replicability in a field such as synthetic organic chemistry — in which the identity and purity of every component in a reaction flask should already be known — is already challenging enough, so the variables at play in some areas of biology and other fields could pose a whole new level of difficulty, says Richard Sever, assistant director of Cold Spring Harbor Laboratory Press in New York, and co-founder of the bioRxiv and medRxiv preprint servers. “But just because it’s hard, doesn’t mean there might not be cases where peer replication would be helpful.”
The growing use of preprints, which decouple research dissemination from evaluation, allows some freedom to rethink peer evaluation, Sever adds. “I don’t think it could be universal, but the idea of replication being a formal part of evaluating at least some work seems like a good idea to me.”
An experiment to test a different peer-replication model in the social sciences is already under way, says Anna Dreber Almenberg, who studies behavioural and experimental economics at the Stockholm School of Economics. Dreber is a board member of the Institute for Replication (I4R), an organization led by Abel Brodeur at the University of Ottawa that works to systematically reproduce and replicate research findings published in leading journals. In January, I4R began an ongoing partnership with Nature Human Behaviour to attempt computational reproduction of the data and findings of as many studies published from 2023 onwards as possible. Replication attempts from the first 18 months of the project will be gathered into a ‘meta-paper’ that will go through peer review and be considered for publication in the journal.
“It’s exciting to see how people from completely different research fields are working on related things, testing different policies to find out what works,” says Dreber. “That’s how I think we will solve this problem.”