Artificial intelligence (AI) systems, such as the chatbot ChatGPT, have become so advanced that they now very nearly match or exceed human performance in tasks including reading comprehension, image classification and competition-level mathematics, according to a new report (see ‘Speedy advances’). Rapid progress in the development of these systems also means that many common benchmarks and tests for assessing them are quickly becoming obsolete.
These are just a few of the top-line findings from the Artificial Intelligence Index Report 2024, which was published on 15 April by the Institute for Human-Centered Artificial Intelligence at Stanford University in California. The report charts the meteoric progress in machine-learning systems over the past decade.
In particular, the report says, new ways of assessing AI — for example, evaluating systems’ performance on complex tasks, such as abstraction and reasoning — are increasingly necessary. “A decade ago, benchmarks would serve the community for 5–10 years” whereas now they often become irrelevant in just a few years, says Nestor Maslej, a social scientist at Stanford and editor-in-chief of the AI Index. “The pace of gain has been startlingly rapid.”
Stanford’s annual AI Index, first published in 2017, is compiled by a group of academic and industry specialists to assess the field’s technical capabilities, costs, ethics and more — with an eye towards informing researchers, policymakers and the public. This year’s report, which is more than 400 pages long and was copy-edited and tightened with the aid of AI tools, notes that AI-related regulation in the United States is sharply rising. But the lack of standardized assessments for responsible use of AI makes it difficult to compare systems in terms of the risks that they pose.
The rising use of AI in science is also highlighted in this year’s edition: for the first time, it dedicates an entire chapter to science applications, highlighting projects including Graph Networks for Materials Exploration (GNoME), a project from Google DeepMind that aims to help chemists discover materials, and GraphCast, another DeepMind tool, which does rapid weather forecasting.
Growing up
The current AI boom — built on neural networks and machine-learning algorithms — dates back to the early 2010s. The field has since rapidly expanded. For example, the number of AI coding projects on GitHub, a common platform for sharing code, increased from about 800 in 2011 to 1.8 million last year. And journal publications about AI roughly tripled over this period, the report says.
Much of the cutting-edge work on AI is being done in industry: that sector produced 51 notable machine-learning systems last year, whereas academic researchers contributed 15. “Academic work is shifting to analysing the models coming out of companies — doing a deeper dive into their weaknesses,” says Raymond Mooney, director of the AI Lab at the University of Texas at Austin, who wasn’t involved in the report.
That includes developing tougher tests to assess the visual, mathematical and even moral-reasoning capabilities of large language models (LLMs), which power chatbots. One of the latest tests is the Graduate-Level Google-Proof Q&A Benchmark (GPQA)1, developed last year by a team including machine-learning researcher David Rein at New York University.
The GPQA, consisting of more than 400 multiple-choice questions, is tough: PhD-level scholars could correctly answer questions in their field 65% of the time. The same scholars, when attempting to answer questions outside their field, scored only 34%, despite having access to the Internet during the test (randomly selecting answers would yield a score of 25%). As of last year, AI systems scored about 30–40%. This year, Rein says, Claude 3 — the latest chatbot released by AI company Anthropic, based in San Francisco, California — scored about 60%. “The rate of progress is pretty shocking to a lot of people, me included,” Rein adds. “It’s quite difficult to make a benchmark that survives for more than a few years.”
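Accuracy on a multiple-choice suite like GPQA is simple to tally, which is partly why such benchmarks are so widely used. The sketch below is a hypothetical scorer (not GPQA's actual evaluation code) showing how four-option accuracy is computed and why random guessing converges on the 25% baseline mentioned above:

```python
import random

def score(predictions, answer_key):
    """Fraction of multiple-choice questions answered correctly."""
    correct = sum(p == a for p, a in zip(predictions, answer_key))
    return correct / len(answer_key)

# Hypothetical four-option answer key: with choices A-D, random guessing
# converges on 25% accuracy as the number of questions grows.
n = 10_000
key = [random.choice("ABCD") for _ in range(n)]
guesses = [random.choice("ABCD") for _ in range(n)]
print(round(score(guesses, key), 2))  # ≈ 0.25
```

A real benchmark harness adds answer extraction from free-text model output, but the headline numbers (34%, 60%, and so on) are ultimately this same fraction.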
Cost of business
As performance is skyrocketing, so are costs. GPT-4 — the LLM that powers ChatGPT and that was released in March 2023 by San Francisco-based firm OpenAI — reportedly cost US$78 million to train. Google’s chatbot Gemini Ultra, launched in December, cost $191 million. Many people are concerned about the energy use of these systems, as well as the amount of water needed to cool the data centres that help to run them2. “These systems are impressive, but they’re also very inefficient,” Maslej says.
Costs and energy use for AI models are high in large part because one of the main ways to make current systems better is to make them bigger. This means training them on ever-larger stocks of text and images. The AI Index notes that some researchers now worry about running out of training data. Last year, according to the report, the non-profit research institute Epoch projected that we might exhaust supplies of high-quality language data as soon as this year. (However, the institute’s most recent analysis suggests that 2028 is a better estimate.)
Ethical concerns about how AI is built and used are also mounting. “People are way more nervous about AI than ever before, both in the United States and across the globe,” says Maslej, who sees signs of a growing international divide. “There are now some countries very excited about AI, and others that are very pessimistic.”
In the United States, the report notes a steep rise in regulatory interest. In 2016, there was just one US regulation that mentioned AI; last year, there were 25. “After 2022, there’s a massive spike in the number of AI-related bills that have been proposed” by policymakers, Maslej says.
Regulatory action is increasingly focused on promoting responsible AI use. Although benchmarks are emerging that can score metrics such as an AI tool’s truthfulness, bias and even likability, not everyone is using the same models, Maslej says, which makes cross-comparisons hard. “This is a really important topic,” he says. “We need to bring the community together on this.”
Today we’re tracking a few deals offered by Anker and Jackery, including a wide array of wall chargers and other USB-C accessories. All of the products in this sale can be found on Amazon, and some will require you to clip an on-page coupon then head to the checkout screen before you see the final sale price.
Highlights of Jackery’s deals include the Explorer 100 Plus Portable Power Station, available for just $99.99, down from $149.00. This is a miniature-sized portable power station that can fit in the palm of your hand and weighs just 2.13 lbs, while featuring a 31,000 mAh capacity and 128W output.
Google is expected to launch a new Pixel 8a phone at its I/O conference next month, but if you’re willing to buy last year’s model, a new sale has dropped the Pixel 7a down to the lowest price we’ve tracked. The handset is now available for $349, which is $150 less than Google’s list price and $25 below the prior low we’ve seen in recent months. The only better deals we’ve found for an unlocked model have required you to trade in another device. This discount applies to the black, light blue and white colorways and is available at several retailers, including Amazon, Best Buy, Target and Google’s online store. Google says the offer will run through May 4.
This is a new low for the unlocked version of Google’s midrange smartphone.
The Pixel 7a is the top budget pick in our guide to the best Android phones, and Engadget’s Sam Rutherford gave it a score of 90 in our review last May. When it’s discounted to this extent, it remains a good value. Its cameras still outshine just about everything else in this price range, and it still provides a largely bloat-free version of Android. Its Tensor G2 can sometimes run hot but is still plenty quick for everyday tasks. Though it won’t be kept up to date for as long as the flagship Pixel 8, it’ll still receive OS updates through May 2026 and security updates through May 2028.
The mostly plastic design and 6.1-inch OLED display are both a step down from more expensive devices, but they should be more than acceptable at this price. The latter can run at a 90Hz refresh rate, which again isn’t on the level of top-tier models but makes scrolling feel smoother than it’d be on many cheap Android phones. Along those lines, while the Pixel 7a’s wireless charging tops out at a relatively slow 7.5W, the fact that it supports wireless charging at all is welcome. Battery life is solid but not class-leading in general, though wired charging also isn’t the fastest at 18W.
If you don’t need a new phone right this second, it still makes sense to see if Google follows tradition and releases a new A-series phone in a few weeks. There’s been a spate of Pixel 8a leaks over the past several months, all of which suggest a device that’ll fall more closely in line with the current Pixel 8. Exactly how much the new phone will cost is unclear, however. If you want to upgrade today and must stay on a tighter budget, we’d expect this deal to remain worthwhile even after I/O has passed. It’s also worth noting that the Pixel 8 and Pixel 8 Pro are on sale for $549 and $799, respectively, though neither of those is an all-time low.
Could Apple finally solve the flaring issue on iPhone photos and videos? Photo: Leander Kahney/Cult of Mac
Lens flare has been a longtime issue with the iPhone’s camera. Apple could finally solve this issue on the iPhone 16 Pro with a new lens coating technology.
It’s common for flares to show up in photos or videos taken from an iPhone’s camera. The issue has been around for years, and Apple has made little improvement in this area over generations.
New ALD coating can help reduce lens flare
A rumor out of China, shared by leaker @Yeux1122, says Apple is testing new Atomic Layer Deposition (ALD) equipment for a special coating that will help reduce lens flares. The company will seemingly only use the coating on the iPhone 16 Pro lineup.
Lens flare on iPhones has been an issue since the 2012 iPhone 5. Over the years, Apple has used various coatings and improved glass lenses to enhance light transmission and reduce flares. But despite the company’s best efforts, the issue remains and is mainly prevalent while recording videos. The new ALD coating might help Apple reduce flaring to a large extent, if not fix it entirely.
Besides reducing glare and unwanted flaring, the coating can help improve light transmission.
Apple working on big anti-reflective upgrades for future iPhones
The new coating could be one of the many camera upgrades Apple is planning for the iPhone 16 Pro. A previous leak indicates the iPhone’s camera module could get a radical redesign, helping it stand out from the previous generation of the phone.
Given this leak’s timing, there’s a possibility of the ALD coating not being ready for use in the iPhone 16 Pro series. In that case, Apple could use the coating on the iPhone 17 Pro in 2025.
We were promised more AI video updates at Adobe Summit 2024 – and here’s the first. Adobe has offered a sneak peek at generative AI video tools coming to Premiere Pro.
Powered by Adobe Firefly, the new AI tools are set to give professional video editors new ways to add post-production polish. Early comments appear broadly positive, likening the tools to the VFX powerhouse After Effects – but we’ll have to wait until May to see how that comparison holds up.
We took a look at what’s new from Adobe and how the new non-destructive Firefly AI tools could change the way you edit videos.
1. Generative Extend

A useful tool for when the narrative needs that extra beat, Generative Extend is the definition of ‘fix it in post’. The AI here adds additional frames to clips, giving editors more to play with. According to Adobe, the “breakthrough technology creates extra media for fine-tuning edits, to hold on a shot for an extra beat or to better cover a transition.”
2. Adding and removing objects
A familiar set of tools for genAI users, Object Addition and Object Removal are making their debut on Premiere Pro. In Adobe’s preview video, we’re shown a case of diamonds. As with any AI art generator, by selecting an area of the frame and writing a text-to-video prompt, users will be able to add to the scene. In this case, more diamonds. Other uses highlighted are adding or removing unwanted props, set dressing, brand logos, and crew, which may lead to a dangerous drop in IMDb-listed goofs.
3. Third-party support
This is an intriguing proposition for any video pro currently using other AI tools. Premiere Pro will let users bring in models from third-party sources, including Pika, Runway, and Sora from OpenAI, to find the best shot for a project. The latter two will work from text prompts directly inside Premiere Pro, creating variations that can be added straight to the timeline. Adobe is calling these ‘explorations’, and since Sora itself is still very much in beta, expect this one to develop over time.
4. VFX workflows
As soon as Adobe revealed the tools, the inevitable comparisons to After Effects came tumbling in. From what we’ve seen, these are effectively lightweight visual-effects features. OK, they don’t quite amount to an alternative to After Effects just yet, but they add an extra level of VFX, letting users tidy up footage without jumping between applications.
5. Audio workflows are changing too
Alongside the headline-grabbing video tools, the company is introducing a handful of generative AI audio tools – also set for a May release. Expect interactive fade handles, automatic AI tagging to categorize music, ambience, sound effects, or dialogue, and redesigned waveforms that should make it quicker to ‘read’ the project.
Bonus: Content credentials
Alright, it might not radically alter anyone’s workflow, but since Firefly’s introduction, Adobe has been championing more transparency around AI usage in media. With Content Credentials, users can see whether AI was used in the creation of the footage, and which training model.
“Adobe is reimagining every step of video creation and production workflow to give creators new power and flexibility to realize their vision. By bringing generative AI innovations deep into core Premiere Pro workflows, we are solving real pain points that video editors experience every day, while giving them more space to focus on their craft,” said Ashley Still, Senior Vice President, Creative Product Group at Adobe.
Anker’s 15W MagGo Power Bank was one of the first Qi2-certified devices available and can bring an iPhone 15 from zero to 50 percent in just 45 minutes. It can charge an iPhone fully once, and then to 70 percent a second time, before the power bank itself needs recharging. It also has a small screen that shows how much charge it has left, or how long until it’s recharged. The device comes with a kickstand to prop up a phone while it charges.
Other notable Anker devices on sale include the 552 USB-C Hub and the Prime 27,650mAh Power Bank with a 100W Charging Base. The 552 USB-C Hub is down to $30 from $70 — a 57 percent discount. It offers 9-in-1 connectivity and file transfer at up to 5 gigabits per second. Then there’s Anker’s Prime 27,650mAh Power Bank, which is 30 percent off, dropping to $164.50 from $235. It offers two USB-C ports and one USB port that deliver up to 250W of power. The device charges at 100W with the included base or at 140W with a USB-C cable.
A surge in iPhone shipments in 2023 seems to have led to a decline in demand in 2024. Photo: Apple
After a stellar 2023, iPhone shipments dropped 9.6% year over year in the first quarter of 2024, according to a market-research firm.
Arch-rival Samsung also saw a decline in shipments, but not as large a one. On the other hand, two Chinese phone makers saw extremely strong growth last quarter.
iPhone and Samsung shipments battle for top spot
Apple had a brilliant 2023, capturing the top spot in shipments for the first time ever, according to IDC. iPhone also led in the fourth quarter, with almost 25% of the world market — way ahead of Samsung’s 16% of the market.
But the shoe is on the other foot in 2024. Apple lost out to Samsung in the first quarter of the year, according to new research from IDC.
Cupertino shipped 50.1 million iPhones in the January-through-March period, down from 55.4 million in the same quarter of 2023. Samsung’s smartphone shipments dropped just 0.7% year over year, to 60.1 million devices.
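The headline decline follows directly from IDC's shipment figures; a minimal arithmetic sketch reproduces the 9.6% drop quoted above:

```python
def yoy_change(current, prior):
    """Percentage change versus the same quarter a year earlier."""
    return (current - prior) / prior * 100

# IDC's Q1 figures: 50.1 million iPhones in 2024 vs 55.4 million in 2023.
print(round(yoy_change(50.1, 55.4), 1))  # -9.6
```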
“While Apple managed to capture the top spot at the end of 2023, Samsung successfully reasserted itself as the leading smartphone provider in the first quarter,” said Ryan Reith, an IDC analyst.
Estimates from IDC and other market-research firms are necessary because Apple (like other device-makers) does not announce how many smartphones it sells.
A considerable share of the slip in iPhone demand reportedly happened in China.
Don’t overlook Xiaomi and Transsion
Two companies that are virtually unknown in the United States each took a significant chunk of the market in Q1. Xiaomi shipped 40.8 million Androids while Transsion shipped 28.5 million.
“There is a shift in power among the Top 5 companies, which will likely continue as market players adjust their strategies in a post-recovery world,” said IDC analyst Nabila Popal. “Xiaomi is coming back strong from the large declines experienced over the past two years and Transsion is becoming a stable presence in the Top 5 with aggressive growth in international markets.”
Last month, it was reported that Samsung could get a $6 billion grant from the US government to build advanced chip factories in the US. The US government has now officially announced that it is offering $6.4 billion in grants to Samsung Electronics for its chipmaking investments in Texas.
Samsung will invest $44 billion to make chips in Texas, USA
The US government announced earlier today that it plans to offer incentives of up to $6.4 billion to Samsung Electronics. The Commerce Department reached this preliminary agreement with Samsung under the US government’s CHIPS and Science Act. The grant will ease Samsung’s efforts to build out two chip plants in Texas (in Austin and Taylor). The Austin plant is an existing facility, while the one under construction in Taylor will be more advanced. Samsung is also said to be building a chip-packaging facility and a chip research center.
Local chip production is set to reduce the USA’s reliance on other countries like China, South Korea, and Taiwan. It will also boost local aerospace, automobile, electronics, and defense industries.
White House National Economic Adviser Lael Brainard said, “The return of leading-edge chip manufacturing to America is a major new chapter in our semiconductor industry.” The Biden administration has already granted similar incentives to competing chip firms, including Intel, GlobalFoundries, Microchip Technology and TSMC.
Star Trek: Lower Decks – arguably the best Star Trek series, if its flawless ratings from critics are anything to go by – will make its final voyage later this year. The show will come to a close after its next season, its fifth, which is expected to air later in 2024.
But there’s some solace for Star Trek fans, as Star Trek: Strange New Worlds has been renewed for a fourth season. If a fifth season is the streaming equivalent of the red uniform that indicates one of the teleported team isn’t coming back, that means Star Trek: Strange New Worlds should still be around for at least one more season after that.
The news of Star Trek: Lower Decks’ cancellation comes via show creator Mike McMahan and co-producer Alex Kurtzman. In a message shared with the Star Trek website, they wrote: “We wanted to let you know that this fall will be the fifth and final season of Star Trek: Lower Decks. While five seasons of any series these days seems like a miracle, it’s no exaggeration to say that every second we’ve spent making this show has been a dream come true.”
Stay tuned for the “hilarious” fifth and final season
According to the duo: “We’re excited for the world to see our hilarious fifth season which we’re working on right now, and the good news is that all previous episodes will remain on Paramount+ so there is still so much to look forward to as we celebrate the Cerritos crew with a big send-off… We remain hopeful that even beyond season five, Mariner, Boimler, Tendi, Rutherford and the whole Cerritos crew will live on with new adventures.”
Fans of the show will be disappointed. It’s been consistently great, with season four currently sitting at an impressive 100% on Rotten Tomatoes – that’s higher than even the first season of Star Trek: Strange New Worlds, which has 99%. As Rolling Stone put it: “Next Generation and Deep Space Nine took a while to find themselves, and so did Lower Decks” – the fourth season has “become a highlight of this current phase of TV Trek”.
Hopefully it’s some consolation that Star Trek: Strange New Worlds has been renewed for another season. It currently holds a 98% rating overall, based on 99% for season one and 97% for season two. CBR says “it’s shows like Strange New Worlds that confirm there is plenty of life in the venerable science fiction franchise, giving fans plenty to look forward to every Thursday for the exciting adventures of Captain Pike and his Enterprise”, while The Mary Sue says that “Strange New Worlds shows that there’s still plenty of life left in the classic Star Trek format.”
All the seasons so far of Star Trek: Lower Decks and Star Trek: Strange New Worlds are streaming on Paramount Plus.
A cubic millimetre is a tiny volume — less than a teardrop. But a cubic millimetre of mouse brain is densely packed with tens of thousands of neurons and other cells in a staggeringly complex architectural weave.
Reconstructing such elaborate arrangements requires monumental effort, but the researchers affiliated with the Machine Intelligence from Cortical Networks (MICrONS) programme pulled it off. It took US$100 million and years of effort by more than 100 scientists, coordinated by three groups that had never collaborated before. There were weeks of all-nighters and a painstaking global proofreading effort that continues even now — for a volume that represents just 0.2% of the typical mouse brain. Despite the hurdles, the core of the project — conceived and funded by the US Intelligence Advanced Research Projects Activity (IARPA) — is complete.
The resulting package includes a high-resolution 3D electron microscopy reconstruction of the cells and organelles in two separate volumes of the mouse visual cortex, coupled with fluorescent imaging of neuronal activity from the same volumes. Even the coordinators of the MICrONS project, who describe IARPA’s assembly of the consortium as a ‘shotgun wedding’ of parallel research efforts, were pleasantly surprised by the outcome. “It formed this contiguous team, and we’ve been working extremely well together,” says Andreas Tolias, a systems neuroscientist who led the functional imaging effort at Baylor College of Medicine in Houston, Texas. “It’s impressive.”
The MICrONS project is a milestone in the field of ‘connectomics’, which aims to unravel the synaptic-scale organization of the brain and chart the circuits that coordinate the organ’s many functions. The data from these first two volumes are already providing the neuroscience community with a valuable resource. But this work is also bringing scientists into strange and challenging new territory. “The main casualty of this information is understanding,” says Jeff Lichtman, a connectomics pioneer at Harvard University in Cambridge, Massachusetts. “The more we know, the harder it is to turn this into a simple, easy-to-understand model of how the brain works.”
Short circuits
There are many ways to look at the brain, but for connectivity researchers, electron microscopy has proved especially powerful.
In 1986, scientists at the University of Cambridge, UK, used serial-section electron microscopy to generate a complete map of the nervous system for the roundworm Caenorhabditis elegans1. That connectome was a landmark achievement in the history of biology. It required the arduous manual annotation and reconstruction of some 8,000 2D images, but yielded a Rosetta Stone for understanding the nervous system of this simple, but important, animal model.
No comparable resource exists for more complex animals, but early forays into the rodent connectome have given hints of what such a map could reveal. Lichtman recalls the assembly he and his colleagues produced in 2015 from a 1,500-cubic-micron section of mouse neocortex — roughly one-millionth of the volume used in the MICrONS project2. “Most people were just shocked to see the density of wires all pushed together in any little part of brain,” he says.
Similarly, Moritz Helmstaedter, a connectomics researcher at the Max Planck Institute for Brain Research in Frankfurt, Germany, says that his team’s efforts3 in reconstructing a densely packed region of the mouse somatosensory cortex, which processes sensations related to touch, in 2019 challenged existing dogma — especially the assumption that neurons in the cortex are randomly wired. “We explicitly proved that wrong,” Helmstaedter says. “We found this extreme precision.” These and other studies have collectively helped to cement the importance of electron-microscopy-based circuit maps as a complement to techniques such as light microscopy and molecular methods.
Bigger and better
IARPA’s motivation for the MICrONS project was grounded in artificial intelligence. The goal was to generate a detailed connectomic map at the cubic-millimetre-scale, which could then be ‘reverse-engineered’ to identify architectural principles that might guide the development of biologically informed artificial neural networks.
Tolias, neuroscientist Sebastian Seung at Princeton University in New Jersey, and neurobiologist Clay Reid at the Allen Institute for Brain Science in Seattle, Washington, had all applied independently for funding to contribute to separate elements of this programme. But IARPA’s programme officers elected to combine the three teams into a single consortium — including a broader network of collaborators — issuing $100 million in 2016 to support a five-year effort.
A Martinotti cell, a small neuron with branching dendrites, with synaptic outputs highlighted.Credit: MICrONS Explorer
The MICrONS team selected two areas from the mouse visual cortex: the aforementioned cubic millimetre, and a much smaller volume that served as a pilot for the workflow. These were chosen so the team could investigate the interactions between disparate regions in the visual pathway, explains Tolias, who oversaw the brain-activity-imaging aspect of the work at Baylor. To achieve that, the researchers genetically engineered a mouse to express a calcium-sensitive ‘reporter gene’, which produces a fluorescent signal whenever a neuron or population of neurons fires. His team then assembled video footage of diverse realistic scenes, which the animal watched with each eye independently for two hours while a microscope tracked neuronal activity.
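The article doesn't detail how such fluorescence recordings are turned into activity signals, but a standard first step in calcium imaging is ΔF/F normalization: expressing fluorescence as a fractional change over a baseline. The sketch below is illustrative only, run on synthetic data, and is not the MICrONS pipeline:

```python
import numpy as np

def delta_f_over_f(trace, baseline_pct=10):
    """Normalized fluorescence change (F - F0) / F0, with F0 taken as a
    low percentile of the trace to approximate the resting level."""
    f0 = np.percentile(trace, baseline_pct)
    return (trace - f0) / f0

# Hypothetical trace: resting fluorescence near 100 a.u., plus two
# calcium transients of the kind a firing neuron would produce.
rng = np.random.default_rng(0)
trace = 100 + rng.normal(0, 1, 1000)
trace[200:220] += 50   # first transient
trace[600:640] += 30   # second transient
dff = delta_f_over_f(trace)
print(dff[210] > 0.4)  # the first transient stands out clearly
```

Real pipelines also correct for motion, neuropil contamination and slow baseline drift before this step.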
The mouse was then shipped to Seattle for preparation and imaging of the relevant brain volumes — and the pressure kicked up another notch. Nuno da Costa, a neuroanatomist and associate investigator at the Allen Institute, says he and Tolias compressed their groups’ schedules to accommodate the final, time-consuming stage of digital reconstruction and analysis conducted by Seung’s group. “We really pushed ourselves to deliver — to fail as early as possible so we can course-correct in time,” da Costa says. This meant a race against the clock to excise the tissue, carve it into ultra-thin slices and then image the stained slices with a fleet of five electron microscopes. “We invested in this approach where we could buy very old machines, and really automate them to make them super-fast,” says da Costa. The researchers could thus maximize throughput and had backups should a microscope fail.
For phase one of the project, which involved reconstructing the smaller cortical volume, sectioning of the tissue came down to the heroic efforts of Agnes Bodor, a neuroscientist at the Allen Institute, who spent more than a month hand-collecting several thousand 40-nanometre-thick sections of tissue using a diamond-bladed instrument known as a microtome, da Costa says. That manual effort was untenable for the larger volume in phase two of the project, so the Allen team adopted an automated approach. Over 12 days of round-the-clock, supervised work, the team generated almost 28,000 sections containing more than 200,000 cells4. It took six months to image all those sections, yielding some 2 petabytes of data.
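Those phase-two figures are internally consistent with the cubic-millimetre target, as a quick back-of-envelope check using only the numbers quoted above shows:

```python
# Back-of-envelope check on the phase-two numbers from the article.
sections = 28_000
thickness_nm = 40

depth_mm = sections * thickness_nm * 1e-6  # nm -> mm
print(round(depth_mm, 2))  # ~1.12 mm of tissue depth, matching ~1 mm^3

imaging_days = 6 * 30      # six months of round-the-clock imaging
petabytes = 2
tb_per_day = petabytes * 1000 / imaging_days
print(round(tb_per_day, 1))  # roughly 11 TB of imagery per day
```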
The Allen and Baylor teams also collaborated to link the fluorescently imaged cells with their counterparts in the reconstructed connectomic volume.
A network of thousands of individual neurons from a small subset of cells in the Machine Intelligence from Cortical Networks project data set. Credit: MICrONS Explorer
Throughout this process, the Allen team relayed its data sets to the team at Princeton University. Serial-section electron microscopy is a well-established technique, but assembly of the reconstructed volume entails considerable computational work. Images must be precisely aligned with one another while accounting for any preparation- or imaging-associated deformations, and then they are subjected to ‘segmentation’ to identify and annotate neurons, non-neuronal cells such as glia, organelles and other structures. “The revolutionary technology in MICrONS was image alignment,” Seung says. This part is crucial, because a misstep in the positioning of a single slice can derail the remainder of the reconstruction process. Manual curation would be entirely impractical at the cubic-millimetre scale. But through its work in phase one, the team developed a reconstruction workflow that could be scaled up for the larger brain volume, and continuing advances in deep-learning methods made it possible to automate key alignment steps.
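The alignment problem can be illustrated with a toy sketch. MICrONS relied on deep-learning-based, non-rigid alignment; the minimal example below recovers only a simple translation between two adjacent slice images using phase correlation, and all names in it are illustrative rather than taken from the project’s pipeline:

```python
import numpy as np

def estimate_shift(slice_a, slice_b):
    """Estimate the (row, col) translation between two adjacent
    slice images via phase correlation (normalized FFT cross-correlation)."""
    f_a = np.fft.fft2(slice_a)
    f_b = np.fft.fft2(slice_b)
    cross_power = f_a * np.conj(f_b)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint of each axis correspond to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Toy data: a 'slice' and a copy circularly shifted by (3, -5) pixels
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))
print(estimate_shift(b, a))  # -> (3, -5)
```

Real serial-section data also require correcting local deformations from cutting and staining, which is where the project’s deep-learning methods come in; a rigid shift is only the simplest version of the problem.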
To check the work, Sven Dorkenwald, who was a graduate student in Seung’s laboratory and is now a research fellow at the Allen Institute, developed a proofreading framework to refine the team’s reconstructions and ensure their biological fidelity. The approach carves the volumes into ‘supervoxels’ (3D fragments that define segmented cellular or subcellular features and can be regrouped to improve connectomic accuracy) and verifies the paths of neuronal processes through the connectome; Dorkenwald says the final MICrONS data set contained 112 billion supervoxels. The system is analogous to the online encyclopedia Wikipedia in some ways, allowing many users to contribute edits in parallel while also logging the history of changes. But even crowdsourced proofreading is slow going: Dorkenwald estimates that proofreading a single axon (the neuronal projection that transmits signals to other cells) in the MICrONS data set can take up to 50 hours.
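A minimal sketch of the proofreading idea described above: supervoxels are atomic fragments, merge edits from many proofreaders regroup them into neurons, and every edit is logged, Wikipedia-style. The class and method names here are invented for illustration, and production systems also support splits and concurrent editing, which this sketch omits:

```python
from dataclasses import dataclass, field

@dataclass
class SupervoxelGraph:
    """Toy proofreading model: supervoxels are atomic fragments,
    and a logged sequence of merge edits groups them into neurons."""
    parent: dict = field(default_factory=dict)  # union-find forest
    log: list = field(default_factory=list)     # append-only edit history

    def add(self, sv_id):
        self.parent.setdefault(sv_id, sv_id)

    def find(self, sv_id):
        # Follow parent pointers to the root, halving paths as we go
        while self.parent[sv_id] != sv_id:
            self.parent[sv_id] = self.parent[self.parent[sv_id]]
            sv_id = self.parent[sv_id]
        return sv_id

    def merge(self, a, b, editor):
        """Record an edit joining two fragments into one object."""
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra
        self.log.append((editor, a, b))

g = SupervoxelGraph()
for sv in (1, 2, 3, 4):
    g.add(sv)
g.merge(1, 2, editor="alice")
g.merge(3, 4, editor="bob")
g.merge(2, 3, editor="alice")
print(g.find(4) == g.find(1))  # all four fragments now one object -> True
print(len(g.log))              # three logged edits -> 3
```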
Charting new territory
The MICrONS team published a summary5 of its phase-one results in 2022. Many of its other early findings still await publication, including a detailed description of the work from phase two — although this is currently available as a preprint article4. But there are already some important demonstrations of what connectomics at this scale can deliver.
One MICrONS preprint, for example, describes what is perhaps the most comprehensive circuit map so far for a cortical column6, a layered arrangement of neurons that is thought to be the fundamental organizational unit of the cerebral cortex. The team’s reconstruction yielded a detailed census of all the different cell types residing in the column and revealed previously unknown patterns in how various subtypes of neuron connect with one another. “Inhibitory cells have this remarkable specificity towards some excitatory cell types, even when these excitatory cells are mixed together in the same layer,” says da Costa. Such insights could lead to more precise classification of the cells that boost or suppress circuit activity and reveal the underlying rules that guide the wiring of those circuits.
Crucially, says Tolias, the MICrONS project was about more than the connectome: “It was large-scale, functional imaging of the same mouse.” Much of his team’s work has focused on translating calcium reporter-based activity measurements into next-generation computational models. In 2023, the researchers posted a preprint that describes the creation of a deep-learning-based ‘digital twin’ on the basis of experimentally measured cortical responses to visual stimuli7. The predictions generated by this ‘twin’ can then be tested, further refining the model and enhancing its accuracy.
One surprising and valuable product of the MICrONS effort involves fruit flies. Early in the project, Seung’s team began exploring serial-section electron-microscopy data from the Drosophila melanogaster brain produced by researchers at the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia8. “I realized that because we had developed this image-alignment technology, we had a chance to do something that people thought was impossible,” says Seung. His team — including Dorkenwald — used the Janelia data as a proving ground for the algorithms that had been developed for MICrONS. The result was the first complete assembly of the fruit-fly brain connectome — around 130,000 neurons in total9.
Given that the wiring of the nervous system is generally conserved across fruit flies, Dorkenwald is enthusiastic about how these data — which are publicly accessible at http://flywire.ai — could enable future experiments. “You can do functional imaging on a fly, and because you can find the same neurons over in the connectome, you will be able to do these functional-structure analyses,” he says.
The mouse connectome will not be so simple, because connectivity varies from individual to individual. But the MICrONS data are nevertheless valuable for the neuroscience community, says Helmstaedter, who was not part of the MICrONS project. “It’s great data, and it’s inspiring people just to go look at it and see it,” he says. There’s also the power of demonstrating what is possible, and how it could be done better. “You’ve got to do something brute force first to find out where you can make it easier the next round,” says Kristen Harris, a neuroscientist at the University of Texas at Austin. “And the act of doing it — just getting the job done — is just spectacular.”
Terra incognita
Even as analysis of the MICrONS data set proceeds, its limitations are already becoming clear. For one thing, volumes from other cortical regions will be needed to distinguish features that are observed broadly throughout the brain from those that are specific to the visual cortex. And many axons from this first cubic millimetre will inevitably connect to points unknown, Lichtman notes, limiting researchers’ ability to fully understand the structure and function of the circuits within it.
Scaling up will be even harder. Lichtman estimates that a whole-brain electron-microscopy reconstruction would produce roughly an exabyte of data, equivalent to a billion gigabytes, or about 500 times the 2 petabytes that the MICrONS project generated. “This may be a ‘Mars shot’ — it’s really much harder than going to the Moon,” he says.
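An estimate of exabyte scale can be reproduced with back-of-envelope arithmetic, assuming a mouse brain volume of roughly 500 cubic millimetres (an approximation not stated in the article) and naive linear scaling from the MICrONS numbers:

```python
# Illustrative scale comparison: MICrONS imaged ~1 mm^3 of cortex,
# yielding ~2 petabytes of data. Scaling linearly to a whole brain
# of an assumed ~500 mm^3:
microns_volume_mm3 = 1
microns_data_pb = 2
mouse_brain_mm3 = 500          # assumption: approximate mouse brain volume
whole_brain_pb = microns_data_pb * mouse_brain_mm3 / microns_volume_mm3
print(whole_brain_pb)          # -> 1000.0 petabytes, i.e. ~1 exabyte
```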
Still, the race is under way. One major effort is BRAIN CONNECTS, a project backed by the US National Institutes of Health with $150 million in funding, which is coordinated by multiple researchers, including Seung, da Costa and Lichtman. “We’re not delivering the whole mouse brain yet, but testing if it’s possible,” da Costa says. “Mitigating all the risks, bringing the cost down, and seeing if we can actually prepare a whole-mouse-brain or whole-hemisphere sample.”
In parallel, Lichtman is working with a team at Google Research in Mountain View, California, led by computer scientist Viren Jain — who collaborated with MICrONS and is also part of the BRAIN CONNECTS leadership team — to map sizable volumes of the human cortex using electron microscopy. They’ve already released data from their first cubic millimetre and have plans to begin charting other regions from people with various neurological conditions10.
These efforts will require improved tools. The serial-section electron-microscopy strategy that MICrONS used is too labour-intensive to use at larger scales and yields relatively low-quality data that are hard to analyse. But alternatives are emerging. For example, ‘block-face’ electron-microscopy methods, in which the sample is imaged as a solid volume and then gradually shaved away with a high-intensity ion beam, require less work in terms of image alignment and can be applied to thick sections of tissue that are easier to manage. These methods can be combined with cutting-edge multi-beam scanning electron microscopes, which image specimens using up to 91 electron beams simultaneously, thus accelerating data collection. “That’s one of the leading contenders for scale up to a whole mouse brain,” says Seung, who will be working with Lichtman on this strategy.
Further automation and more artificial-intelligence tools will also be assets. Helmstaedter and his colleagues have been looking into ways to simplify image assembly with an automated segmentation algorithm called RoboEM, which traces neural processes with minimal human intervention and can potentially eliminate a lot of the current proofreading burden11. Still, higher-quality sample preparation and imaging are probably the true key to efficiency at scale, Helmstaedter says. “The better your data, the less you have to worry about automation.”
However they are generated, making sense of these connectome maps will take more than fancy technology. Tolias thinks “it will be almost impossible” to replicate the coupling of structure and activity produced by MICrONS at the whole-brain scale. But it’s also unclear whether that will be necessary and to what extent functional information can be inferred through a better understanding of brain structure and organization.
For Lichtman, the connectome’s value will ultimately transcend conventional hypothesis-driven science. A connectome “forces you to see things you weren’t looking for, and yet they’re staring you in the face”, he says. “I think if we do a whole mouse brain, there will be just an infinite number of ‘wow, really?’ discoveries.”