Snapchat has announced that it will begin testing ad placements in two areas of its app, enhancing its CRM integrations. The company has partnered with two food franchises and an entertainment company whose ads will appear in the app, letting businesses engage visually with consumers. Users will be able to interact with the Snaps they receive on the instant-messaging platform by replying to them or by using the call-to-action options provided.
Ads on Snapchat
In a blog post, Snapchat has detailed the ads coming to its app. They will be visible in the Chats tab and on the Snap Map. After launch, Snapchat users will receive promotional Snaps from Disney, while the Snap Map will show ads from McDonald's and Taco Bell.
Businesses will be able to send full-screen vertical videos directly to Snapchat users. However, the company stresses that engaging with them is entirely optional: users can choose whether or not to open a Snap, and unopened Snaps will be deleted. Snapchat users can also interact with sponsored Snaps by messaging the advertiser or by using the call-to-action option provided to open a predefined link.
The platform says these Snaps will be kept separate from users' personal chats and will not come with a push notification.
In addition to sponsored Snaps in the Chats tab, Snapchat is also showing ads on the Snap Map, a feature that lets users share their location with others and look up places of interest. The most frequently visited places are highlighted as Top Picks on the Snap Map, which the company says can increase typical visits by up to 17.6 percent among frequent Snapchatters compared with that location's baseline. This is said to be one of the factors behind the ad placement.
Other new features
Snapchat recently rolled out its Footsteps feature to all users. Previously exclusive to Snapchat+ subscribers, the feature lets users keep track of all the places they have discovered and from which they sent Snaps. Besides tracking new places, it can also draw on stored Memories to account for past trips.
According to Snapchat, future iterations of the feature will use location data to track users' footsteps.
Samsung Australia is rolling out a new Fortnite Creative map for mobile gamers. The new experience called Clash of Commuters features “a uniquely Australian and public transport-inspired map.” And for a limited time, players will have the opportunity to win Samsung prizes.
The new map features Australian icons, including so-called bin chickens and the Sydney train. Clash of Commuters has a post-apocalyptic theme, and gamers will be tasked with defending an Aussie public bus across 5 checkpoints.
Virtual Galaxy S24 Ultra phones will also be hidden throughout the post-apocalyptic Australia Fortnite map, and players finding these hidden objects can unlock special bonuses.
As for the Australian player base, Samsung Electronics research suggests that 48% of Aussies most frequently use smartphones as their gaming devices. Meanwhile, 26% of Australians use consoles, and 24% use desktop PCs and laptops.
Play the map for real prizes
Clash of Commuters players have the chance to win prizes if they participate in the event before May 30.
This Fortnite Creative campaign consists of five rounds, and players who complete all five in the shortest amount of time will be eligible to win a Samsung gaming pack worth over $8,000.
This gaming pack prize consists of a 55-inch OLED TV, an Odyssey OLED G9 monitor, the Galaxy Buds 2 Pro earbuds, and the Galaxy S24 Ultra.
Samsung Australia offers additional prizes for players who compete in the event exclusively on mobile devices, and for players who find the most hidden side-quest phones. Lastly, four winners will be picked at random from all completed time submissions.
Through this event, Samsung also invites Australians to experience the game on the Galaxy S24 Ultra at interactive pop-up locations in Melbourne and Sydney.
PUBG: Battlegrounds is somehow old enough to evoke nostalgia. The pioneering battle royale game, which entered Steam Early Access in 2017, will borrow a page from Fortnite’s playbook and honor its first map. Erangel Classic will recreate the old-school battlefield from the game’s inception for a limited two-week run in May and June.
Developer and publisher Krafton says the Erangel Classic map will reproduce the original’s concepts, graphics, atmosphere and UI. However, it will blend those with “modern tweaks” to deliver “the enjoyable gameplay experiences that players have grown accustomed to.” In other words, much like remasters of other classic games, the goal is to feel as close to the original as possible without chucking out all of its subtle quality-of-life improvements from the game’s evolution.
If the revamped map idea sounds familiar, Fortnite brought back its original 2018 island map late last year, breaking its records for player counts. (It peaked at 44.7 million players, marking its biggest day ever.) It’s easy to see why Krafton would want a piece of that action.
Specific nods to the original map include foggy and rainy weather to add an air of unpredictability. In addition, you’ll find bench weapons on the starting island (get ready to scramble for your favorite), and all weapons will have reduced recoil to match the original. It will also have a Tommy Gun in the care package, a vintage map UI and a “charmingly tacky font and graphics.”
The tiered rollout will arrive on PCs and consoles at different times, extending the playtime for those who own the game on multiple platforms (perhaps helping Krafton sell a few extra in-game items). Erangel Classic will be available in PUBG: Battlegrounds on PC from May 14 to May 28 and on consoles from May 23 to June 6, replacing the modern Erangel map during those periods. Krafton says the May 14 patch notes will go into more detail about all the map’s changes, so keep an eye out.
April 17, 1977: The Apple II launch at the West Coast Computer Faire positions Apple at the forefront of the looming personal computer revolution.
The company’s first mass-market computer, the Apple II boasts an attractively machined case designed by Jerry Manock (who will later design the first Macintosh). It also packs a keyboard, BASIC compatibility and, most importantly, color graphics.
Fueled by some marketing savvy from Steve Jobs, the Apple II launch makes quite a splash at the San Francisco Bay Area’s first personal computer convention.
The 1977 West Coast Computer Faire served as a massive “coming out” party for Apple, and the Apple II launch took center stage. This event featured all the big players in the burgeoning PC industry. In the days before the internet, it drew thousands of interested customers — many of whom became early tech devotees.
Apple showed off its new corporate logo at the computer fair for one of the first times. The multicolored design by Rob Janoff featured the outline of an apple with a bite taken out of it. The instantly iconic symbol replaced an earlier Apple logo by former Apple co-founder Ron Wayne.
Steve Jobs debuts the Apple II
Right from the start of his career, Jobs realized the importance of a good product introduction. With no special media event, he used the West Coast Computer Faire to launch the Apple II in a big way. (Interestingly, the event took place at the Bill Graham Civic Auditorium in San Francisco, which Apple returned to for its fall media event in 2015.)
Apple occupied the four booths directly facing the building’s front entrance. The strategic positioning made Apple the first of 175 companies at the Faire that visitors saw when they arrived. At a time when money was short, Apple splashed out on a backlit plexiglass display featuring the new logo.
The eye-catching setup showcased a dozen Apple II computers. Unbeknownst to customers, these were unfinished prototypes, since the actual computers wouldn’t be ready until June.
A major money spinner
Apple’s second computer became an enormously important product line. The year of the Apple II debut, the machine brought in $770,000 in revenue. That figure increased to $7.9 million the following year and a massive $49 million the year after. Apple continued producing versions of the Apple II until the early 1990s.
This computer brought a number of important players into the world of high tech, including Mitch Kapor (founder of Lotus Development Corporation and co-founder of the Electronic Frontier Foundation) and John Carmack (the legendary coder behind smash-hit games Wolfenstein 3D, Doom and Quake).
While Apple later moved away from the Apple II product line (much to the annoyance of Apple co-founder Steve Wozniak), this computer put Cupertino on the map. And the Apple II launch at the West Coast Computer Faire gave the world its first glimpse of the innovative machine.
A cubic millimetre is a tiny volume — less than a teardrop. But a cubic millimetre of mouse brain is densely packed with tens of thousands of neurons and other cells in a staggeringly complex architectural weave.
Reconstructing such elaborate arrangements requires monumental effort, but the researchers affiliated with the Machine Intelligence from Cortical Networks (MICrONS) programme pulled it off. It took US$100 million and years of effort by more than 100 scientists, coordinated by 3 groups that had never collaborated before. There were weeks of all-nighters and a painstaking global proofreading effort that continues even now — for a volume that represents just 0.2% of the typical mouse brain. Despite the hurdles, the core of the project — conceived and funded by the US Intelligence Advanced Research Projects Activity (IARPA) — is complete.
Human brain mapping
The resulting package includes a high-resolution 3D electron microscopy reconstruction of the cells and organelles in two separate volumes of the mouse visual cortex, coupled with fluorescent imaging of neuronal activity from the same volumes. Even the coordinators of the MICrONS project, who describe IARPA’s assembly of the consortium as a ‘shotgun wedding’ of parallel research efforts, were pleasantly surprised by the outcome. “It formed this contiguous team, and we’ve been working extremely well together,” says Andreas Tolias, a systems neuroscientist who led the functional imaging effort at Baylor College of Medicine in Houston, Texas. “It’s impressive.”
The MICrONS project is a milestone in the field of ‘connectomics’, which aims to unravel the synaptic-scale organization of the brain and chart the circuits that coordinate the organ’s many functions. The data from these first two volumes are already providing the neuroscience community with a valuable resource. But this work is also bringing scientists into strange and challenging new territory. “The main casualty of this information is understanding,” says Jeff Lichtman, a connectomics pioneer at Harvard University in Cambridge, Massachusetts. “The more we know, the harder it is to turn this into a simple, easy-to-understand model of how the brain works.”
Short circuits
There are many ways to look at the brain, but for connectivity researchers, electron microscopy has proved especially powerful.
In 1986, scientists at the University of Cambridge, UK, used serial-section electron microscopy to generate a complete map of the nervous system for the roundworm Caenorhabditis elegans1. That connectome was a landmark achievement in the history of biology. It required the arduous manual annotation and reconstruction of some 8,000 2D images, but yielded a Rosetta Stone for understanding the nervous system of this simple, but important, animal model.
No comparable resource exists for more complex animals, but early forays into the rodent connectome have given hints of what such a map could reveal. Lichtman recalls the assembly he and his colleagues produced in 2015 from a 1,500-cubic-micron section of mouse neocortex — roughly one-millionth of the volume used in the MICrONS project2. “Most people were just shocked to see the density of wires all pushed together in any little part of brain,” he says.
Similarly, Moritz Helmstaedter, a connectomics researcher at the Max Planck Institute for Brain Research in Frankfurt, Germany, says that his team’s efforts3 in reconstructing a densely packed region of the mouse somatosensory cortex, which processes sensations related to touch, in 2019 challenged existing dogma — especially the assumption that neurons in the cortex are randomly wired. “We explicitly proved that wrong,” Helmstaedter says. “We found this extreme precision.” These and other studies have collectively helped to cement the importance of electron-microscopy-based circuit maps as a complement to techniques such as light microscopy and molecular methods.
Bigger and better
IARPA’s motivation for the MICrONS project was grounded in artificial intelligence. The goal was to generate a detailed connectomic map at the cubic-millimetre-scale, which could then be ‘reverse-engineered’ to identify architectural principles that might guide the development of biologically informed artificial neural networks.
Tolias, neuroscientist Sebastian Seung at Princeton University in New Jersey, and neurobiologist Clay Reid at the Allen Institute for Brain Science in Seattle, Washington, had all applied independently for funding to contribute to separate elements of this programme. But IARPA’s programme officers elected to combine the 3 teams into a single consortium — including a broader network of collaborators — issuing $100 million in 2016 to support a 5-year effort.
A Martinotti cell, a small neuron with branching dendrites, with synaptic outputs highlighted. Credit: MICrONS Explorer
The MICrONS team selected two areas from the mouse visual cortex: the aforementioned cubic millimetre, and a much smaller volume that served as a pilot for the workflow. These were chosen so the team could investigate the interactions between disparate regions in the visual pathway, explains Tolias, who oversaw the brain-activity-imaging aspect of the work at Baylor. To achieve that, the researchers genetically engineered a mouse to express a calcium-sensitive ‘reporter gene’, which produces a fluorescent signal whenever a neuron or population of neurons fires. His team then assembled video footage of diverse realistic scenes, which the animal watched with each eye independently for two hours while a microscope tracked neuronal activity.
The mouse was then shipped to Seattle for preparation and imaging of the relevant brain volumes — and the pressure kicked up another notch. Nuno da Costa, a neuroanatomist and associate investigator at the Allen Institute, says he and Tolias compressed their groups’ schedules to accommodate the final, time-consuming stage of digital reconstruction and analysis conducted by Seung’s group. “We really pushed ourselves to deliver — to fail as early as possible so we can course-correct in time,” da Costa says. This meant a race against the clock to excise the tissue, carve it into ultra-thin slices and then image the stained slices with a fleet of 5 electron microscopes. “We invested in this approach where we could buy very old machines, and really automate them to make them super-fast,” says da Costa. The researchers could thus maximize throughput and had backups should a microscope fail.
For phase one of the project, which involved reconstructing the smaller cortical volume, sectioning of the tissue came down to the heroic efforts of Agnes Bodor, a neuroscientist at the Allen Institute, who spent more than a month hand-collecting several thousand 40-nanometre-thick sections of tissue using a diamond-bladed instrument known as a microtome, da Costa says. That manual effort was untenable for the larger volume in phase two of the project, so the Allen team adopted an automated approach. Over 12 days of round-the-clock, supervised work, the team generated almost 28,000 sections containing more than 200,000 cells4. It took six months to image all those sections, yielding some 2 petabytes of data.
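As a quick check that these figures hang together (the numbers below are simply the ones quoted above), roughly 28,000 sections at 40 nanometres each stack to a little over a millimetre of tissue depth, consistent with the millimetre-scale target volume:

```python
# Sanity check on the figures quoted above: ~28,000 sections at 40 nm each
# stack to roughly 1.1 mm of tissue depth, in line with a millimetre-scale volume.
sections = 28_000
thickness_nm = 40
depth_mm = sections * thickness_nm * 1e-6   # convert nanometres to millimetres
print(depth_mm)                              # ~1.12
```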
The Allen and Baylor teams also collaborated to link the fluorescently imaged cells with their counterparts in the reconstructed connectomic volume.
A network of thousands of individual neurons from a small subset of cells in the Machine Intelligence from Cortical Networks project data set. Credit: MICrONS Explorer
Throughout this process, the Allen team relayed its data sets to the team at Princeton University. Serial-section electron microscopy is a well-established technique, but assembly of the reconstructed volume entails considerable computational work. Images must be precisely aligned with one another while accounting for any preparation- or imaging-associated deformations, and then they are subjected to ‘segmentation’ to identify and annotate neurons, non-neuronal cells such as glia, organelles and other structures. “The revolutionary technology in MICrONS was image alignment,” Seung says. This part is crucial, because a misstep in the positioning of a single slice can derail the remainder of the reconstruction process. Manual curation would be entirely impractical at the cubic-millimetre scale. But through its work in phase one, the team developed a reconstruction workflow that could be scaled up for the larger brain volume, and continuing advances in deep-learning methods made it possible to automate key alignment steps.
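The alignment problem described here can be illustrated with a deliberately simplified sketch. The MICrONS pipeline used learned, non-rigid alignment at petabyte scale; the toy example below merely registers each slice to its neighbour with a translational offset, using the publicly available scikit-image and SciPy libraries, and all of its data are synthetic.

```python
# Toy illustration of serial-section alignment by translational registration.
# This is NOT the MICrONS pipeline (which used learned, non-rigid alignment);
# it only shows the basic idea of registering each slice to its predecessor.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_stack(sections):
    """Align a list of 2D image sections to the first one, slice by slice."""
    aligned = [sections[0].astype(float)]
    for sec in sections[1:]:
        # Estimate the (row, col) offset between this slice and the previous one.
        offset, _, _ = phase_cross_correlation(aligned[-1], sec.astype(float))
        # Apply the estimated shift so the slice lines up with its neighbour.
        aligned.append(nd_shift(sec.astype(float), offset))
    return np.stack(aligned)

# Tiny synthetic example: three noisy slices of the same pattern, progressively shifted.
rng = np.random.default_rng(0)
base = rng.random((128, 128))
stack = [base, nd_shift(base, (3, -2)), nd_shift(base, (6, -4))]
aligned = align_stack(stack)
print(aligned.shape)  # (3, 128, 128)
```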
To check the work, Sven Dorkenwald, who was a graduate student in Seung’s laboratory and is now a research fellow at the Allen Institute, developed a proofreading framework to refine the team’s reconstructions and ensure their biological fidelity. This approach, which verified the paths of neuronal processes through the connectome, carved the volumes into ‘supervoxels’ — 3D shapes that define segmented cellular or subcellular features, which can be rearranged to improve connectomic accuracy — and Dorkenwald says the final MICrONS data set had 112 billion of them. The system is analogous to the online encyclopedia Wikipedia in some ways, allowing many users to contribute edits in parallel while also logging the history of changes. But even crowdsourced proofreading is slow going — Dorkenwald estimates that each axon (the neuronal projections that transmit signals to other cells) in the MICrONS data set takes up to 50 hours to proofread.
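The bookkeeping behind such a proofreading system can be sketched in miniature. This is a hypothetical illustration of the general idea, not the actual MICrONS tooling: supervoxels are grouped into objects with a union-find structure, and every merge decision is appended to an ordered edit history, much as a wiki logs revisions (splits, which the real system also supports, are omitted for brevity).

```python
# Hypothetical sketch of supervoxel proofreading bookkeeping (not the actual
# MICrONS system): supervoxels are grouped into objects with a union-find
# forest, and every merge decision is appended to an edit history.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ProofreadingGraph:
    parent: Dict[int, int] = field(default_factory=dict)                    # union-find forest
    history: List[Tuple[str, int, int, str]] = field(default_factory=list)  # ordered edit log

    def _find(self, sv: int) -> int:
        self.parent.setdefault(sv, sv)
        while self.parent[sv] != sv:
            self.parent[sv] = self.parent[self.parent[sv]]  # path compression
            sv = self.parent[sv]
        return sv

    def merge(self, sv_a: int, sv_b: int, editor: str) -> None:
        """Record that two supervoxels belong to the same neuron."""
        ra, rb = self._find(sv_a), self._find(sv_b)
        if ra != rb:
            self.parent[rb] = ra
        self.history.append(("merge", sv_a, sv_b, editor))

    def same_object(self, sv_a: int, sv_b: int) -> bool:
        return self._find(sv_a) == self._find(sv_b)

# Example: two proofreaders independently merge fragments of the same axon.
g = ProofreadingGraph()
g.merge(101, 102, editor="alice")
g.merge(102, 103, editor="bob")
print(g.same_object(101, 103))   # True
print(g.history)                 # full, ordered record of edits
```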
Charting new territory
The MICrONS team published a summary5 of its phase one results in 2022. Many of its other early findings still await publication, including a detailed description of the work from phase two — although this is currently available as a preprint article4. But there are already some important demonstrations of what connectomics at this scale can deliver.
One MICrONS preprint, for example, describes what is perhaps the most comprehensive circuit map so far for a cortical column6, a layered arrangement of neurons that is thought to be the fundamental organizational unit of the cerebral cortex. The team’s reconstruction yielded a detailed census of all the different cell types residing in the column and revealed previously unknown patterns in how various subtypes of neuron connect with one another. “Inhibitory cells have this remarkable specificity towards some excitatory cell types, even when these excitatory cells are mixed together in the same layer,” says da Costa. Such insights could lead to more precise classification of the cells that boost or suppress circuit activity and reveal the underlying rules that guide the wiring of those circuits.
Crucially, says Tolias, the MICrONS project was about more than the connectome: “It was large-scale, functional imaging of the same mouse.” Much of his team’s work has focused on translating calcium reporter-based activity measurements into next-generation computational models. In 2023, the researchers posted a preprint that describes the creation of a deep-learning-based ‘digital twin’ on the basis of experimentally measured cortical responses to visual stimuli7. The predictions generated by this ‘twin’ can then be tested, further refining the model and enhancing its accuracy.
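The flavour of such a model can be conveyed with a minimal, hypothetical sketch: a small network that maps short movie clips to predicted responses of a population of recorded neurons. The architecture and names below are illustrative assumptions, not the model described in the preprint.

```python
# Minimal, hypothetical sketch of a "digital twin" style model: a small network
# that maps short video clips to predicted responses of recorded neurons.
# This illustrates the idea only; it is not the architecture used by MICrONS.
import torch
import torch.nn as nn

class ToyDigitalTwin(nn.Module):
    def __init__(self, n_neurons: int):
        super().__init__()
        # A tiny 3D convolutional "core" extracts spatiotemporal features from video...
        self.core = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((1, 4, 4)),
            nn.Flatten(),
        )
        # ...and a per-neuron linear "readout" predicts each cell's activity.
        self.readout = nn.Linear(8 * 4 * 4, n_neurons)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, 1, frames, height, width) -> (batch, n_neurons)
        return torch.relu(self.readout(self.core(clips)))

model = ToyDigitalTwin(n_neurons=100)
clips = torch.randn(2, 1, 16, 36, 64)   # two fake 16-frame movie clips
responses = model(clips)
print(responses.shape)                   # torch.Size([2, 100])
# Training would minimise, for example, a Poisson loss between predicted and
# calcium-derived responses; held-out clips then test the twin's predictions.
```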
One surprising and valuable product of the MICrONS effort involves fruit flies. Early in the project, Seung’s team began exploring serial-section electron-microscopy data from the Drosophila melanogaster brain produced by researchers at the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia8. “I realized that because we had developed this image-alignment technology, we had a chance to do something that people thought was impossible,” says Seung. His team — including Dorkenwald — used the Janelia data as a proving ground for the algorithms that had been developed for MICrONS. The result was the first complete assembly of the fruit-fly brain connectome — around 130,000 neurons in total9.
Given that the wiring of the nervous system is generally conserved across fruit flies, Dorkenwald is enthusiastic about how these data — which are publicly accessible at http://flywire.ai — could enable future experiments. “You can do functional imaging on a fly, and because you can find the same neurons over in the connectome, you will be able to do these functional-structure analyses,” he says.
The mouse connectome will not be so simple, because connectivity varies from individual to individual. But the MICrONS data are nevertheless valuable for the neuroscience community, says Helmstaedter, who was not part of the MICrONS project. “It’s great data, and it’s inspiring people just to go look at it and see it,” he says. There’s also the power of demonstrating what is possible, and how it could be done better. “You’ve got to do something brute force first to find out where you can make it easier the next round,” says Kristen Harris, a neuroscientist at the University of Texas at Austin. “And the act of doing it — just getting the job done — is just spectacular.”
Terra incognita
Even as analysis of the MICrONS data set proceeds, its limitations are already becoming clear. For one thing, volumes from other distinct cortical regions will be needed to identify features that are broadly observed throughout the brain versus those features that are distinct to the visual cortex. And many axons from this first cubic millimetre will inevitably connect to points unknown, Lichtman notes, limiting researchers’ ability to fully understand the structure and function of the circuits within it.
Scaling up will be even harder. Lichtman estimates that a whole-brain electron-microscopy reconstruction would produce roughly an exabyte of data, which is equivalent to a billion gigabytes and is about 500 times the 2 petabytes that the MICrONS project generated. “This may be a ‘Mars shot’ — it’s really much harder than going to the Moon,” he says.
Still, the race is under way. One major effort is BRAIN CONNECTS, a project backed by the US National Institutes of Health with $150 million in funding, which is coordinated by multiple researchers, including Seung, da Costa and Lichtman. “We’re not delivering the whole mouse brain yet, but testing if it’s possible,” da Costa says. “Mitigating all the risks, bringing the cost down, and seeing if we can actually prepare a whole-mouse-brain or whole-hemisphere sample.”
In parallel, Lichtman is working with a team at Google Research in Mountain View, California, led by computer scientist Viren Jain — who collaborated with MICrONS and is also part of the BRAIN CONNECTS leadership team — to map sizable volumes of the human cortex using electron microscopy. They’ve already released data from their first cubic millimetre and have plans to begin charting other regions from people with various neurological conditions10.
These efforts will require improved tools. The serial-section electron-microscopy strategy that MICrONS used is too labour-intensive to use at larger scales and yields relatively low-quality data that are hard to analyse. But alternatives are emerging. For example, ‘block-face’ electron-microscopy methods, in which the sample is imaged as a solid volume and then gradually shaved away with a high-intensity ion-beam, require less work in terms of image alignment and can be applied to thick sections of tissue that are easier to manage. These methods can be combined with cutting-edge multi-beam scanning electron microscopes, which image specimens using up to 91 electron beams simultaneously, thus accelerating data collection. “That’s one of the leading contenders for scale up to a whole mouse brain,” says Seung, who will be working with Lichtman on this strategy.
Further automation and more artificial-intelligence tools will also be assets. Helmstaedter and his colleagues have been looking into ways to simplify image assembly with an automated segmentation algorithm called RoboEM, which traces neural processes with minimal human intervention and can potentially eliminate a lot of the current proofreading burden11. Still, higher-quality sample preparation and imaging are probably the true key to efficiency at scale, Helmstaedter says. “The better your data, the less you have to worry about automation.”
However they are generated, making sense of these connectome maps will take more than fancy technology. Tolias thinks “it will be almost impossible” to replicate the coupling of structure and activity produced by MICrONS at the whole-brain scale. But it’s also unclear whether that will be necessary and to what extent functional information can be inferred through a better understanding of brain structure and organization.
For Lichtman, the connectome’s value will ultimately transcend conventional hypothesis-driven science. A connectome “forces you to see things you weren’t looking for, and yet they’re staring you in the face”, he says. “I think if we do a whole mouse brain, there will be just an infinite number of ‘wow, really?’ discoveries.”
The front of the Simons Observatory’s Large Aperture Telescope Receiver, the largest receiver for observing the cosmic microwave background built so far. Credit: Mark Devlin/University of Pennsylvania
Cosmologists are preparing to cast their sharpest-ever eyes on the early Universe. From an altitude of 5,300 metres on Cerro Toco, in northern Chile’s Atacama Desert, the Simons Observatory will map the cosmic microwave background (CMB) — sometimes called the afterglow of the Big Bang — with a sensitivity up to ten times greater than that of the previous gold standard, Europe’s Planck space probe.
“It will be the best view of the CMB that we’ve ever had,” says Jo Dunkley, a cosmologist at Princeton University in New Jersey and one of the leading researchers in the observatory’s team. Construction of the US$109.5-million observatory is due to be completed in a matter of weeks.
One of the project’s goals is to find fingerprints left in the CMB by gravitational waves that originated from the Big Bang itself. These would provide the first incontrovertible evidence for cosmic inflation, a brief moment in which expansion is thought to have proceeded at an exponential rate. During that time, quantum fluctuations on a microscopic scale are thought to have seeded the Universe with what became its large-scale structure — including the current distribution of clusters of galaxies across space.
The scientific collaboration is led by five US universities and the Lawrence Berkeley National Laboratory in Berkeley, California. The project is named after Jim Simons, a mathematician, billionaire hedge-fund investor and philanthropist, and his wife Marilyn. The Simons Foundation in New York City contributed grants of around $90 million to build the observatory.
Once construction is complete, engineers will begin the months-long process of fine-tuning and testing the observatory’s instruments before its science programme can fully begin.
Signs of inflation
The Simons Observatory is an array of four telescopes. Three are identical 0.4-metre small aperture telescopes (SATs) and one is the 6-metre Large Aperture Telescope (LAT). Together, they will map minuscule variations in the temperature of the CMB from one patch of the sky to the next, as well as the CMB’s polarization, which is a preferential direction in which the radiation’s electric fields wiggle as the microwaves propagate through space.
The three SATs will concentrate on a patch covering 20% of the southern sky. The aim is for them to study large-scale swirls — spanning an area several times the apparent size of the Moon in the sky — in the polarization field of the CMB. (Polarization maps look like arrays of sticks, and the orientations of the sticks can form specific swirling patterns called vortices.) It is here that the signals of cosmic inflation, known as B-mode patterns, are expected to show up.
Many cosmologists see inflation, thought to have been driven by an energy field called the inflaton, as the most plausible mechanism for the process that gave the Universe its structure. The nature and properties of the inflaton are mysterious. Many theories have been proposed, predicting gravitational-wave signatures of a wide range of intensities.
It is therefore not guaranteed that the signal, if it exists, is strong enough for the Simons Observatory to see it, says Suzanne Staggs, another Princeton cosmologist who is the observatory’s co-director. “But oh my gosh — if they were right there, it would be amazing.”
The Simons Observatory site photographed from the side of Cerro Toco in the Atacama Desert, Chile. Credit: Mark Devlin/University of Pennsylvania
The known physics of quantum fields suggests that the signatures should be within the Simons Observatory’s sensitivity range, or close to it, says Marc Kamionkowski, a theoretical astrophysicist at Johns Hopkins University in Baltimore, Maryland, who was among the first researchers to predict the existence of B-mode patterns in 1997 (refs 1,2).
While the SATs focus on a relatively small area, the LAT will map 40% of the sky, at much finer resolution, and record temperature fluctuations in the CMB, as well as the CMB’s polarization. Cosmologists working on Planck and other past CMB projects have been able to extract troves of information by plotting the intensity of those temperature fluctuations against the area of sky that they span. Such graphs enabled cosmologists to produce precise estimates of both the Universe’s age (13.8 billion years) and its composition (only around 4% of which is ordinary matter).
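Those plots of fluctuation intensity against angular scale are angular power spectra. As a rough sketch of the basic computation, the snippet below simulates a temperature map from a placeholder input spectrum and recovers its spectrum using the public healpy package; it is not Simons Observatory or Planck analysis code, and the flat input spectrum is an arbitrary assumption.

```python
# Minimal illustration of an angular power spectrum, the kind of
# "fluctuation intensity versus angular scale" measurement described above.
# Uses the public healpy package on a simulated map; this is not
# Simons Observatory analysis code.
import numpy as np
import healpy as hp

nside = 256                        # HEALPix resolution of the simulated sky map
lmax = 3 * nside - 1

# Placeholder input spectrum (flat in C_ell) just to generate a map;
# a real analysis would start from measured CMB maps instead.
cl_in = np.ones(lmax + 1) * 1e-5
cl_in[:2] = 0.0                    # drop monopole and dipole

cmb_map = hp.synfast(cl_in, nside, lmax=lmax)   # simulate a temperature map
cl_out = hp.anafast(cmb_map, lmax=lmax)         # recover its power spectrum

ell = np.arange(lmax + 1)
print(ell[:5], cl_out[:5])         # power at the largest angular scales
```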
The LAT data could help researchers to detect signals of cosmic inflation in low-resolution polarization maps made by the smaller telescopes. In particular, they will be crucial for separating that pattern from spurious signals produced by effects such as dust in the Milky Way, explains Mark Devlin, a cosmologist at the University of Pennsylvania in Philadelphia who is co-director of the observatory. The experiment will be six times more sensitive to the polarization patterns than any previous attempts3 to measure them.
Exploring unknowns
However, searching for signals of inflation is only one of the project’s goals: the Simons team plans to get much more science from the observatory’s high-resolution map of the CMB. It will enable researchers not only to visualize the Universe at an early age, but also to study how its primordial radiation was affected during the 13.8 billion years it spent travelling in space, before it got to Earth.
In particular, the CMB is diverted by the gravity of large clumps of galaxies and dark matter — a phenomenon known as gravitational lensing — and this can be exploited to produce 3D maps of those clusters. Devlin, Staggs and their collaborators pioneered this technique with an earlier high-precision CMB project called the Atacama Cosmology Telescope, which was in operation from 2007 to 2022, also at Cerro Toco, and has yet to publish its final results. The Simons team will reconstruct the gravitational lensing experienced by the CMB, and determine how much of this is due to the Universe’s neutrinos. This will enable them to calculate the mass of these particles, which is still unknown. “It is a guaranteed signal,” says Brian Keating, an astrophysicist at the University of California, San Diego, who is the project’s principal investigator.
Physicists Anna Kofman and Simon Dicker at the University of Pennsylvania in Philadelphia work on the dilution refrigerator inside the Large Aperture Telescope Receiver. Credit: Mark Devlin/University of Pennsylvania
Because the LAT will be scanning the same regions of sky repeatedly over its lifetime, it will also be able to track the motion of asteroids in the Solar System and monitor active black holes at the centres of other galaxies — and how their output changes over time. “We’re going to be able to track 20,000 or more active galactic nuclei, which are, we think, supermassive black holes with jets,” Dunkley says.
Ambitious follow-up
The observatory will have two runs, each lasting about four years, with a planned $53-million upgrade in between. An even more ambitious project called CMB-S4, to be led by the US Department of Energy and the National Science Foundation, is planned as a follow-up, with observations beginning in the mid-2030s. With telescopes at both Cerro Toco and the South Pole, the $800-million array will improve sensitivity to the inflationary signal by another factor of six.
It is hoped that some of the Simons Observatory’s hardware could be reused as part of the CMB-S4, although the details have not yet been ironed out, says John Carlstrom, an astrophysicist at the University of Chicago in Illinois who is project scientist for the CMB-S4 collaboration.
In 2014, a team working on a CMB experiment at the South Pole called BICEP2 made a bold claim to have detected the inflationary signature, but later retracted it when it became clear that what they were seeing was galactic dust. Since then, direct detection of gravitational waves — coming not from the Big Bang, but from astrophysical phenomena such as pairs of black holes merging — has become routine. Cosmologists are excited to get another chance to spot the primordial signals. “We’re talking about 13.8 billion years ago, with energy densities 15 orders of magnitude larger than anything we can create in the lab,” says Kamionkowski. “It’s kind of remarkable that we can even talk about this.”
Samsung has formally introduced the 3D Map View feature to the SmartThings app. This new view improves the usability and visibility of smart home devices, and it is now available on refrigerators, smartphones, tablets, and TVs.
SmartThings gets the 3D Map View feature
Samsung’s smart home platform, SmartThings, is quite popular. The company introduced a 2D Map View feature six months ago during the Samsung Developer Conference 2023. At CES 2024, the company revealed the 3D Map View feature, which is now available on the SmartThings app. This new view streamlines smart home management by displaying where each smart home product is placed in your home.
You can create a real floor plan of your home or office in the SmartThings app, place furniture markings, and add the positions of each smart home product. This gives you a better idea of what your home looks like and where each item is placed. You can also adjust the AC temperature and lighting and check air quality and energy consumption directly from the 3D Map View.
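Device adjustments like these can also be made programmatically through the public SmartThings REST API. The sketch below follows the documented devices/commands endpoint, but the access token, device ID and the specific capability chosen are placeholder assumptions rather than details from Samsung's announcement.

```python
# Rough sketch of driving the same kind of adjustment through the public
# SmartThings REST API. The endpoint shape follows SmartThings' documented
# devices/commands API, but the token, device ID and chosen capability values
# below are placeholders, not taken from the article.
import requests

TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"     # placeholder
DEVICE_ID = "your-ac-device-id"          # placeholder

resp = requests.post(
    f"https://api.smartthings.com/v1/devices/{DEVICE_ID}/commands",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "commands": [
            {
                "component": "main",
                "capability": "thermostatCoolingSetpoint",  # assumes an AC exposing this capability
                "command": "setCoolingSetpoint",
                "arguments": [24],                          # target temperature in the unit's scale
            }
        ]
    },
    timeout=10,
)
print(resp.status_code)
```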
A 3D Map View can also be automatically created using Samsung home appliances with a LiDAR sensor, such as the Bespoke Jet Bot. The new version of the SmartThings app is now available for Android, iOS, and Tizen-based smart TVs. The feature will also be available on Family Hub series refrigerators before the end of the first half of this year.