Do you sometimes struggle to maintain focus while you’re working, or find it difficult to unwind and relax at the end of the day? Believe it or not, your iPhone could help with that. Keep reading to learn how.
If you are looking to minimize distractions when you focus on something, or just want to zone out after a hard day’s work, it’s well worth checking out the Background Sounds feature on your iPhone or iPad. Whether you are at home or in a public place, you can play calming sounds with just a few taps on your device to help you concentrate or rest.
In iOS and iPadOS, Apple’s Background Sounds include balanced, bright, and dark noise, as well as natural sounds like ocean, rain, and stream. All of the sounds can be set to play in the background to mask unwanted environmental or external noise, and the sounds mix into or duck under other audio and system sounds, so you don’t have to drown out what’s important to you.
Enable Background Sounds on iPhone and iPad
Launch the Settings app on your iPhone or iPad.
Tap Accessibility.
Under “Hearing,” tap Audio & Visual.
Tap Background Sounds, then tap the switch to turn on Background Sounds on the next screen.
Tap Sound to choose a sound effect. Choose from Balanced Noise, Bright Noise, Dark Noise, Ocean, Rain, and Stream.
Your device will need to download individual sound effects when you play them for the first time, so make sure you have an internet connection, but after that you can play the background sound wherever you are.
The Background Sounds screen also lets you set the volume of the sound, and includes an option to automatically stop playback when you lock your device.
Accessibility Shortcut Access
Once you have downloaded the background sounds, you can quickly start and stop the audio via an Accessibility Shortcut. Here’s how to set one up.
Set Up the Accessibility Shortcut
Tap through to Settings ➝ Accessibility.
Select Accessibility Shortcut near the bottom of the menu.
Tap Background Sounds to select it. You can also drag the three lines icon at the far right to change the order in which it appears in the shortcuts menu.
You can triple-click your iPhone’s Side button to access the Accessibility Shortcut at any time. Alternatively, you can add an Accessibility Shortcut button to your device’s Control Center in the following way.
Access Accessibility Shortcut
Tap through to Settings ➝ Control Center.
Find Accessibility Shortcuts under the “More Controls” list and then tap the entry to include it in Control Center.
Once that’s done, swipe down from the top-right corner of your screen to bring up Control Center, then tap the Accessibility Shortcut button and tap Background Sounds to turn the audio on or off.
Background sounds are also available on Macs running macOS Ventura or later. Check out our dedicated how-to article for all the details.
Kidney disease is growing worldwide. The secretariat of the World Health Organization has welcomed the call to include it as a non-communicable disease that causes premature deaths. Credit: Vsevolod Zviryk/SPL
A quiet epidemic is building around the world. It is the third-fastest-growing cause of death globally. By 2040, it is expected to become the fifth-highest cause of years of life lost. Already, 850 million people are affected, and treating them is draining public-health coffers: the US government-funded health-care plan Medicare alone spends US$130 billion to do so each year. The culprit is kidney disease, a condition in which damage to the kidneys prevents them from filtering the blood.
And yet, in discussions of priorities for global public health, the words ‘kidney disease’ do not always feature. One reason for this is that kidney disease is not on the World Health Organization (WHO) list of priority non-communicable diseases (NCDs) that cause premature deaths. The roster of such NCDs includes heart disease, stroke, diabetes, cancer and chronic lung disease. With kidney disease missing, awareness of its growing impact remains low.
The authors of an article in Nature Reviews Nephrology this week want to change that (A. Francis et al. Nature Rev. Nephrol. https://doi.org/10.1038/s41581-024-00820-6; 2024). They are led by the three largest professional organizations working in kidney health — the International Society of Nephrology, the American Society of Nephrology and the European Renal Association — and they’re urging the WHO to include kidney disease on the priority NCD list.
This will, the authors argue, bring attention to the growing threat, which is particularly dire for people in low- and lower-middle-income countries, who already bear two‑thirds of the world’s kidney-disease burden. Adding kidney disease to the list will also mean that reducing deaths from it could become more of a priority for the United Nations Sustainable Development Goals target to reduce premature deaths from NCDs by one-third by 2030.
As of now, rates of chronic kidney disease are likely to increase in low- and lower-middle-income countries as the proportion of older people in their populations increases. Inclusion on the WHO list could provide an incentive for health authorities to prioritize treatments, data collection and other research, along with funding, as with other NCDs.
Kidney disease often accompanies other conditions that do appear on the NCD list, such as heart disease, cancer and diabetes — indeed, kidney-disease deaths caused specifically by diabetes are on the list. But the article authors argue that “tackling diabetes and heart disease alone will not target the core drivers of a large proportion of kidney diseases”. Both acute and chronic kidney disease have many causes, including infection and exposure to toxic substances. Increasingly, the consequences of global climate change, including high temperatures and reduced availability of fresh water, are also thought to be contributing to the global burden of kidney disease.
The kidney glomerulus filters waste products from the blood. In people with damaged kidneys, this happens through dialysis. Credit: Ziad M. El-Zaatari/SPL
The WHO secretariat, which works closely with the nephrology community, welcomes the call to include kidney disease as an NCD that causes premature deaths, says Slim Slama, who heads the NCD unit at the secretariat in Geneva, Switzerland. The data support including kidney disease as an NCD driver of premature death, he adds.
The decision to include kidney disease along with other priority NCDs isn’t only down to the WHO, however. There must be conversations between the secretariat, WHO member states, the nephrology community, patient advocates and others. WHO member states need to instruct the agency to take the steps to make it happen, including providing appropriate funding for strategic and technical assistance.
Data and funding gaps
Three reports based on surveys by the International Society of Nephrology since 2016 highlight the scale of data gaps (A. K. Bello et al. Lancet Glob. Health 12, E382–E395; 2024). In many countries, screening for kidney disease is difficult to access and a large proportion of cases go undetected and therefore uncounted. For example, it is not known precisely how many people with kidney failure die each year because of lack of access to dialysis or transplantation: the numbers are somewhere between two million and seven million, according to the WHO. Advocates must push public-health officials in more countries to collect the data needed to monitor kidney disease and the impact of prevention and treatment efforts.
Even with better data, treatments for kidney disease are often prohibitively expensive. They include dialysis, an intervention to filter the blood when kidneys cannot. Dialysis is often required two or three times weekly for the remainder of the recipient’s life, or until they can receive a transplant, and it is notoriously costly. In Thailand, for example, it accounted for 3% of the country’s total health-care expenditures in 2022, according to the country’s parliamentary budget office.
End chronic kidney disease neglect
These costs could come down if people who have diabetes or high blood pressure, for example, could be routinely screened for impaired kidney function, because they are at high risk of developing chronic kidney disease. This would enable kidney damage to be detected early, before symptoms set in, opening the way for treatments that do not immediately require dialysis or transplant surgery.
New drugs that boost weight loss and treat type 2 diabetes could also help to prevent or reduce stress on the kidneys, but these, too, are too expensive for many people in need. That is why something needs to be done to make drugs more affordable. The pharmaceutical industry, which has become extremely profitable, has a crucial role. In Denmark, for example, the industry’s profits helped to tip the national economy from recession into growth in 2023, according to the public agency Statistics Denmark. The COVID-19 pandemic showed that making profits and making drugs available, and affordable, to a wide population need not be mutually exclusive. Similarly innovative thinking is now needed. “The whole world needs to reckon with this kidney problem,” says Valerie Luyckx, a biomedical ethicist at the University of Zurich in Switzerland.
The WHO adding kidney disease to its priority list could also attract funding for treatment, research and disease registries. That could jump-start the development of new treatments and help to make current treatments more affordable and accessible.
NCDs are responsible for 74% of deaths worldwide, but the world’s biggest donors to global health currently devote less than 2% of their budgets for international health assistance to NCD prevention and control, and that funding does not cover kidney disease. Drawing more attention to the quiet rampage of kidney disease among some of the most vulnerable people would be one important step in turning these statistics around.
Apple’s M3 Ultra chip may be designed as its own standalone chip, rather than being made up of two M3 Max dies, according to a plausible new theory.
The theory comes from Max Tech’s Vadim Yuryev, who outlined his thinking in a post on X earlier today. Citing a post from @techanalye1 which suggests that the M3 Max chip no longer features the UltraFusion interconnect, Yuryev postulated that the as-yet-unreleased “M3 Ultra” chip cannot be made up of two Max chips in a single package. This means that the M3 Ultra is likely to be a standalone chip for the first time.
This would enable Apple to make specific customizations to the M3 Ultra to make it more suitable for intense workflows. For example, the company could omit efficiency cores entirely in favor of an all-performance core design, as well as add even more GPU cores. At minimum, a single M3 Ultra chip designed in this way would be almost certain to offer better performance scaling than the M2 Ultra did compared to the M2 Max, since there would no longer be efficiency losses over the UltraFusion interconnect.
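The scaling argument is easy to make concrete. The benchmark figures below are hypothetical placeholders, not real test results; the sketch only shows how interconnect overhead turns into lost scaling efficiency for a two-die package, a loss a monolithic die would avoid.

```python
# Hypothetical numbers for illustration only -- not real benchmark results.
def scaling_efficiency(combined_score: float, single_score: float, dies: int = 2) -> float:
    """Fraction of ideal linear scaling that a multi-die package achieves."""
    return combined_score / (dies * single_score)

m2_max_score = 100.0    # placeholder score for a single M2 Max die
m2_ultra_score = 185.0  # placeholder score for two dies joined by UltraFusion

# A two-die package losing ~7.5% to the interconnect scales at ~92.5% of ideal.
print(f"{scaling_efficiency(m2_ultra_score, m2_max_score):.1%}")
```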
Furthermore, Yuryev speculated that the M3 Ultra could feature its own UltraFusion interconnect, allowing two M3 Ultra dies to be combined in a single package for double the performance in a hypothetical “M3 Extreme” chip. This would enable superior performance scaling compared to packaging four M3 Max dies and open the possibility of even higher amounts of unified memory.
Little is currently known about the M3 Ultra chip, but a report in January suggested that it will be fabricated using TSMC’s N3E node, just like the A18 chip that is expected to debut in the iPhone 16 lineup later in the year. This means it would be Apple’s first N3E chip. The M3 Ultra is rumored to launch in a refreshed Mac Studio model in mid-2024.
Some models are more likely to associate African American English with negative traits than Standard American English. Credit: Jaap Arriens/NurPhoto via Getty
Some large language models (LLMs), including those that power chatbots such as ChatGPT, are more likely to suggest the death penalty to a fictional defendant presenting a statement written in African American English (AAE) compared with one written in Standardized American English. AAE is a dialect spoken by millions of people in the United States that is associated with the descendants of enslaved African Americans. “Even though human feedback seems to be able to effectively steer the model away from overt stereotypes, the fact that the base model was trained on Internet data that includes highly racist text means that models will continue to exhibit such patterns,” says computer scientist Nikhil Garg.
A drug against idiopathic pulmonary fibrosis, created from scratch by AI systems, has entered clinical trials. Researchers at Insilico Medicine identified a target enzyme using an AI system trained on patients’ biomolecular data and scientific literature text. They then used a different algorithm to suggest a molecule that would block this enzyme. After some tweaks and laboratory tests, researchers had a drug that appeared to reduce inflammation and lung scarring. Medicinal chemist Timothy Cernak says he was initially cautious about the results because there’s a lot of hype about AI-powered drug discovery. “I think Insilico’s been involved in hyping that, but I think they built something really robust here.”
Researchers built a pleurocystitid robot to investigate how the ancient sea creature moved. Pleurocystitids lived 450 million years ago and were probably among the first echinoderms (animals including starfish and sea urchins) that could move from place to place using a muscular ‘tail’. The robot moved more effectively on a sandy ‘seabed’ surface when it had a longer tail, which matches fossil evidence that pleurocystitids evolved longer tails over time.
The tail of the pleurocystitid replica (nicknamed ‘Rhombot’) was built out of wires that contract in response to electrical stimulation to simulate the flexibility and rigidity of a natural muscular tail.(Carnegie Mellon University – College of Engineering)
Features & opinion
Scientists hope that getting AI systems to comb through heaps of raw biomolecular data could reveal the answer to one of the biggest biological questions: what does it mean to be alive? AI models could, with enough data and computing power, build mathematical representations of cells that could be used to run virtual experiments — as well as map out what combination of biochemistry is required to sustain life. Researchers could even use it to design entirely new cells that could, for example, explore a diseased organ and report on its condition. “It’s very ‘Fantastic Voyage’-ish,” admits biophysicist Stephen Quake. “But who knows what the future is going to hold?”
The editors of Nature Reviews Physics and Nature Human Behaviour have teamed up to explore the pros and cons of using AI systems such as ChatGPT in science communication. Apart from making up convincing inaccuracies, write the editors, chatbots have “an obvious, yet underappreciated” downside: they have nothing to say. Ask an AI system to write an essay or an opinion piece and you’ll get “clichéd nothingness”.
In Nature Human Behaviour, six experts discuss how AI systems can help communicators to make jargon understandable or translate science into various languages. At the same time, AI “threatens to erase diverse interpretations of scientific work” by overrepresenting the perspectives of those who have shaped research for centuries, write anthropologist Lisa Messeri and psychologist M. J. Crockett.
In Nature Reviews Physics, seven other experts delve into the key role of science communication in building trust between scientists and the public. “Regular, long-term dialogical interaction, preferably face-to-face, is one of the most effective ways to build a relationship based on trust,” notes science-communication researcher Kanta Dihal. “This is a situation in which technological interventions may do more harm than good.”
Technology journalist James O’Malley used freedom-of-information requests to unveil how one of London’s Underground stations spent a year as a testing ground for AI-powered surveillance. Initially, the technology was meant to reduce the number of people jumping the ticket barriers, but it was also used to alert staff if someone had fallen over or was spending a long time standing close to the platform edge. Making every station ‘smart’ would undoubtedly make travelling safer and smoother, argues O’Malley. At the same time, there are concerning possibilities for bias and discrimination. “It would be trivial from a software perspective to train the cameras to identify, say, Israeli or Palestinian flags — or any other symbol you don’t like.”
Simon R Anuszczyk and John O Dabiri/Bioinspir. Biomim. (CC BY 4.0)
A 3D-printed ‘hat’ allows this cyborg jellyfish to swim almost five times faster than its hat-less counterparts. The prosthesis could also house ocean monitoring equipment such as salinity, temperature and oxygen sensors. Scientists use electronic implants to control the animal’s speed and eventually want to make it fully steerable, in order to gather deep ocean data that can otherwise only be obtained at great cost. “Since [jellyfish] don’t have a brain or the ability to sense pain, we’ve been able to collaborate with bioethicists to develop this biohybrid robotic application in a way that’s ethically principled,” says engineer and study co-author John Dabiri. (Popular Science | 3 min read)
Machine-learning engineer Rick Battle says that chatbots’ finicky and unpredictable performance depending on how they’re prompted makes sense when thinking of them as algorithmic models rather than anthropomorphized entities. (IEEE Spectrum | 12 min read)
A star in the process of consuming a planet (artist’s conception). Credit: NG Images/Alamy
Stellar detectives have identified seven stars that recently dined on a rocky planet. The study doubles the number of binary stars known to have consumed a planet, and questions the perception that mature solar systems harbouring Earth-like planets are usually stable.
The findings, published in Nature on 20 March1, show “strong evidence of planet ingestion”, says Jianrong Shi, an astronomer at the National Astronomical Observatories in Beijing. The planets seem to have been eaten during their stars’ relatively stable main-sequence period, adds study co-author Fan Liu, an astronomer at Monash University in Melbourne, Australia.
If this is true, it means these systems have continued to be chaotic long after their formation, with planets disintegrating or falling into their star, says Johanna Teske, an astronomer at the Carnegie Institution for Science in Washington DC. “It’s an inference at this point. We need to look at these systems in more detail,” she says.
Swallowed by stars
Last year, for the first time, astronomers observed a star in the process of eating a planet. But unravelling whether a star has done so in the past is challenging, because planets are tiny compared with their hosts, and their contents soon get diluted.
Different elements absorb and emit light of different wavelengths, so the composition of a star’s surface leaves a fingerprint on the light reaching Earth. But detecting whether a star has eaten a planet is similar to spotting a chocolate chip that’s been swirled into a bowl of vanilla ice cream, says Teske. Stars also vary a lot in their make-up, making it tough to prove that a star has a particular composition because it ingested a planet.
To hunt for planet-eating stars, Liu and his colleagues performed a cosmic-twin study. Using the Gaia space telescope, they found 91 pairs of Sun-like stars nearby in the Milky Way whose motions suggested that the two stars in each pair were born in the same gas cloud. The stars in such paired systems should have near-identical compositions, and their similar lives should rule out many potential causes for discrepancies.
The team then used three ground-based telescopes to study the abundance of 21 elements in the pairs. If there were notable differences between a pair of stars, the researchers looked at whether this could be explained through noise in the data or other sources of variation. For seven pairs, “the difference has to be explained by one [star] ingesting a planet and the other not”, says Meridith Joyce, an astrophysicist at the Konkoly Observatory in Budapest, and a co-author on the paper.
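The logic of that noise check can be sketched in a few lines. This is a loose illustration with made-up abundance values, not the authors’ actual analysis pipeline: two co-natal stars should match to within their combined measurement errors, so a difference many times larger than that error demands another explanation, such as planet ingestion.

```python
import math

def abundance_anomaly(a1: float, err1: float, a2: float, err2: float) -> tuple[float, float]:
    """Return the abundance difference between two co-natal stars and its
    significance in units of the combined measurement error."""
    diff = a1 - a2
    sigma = math.sqrt(err1**2 + err2**2)  # independent errors add in quadrature
    return diff, abs(diff) / sigma

# Hypothetical iron abundances ([Fe/H], in dex) with per-star errors:
diff, nsigma = abundance_anomaly(0.10, 0.01, 0.04, 0.01)
print(f"difference = {diff:.2f} dex ({nsigma:.1f} sigma)")
```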
Secret planet-eaters
The study suggests that around 8% of Sun-like star pairs in our region of the Milky Way harbour a planet-eater, says Liu. He adds that this estimate is conservative, because the team considered only stars ingesting rocky planets, whereas other stars might have eaten gaseous Jupiter- or Neptune-like bodies. The method would also have missed cases in which both stars had eaten a planet of similar composition.
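The quoted rate follows directly from the survey arithmetic: seven planet-eating pairs among the 91 co-natal pairs studied.

```python
pairs_surveyed = 91  # co-natal Sun-like pairs identified with Gaia
planet_eaters = 7    # pairs best explained by one star having ingested a planet

rate = planet_eaters / pairs_surveyed
print(f"{rate:.0%}")  # prints "8%"
```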
Finding clear signs of planet ingestion in billion-year-old stars is “something unexpected”, he says. Astronomers often consider planet-eating to be a feature of a star’s early life, when planetary orbits are unstable and collisions are probable. But these meals must have been relatively recent, within the last few hundred million years, or theory suggests the evidence would have been undetectable, says Liu. The planets could have met their fate when their eroding atmospheres caused them to spiral inward, or some stars might have captured untethered rogue planets as they flew by, he adds.
Shi says that astronomers should examine these systems to see if any sibling exoplanets remain. The findings should make Earth-dwellers grateful, he says. The diversity of exoplanets has continued to shock astronomers; now it seems that “our Solar System is not only unique, but also undoubtedly peaceful”.
Since the 1950s, scientists have had a pretty good idea of how muscles work. The protein at the centre of the action is myosin, a molecular motor that ratchets itself along rope-like strands of actin proteins — grasping, pulling, releasing and grasping again — to make muscle cells contract.
The basics were first explained in a pair of landmark papers in Nature1,2, and they have been confirmed and elaborated on by detailed molecular maps of myosin and its partners. Researchers think that myosin generates force by cocking back the long lever-like arm that is attached to the motor portion of the protein.
The only hitch is that scientists had never seen this fleeting pre-stroke state — until now.
In a preprint published in January3, researchers used a cutting-edge structural biology technique to record this moment, which lasts just milliseconds in living cells.
“It’s one of the things in the textbook you sort of gloss over,” says Stephen Muench, a structural biologist at the University of Leeds, UK, who co-led the study. “These are experiments that people wanted to do 40 years ago, but they just never had the technology.”
That technology — called time-resolved cryo-electron microscopy (cryo-EM) — now has structural biologists thinking like cinematographers, turning still snapshots of life’s molecular machinery into motion pictures that reveal how it works.
Muench and his colleagues’ myosin movie isn’t feature-length; it consists of just two frames showing different stages of the molecular motion. Yet it confirmed a decades-old theory and settled debates over the order of the steps in myosin’s choreography. Other researchers are focusing their new-found director’s eye on understanding cell-signalling systems, including those underlying opioid overdoses, the gene-editing juggernaut CRISPR–Cas9 and other molecular machines that have been mostly studied with highly detailed, yet static structural maps.
Researchers have been able to capture images of individual myosin proteins as they pull on an actin filament during muscle contraction, confirming key details of the motion. First, myosin becomes cocked or primed, then it attaches to actin and its lever arm swings in a power stroke that slides the filament by about 34 nanometres. Credit: Sean McMillan
“The big picture is to move away, as much as possible, from this single, static snapshot,” says Georgios Skiniotis, a structural biologist at Stanford University in California, whose team used the technique to record the activation of a type of cell-signalling molecule called a G-protein-coupled receptor (GPCR)4. “I want the movie.”
Freeze frame
To underscore the power of cryo-EM, Skiniotis and others like to draw a comparison with one of the first motion pictures ever made. In the 1870s, photographer Eadweard Muybridge used high-speed photography technology, which was cutting edge at the time, to capture a series of still images of a galloping horse. They showed, for the first time, that all four of the animal’s hooves leave the ground at once — something that the human eye could not distinguish.
Similar insights, Skiniotis says, will come from applying the same idea to protein structures. “I want to get a dynamic picture.”
The ability to map proteins and other biomolecules down to the location of individual atoms has transformed biology, underpinning advances in gene editing, drug discovery and revolutionary artificial-intelligence tools such as AlphaFold, which can predict protein structures. But the mostly static images delivered by X-ray crystallography and cryo-EM, the two technologies responsible for the lion’s share of determined protein structures, belie the dynamic nature of life’s molecules.
“Biomolecules are not made up of rocks,” says Sonya Hanson, a computational biophysicist at the Flatiron Institute in New York City. They exist in water and are constantly in motion. “They’re more like jelly,” adds Muench.
Biologists often say that “structure determines function”, but that’s not quite right, says Ulrich Lorenz, a molecular physicist at the Swiss Federal Institute of Technology in Lausanne (EPFL). The protein poses captured by most structural studies are energetically stable ‘equilibrium’ states that provide limited clues to the short-lived, unstable conformations that are key to chemical reactions and other functions performed by molecular machines. “Structure allows you to infer function, but only incompletely and imperfectly, and you’re missing all of the details,” says Lorenz.
Cryo-EM is a great way to get at the details, but capturing these fleeting states requires careful preparation. Protein samples are pipetted onto a grid and then flash frozen with liquid ethane. They are then imaged using powerful electron beams that record snapshots of individual molecules (sophisticated software classifies and morphs these pictures into structural maps). The samples swim in water before being frozen, so any chemical reaction that can happen in a test tube can, in theory, be frozen in place on a cryo-EM grid — if researchers can catch it quickly enough.
That’s one of the first big challenges, says Joachim Frank, a structural biologist at Columbia University in New York City who shared the 2017 Nobel Prize in Chemistry for his work on cryo-EM. “Even for very dexterous people, it takes a few seconds.” In that time, any chemical reactions — and the intermediate structures that mediate the reactions — might be long gone before freezing. “This is the gap we want to fill,” says Frank.
Caught in translation
Frank’s team has attempted to solve this problem using a microfluidic chip. The device quickly mixes two protein solutions, allows them to react for a specified time period and then delivers reaction droplets onto a cryo-EM grid that is instantly frozen.
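The “specified time period” in such a chip comes down to simple plumbing: the mixed sample spends a residence time in a delay channel equal to the channel volume divided by the flow rate. The dimensions below are made-up but plausible values, chosen only to illustrate the relationship, not taken from the actual device.

```python
def residence_time_ms(channel_volume_nl: float, flow_rate_nl_per_ms: float) -> float:
    """Time (in ms) the mixed sample spends in the delay channel before it
    is deposited on the grid: channel volume divided by flow rate."""
    return channel_volume_nl / flow_rate_nl_per_ms

# e.g. a 140 nL delay line at 1 nL/ms gives a ~140 ms reaction window
print(residence_time_ms(140.0, 1.0))
```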
This year, Frank’s team used their device to study a bacterial enzyme that rescues ribosomes, the cell’s protein-making factories, if they stall in response to antibiotics or other stresses. The enzyme, called HflX, helps to recycle stuck ribosomes by popping their two subunits apart.
Frank’s team captured three images of HflX bound to the ribosome, over a span of 140 milliseconds, which show how it splits the ribosome like someone carefully removing the shell from an oyster. The enzyme breaks a dozen or so molecular bridges that hold a ribosome’s two subunits together, one by one, until just two are left and the ribosome pops open5. “The most surprising thing to me is that it’s a very orderly process,” Frank says. “You would think the ribosome is being split and that’s it.”
Muench and his colleagues, including Charlie Scarff, a structural biologist at the University of Leeds, and Howard White, a kineticist at Eastern Virginia Medical School in Norfolk, Virginia, also used a microfluidic chip to make their myosin movie by quickly mixing myosin and actin3.
But the molecular motor is so fast that, to slow things down even further, they used a mutated version of myosin that operates about ten times slower than normal. This allowed the team to determine two structures, 110 milliseconds apart, that showed the swing of myosin’s lever-like arm. The structures also showed that a by-product of the chemical reaction that powers the motor — the breakdown of a cellular fuel called ATP — exits the protein’s active site before the lever swings and not after. “That is ending decades of conjecture,” says Scarff.
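The point of the slowed-down mutant can be put in rough numbers (illustrative only): if the apparatus can space frames about 110 milliseconds apart, a tenfold slowdown makes intermediates observable that, in the wild-type motor, would last only about a tenth as long.

```python
frame_spacing_ms = 110  # spacing between the two structures in the study
slowdown_factor = 10    # mutant myosin operates ~10x slower than normal

# Equivalent wild-type timescale probed by the same frame spacing:
print(frame_spacing_ms / slowdown_factor)  # 11.0 ms
```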
With this new model in mind, Scarff, whose specialty is myosin, and Muench are planning to use time-resolved cryo-EM to study how myosin dynamics are affected by certain drugs and mutations that are known to cause heart disease.
Microfluidic chips aren’t the only way researchers are putting time stamps on protein structures. A team led by Bridget Carragher, a structural biologist and the technical director at the Chan Zuckerberg Imaging Institute in Redwood City, California, developed a ‘spray and mix’ approach that involves shooting tiny volumes of reacting samples onto a grid before flash-freezing them6.
In another set-up — developed by structural physiologist Edward Twomey at Johns Hopkins University School of Medicine in Baltimore, Maryland, and his team — a flash of light triggers light-sensitive chemical reactions, which are stopped by flash-freezing7. Lorenz’s kit, meanwhile, takes already frozen samples and uses laser pulses to reanimate them for a few microseconds before they refreeze, all under the gaze of an electron microscope8.
‘Limitations everywhere’
The different approaches have their pros and cons. Carragher’s spray and mix approach uses minute sample volumes, which should be easy to obtain for most proteins; Twomey says his ‘open-source’ light-triggered device is relatively inexpensive and can be built for a few thousand dollars; and Lorenz says his laser-pulse system has the potential to record many more fleeting events than other time-resolved cryo-EM technologies — down to a tenth of a microsecond.
But these techniques are not yet ready to be rolled out. Currently, there are no commercial suppliers of time-resolved cryo-EM technology, limiting its reach, says Rouslan Efremov, a structural biologist at the VIB-VUB Center for Structural Biology in Brussels. “All these things are fussy and hard to control and they haven’t really caught on,” adds Carragher.
Holger Stark, a structural biologist at the Max Planck Institute for Multidisciplinary Sciences in Göttingen, Germany, says that current forms of time-resolved cryo-EM might be useful for some molecular machines that operate on the basis of large-scale movements — for example, the ribosome. However, the technology is not ready for use on just any biological system. “You have to cherry pick your subject,” he says. “We have limitations everywhere.”
Despite the shortcomings, there are plenty of interesting questions for researchers to start addressing now using these techniques. Twomey is using time-resolved cryo-EM to study Cas9, the DNA-cutting enzyme behind CRISPR gene editing, and says the insights could help to make more efficient gene-editing systems.
Lorenz used his laser-melting method to show how a plant virus swells up after it infects a cell to release its genetic material7 (see ‘Viral blow-up’). He is now studying other viral entry molecules such as HIV’s envelope protein. “We have these static structures, but we don’t know how the system makes it from one state to the other, and how the machinery works,” he says.
(Figure: ‘Viral blow-up’. Source: Ref. 8)
Skiniotis’s team is investigating GPCRs, including one called the β-adrenergic receptor, which has been implicated in asthma. Their work4 shows how activating the receptor triggers it to shed its partner G-protein, a key step in propagating signals in cells.
The researchers are now studying the same process in a GPCR called the µ-opioid receptor, which is activated by morphine and fentanyl among other drugs. In preliminary unpublished results, they have found that the dynamics of the receptor help to explain why some drugs such as fentanyl are so potent in promoting G-protein activation, while others aren’t. Such insights, says Skiniotis, are glimpses of unseen biology that molecular movies promise to reveal. Just don’t forget the popcorn.
Being a parent is often seen as a career obstacle, but it can actually make you a better scientist, says nutrition epidemiologist Lindsey Smith Taillie. Credit: Paul Taillie
More than once in the past few years, in a variety of informal settings, I’ve overheard senior scientists recommend hiring people without children over those who are parents. Their reasoning, I gather, is that a parent might be smart and well-trained, but wouldn’t have the time or dedication to cut it in research. As a mid-career scientist with two young children, these comments floored me.
In my experience, these assumptions, typically aimed at faculty members or postdocs, are all too frequent. And although people tend to phrase their concerns in a gender-neutral way, about ‘parents’, they’re almost always talking about women. Women, who comprise only 33% of full professors despite accounting for more than 50% of the PhDs awarded each year, and who consistently have lower salaries than men across all ranks. Women, who still disproportionately do the bulk of domestic work, including childcare, around the globe. Although I’ve heard these comments more often from men, I’ve also heard female scientists essentially dismiss someone if they become pregnant, as if their career is over before really getting started.
It’s true that being an academic woman with children is hard. In my field of global nutrition, it’s very common to have meetings at odd hours or to need to travel at short notice. Dealing with school closures and frequent illnesses feels like playing whack-a-mole while trying to keep research moving and juggling childcare.
I have benefited from being white, heterosexual, married, neurotypical and working at a prestigious university. Crucially, I also benefit from having a husband, also a scientist, who does at least half of the childcare, cooking and cleaning, something that I think is still rare in heterosexual co-parenting relationships. Still, even with all of this privilege, it’s hard: there are many days when my brain feels shattered.
But becoming a parent has also undoubtedly helped my career: both my rate of publishing and the number of grants I’ve won have increased substantially since my first daughter was born in 2017. I’ve become a more productive scientist. Here’s why.
Time scarcity
Those senior scientists who say that parents have less time are probably right: before I had children, I worked longer hours. I would go down rabbit holes into the early evening and often on weekends. I felt like I was always working and filling up all of my available time with research. But now, I write e-mails, papers and grant drafts like I am taking an exam: with intense focus and high speed. Having time constraints has forced me into a mindset of relentless prioritization, which has increased my scientific acumen and decision-making.
For example, last December, I was asked to present my research at a US Senate committee hearing on type 2 diabetes. I had only four days to put together a written testimony summarizing decades of data and build a case for why nutrition matters in diabetes prevention. My husband was out of town and, in a cruel twist of fate, one of my children got a throat infection. It was stressful, but I was able to draft the entire testimony in a single workday — something that, before having children, would have easily taken the entire four days. Also, because I knew that I’d need to rush off any second to tend to my sick child, I was able to push through my anxiety about writing such an important document and focus on getting pen to paper.
Arguably, you could achieve this effect without children by having stronger work–life boundaries. That’s great, but it never worked for me. Having a non-negotiable deadline of school or day-care pick-up forced me to let go of my perfectionist tendencies, supercharging my productivity.
A fresh perspective
Becoming a parent also gave me a first-hand perspective on my field of nutrition. For example, similar to most young children, my three- and six-year-olds are picky eaters, and it’s been a challenge for me to get them to try new foods and eat veggies while also keeping food waste to a minimum. From social media, I discovered that giving my daughters tiny portions presented in a cute way — for example, a single broccoli floret with a toothpick and dip or a few spoons of soup in a colourful cupcake tin — helped with this. These experiences with my own children have helped me to incorporate families’ perspectives into my research design and to test interventions to prevent household food waste, increasing the chances that our interventions will be more effective for more people.
Parent networks
Even more importantly, becoming a parent has allowed me to create networks. I collaborate with colleagues who are also parents, and sharing our experiences has helped us to become friends, able to empathize and help each other out in a pinch, with work or with parenting.
Lindsey Smith Taillie’s experiences as a mother have helped to improve her food-waste intervention designs. Credit: Hacienda Mucuyche
This network has extended far beyond my immediate colleagues, too. Through the social-media platform Facebook, I have found an online community of academic mothers, which has become a treasure trove of help and advice. More than just tips on sippy cups or football clubs, people in the group share the hidden rules of playing the academic game, from handling job searches as a dual-academic couple to going up for tenure or dealing with tough grant reviews.
The networking benefits of parenthood translate to team science, too. Sharing experiences about children helps to build rapport with collaborators — we’re able to bond over our common scientific challenges and laugh about our children’s silly stunts.
Emotional intelligence
Parenting has also made me a more effective teacher. For example, because my older daughter is obsessed with mythical creatures, I’m the proud owner of a giant inflatable pink unicorn costume — something that I have worn in class to demonstrate the power of food marketing, when discussing the Starbucks pink unicorn frappuccinos. It was silly, but that silliness has been helpful for connecting with students. Beyond pink unicorns, telling stories about my children in the classroom has made me more relatable and helped me to show key points about nutrition by invoking real-world examples. Parenting has helped me to expand my horizons and relate more to my students.
Being a parent has made me a better mentor: more able to support students who have children, and better at treating all students as whole people, with a life outside science — whether or not that includes children. Because of my own experience, I feel better equipped to help my students to integrate the facets of their lives and find what balance looks like for them.

It’s been difficult to speak out publicly about both the challenges and merits of parenting as a scientist. When I push back against things such as out-of-hours meetings, I worry about increasing biases against parents, and especially mothers, perpetuating the challenges of hiring and retaining them in the scientific workforce. But as time goes on, and I see these biases persist, I think that now is the time to speak up and be clear. Parenting isn’t my scientific kryptonite; it’s my superpower.
A couple of years ago, Apple introduced the Voice Control feature without much fanfare, but it was quickly recognized as a valuable addition that improved accessibility for many users.
Even if you don’t rely on accessibility features, it’s worth trying a different way of using your computer: controlling more with your voice, and less with your mouse or trackpad.
Hidden Features
We love a good hidden feature: functionality on your device that may not be active by default but which elevates your experience (or even your life). This series explores our pick of them.
Previously, I pointed out a way that Windows users could more easily navigate their memory (both their own and digital) in Clipboard History. Here, I’m going to talk about a way macOS users can extend their reach when it comes to controlling their devices beyond their mice, trackpads, and keyboards.
This feature isn’t just impressive as a demonstration of technology and of Apple’s consideration for users (Voice Control is included for free on every suitable Mac, iPhone, and iPad); it’s also essential for users who have little or limited mobility in their hands or arms. Most people can benefit from it, and I think it’s worth giving it a spin even just to familiarize yourself with it in case you ever need it.
In this article, I’ll tell you how to get started with Voice Control, and there are some decent customization options that you can use to tweak the feature so it works best for you.
You can always revert to your previous settings, or you might just fall in love with your macOS device all over again.
Who is Voice Control for and what can you do with it?
Using your macOS device with your voice may feel unusual to many able-bodied users. Still, it’s a crucial feature for many who have different user needs. With Voice Control, you can do many things like navigate desktops and apps, interact with what’s onscreen, and edit and dictate text.
You can also get more into the details of Voice Control to fine-tune it to your needs and preferences, going beyond the standard commands that come built in with the feature. You can craft your own commands, delete commands, and import and export custom commands. If that still isn’t enough, you can go further and add individual vocabulary terms, as long as custom vocabulary is supported in the language you’re using Voice Control in.
To be able to use Voice Control, you’ll have to make sure that your device is running macOS Catalina 10.15 or later. Apple recommends that you keep your laptop lid open if you’re using a laptop, use an external microphone if you can, or use a display that comes with a microphone built-in. It also reassures users that all the processing of any audio happens right on your device, so whatever you tell it stays private.
Also, in a somewhat cruel but necessary irony, you have to set up Voice Control using traditional peripheral or built-in controls. If you have accessibility needs, this probably means you’ll need to ask for help getting Voice Control set up.
How to turn on Voice Control in macOS
Again, make sure you’re running macOS Catalina 10.15 or later. Once you’ve confirmed this, follow these steps:
1. Open the Apple menu by clicking the Apple icon, then click System Settings (called System Preferences in some macOS versions).
2. Select Accessibility. In Accessibility settings, click on Voice Control.
3. Turn Voice Control on.
When you turn on Voice Control for the first time, your Mac might prompt you to make a one-time download of the feature from Apple. Allow the download; once it’s complete, you’ll be able to use the feature.
Once you have Voice Control turned on
If you’re using macOS Sonoma or a later version, a blue icon depicting a sound signature will appear in your menu bar. Clicking this icon opens the actions you can take with Voice Control, such as starting or stopping listening, changing the microphone or language, and opening more Voice Control settings.
You can also stop and start Voice Control listening by using the commands “Go to sleep” and “Wake up.”
If you’re using macOS Ventura or an earlier version, a microphone icon will appear on your screen. Clicking this icon will open your Voice Control actions like ‘Wake up’ to start listening and ‘Sleep’ to stop listening.
You can also say “Wake up” or “Go to sleep” to start or stop Voice Control listening.
How to get started using Voice Control with a guide
You can familiarize yourself with the built-in voice commands of Voice Control or bring them up whenever you’d like to see a full list by telling it “Show commands.” This will show you the list of existing commands that you can use based on the context of what you have open at that moment.
Then you can say one of the commands from the list to carry out an action.
In macOS Sonoma or later, you can go for a practice run with Voice Control to get used to it with an interactive guide that Apple has included.
To access this interactive guide, follow this path:
Go to the Apple menu by clicking the Apple icon > Click on System Settings > Select Accessibility in the sidebar of settings > Click on Voice Control in the Accessibility menu > Select Open Guide.
It’s in these same Voice Control settings that you can also turn on the ‘Play sound when command is recognised’ setting, which makes it easier to know if and when Voice Control has recognized a command (similar to Siri).
In the ever-evolving world of technology, the iPhone stands out as a beacon of innovation and user-friendly design. However, even the most seasoned users might not be aware of all the tricks up its sleeve. Thanks to insights from tech enthusiast Brandon Butch in his latest video, we get to find out about 15 hidden iPhone tricks to elevate your user experience. These tricks span the iPhone’s operating system and applications, offering practical solutions and shortcuts for everyday tasks. Let’s dive into these hidden gems that can make your iPhone usage more efficient and enjoyable.
Effortless Photo Sharing in Messages: Have you ever noticed underlined prompts in your messages, like “send pics”? By tapping on these, you can quickly jump to your photo album to select images for sharing. Moreover, pressing and holding the plus icon next to the iMessage field reveals your recent photos, making attachment a breeze.
Sound Search in Photos: The Photos app now allows you to search for specific sounds within your videos. Whether it’s the sound of clapping or someone talking, a simple keyword search will bring up videos with matching audio waveforms.
Customizing the Always On Display: For iPhone 15 Pro/Pro Max users, managing the Always On Display feature just got easier. You can turn it off when the phone is in your pocket or during certain Focus modes to save battery life and minimize distractions.
Editing Thumbnails for People and Pets: The Photos app recognizes faces of people and pets, allowing you to select more appealing thumbnails for them. This enhances the visual organization and personalization of your photo albums.
Adding Nicknames to Contacts: Simplify your interactions with Siri and make searching easier by assigning nicknames to your contacts. This small change can significantly streamline communication and information retrieval.
Simplifying Hyperlinks in Mail: Embedding web addresses in your emails is now quicker. Simply paste a copied link directly over the selected text to hyperlink it, eliminating extra steps.
Locating People and Checking Elevation with Siri: Siri isn’t just for setting reminders. You can also use it to find someone’s location or check the current elevation, adding a layer of convenience to navigation and information gathering.
Logging Health Data via Siri: Keep track of health-related data, such as weight or blood sugar levels, with simple voice commands. This seamless integration makes health tracking a part of your daily routine.
Reducing System Data Storage: iPhone 15 Pro/Pro Max users can free up storage space consumed by cache files created by the ProRes video feature, tackling the issue of bloated system data.
Customizing Camera Focal Lengths: Switch between different focal lengths (e.g., 24mm, 28mm, 35mm) directly from the camera interface. This feature offers creative control over photo composition, catering to your artistic vision.
Enhanced Text Selection: Moving the text cursor is now more efficient with a two-finger swipe anywhere on the screen, providing a quicker alternative to the traditional space bar trackpad method.
Dynamic Wallpaper Changes: With iOS 17, changing wallpapers automatically is straightforward. Set a photo shuffle from a specific album with customizable frequency to keep your screen looking fresh.
These tips, while not widely known, provide practical enhancements to the iPhone experience, showcasing the depth and versatility of iOS functionalities. Whether it’s through improved navigation, personalized settings, or streamlined processes, these tricks are designed to make your daily iPhone use more efficient and enjoyable. By exploring these features, you can unlock the full potential of your device, ensuring that your iPhone continues to serve as a powerful tool in your technology arsenal.
Source & Image Credit: Brandon Butch
Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.
The Samsung Galaxy S24 is a device brimming with features designed to elevate the user experience through customization and efficiency. If you’re looking to harness the full potential of your device, you’re in the right place. The video below from WhatGear will walk you through some of the most compelling hidden tips and tricks that can transform how you use your Samsung Galaxy S24.
Custom Fingerprint Actions: First off, did you know that your Galaxy S24 allows for custom fingerprint actions? That’s right, you can assign unique tasks to different fingerprints. Imagine unlocking your phone and launching Google Wallet with just the touch of your middle finger. This feature not only adds a layer of convenience but also speeds up access to your most-used apps directly from the lock screen.
Enhanced Always On Display: Next, let’s talk about personalizing the Always On Display. The Galaxy S24 enables you to use photos with transparent backgrounds to make your phone’s display truly yours. This not only enhances the visibility of your notifications and time but also adds a personal touch to your device even when the screen is off.
Photo Editing with AI: The phone’s advanced AI capabilities offer a significant upgrade in photo editing. Adjusting the size and position of subjects within images or filling in backgrounds seamlessly has never been easier. This AI-driven feature ensures that your photos look professional with minimal effort.
Creating Custom Wallpapers: For those who love to personalize every aspect of their device, the Galaxy S24 lets you create unique wallpapers. By combining different stickers, you can design a phone that reflects your style and personality.
Handwritten Note Conversion: The ability to convert handwritten notes into digital text using the phone’s camera is a boon for anyone looking to digitize their notes or documents efficiently. This feature saves time and makes organizing your digital documents a breeze.
Organizing Photos with Auto Albums: Keeping your photos organized is simpler with the auto-updating album feature. It automatically sorts photos by recognizing faces, which means finding pictures of your friends and family is now more straightforward.
Preventing Form Refresh: Have you ever been frustrated by forms refreshing and losing all your data when switching between apps? The Galaxy S24 offers a solution by allowing you to lock apps in the background, preserving your progress on any task.
Minimizing Call Interruptions: If you find incoming calls intrusive, the Galaxy S24 provides settings adjustments to make these notifications less disruptive, ensuring a smoother user experience.
Back Tap Actions: Borrowing a feature found in other smartphones, the Galaxy S24 introduces back tap actions for quick access to features or apps. This intuitive method allows for faster navigation and customization of your device.
Separate App Sound: For the multitaskers, the Separate App Sound feature is a game-changer. It allows you to play media from different apps through separate audio outputs. For instance, you can enjoy music through a Bluetooth speaker while watching a video on the phone’s speaker, optimizing your audio experience.
Mobile Hotspot Optimization: The Galaxy S24 offers tips for optimizing your mobile hotspot, including adjusting the Wi-Fi band and enabling Wi-Fi 6 for different uses like gaming. This ensures a stable and fast connection for all your devices.
Quick Share Enhancements: Integration of Samsung’s Quick Share with Google’s Nearby Share now includes setting time limits on shared files for added privacy. This feature makes sharing files with friends and colleagues more secure and straightforward.
Reminder Alarms: Utilize the alarm feature for more than just waking up. Set visual reminders for recurring tasks, like taking out the recycling, using images as alarm backgrounds. This creative use of alarms helps keep your daily tasks organized.
S Pen Lock Screen Artwork: For Galaxy Ultra users, the device offers the ability to create custom lock screen artwork with the S Pen. This adds a personal and creative touch to your phone, making it stand out.
These hidden tips and tricks reveal the depth of customization and efficiency the Samsung Galaxy S24 offers. By exploring these features, you can enhance your user experience and make your device truly personal. Whether it’s through optimizing your productivity or expressing your style, the Galaxy S24 has something for everyone.
Source & Image Credit: WhatGear